| text_prompt (string, lengths 168–30.3k) | code_prompt (string, lengths 67–124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To interact with a project on the command line, the current working directory needs to be within or below the project root directory.
Step2: Next we switch the current working directory to the project root directory.
Step3: Then we initialize the project.
Step4: We can verify the project configuration.
Step5: We access the job handle by providing the state point on the command line in JSON format.
Step6: The statepoints could also be read from STDIN, e.g., by reading them from a file.
Step7: We can pipe the content of this file into the signac CLI to get the corresponding job id.
Step8: Instead of the job id, we can directly obtain the path to the job workspace.
Step9: That's especially useful in conjunction with external tools.
Step10: We can store the result in a file based on the input arguments using the -cw argument, short for --create --workspace, which returns the workspace path and creates it if necessary.
Step11: Obviously, we wouldn't type all of these commands manually; we would use a script instead.
Step12: We can then execute this script...
Step13: ... and examine the results.
|
<ASSISTANT_TASK:>
Python Code:
%%bash
signac --help
%pwd
%rm -rf projects/tutorial/cli
%mkdir -p projects/tutorial/cli
%cp idg projects/tutorial/cli
%cd projects/tutorial/cli
%%bash
signac init TutorialCLIProject
%%bash
signac project
signac project --workspace
%%bash
signac job '{"kT": 1.0, "p": 1.0, "N": 1000}'
%%bash
echo '{"kT": 1.0, "p": 0.1, "N": 1000}' > statepoints.txt
echo '{"kT": 1.0, "p": 1.0, "N": 1000}' >> statepoints.txt
echo '{"kT": 1.0, "p": 10.0, "N": 1000}' >> statepoints.txt
cat statepoints.txt
%%bash
head -n 1 statepoints.txt | signac job
%%bash
head -n 1 statepoints.txt | signac job --workspace
%%bash
./idg -p 1.0 -N 1000 --kT 1.0
%%bash
./idg -p 1.0 --kT 1.0 -N 1000 > $(signac job -cw '{"p": 1.0, "kT": 1.0, "N": 1000}')/V.txt
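Under the hood, signac derives the job id deterministically from the statepoint, which is why `signac job` and the Python API always agree. A hedged sketch of the idea — signac hashes a canonical JSON form of the statepoint; the exact details may vary between versions:

```python
import hashlib
import json

# Sketch: a job id is (essentially) the MD5 digest of the statepoint
# serialized as JSON with sorted keys, so key order does not matter.
statepoint = {"kT": 1.0, "p": 1.0, "N": 1000}
canonical = json.dumps(statepoint, sort_keys=True)
job_id = hashlib.md5(canonical.encode()).hexdigest()
print(job_id)
```

Because the id depends only on the statepoint, re-running a command with the same parameters always resolves to the same workspace directory.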
import signac
import json
project = signac.get_project()
for p in 0.1, 1.0, 10.0:
    job = project.open_job({'N': 1000, 'p': p, 'kT': 1.0})
    cmd = './idg -p {p} --kT {kT} -N {N}'.format(**job.statepoint())
    cmd += " > $(signac job -cw '{}')/V.txt".format(json.dumps(job.statepoint()))
    print(cmd)
%%bash
./idg -p 0.1 --kT 1.0 -N 1000 > $(signac job -cw '{"kT": 1.0, "p": 0.1, "N": 1000}')/V.txt
./idg -p 1.0 --kT 1.0 -N 1000 > $(signac job -cw '{"kT": 1.0, "p": 1.0, "N": 1000}')/V.txt
./idg -p 10.0 --kT 1.0 -N 1000 > $(signac job -cw '{"kT": 1.0, "p": 10.0, "N": 1000}')/V.txt
%cat `signac project -w`/*/V.txt
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data, which come from <a href='https
Step2: Properties in Bogotá
Step3: Properties whose price is in COP
Step4: Properties for sale
Step5: Keep only houses (exercise: repeat the analysis with both property types)
Step6: Keep only complete observations; any row with a NaN is dropped (a debatable step)
Step7: The validation-set approach
Step8: Lasso
|
<ASSISTANT_TASK:>
Python Code:
#load libraries
import pandas as pd
import numpy as np
df = pd.read_csv('co_properties.csv.gz', compression='gzip', header=0, sep=',', quotechar='"')
df.head()
df.l3.value_counts()
df=df[df['l3']=='Bogotá D.C']
df.l3.value_counts()
df.groupby('currency').agg({'id':'count'})
df=df[df['currency']=='COP']
df.operation_type.value_counts()
df=df[df['operation_type']=='Venta']
df.property_type.value_counts()
#df=df[(df['property_type']=='Casa') | (df['property_type']=='Apartamento')]
df=df[df['property_type']=='Casa']
df.property_type.value_counts()
df=df[[ 'price', 'rooms', 'bedrooms', 'bathrooms', 'surface_total', 'surface_covered','lat','lon']]
df.head()
df=df.dropna()
df.head()
df.shape
from sklearn.model_selection import train_test_split
train, test = train_test_split(df, test_size=0.2,random_state=123)
y_train= train['price']
X_train=train.drop(columns=['price'])
y_test= test['price']
X_test=test.drop(columns=['price'])
from sklearn.linear_model import LinearRegression
model1 = LinearRegression().fit(X_train[['rooms']],y_train)
y_pred = model1.predict(X_test[['rooms']])
y_pred[0:10]
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test,y_pred)
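The MSE above is in squared price units, which is hard to interpret directly. Taking the square root gives an error in the same units as the price; a minimal sketch with made-up numbers:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Hypothetical true prices and predictions, just to illustrate the units
y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 310.0])
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(rmse)  # 10.0 -- an average error of 10 price units
```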
#import the library
from sklearn.linear_model import Lasso
# Define the model
model1 = Lasso(alpha=0.01).fit(X_train[['rooms']], y_train)
print(np.round(model1.coef_, decimals=2))
# Increase the penalty to 1000000
model2 = Lasso(alpha=1000000).fit(X_train[['rooms']], y_train)
print(np.round(model2.coef_, decimals=2))
# Increase the penalty to 10000000
model3 = Lasso(alpha=10000000).fit(X_train[['rooms']], y_train)
print(np.round(model3.coef_, decimals=2))
# Increase the penalty to 100000000000000000
model4 = Lasso(alpha=100000000000000000).fit(X_train[['rooms']], y_train)
print(np.round(model4.coef_, decimals=2))
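The behaviour above — coefficients driven to exactly zero as the penalty grows — can be reproduced on synthetic data (the data and alphas here are illustrative, not taken from the housing dataset):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

# A tiny penalty barely shrinks the coefficient ...
small = Lasso(alpha=0.01).fit(X, y)
# ... while a huge penalty zeroes it out entirely
big = Lasso(alpha=100.0).fit(X, y)
print(small.coef_, big.coef_)
```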
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. TEXT PROCESSING
Step2: 2. TOPIC MODELING
Step4: Generate Topics for a new Article
Step5: Describing parameters
|
<ASSISTANT_TASK:>
Python Code:
PATH_NEWS_ARTICLES = ""
from nltk.corpus import stopwords
from nltk.tokenize import TweetTokenizer
from nltk.stem.snowball import SnowballStemmer
import re
import pickle
import pandas as pd
import gensim
from gensim import corpora, models
df=pd.read_csv(PATH_NEWS_ARTICLES)
df.head(5)
stop_words = set(stopwords.words('english'))
tknzr = TweetTokenizer()
stemmer = SnowballStemmer("english")
def clean_text(text):
    cleaned_text = re.sub(r'[^\w_\s-]', ' ', text)  # remove punctuation marks and other symbols
    return cleaned_text
def tokenize(text):
    word = tknzr.tokenize(text)  # tokenization
    filtered_sentence = [w for w in word if not w.lower() in stop_words]  # removing stop words
    stemmed_filtered_tokens = [stemmer.stem(plural) for plural in filtered_sentence]  # stemming
    tokens = [i for i in stemmed_filtered_tokens if i.isalpha() and len(i) not in [0, 1]]
    return tokens
# Cleaning all articles
# Returns a list containing list of words of each article
def text_processing():
    news_articles = df['Content'].tolist()
    cleaned_text = list(map(clean_text, news_articles))
    article_vocabulary = list(map(tokenize, cleaned_text))
    return article_vocabulary
article_vocabulary = text_processing()
#Parameters for LDA :-
#NUMBER_OF_TOPICS is the number of requested latent topics to be extracted from the training corpus.
NUMBER_OF_TOPICS = 5
#PASSES is the number of passes over the corpus during training
PASSES = 1
#NUMBER_OF_WORDS is the number of words for which you want to check the topic-word distribution
NUMBER_OF_WORDS = 10
#Mapping vocabulary with IDs
dictionary = corpora.Dictionary(article_vocabulary)
pickle.dump(dictionary, open("dictionary_of_vocabulary.p", "wb"))
list(zip(dictionary.keys(), dictionary.values()))  # list() is needed in Python 3, where zip is lazy
#Mapping Vocabulary to Corpus
corpus = [dictionary.doc2bow(text) for text in article_vocabulary]
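Each entry of corpus is a sparse bag-of-words: a list of (token_id, count) pairs. A pure-Python sketch of that representation (the vocabulary ids here are made up; gensim assigns its own):

```python
from collections import Counter

vocab = {"dog": 0, "barks": 1, "runs": 2}  # hypothetical id mapping
doc = ["dog", "runs", "dog"]
bow = sorted((vocab[w], c) for w, c in Counter(doc).items())
print(bow)  # [(0, 2), (2, 1)] -- "dog" twice, "runs" once, "barks" absent
```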
#Training LDA Model
lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=NUMBER_OF_TOPICS,passes=PASSES)
lda.save('lda.model')
#Topic-Word Distribution
topic_words = lda.show_topics(num_topics=NUMBER_OF_TOPICS, num_words=NUMBER_OF_WORDS)  # list of (topic_id, word-distribution string) pairs
topic_words
#Article - Topic Distribution for first Article
def get_article_topic_distribution(article):
    return lda.get_document_topics(article)
#Returns a list containing a list of tuple
#Each inner list corresponds to an article and each tuple refers to topicID and its corresponding probability
list(map(get_article_topic_distribution, corpus))  # list() is needed in Python 3, where map is lazy
new_article = """At the dawn of history India started on her unending quest, and trackless centuries are filled with her
striving and the grandeur of her success and her failures. Through good and ill fortune alike she has
never lost sight of that quest or forgotten the ideals which gave her strength. We end today a period of
ill fortune and India discovers herself again. The achievement we celebrate today is but a step, an opening
of opportunity, to the greater triumphs and achievements that await us.
Are we brave enough and wise enough to grasp this opportunity and accept the challenge of the future?"""
DICTIONARY_PATH = "dictionary_of_vocabulary.p"
LDA_MODEL_PATH = "lda.model"
#Cleaning the article
cleaned_text = clean_text(new_article)
article_vocabulary = tokenize(cleaned_text)
#Load model dictionary
model_dictionary = pickle.load(open(DICTIONARY_PATH,"rb"))
#Generate article mapping using IDs associated with vocab
corpus = [model_dictionary.doc2bow(text) for text in [article_vocabulary]]
#Load LDA Model
lda = models.LdaModel.load(LDA_MODEL_PATH)
#Article-Topic Distribution
article_topic_distribution=lda.get_document_topics(corpus[0])
article_topic_distribution
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters and read data
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Barachant <alexandre.barachant@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import MinMaxScaler
from mne import io, pick_types, read_events, Epochs
from mne.datasets import sample
from mne.preprocessing import Xdawn
from mne.decoding import Vectorizer
from mne.viz import tight_layout
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = read_events(event_fname)
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
# Create classification pipeline
clf = make_pipeline(Xdawn(n_components=3),
Vectorizer(),
MinMaxScaler(),
                    LogisticRegression(penalty='l1', solver='liblinear'))  # 'liblinear' supports the L1 penalty
# Get the labels
labels = epochs.events[:, -1]
# Cross validator
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
# Do cross-validation
preds = np.empty(len(labels))
for train, test in cv.split(epochs, labels):
    clf.fit(epochs[train], labels[train])
    preds[test] = clf.predict(epochs[test])
# Classification report
target_names = ['aud_l', 'aud_r', 'vis_l', 'vis_r']
report = classification_report(labels, preds, target_names=target_names)
print(report)
# Normalized confusion matrix
cm = confusion_matrix(labels, preds)
cm_normalized = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
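Dividing each row by its row sum converts raw counts into per-class proportions, so every row of the normalized matrix sums to 1 and the diagonal reads as per-class recall. A toy illustration:

```python
import numpy as np

cm = np.array([[8., 2.],
               [1., 9.]])  # hypothetical 2-class confusion counts
cm_normalized = cm / cm.sum(axis=1)[:, np.newaxis]
print(cm_normalized)  # rows [0.8, 0.2] and [0.1, 0.9]
```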
# Plot confusion matrix
plt.imshow(cm_normalized, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Normalized Confusion matrix')
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the component using KFP SDK
Step2: Sample
Step3: Inspect Input Data
Step4: Clean up the existing output files (optional)
Step5: Example pipeline that uses the component
Step6: Compile the pipeline
Step7: Submit the pipeline for execution
Step8: Inspect the output
|
<ASSISTANT_TASK:>
Python Code:
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
import kfp.components as comp
dataproc_submit_hadoop_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataproc/submit_hadoop_job/component.yaml')
help(dataproc_submit_hadoop_job_op)
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
OUTPUT_GCS_PATH = '<Please put your output GCS path here>'
REGION = 'us-central1'
MAIN_CLASS = 'org.apache.hadoop.examples.WordCount'
INTPUT_GCS_PATH = 'gs://ml-pipeline-playground/shakespeare1.txt'
EXPERIMENT_NAME = 'Dataproc - Submit Hadoop Job'
!gsutil cat $INTPUT_GCS_PATH
!gsutil rm $OUTPUT_GCS_PATH/**
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Hadoop job pipeline',
description='Dataproc submit Hadoop job pipeline'
)
def dataproc_submit_hadoop_job_pipeline(
    project_id = PROJECT_ID,
    region = REGION,
    cluster_name = CLUSTER_NAME,
    main_jar_file_uri = '',
    main_class = MAIN_CLASS,
    args = json.dumps([
        INTPUT_GCS_PATH,
        OUTPUT_GCS_PATH
    ]),
    hadoop_job='',
    job='{}',
    wait_interval='30'
):
    dataproc_submit_hadoop_job_op(
        project_id=project_id,
        region=region,
        cluster_name=cluster_name,
        main_jar_file_uri=main_jar_file_uri,
        main_class=main_class,
        args=args,
        hadoop_job=hadoop_job,
        job=job,
        wait_interval=wait_interval)
pipeline_func = dataproc_submit_hadoop_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
!gsutil cat $OUTPUT_GCS_PATH/*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image classification with TensorFlow Lite Model Maker
Step2: Import the required packages.
Step3: Simple End-to-End Example
Step4: You can replace image_path with your own image folder. To upload data to Colab, use the upload button in the left sidebar (marked with a red rectangle in the image below). Try uploading a zip file and unzipping it; the root file path is the current path.
Step5: Step 2. Customize the TensorFlow model.
Step6: Step 3. Evaluate the model.
Step7: Step 4. Export to TensorFlow Lite model.
Step8: After these simple 4 steps, we can use the exported TensorFlow Lite model file in on-device applications, such as the image classification reference app.
Step 1
Step9: Use DataLoader class to load data.
Step10: Split it to training data (80%), validation data (10%, optional) and testing data (10%).
Step11: Show 25 image examples with labels.
Step12: Step 2
Step13: Have a look at the detailed model structure.
Step14: Step 3
Step15: We could plot the predicted results in 100 test images. Predicted labels with red color are the wrong predicted results while others are correct.
Step16: If the accuracy doesn't meet the app requirement, one could refer to Advanced Usage to explore alternatives such as changing to a larger model, adjusting re-training parameters etc.
Step 4
Step17: See example applications and guides of image classification for more details about how to integrate the TensorFlow Lite model into mobile apps.
Step18: You can also evaluate the tflite model with the evaluate_tflite method.
Step19: Advanced Usage
Step20: Then we export the TensorFlow Lite model with such configuration.
Step21: In Colab, you can download the model named model_fp16.tflite from the left sidebar, same as the uploading part mentioned above.
Step22: Evaluate the newly retrained MobileNetV2 model to see the accuracy and loss in testing data.
Step23: Change to the model in TensorFlow Hub
Step24: Then, by setting parameter model_spec to inception_v3_spec in create method, we could retrain the Inception V3 model.
Step25: Evaluate the newly retrained model with 10 training epochs.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!sudo apt -y install libportaudio2
!pip install -q tflite-model-maker
import os
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader
import matplotlib.pyplot as plt
image_path = tf.keras.utils.get_file(
'flower_photos.tgz',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
extract=True)
image_path = os.path.join(os.path.dirname(image_path), 'flower_photos')
data = DataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
model = image_classifier.create(train_data)
loss, accuracy = model.evaluate(test_data)
model.export(export_dir='.')
image_path = tf.keras.utils.get_file(
'flower_photos.tgz',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
extract=True)
image_path = os.path.join(os.path.dirname(image_path), 'flower_photos')
data = DataLoader.from_folder(image_path)
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
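The two-stage split yields roughly 80/10/10: split(0.8) keeps 80% for training, and split(0.5) halves the remaining 20% into validation and test. With a hypothetical 1000 examples:

```python
n = 1000
n_train = int(n * 0.8)     # first split: 80% for training
n_rest = n - n_train       # 20% left over
n_val = int(n_rest * 0.5)  # second split: half of the rest
n_test = n_rest - n_val
print(n_train, n_val, n_test)  # 800 100 100
```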
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(data.gen_dataset().unbatch().take(25)):
    plt.subplot(5, 5, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(image.numpy(), cmap=plt.cm.gray)
    plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
model = image_classifier.create(train_data, validation_data=validation_data)
model.summary()
loss, accuracy = model.evaluate(test_data)
# A helper function that returns 'red'/'black' depending on whether its two
# input parameters match.
def get_label_color(val1, val2):
    if val1 == val2:
        return 'black'
    else:
        return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction differs from the label provided in the "test" dataset,
# we highlight it in red.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.gen_dataset().unbatch().take(100)):
    ax = plt.subplot(10, 10, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(image.numpy(), cmap=plt.cm.gray)
    predict_label = predicts[i][0][0]
    color = get_label_color(predict_label,
                            test_data.index_to_label[label.numpy()])
    ax.xaxis.label.set_color(color)
    plt.xlabel('Predicted: %s' % predict_label)
plt.show()
model.export(export_dir='.')
model.export(export_dir='.', export_format=ExportFormat.LABEL)
model.evaluate_tflite('model.tflite', test_data)
config = QuantizationConfig.for_float16()
model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
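Float16 quantization roughly halves model size, since each weight drops from 4 bytes to 2 at a small cost in precision. A numpy illustration of the storage effect (not the actual model weights):

```python
import numpy as np

w32 = np.ones(1000, dtype=np.float32)  # stand-in for float32 weights
w16 = w32.astype(np.float16)           # same values at half precision
print(w32.nbytes, w16.nbytes)  # 4000 2000
```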
model = image_classifier.create(train_data, model_spec=model_spec.get('mobilenet_v2'), validation_data=validation_data)
loss, accuracy = model.evaluate(test_data)
inception_v3_spec = image_classifier.ModelSpec(
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
model = image_classifier.create(train_data, model_spec=inception_v3_spec, validation_data=validation_data, epochs=10)
loss, accuracy = model.evaluate(test_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inspecting Array objects
Step2: Get axes
Step3: Get axis names
Step4: Get number of dimensions
Step5: Get length of each dimension
Step6: Get total number of elements of the array
Step7: Get type of internal data (int, float, ...)
Step8: Get size in memory
Step9: Some Useful Functions
Step10: See with_total for more details and examples.
Step11: See where for more details and examples.
Step12: See clip for more details and examples.
Step13: See divnot0 for more details and examples.
Step14: See ratio and rationot0 for more details and examples.
Step15: See percent for more details and examples.
Step16: See diff for more details and examples.
Step17: See growth_rate for more details and examples.
|
<ASSISTANT_TASK:>
Python Code:
from larray import *
# load 'demography_eurostat' dataset
demography_eurostat = load_example_data('demography_eurostat')
# extract the 'population' array from the dataset
population = demography_eurostat.population
population
# Array summary: metadata + dimensions + description of axes
population.info
population.axes
population.axes.names
population.ndim
population.shape
population.size
population.dtype
population.memory_used
population.with_total('gender', label='Total')
# where(condition, value if true, value if false)
where(population < population.mean('time'), -population, population)
# values below 10 millions are set to 10 millions
population.clip(minval=10**7)
# values above 40 millions are set to 40 millions
population.clip(maxval=4*10**7)
# values below 10 millions are set to 10 millions and
# values above 40 millions are set to 40 millions
population.clip(10**7, 4*10**7)
# Using vectors to define the lower and upper bounds
lower_bound = sequence(population.time, initial=5_500_000, inc=50_000)
upper_bound = sequence(population.time, 41_000_000, inc=100_000)
print(lower_bound, '\n')
print(upper_bound, '\n')
population.clip(lower_bound, upper_bound)
divisor = ones(population.axes, dtype=int)
divisor['Male'] = 0
divisor
population / divisor
# we use astype(int) since the divnot0 method
# returns a float array in this case while
# we want an integer array
population.divnot0(divisor).astype(int)
population.ratio('gender')
# which is equivalent to
population / population.sum('gender')
# or, if you want the previous ratios in percents
population.percent('gender')
# calculates 'diff[year+1] = population[year+1] - population[year]'
population.diff('time')
# calculates 'diff[year+2] = population[year+2] - population[year]'
population.diff('time', d=2)
# calculates 'diff[year] = population[year+1] - population[year]'
population.diff('time', label='lower')
population.growth_rate('time')
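In plain numpy terms, diff and growth_rate along the time axis amount to the following (a sketch of the semantics, not larray's implementation):

```python
import numpy as np

pop = np.array([100.0, 110.0, 121.0])  # hypothetical population by year
diff = pop[1:] - pop[:-1]              # diff: year-on-year change
growth = diff / pop[:-1]               # growth_rate: relative change
print(diff)    # [10. 11.]
print(growth)  # [0.1 0.1]
```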
population.shift('time')
# when shift is applied on an (increasing) time axis,
# it effectively brings "past" data into the future
population_shifted = population.shift('time')
stack({'population_shifted_2014': population_shifted[2014], 'population_2013': population[2013]}, 'array')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To iterate over an object x, Python automatically calls iter(x) (i.e. x.__iter__).
Step2: What's actually going on here?
Step3: We can completely abstract away a sequence in favor of an iterable (i.e. we don't need to support indexing anymore)
Step4: While we could have implemented __next__ in Sentence itself, making it an iterator, we will run into the problem of "exhausting an iterator".
|
<ASSISTANT_TASK:>
Python Code:
import reprlib
class Sentence:
    def __init__(self, text):
        self.text = text
        self.words = text.split()
    def __getitem__(self, index):
        return self.words[index]
    def __len__(self):
        # completes the sequence protocol, but not needed for an iterable
        return len(self.words)
    def __repr__(self):
        return 'Sentence(%s)' % reprlib.repr(self.text)
# Sequence
s = Sentence("Dogs will save the world.")
print(len(s), " ", s[3], " ", s)
min(s), max(s)
list(s)
for i in s:
    print(i)
it = iter(s) # Build an iterator from an iterable
while True:
    try:
        nextval = next(it)  # Get the next item in the iterator
        print(nextval)
    except StopIteration:
        del it  # Iterator is exhausted. Release reference and discard.
        break
class SentenceIterator:  # has __next__ and __iter__
    def __init__(self, words):
        self.words = words
        self.index = 0
    def __next__(self):
        try:
            word = self.words[self.index]
        except IndexError:
            raise StopIteration()
        self.index += 1
        return word
    def __iter__(self):
        return self
class Sentence:  # An iterable b/c it has __iter__
    def __init__(self, text):
        self.text = text
        self.words = text.split()
    def __iter__(self):
        return SentenceIterator(self.words)  # Returns an instance of the iterator
    def __repr__(self):
        return 'Sentence(%s)' % reprlib.repr(self.text)
s2 = Sentence("What is data science?")
for i in s2:
    print(i)
s2it = iter(s2)  # Build an iterator from the iterable
print(next(s2it), "\n\n")  # Get the next entry
s2it2 = iter(s2)  # Build a second, independent iterator
print(next(s2it), " ", next(s2it2))  # Advance s2it and s2it2 separately
min(s2), max(s2)
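A common third variant, not shown above: implement __iter__ as a generator function, so each call to iter() returns a fresh, independent iterator without writing a separate iterator class.

```python
import reprlib

class SentenceGen:  # an iterable whose __iter__ is a generator function
    def __init__(self, text):
        self.text = text
        self.words = text.split()

    def __iter__(self):
        for word in self.words:
            yield word  # every iter() call creates a new generator

    def __repr__(self):
        return 'Sentence(%s)' % reprlib.repr(self.text)

s3 = SentenceGen("Dogs will save the world.")
print(list(s3))  # ['Dogs', 'will', 'save', 'the', 'world.']
it1, it2 = iter(s3), iter(s3)
print(next(it1), next(it2))  # Dogs Dogs -- independent iterators
```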
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: We're going to be building a model that recognizes these digits as 5, 0, and 4.
Step3: Working with the images
Step4: The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.
Step5: The large number of 0 values correspond to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values in between.
Step6: Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].
Step8: Indeed, the first label of the test set is 7.
Step9: A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.
Step11: Looks good. Now we know how to index our full set of training and test images.
Step12: As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations.
Step13: The 1-hot encoding looks reasonable.
Step14: Defining the model
Step16: Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.
Step17: Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.
Step18: Training and visualizing results
Step19: Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example.
Step20: Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities.
Step21: As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels.
Step22: Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class.
Step23: Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch.
Step25: Now let's wrap this up into our scoring function.
Step26: We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically.
Step27: The error seems to have gone down. Let's evaluate the results using the test set.
Step28: We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from IPython.display import Image
import base64
Image(data=base64.decodestring("iVBORw0KGgoAAAANSUhEUg..."))  # base64-encoded PNG of sample MNIST digits (payload truncated)
EcuBC0ezcQSZVKYktzFEW+3sfQ4natRvu9eMTS9F7IvHo+m/2fb6LNuCc0WsW+mzHq9j6hgE9YCHp5tkez2EAVjlMOmyUlU2Lis8ygVR0rykyoltPZCaOY9fr32Qp50X6xi7pWCGbsHBvwLgGIcddljGxvcsjOU1GseyiKjJQWydpiqNsBlei85BfhNxeJunVCl31x0jBOMAjJ9jRC3OEERDS7QMI0qQohIYgLSq7FJuMZbi9WZA7kRbvFAWx5Dyy449mjEDG/dyDPW4VSiy2iNvBcCSUdxyyy35OYHrqJUx843j8I/qQpA074BVVdR1x+AIHCIiIGewsqIuds41tSSlOxeOFHuOQ/E+2zPEuFYVKM32U3RMvGy44YbZMTg2B2+GOIXXJcjpR9lkUy/QyZ7GUU8zAD9RCiuR0oQYVv1IMAk7qFL+rjkGg7GZQPLufffdN69QKJtkCAKKjNGu1p7gMgWDYEDRpkpAmu0rnMLehie/RavcI49Sr1ZW0w6V91ac/IsxmdHPB0U5pQ+4+TExDudNUhPufnaKIn7N6m2k9h11jKLRqP+UQJb2eHh4uYjK0LW1D0MpCq0NR4g24RTR/0hCdvM6/m14FtljeTL4D/liedFeO7LYcyh7eMGDY8X16IM8Vp9kWjj2GwWG5IZb2FKVOHTMMTCvDKBgD2Z22223bNynnnpqVrZXBFxjQDZUFJiwIqKHN8qHO+64IxvN/fffn9vG/VWC0UpfeC5uZMEbg/ctM/8SzYOxZ599Nhs4ebSx0ECpcDFvMCdRggkesoQ+zaHU0N4EgAEnue2227JTON+LgaEVDFu5h+w2Wdl33GFkEUIQqYIqdYwwbJGO8q2xOydqUiTFWpJVPzsuUwhlzzFETxlGdFSCqaMB4XwvUzgKWU3AyW4uwFns4QMbilUyxbq8p/4cw3UEB8FDGQUDx/acqB8zRS2dw5qthe3VatPKucocg6JiYu3lP2nfawvekKVITzgJQLH24QTBtPZeE2D89957b27jwZ1IwIm8R2OMWHmJ+3pxTzaK8l+HyMrgTzrppMxqOIEsGoZvz0nsyWiliRMUl2G9aOk6POyLZVUvYtBpniL4wA1m9lVSW46BOQqKpTLK9FnUsxftvW4swssa4dkhCGFCMNfcp08lhM9KKc4h0obgsa8ShHb6Cv5DJnu8IwHB9TB852DkOlzIRV6kXbSVMfQj48BWdhE0TLr1Fe3zQR/+gRMK5yjuq4KjZccQ2SlYjexHmCnSkiLjtsesmlnpQ5naFo1A5GMAHoJxBI709ttv54ygntZWmWEcQMS9VQleRT9kNmfAG0P3HRPGbHnVudg4gEyJOAYiE0wikHAAcxHyxndO4KI/WHEK/Qzo7wjAXfaFNdurikaNtIERRTqmYIYdE2tGEs8hfJ8iFB/3xV67MCjG8NZbb6Unn3wyC+XfDxfnDxFp496qhK6qn5CDA5twK/fIRH5Gb0MMOhxCFgkKjOBoHqKEkmWvueaanG04iTHcP3CKQO0/e3ZhgceP2smqcKyKRuUYlEKhPDL+d5z1c4qVFTDnmBIZMwZ9DiKAzTmvCetPNFR7W7fXXt/KLddqTcyjr17bRybkEF5XiQhPHnMuDlF07MCB3I49l4EDxTrnfsFBJBxQbQSKeGoROqjdurWzIzoGJqRxS2KUf/rpp2flcRDRjRKVCdpFhCwz7rOVKE5z++235/7uuuuuXDq5P5yKEY0np8B3TKb9K1/vLTF0/7MiJtyRPYrq4fx+7R2e7vFDDzDyfx1goPwcUGMEYG/rFI3oGAYW0UUyimQIcRwGzbgpVsZAUTYE065xCtc5GUeSHTyg4kzKs/FKoSBljyhvTz6y2gseZAwlwgI+cNBGtpV9ZRj4BobjFY9O8g0bQcXWaRpxBE5hHuFnJ0XB6dOn56ge2QGDlK2dFSSG4b8kxVzEdSWGVxgYQLzrxJkIGgbTaUE73b9MZ/KNfIMOJpdcckndYZWmFAwv+wgy
dW/o8wsCK3xnz56dFzx8oxPGtk7QiI5h0FBaeGzRKYIpjDN2ig6lB9OiprmI60qNieIMIXvsQy7yotjH9eI+2hbPDY4bI8D+2JdnWTYY+iwDs78qaUTHEM0sI1pClAVMnqX9ImGQszB6DHoNOLzZNZlGRlEq9JNB9JOsRXvoxDGnsDTudwFUHTNmzMjDqEaU9xYvGgWiZnka0TEo16CeNyCM1SLtwmt5cNEoCOUa5xjQAIFWEGBP5rbKdTRr1qwcfGUMthXVTCt917pnRMdwE6ZiQm0JckADBMYCgWLwtXjTSeq/d5Y7ieag7wmDwMAxJowqB4JUicDAMapEc9DXhEFgcjxcM7vvR4on7bHS1q84WNkpUr/iEL+aOLRw4cIlQCmuIhUBmsjHlpQ9c7EmzjEsN1vd6DeCg8UVT+qRd7b6EQey8wMT+6El8RSu36xhIO8AgQYI9F94bADG4NIAgUDg/wHX+3lgThDIegAAAABJRU5ErkJggg==".encode('utf-8')), embed=True)
import os
from six.moves.urllib.request import urlretrieve
SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'
WORK_DIRECTORY = "/tmp/mnist-data"
def maybe_download(filename):
'''A helper to download the data files if not present.'''
if not os.path.exists(WORK_DIRECTORY):
os.mkdir(WORK_DIRECTORY)
filepath = os.path.join(WORK_DIRECTORY, filename)
if not os.path.exists(filepath):
filepath, _ = urlretrieve(SOURCE_URL + filename, filepath)
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
else:
print('Already downloaded', filename)
return filepath
train_data_filename = maybe_download('train-images-idx3-ubyte.gz')
train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')
test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')
test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')
import gzip, binascii, struct, numpy
import matplotlib.pyplot as plt
with gzip.open(test_data_filename) as f:
# Print the header fields.
for field in ['magic number', 'image count', 'rows', 'columns']:
# struct.unpack reads the binary data provided by f.read.
# The format string '>i' decodes a big-endian integer, which
# is the encoding of the data.
print(field, struct.unpack('>i', f.read(4))[0])
# Read the first 28x28 set of pixel values.
# Each pixel is one byte, [0, 255], a uint8.
buf = f.read(28 * 28)
image = numpy.frombuffer(buf, dtype=numpy.uint8)
# Print the first few values of image.
print('First 10 pixels:', image[:10])
%matplotlib inline
# We'll show the image and its pixel value histogram side-by-side.
_, (ax1, ax2) = plt.subplots(1, 2)
# To interpret the values as a 28x28 image, we need to reshape
# the numpy array, which is one dimensional.
ax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(image, bins=20, range=[0,255]);
# Let's convert the uint8 image to 32 bit floats and rescale
# the values to be centered around 0, between [-0.5, 0.5].
#
# We again plot the image and histogram to check that we
# haven't mangled the data.
scaled = image.astype(numpy.float32)
scaled = (scaled - (255 / 2.0)) / 255
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(scaled, bins=20, range=[-0.5, 0.5]);
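The rescaling above maps uint8 pixel values from [0, 255] onto [-0.5, 0.5]; a quick standalone check of the endpoints and midpoint:

```python
import numpy as np

pixels = np.array([0, 128, 255], dtype=np.uint8)
scaled = (pixels.astype(np.float32) - (255 / 2.0)) / 255

# 0 maps to -0.5, 255 maps to 0.5, and 128 lands just above 0.
print(scaled)
```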
with gzip.open(test_labels_filename) as f:
# Print the header fields.
for field in ['magic number', 'label count']:
print(field, struct.unpack('>i', f.read(4))[0])
print('First label:', struct.unpack('B', f.read(1))[0])
IMAGE_SIZE = 28
PIXEL_DEPTH = 255
def extract_data(filename, num_images):
'''Extract the images into a 4D tensor [image index, y, x, channels].
For MNIST data, the number of channels is always 1.
Values are rescaled from [0, 255] down to [-0.5, 0.5].
'''
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and dimensions; we know these values.
bytestream.read(16)
buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images)
data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1)
return data
train_data = extract_data(train_data_filename, 60000)
test_data = extract_data(test_data_filename, 10000)
print('Training data shape', train_data.shape)
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys);
ax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys);
NUM_LABELS = 10
def extract_labels(filename, num_images):
'''Extract the labels into a 1-hot matrix [image index, label index].'''
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and count; we know these values.
bytestream.read(8)
buf = bytestream.read(1 * num_images)
labels = numpy.frombuffer(buf, dtype=numpy.uint8)
# Convert to dense 1-hot representation.
return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32)
train_labels = extract_labels(train_labels_filename, 60000)
test_labels = extract_labels(test_labels_filename, 10000)
print('Training labels shape', train_labels.shape)
print('First label vector', train_labels[0])
print('Second label vector', train_labels[1])
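The one-line dense conversion above works via NumPy broadcasting; a minimal standalone sketch with hypothetical label values:

```python
import numpy as np

NUM_LABELS = 10
labels = np.array([2, 0, 9], dtype=np.uint8)  # hypothetical label bytes

# Comparing a (NUM_LABELS,) range against a (3, 1) column broadcasts to a
# (3, NUM_LABELS) boolean matrix with True at each label's index.
one_hot = (np.arange(NUM_LABELS) == labels[:, None]).astype(np.float32)

print(one_hot.shape)  # (3, 10)
print(one_hot[0])     # 1.0 at index 2, zeros elsewhere
```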
VALIDATION_SIZE = 5000
validation_data = train_data[:VALIDATION_SIZE, :, :, :]
validation_labels = train_labels[:VALIDATION_SIZE]
train_data = train_data[VALIDATION_SIZE:, :, :, :]
train_labels = train_labels[VALIDATION_SIZE:]
train_size = train_labels.shape[0]
print('Validation shape', validation_data.shape)
print('Train size', train_size)
import tensorflow as tf
# We'll bundle groups of examples during training for efficiency.
# This defines the size of the batch.
BATCH_SIZE = 60
# We have only one channel in our grayscale images.
NUM_CHANNELS = 1
# The random seed that defines initialization.
SEED = 42
# This is where training samples and labels are fed to the graph.
# These placeholder nodes will be fed a batch of training data at each
# training step, which we'll write once we define the graph structure.
train_data_node = tf.placeholder(
tf.float32,
shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
train_labels_node = tf.placeholder(tf.float32,
shape=(BATCH_SIZE, NUM_LABELS))
# For the validation and test data, we'll just hold the entire dataset in
# one constant node.
validation_data_node = tf.constant(validation_data)
test_data_node = tf.constant(test_data)
# The variables below hold all the trainable weights. For each, the
# parameter defines how the variables will be initialized.
conv1_weights = tf.Variable(
tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32.
stddev=0.1,
seed=SEED))
conv1_biases = tf.Variable(tf.zeros([32]))
conv2_weights = tf.Variable(
tf.truncated_normal([5, 5, 32, 64],
stddev=0.1,
seed=SEED))
conv2_biases = tf.Variable(tf.constant(0.1, shape=[64]))
fc1_weights = tf.Variable( # fully connected, depth 512.
tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],
stddev=0.1,
seed=SEED))
fc1_biases = tf.Variable(tf.constant(0.1, shape=[512]))
fc2_weights = tf.Variable(
tf.truncated_normal([512, NUM_LABELS],
stddev=0.1,
seed=SEED))
fc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS]))
print('Done')
def model(data, train=False):
'''The Model definition.'''
# 2D convolution, with 'SAME' padding (i.e. the output feature map has
# the same size as the input). Note that {strides} is a 4D array whose
# shape matches the data layout: [image index, y, x, depth].
conv = tf.nn.conv2d(data,
conv1_weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Bias and rectified linear non-linearity.
relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))
# Max pooling. The kernel size spec ksize also follows the layout of
# the data. Here we have a pooling window of 2, and a stride of 2.
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
conv = tf.nn.conv2d(pool,
conv2_weights,
strides=[1, 1, 1, 1],
padding='SAME')
relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Reshape the feature map cuboid into a 2D matrix to feed it to the
# fully connected layers.
pool_shape = pool.get_shape().as_list()
reshape = tf.reshape(
pool,
[pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
# Fully connected layer. Note that the '+' operation automatically
# broadcasts the biases.
hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)
# Add a 50% dropout during training only. Dropout also scales
# activations such that no rescaling is needed at evaluation time.
if train:
hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)
return tf.matmul(hidden, fc2_weights) + fc2_biases
print('Done')
# Training computation: logits + cross-entropy loss.
logits = model(train_data_node, True)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=train_labels_node, logits=logits))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +
tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))
# Add the regularization term to the loss.
loss += 5e-4 * regularizers
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
batch = tf.Variable(0)
# Decay once per epoch, using an exponential schedule starting at 0.01.
learning_rate = tf.train.exponential_decay(
0.01, # Base learning rate.
batch * BATCH_SIZE, # Current index into the dataset.
train_size, # Decay step.
0.95, # Decay rate.
staircase=True)
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
0.9).minimize(loss,
global_step=batch)
# Predictions for the minibatch, validation set and test set.
train_prediction = tf.nn.softmax(logits)
# We'll compute them only once in a while by calling their {eval()} method.
validation_prediction = tf.nn.softmax(model(validation_data_node))
test_prediction = tf.nn.softmax(model(test_data_node))
print('Done')
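The staircase schedule configured above drops the learning rate once per epoch; a plain-Python sketch of the same arithmetic (an illustration of the formula, not TensorFlow's implementation):

```python
def staircase_decay(base_rate, step, batch_size, decay_steps, decay_rate):
    # With staircase=True the exponent is an integer, so the rate drops
    # once per decay_steps examples instead of decaying continuously.
    exponent = (step * batch_size) // decay_steps
    return base_rate * decay_rate ** exponent

# e.g. with 55000 training examples and a batch of 60:
print(staircase_decay(0.01, 0, 60, 55000, 0.95))     # 0.01 during epoch 0
print(staircase_decay(0.01, 1000, 60, 55000, 0.95))  # drops by a factor of 0.95 in epoch 1
```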
# Create a new interactive session that we'll use in
# subsequent code cells.
s = tf.InteractiveSession()
# Use our newly created session as the default for
# subsequent operations.
s.as_default()
# Initialize all the variables we defined above.
tf.global_variables_initializer().run()
BATCH_SIZE = 60
# Grab the first BATCH_SIZE examples and labels.
batch_data = train_data[:BATCH_SIZE, :, :, :]
batch_labels = train_labels[:BATCH_SIZE]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
print('Done')
print(predictions[0])
# The highest probability in the first entry.
print('First prediction', numpy.argmax(predictions[0]))
# But, predictions is actually a list of BATCH_SIZE probability vectors.
print(predictions.shape)
# So, we'll take the highest probability for each vector.
print('All predictions', numpy.argmax(predictions, 1))
print('Batch labels', numpy.argmax(batch_labels, 1))
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1))
total = predictions.shape[0]
print(float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
def error_rate(predictions, labels):
'''Return the error rate and confusions.'''
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1))
total = predictions.shape[0]
error = 100.0 - (100 * float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
return error, confusions
print('Done')
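As a quick sanity check, the helper can be exercised on synthetic predictions (the function is redefined here so the snippet runs on its own):

```python
import numpy as np

def error_rate(predictions, labels):
    '''Return the error rate and confusions (same logic as above).'''
    correct = np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
    total = predictions.shape[0]
    error = 100.0 - (100 * float(correct) / float(total))
    confusions = np.zeros([10, 10], np.float32)
    for predicted, actual in zip(np.argmax(predictions, 1), np.argmax(labels, 1)):
        confusions[predicted, actual] += 1
    return error, confusions

preds = np.eye(10)                # predict class i for example i
labels = np.eye(10)
labels[0] = 0
labels[0, 1] = 1                  # mislabel the first example as class 1
err, conf = error_rate(preds, labels)
print(err)         # 10.0 -- one of ten examples is wrong
print(conf[0, 1])  # 1.0 -- predicted 0 where the label was 1
```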
# Train for one epoch over our training set.
steps = train_size // BATCH_SIZE
for step in range(steps):
# Compute the offset of the current minibatch in the data.
# Note that we could use better randomization across epochs.
offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)
batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :]
batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
# Print out the loss periodically.
if step % 100 == 0:
error, _ = error_rate(predictions, batch_labels)
print('Step %d of %d' % (step, steps))
print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr))
print('Validation error: %.1f%%' % error_rate(
validation_prediction.eval(), validation_labels)[0])
test_error, confusions = error_rate(test_prediction.eval(), test_labels)
print('Test error: %.1f%%' % test_error)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
for i, cas in enumerate(confusions):
for j, count in enumerate(cas):
if count > 0:
xoff = .07 * len(str(count))
plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white')
plt.xticks(numpy.arange(NUM_LABELS))
plt.hist(numpy.argmax(test_labels, 1));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can get some suggested parameters and plot some data! In experiments it is typically not easy to achieve a high sampling rate (many time points), so our suggested time points match what is feasible in actual experiments. However, it is also good to have a look at what the model does when given a higher sampling rate (more time points).
Step2: In this set-up, we only see one of the state variables representing the Hes1 concentration. However, it is also interesting to see the dynamics of the whole system (i.e. by inspecting all state variables). This can be done by simulate_all_states(parameters, times) provided in this model.
Step3: We can explore the model further by varying the initial conditions
Step4: Then by varying the parameters
Step5: We can then try to set up and run an MCMC routine with suggested values which were originally obtained from real experimental results (taken from [1])
Step6: We can also compare predictions made with these values to what we found
Step7: This is interesting, as it shows that with the sparse sampling points we cannot retrieve the same dynamical behaviour as the literature model. Our model predictions from the MCMC posterior actually give a closer alignment to the data points.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pints
import pints.toy
model = pints.toy.Hes1Model()
print('Outputs: ' + str(model.n_outputs()))
print('Parameters: ' + str(model.n_parameters()))
times = model.suggested_times()
smooth_times = np.linspace(times[0], times[-1], 1000)
parameters = model.suggested_parameters()
print(parameters)
plt.figure()
plt.xlabel('Time [minute]')
plt.ylabel('Hes1 concentration')
plt.plot(times, model.simulate(parameters, times), 'o', label='Sparse sampling')
plt.plot(smooth_times, model.simulate(parameters, smooth_times), '--', label='Underlying model')
plt.legend()
plt.show()
all_states = model.simulate_all_states(parameters, smooth_times)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('m')
ax.set_ylabel('p1')
ax.set_zlabel('p2')
plt.plot(all_states[:, 0], all_states[:, 1], all_states[:, 2])
plt.show()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('m')
ax.set_ylabel('p1')
ax.set_zlabel('p2')
suggested_init = np.array(model.initial_conditions())
suggested_hidden_param = np.array(model.fixed_parameters())
for x in [1.5, 1.6, 2, 2.5, 3, 4, 5]:
model.set_initial_conditions(suggested_init * x)
model.set_fixed_parameters(list(suggested_hidden_param[0:-1] / x) + [suggested_hidden_param[-1]])
all_states = model.simulate_all_states(parameters, smooth_times)
plt.plot(all_states[:, 0], all_states[:, 1], all_states[:, 2])
plt.show()
# Reset initial conditions
model.set_initial_conditions(suggested_init)
model.set_fixed_parameters(suggested_hidden_param)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('m')
ax.set_ylabel('p1')
ax.set_zlabel('p2')
# Try some different parameter sets {P0, v, k1, h}
Try_params = [[0.5, 0.5, 0.6, 4.0],
[0.5, 0.1, 0.01, 5.5],
[0.4, 0.01, 0.01, 6.0],
[0.05, 0.05, 0.01, 7.0]]
# Compare with suggested parameters
all_states = model.simulate_all_states(parameters, smooth_times)
plt.plot(all_states[:, 0], all_states[:, 1], all_states[:, 2], label='Suggested parameters')
for i, try_param in enumerate(Try_params):
all_states = model.simulate_all_states(try_param, smooth_times)
plt.plot(all_states[:, 0], all_states[:, 1], all_states[:, 2], label='Trial parameters %d'%(i+1))
plt.legend()
plt.show()
# Get suggested values
values = model.suggested_values()
plt.figure()
plt.xlabel('Time [minute]')
plt.ylabel('Hes1 concentration')
plt.plot(smooth_times, model.simulate(parameters, smooth_times), label='Literature model')
plt.plot(times, values, '*', label='Suggested values')
plt.legend()
plt.show()
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log posterior
lower = list(parameters * 0.1) + [0.1]
upper = list(parameters * 10) + [5]
log_prior = pints.UniformLogPrior(lower, upper)
log_likelihood = pints.GaussianLogLikelihood(problem)
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Run MCMC on the noisy data
x0 = [list(parameters*1.2) + [0.5]]*3
mcmc = pints.MCMCController(log_posterior, 3, x0)
mcmc.set_max_iterations(10000)
chains = mcmc.run()
results = pints.MCMCSummary(
chains=chains,
time=mcmc.time(),
parameter_names=["P0", "nu", "k_1", "h", "sigma"])
print(results)
import pints.plot
pints.plot.trace(chains, ref_parameters=list(parameters)+[0.75])
plt.show()
# Select first chain
chain1 = chains[0]
# Remove burn-in
chain1 = chain1[5000:]
# Plot some predictions with these samples
plt.figure()
plt.xlabel('Time [minute]')
plt.ylabel('Hes1 concentration')
for posterior_param in chain1[::25]:
model_prediction = model.simulate(posterior_param[:-1], smooth_times)
plt.plot(smooth_times, model_prediction, c='Gray', alpha=0.3)
plt.plot(smooth_times, model_prediction, c='Gray', alpha=0.3, label='Model prediction')
plt.plot(times, values, 'kx', label='Original data')
plt.plot(smooth_times, model.simulate(parameters, smooth_times), label='Literature model')
plt.legend()
plt.show()
transformation = pints.RectangularBoundariesTransformation(lower, upper)
x0 = [list(parameters) + [0.5]]*3
mcmc = pints.MCMCController(log_posterior, 3, x0, method=pints.HamiltonianMCMC,
transformation=transformation)
mcmc.set_max_iterations(100)
chains = mcmc.run()
results = pints.MCMCSummary(
chains=chains,
time=mcmc.time(),
parameter_names=["P0", "nu", "k_1", "h", "sigma"])
print(results)
pints.plot.trace(chains, ref_parameters=list(parameters)+[0.75])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Exercise 2
Step3: Programming practice task
Step4: Exercise 2
|
<ASSISTANT_TASK:>
Python Code:
def cria_matriz(num_linhas, num_colunas):
matriz = [] # empty list
for i in range(num_linhas):
linha = []
for j in range(num_colunas):
linha.append(0)
matriz.append(linha)
for i in range(num_colunas):
for j in range(num_linhas):
matriz[j][i] = int(input("Digite o elemento [" + str(j) + "][" + str(i) + "]: "))
return matriz
x = cria_matriz(2, 3)
x
def tarefa(mat):
dim = len(mat)
for i in range(dim):
print(mat[i][dim-1-i], end=" ")
mat = [[1,2,3],[4,5,6],[7,8,9]]
tarefa(mat)
# Note: the print argument (end=" ") changes print's default line ending,
# which is a newline. With this change, the cursor stays on the same
# line waiting for the next output.
def dimensoes(A):
'''Receives a matrix as a parameter and prints its dimensions in the format iXj.
Note: i = number of rows, j = number of columns.
Example:
>>> minha_matriz = [[1],
[2],
[3]
]
>>> dimensoes(minha_matriz)
>>> 3X1
'''
lin = len(A)
col = len(A[0])
return print("%dX%d" % (lin, col))
matriz1 = [[1], [2], [3]]
dimensoes(matriz1)
matriz2 = [[1, 2, 3], [4, 5, 6]]
dimensoes(matriz2)
def soma_matrizes(m1, m2):
def dimensoes(A):
lin = len(A)
col = len(A[0])
return ((lin, col))
if dimensoes(m1) != dimensoes(m2):
return False
else:
matriz = []
for i in range(len(m1)):
linha = []
for j in range(len(m1[0])):
linha.append(m1[i][j] + m2[i][j])
matriz.append(linha)
return matriz
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2)
m1 = [[1], [2], [3]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2)
def imprime_matriz(A):
for i in range(len(A)):
for j in range(len(A[i])):
print(A[i][j])
minha_matriz = [[1], [2], [3]]
imprime_matriz(minha_matriz)
minha_matriz = [[1, 2, 3], [4, 5, 6]]
imprime_matriz(minha_matriz)
def sao_multiplicaveis(m1, m2):
'''Receives two matrices as parameters and returns True if they are multiplicable (the number of
columns of the first equals the number of rows of the second), and False otherwise.
'''
if len(m1[0]) == len(m2):
return True
else:
return False
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
sao_multiplicaveis(m1, m2)
m1 = [[1], [2], [3]]
m2 = [[1, 2, 3]]
sao_multiplicaveis(m1, m2)
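Once two matrices pass the multiplicability check, the product itself can be computed with nested loops; a minimal sketch (`multiplica` is a hypothetical helper, not part of the exercises):

```python
def multiplica(m1, m2):
    # Hypothetical helper: assumes the matrices are multiplicable, i.e. the
    # number of columns of m1 equals the number of rows of m2.
    resultado = []
    for i in range(len(m1)):
        linha = []
        for j in range(len(m2[0])):
            linha.append(sum(m1[i][k] * m2[k][j] for k in range(len(m2))))
        resultado.append(linha)
    return resultado

print(multiplica([[1, 2, 3]], [[1], [2], [3]]))  # [[14]]
```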
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Geometry comparison of LSLGA galaxies
Step2: Compare the various sizes
|
<ASSISTANT_TASK:>
Python Code:
import os, glob
import fitsio
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import vstack, Table, hstack
import seaborn as sns
sns.set(context='talk', style='ticks', font_scale=1.0)
%matplotlib inline
lslgaver = b'L6'
lslgafile = '/global/cfs/cdirs/desi/users/ioannis/LSLGA-data-DR9fg/LSLGA-model-v6.0.fits'
#lslgafile = '/global/project/projectdirs/cosmo/staging/largegalaxies/v6.0/LSLGA-model-v6.0.fits'
#lslgafile = os.getenv('LARGEGALAXIES_CAT')
lslga = Table(fitsio.read(lslgafile))
print('Read {} galaxies from {}'.format(len(lslga), lslgafile))
#lslga
plslga = lslga[lslga['REF_CAT'] == lslgaver]
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(6, 14))
ax1.hexbin(np.log10(plslga['D25']), np.log10(plslga['D25_ORIG']), mincnt=1,
cmap=plt.cm.Blues_r)
#ax1.scatter(np.log10(lslga['D25']), np.log10(lslga['D25_ORIG']), s=15)
ax1.set_xlim(-0.7, 1)
ax1.set_ylim(-0.7, 1)
ax1.plot([-1, 1], [-1, 1], color='k', ls='--', lw=3, alpha=0.7)
ax1.set_xlabel(r'$\log_{10}\ D_{25}$ [ellipse-fit, arcmin]')
ax1.set_ylabel(r'$\log_{10}\ D_{25}$ [Hyperleda, arcmin]')
ax2.hexbin(plslga['PA'], plslga['PA_ORIG'], mincnt=1,
cmap=plt.cm.Blues_r)
ax2.set_xlabel('PA [ellipse-fit, degrees]')
ax2.set_ylabel('PA [Hyperleda, degrees]')
ax2.plot([0, 180], [0, 180], color='k', ls='--', lw=3, alpha=0.7)
ax3.hexbin(plslga['BA'], plslga['BA_ORIG'], mincnt=1,
cmap=plt.cm.Blues_r)
ax3.set_xlabel('b/a [ellipse-fit]')
ax3.set_ylabel('b/a [Hyperleda]')
ax3.plot([0, 1], [0, 1], color='k', ls='--', lw=3, alpha=0.7)
fig.subplots_adjust(hspace=0.3)
lslga[(lslga['LSLGA_ID'] > 0) * (lslga['D25'] == 0)]
ff = Table.read('/global/cfs/cdirs/desi/users/ioannis/LSLGA-data-DR9fg/LSLGA-reject-v6.0.fits')
ff
rr = Table.read('/global/cfs/cdirs/desi/users/ioannis/LSLGA-data-DR9fg/LSLGA-inspect-v6.0.fits')
rr
plslga.colnames
def qa_radii(ax, xx, yy, xlim, ylim):
ax.hexbin(np.log10(xx), np.log10(yy), mincnt=1, cmap=plt.cm.Blues_r)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
#ax.plot([-1, 1], [-1, 1], color='k', ls='--', lw=3, alpha=0.7)
#ax.set_xlabel(r'$\log_{10}\ D_{25}$ [ellipse-fit, arcmin]')
#ax.set_ylabel(r'$\log_{10}\ D_{25}$ [Hyperleda, arcmin]')
fig, ax = plt.subplots(2, 2, figsize=(12, 8), sharey=True, sharex=True)
for ax1, typ in zip(ax.flat, ['REX', 'DEV', 'EXP', 'SER']):
ww = np.where(plslga['TYPE'] == typ)[0]
xx = plslga['RADIUS_SB26'][ww] / 60
yy = plslga['RADIUS_SB26'][ww] / plslga['RADIUS_SB25'][ww]
print(typ, np.median(yy))
ax1.set_xlim(-1, 0.7)
ax1.set_ylim(-0.1, 0.3)
ax1.axhline(y=0, ls='-', lw=2, color='gray')
ax1.axhline(y=np.log10(1.25), ls='--', lw=2, color='red')
if typ == 'SER':
ax1.hexbin(np.log10(xx), np.log10(yy), mincnt=1, cmap=plt.cm.Blues_r)
else:
ax1.scatter(np.log10(xx), np.log10(yy), s=5, color='cornflowerblue')
ax1.text(0.9, 0.9, '{} (N={})'.format(typ, len(ww)),
transform=ax1.transAxes, ha='right', va='center',
fontsize=14)
ax[0, 0].set_ylabel(r'$\log_{10}\ R(26) / R(25)$')
ax[1, 0].set_ylabel(r'$\log_{10}\ R(26) / R(25)$')
ax[1, 0].set_xlabel(r'$\log_{10}\ R(26)$ (arcmin)')
ax[1, 1].set_xlabel(r'$\log_{10}\ R(26)$ (arcmin)')
fig.subplots_adjust(hspace=0.1, wspace=0.1)
print('Median over all types: {:.4f}'.format(np.median(plslga['RADIUS_SB26'] / plslga['RADIUS_SB25'])))
fig, ax = plt.subplots(2, 2, figsize=(12, 8), sharey=True, sharex=True)
for ax1, typ in zip(ax.flat, ['REX', 'DEV', 'EXP', 'SER']):
ww = np.where(plslga['TYPE'] == typ)[0]
xx = plslga['RADIUS_SB26'][ww] / 60
yy = plslga['RADIUS_SB26'][ww] / plslga['RADIUS_SB24'][ww]
print(typ, np.median(yy))
ax1.set_xlim(-1, 0.7)
ax1.set_ylim(-0.1, 0.6)
ax1.axhline(y=0, ls='-', lw=2, color='gray')
ax1.axhline(y=np.log10(1.25), ls='--', lw=2, color='red')
if typ == 'SER':
ax1.hexbin(np.log10(xx), np.log10(yy), mincnt=1, cmap=plt.cm.Blues_r)
else:
ax1.scatter(np.log10(xx), np.log10(yy), s=5, color='cornflowerblue')
ax1.text(0.9, 0.9, '{} (N={})'.format(typ, len(ww)),
transform=ax1.transAxes, ha='right', va='center',
fontsize=14)
ax[0, 0].set_ylabel(r'$\log_{10}\ R(26) / R(24)$')
ax[1, 0].set_ylabel(r'$\log_{10}\ R(26) / R(24)$')
ax[1, 0].set_xlabel(r'$\log_{10}\ R(26)$ (arcmin)')
ax[1, 1].set_xlabel(r'$\log_{10}\ R(26)$ (arcmin)')
fig.subplots_adjust(hspace=0.1, wspace=0.1)
print('Median over all types: {:.4f}'.format(np.median(plslga['RADIUS_SB26'] / plslga['RADIUS_SB24'])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The park's recreational visits are highly seasonal, with the peak season in July. The park tracks monthly averages from the last four years on its website. A simple approach to predicting next year's visitors is to use these averages.
Step2: ## Monthly Average Accuracy
|
<ASSISTANT_TASK:>
Python Code:
# load and plot dataset
from pandas import read_csv
from pandas import datetime
from matplotlib import pyplot
# load dataset
def parser(x):
return datetime.strptime(x, '%Y-%m-%d')
series = read_csv('../data/yellowstone-visitors.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
# summarize first few rows
print(series.head())
# line plot
series.plot()
pyplot.show()
prev_4_years = series[-60:-12]
last_year = series[-12:]
pred = prev_4_years.groupby(by=prev_4_years.index.month).mean()
pred.plot()
act = last_year.groupby(by=last_year.index.month).mean()
act.plot()
pyplot.show()
from math import sqrt
from sklearn.metrics import mean_squared_error
rmse = sqrt(mean_squared_error(act, pred))
print('Test RMSE: %.3f' % rmse)
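For reference, RMSE can also be computed by hand without scikit-learn; a minimal sketch on hypothetical monthly values:

```python
import math

actual    = [150.0, 120.0, 130.0]  # hypothetical monthly visitor counts (thousands)
predicted = [140.0, 125.0, 135.0]

# Root-mean-square error: average the squared residuals, then take the root.
mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
rmse = math.sqrt(mse)
print(round(rmse, 3))  # 7.071
```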
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Write a little function to load HiCData obeject
Step2: Load data
Step3: DRY normalized
Step4: Plot correlations
Step5: DRY
Step6: Repeat at 1 Mb resolution
Step7: ICE
Step8: DRY
Step9: Compare eigenvectors
Step10: DRY
Step11: Merge Hi-C experiments
Step12: Normalizing merged data
Step13: All in one loop to
|
<ASSISTANT_TASK:>
Python Code:
from pytadbit.mapping.analyze import eig_correlate_matrices, correlate_matrices
from pytadbit import load_hic_data_from_reads
from cPickle import load
from matplotlib import pyplot as plt
reso = 1000000
base_path = 'results/{0}/03_filtering/valid_reads12_{0}.tsv'
bias_ice_path = 'results/{1}/04_normalizing/biases_{0}_{1}.pick'
bias_dry_path = 'results/{1}/04_normalizing/biases_dryhic_{0}_{1}.tsv'
bads_path = 'results/{1}/04_normalizing/bad_columns_{0}_{1}.pick'
def my_load_hic_data(renz, reso, which='ice'):
hic_data = load_hic_data_from_reads(base_path.format(renz), resolution=reso)
if which=='ice':
hic_data.bias = load(open(bias_ice_path.format(reso, renz)))
else:
hic_data.bias = dict([(int(l.split()[0]), float(l.split()[1]))
for l in open(bias_dry_path.format(reso, renz))])
hic_data.bads = load(open(bads_path.format(reso, renz)))
return hic_data
renz1 = 'HindIII'
renz2 = 'MboI'
hic_data1_ice = my_load_hic_data(renz1, reso, which='ice')
hic_data2_ice = my_load_hic_data(renz2, reso, which='ice')
hic_data1_dry = my_load_hic_data(renz1, reso, which='dry')
hic_data2_dry = my_load_hic_data(renz2, reso, which='dry')
%matplotlib inline
## this part is to "tune" the plot ##
plt.figure(figsize=(9, 6))
axe = plt.subplot()
axe.grid()
axe.set_xticks(range(0, 55, 5))
axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45)
#####################################
_ = correlate_matrices(hic_data1_ice, hic_data2_ice, max_dist=50, show=False, axe=axe, normalized=True)
## this part is to "tune" the plot ##
plt.figure(figsize=(9, 6))
axe = plt.subplot()
axe.grid()
axe.set_xticks(range(0, 55, 5))
axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45)
#####################################
_ = correlate_matrices(hic_data1_dry, hic_data2_dry, max_dist=50, show=False, axe=axe, normalized=True)
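`correlate_matrices` reports how well the two experiments agree at each genomic distance. The numpy-only stand-in below (synthetic symmetric matrices, not real Hi-C data) shows the idea: a Spearman correlation computed one diagonal at a time.

```python
import numpy as np

def rank(x):
    # ordinal ranks (values here are distinct floats, so no tie handling)
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x), dtype=float)
    return r

def diagonal_correlations(m1, m2, max_dist):
    corrs = []
    for d in range(1, max_dist + 1):
        ra = rank(np.diagonal(m1, d).astype(float))
        rb = rank(np.diagonal(m2, d).astype(float))
        corrs.append(np.corrcoef(ra, rb)[0, 1])  # Spearman = Pearson of ranks
    return corrs

rng = np.random.default_rng(1)
base = rng.random((40, 40))
base = base + base.T                       # shared "contact" structure
m1 = base + 0.1 * rng.random((40, 40))     # two noisy observations of it
m2 = base + 0.1 * rng.random((40, 40))
corrs = diagonal_correlations(m1, m2, 5)
print([round(c, 3) for c in corrs])
```

Because both matrices share the same underlying structure, the per-distance correlations stay high.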
reso = 1000000
hic_data1_ice = my_load_hic_data(renz1, reso, which='ice')
hic_data2_ice = my_load_hic_data(renz2, reso, which='ice')
hic_data1_dry = my_load_hic_data(renz1, reso, which='dry')
hic_data2_dry = my_load_hic_data(renz2, reso, which='dry')
## this part is to "tune" the plot ##
plt.figure(figsize=(9, 6))
axe = plt.subplot()
axe.grid()
axe.set_xticks(range(0, 55, 5))
axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45)
#####################################
_ = correlate_matrices(hic_data1_ice, hic_data2_ice, max_dist=50, show=False, axe=axe, normalized=True)
## this part is to "tune" the plot ##
plt.figure(figsize=(9, 6))
axe = plt.subplot()
axe.grid()
axe.set_xticks(range(0, 55, 5))
axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45)
#####################################
_ = correlate_matrices(hic_data1_dry, hic_data2_dry, max_dist=50, show=False, axe=axe, normalized=True)
corrs = eig_correlate_matrices(hic_data1_ice, hic_data2_ice, show=True, aspect='auto', normalized=True)
for cor in corrs:
print ' '.join(['%5.3f' % (c) for c in cor]) + '\n'
corrs = eig_correlate_matrices(hic_data1_dry, hic_data2_dry, show=True, aspect='auto', normalized=True)
for cor in corrs:
print ' '.join(['%5.3f' % (c) for c in cor]) + '\n'
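As a minimal stand-in for what `eig_correlate_matrices` computes, the sketch below takes the leading eigenvectors of two symmetric matrices (synthetic here) and correlates every pair; a perturbed copy of a matrix keeps a near-identical leading eigenvector.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.random((30, 30))
a = a + a.T                                  # symmetric toy "map"
noise = rng.random((30, 30))
b = a + 0.05 * (noise + noise.T)             # slightly perturbed copy

def top_eigvecs(m, k):
    w, v = np.linalg.eigh(m)
    order = np.argsort(np.abs(w))[::-1]      # strongest components first
    return v[:, order[:k]]

va, vb = top_eigvecs(a, 4), top_eigvecs(b, 4)
corr = np.abs(va.T @ vb)                     # |cosine|; columns are unit norm
print(np.round(corr, 3))
```

The leading diagonal entry is close to 1; lower-ranked eigenvectors can mix when their eigenvalues are close.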
from pytadbit.mapping import merge_2d_beds
! mkdir -p results/both/03_filtering/
renz1 = 'HindIII'
renz2 = 'MboI'
hic_data1 = 'results/{0}/03_filtering/valid_reads12_{0}.tsv'.format(renz1)
hic_data2 = 'results/{0}/03_filtering/valid_reads12_{0}.tsv'.format(renz2)
hic_data = 'results/both/03_filtering/valid_reads12.tsv'
merge_2d_beds(hic_data1, hic_data2, hic_data)
from pytadbit.mapping.analyze import hic_map
from cPickle import dump
! mkdir -p results/both/04_normalizing
for reso in [1000000, 300000, 100000]:
print ' *', reso
# load hic_data
hic_data = load_hic_data_from_reads(
'results/both/03_filtering/valid_reads12.tsv', reso)
# filter columns
hic_data.filter_columns(draw_hist=False, min_count=10, by_mean=True)
# normalize
hic_data.normalize_hic(iterations=0)
# save biases to reconstruct normalization
out = open('results/both/04_normalizing/biases_{0}.pick'.format(reso), 'w')
dump(hic_data.bias, out)
out.close()
# save filtered out columns
out = open('results/both/04_normalizing/bad_columns_{0}.pick'.format(reso), 'w')
dump(hic_data.bads, out)
out.close()
    # save data as raw matrix per chromosome
hic_map(hic_data, by_chrom='intra', normalized=False,
savedata='results/both/04_normalizing/{0}_raw'.format(reso))
# save data as normalized matrix per chromosome
hic_map(hic_data, by_chrom='intra', normalized=True,
savedata='results/both/04_normalizing/{0}_norm'.format(reso))
# if the resolution is low save the full genomic matrix
if reso > 500000:
hic_map(hic_data, by_chrom=False, normalized=False,
savefig ='results/both/04_normalizing/{0}_raw.png'.format(reso),
savedata='results/both/04_normalizing/{0}_raw.mat'.format(reso))
hic_map(hic_data, by_chrom=False, normalized=True,
savefig ='results/both/04_normalizing/{0}_norm.png'.format(reso) ,
savedata='results/both/04_normalizing/{0}_norm.mat'.format(reso))
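TADbit's `normalize_hic` implements a visibility/ICE-style correction (with extra filtering and options). The plain-numpy sketch below shows only the core idea on a synthetic matrix: repeatedly divide rows and columns by their relative coverage until the marginals even out.

```python
import numpy as np

def balance(m, n_iter=50):
    # Sinkhorn/ICE-style symmetric balancing; biases accumulate per bin.
    m = m.astype(float).copy()
    bias = np.ones(m.shape[0])
    for _ in range(n_iter):
        s = m.sum(axis=0)
        s = s / s.mean()          # relative coverage per bin
        bias *= s
        m = m / np.outer(s, s)
    return m, bias

rng = np.random.default_rng(3)
raw = rng.poisson(10, (20, 20)).astype(float)
raw = raw + raw.T                 # symmetric toy contact map
norm, bias = balance(raw)
col_sums = norm.sum(axis=0)
print(col_sums.std() / col_sums.mean())
```

After balancing, every bin has (nearly) the same total coverage, so relative variation of the column sums is essentially zero.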
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Precompute convolutional output
Step2: Fully convolutional net (FCN)
Step3: Bounding boxes and Multi-Output
|
<ASSISTANT_TASK:>
Python Code:
(val_classes, train_classes,
val_labels, train_labels,
val_filenames, train_filenames,
test_filenames) = get_classes('data/fish/')
print(val_classes)
print(train_classes)
print(val_labels)
print(train_labels)
print(val_filenames)
print(train_filenames)
print(test_filenames)
# removing path
remove_path = lambda y: [x.split('/')[-1] for x in y]
raw_train_filenames = remove_path(train_filenames)
raw_val_filenames = remove_path(val_filenames)
raw_test_filenames = remove_path(test_filenames)
train_data = get_data('data/fish/train', (360, 640))
val_data = get_data('data/fish/valid', (360, 640))
test_data = get_data('data/fish/test', (360, 640))
from vgg16bn import Vgg16BN
model = Vgg16BN((360, 640)).model
model.pop()
model.input_shape
model.output_shape
model.summary()
model.compile(optimizer=Adam(),
loss='categorical_crossentropy',
metrics=['accuracy'])
conv_train_features = model.predict(train_data)
conv_test_features = model.predict(test_data)
conv_val_features = model.predict(val_data)
lrg_model = Sequential([
BatchNormalization(axis=1, input_shape=model.output_shape[1:]),
Convolution2D(128, 3, 3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(128, 3, 3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(128, 3, 3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
MaxPooling2D((1,2)),
Convolution2D(8, 3, 3, border_mode='same'),
Dropout(0.),
GlobalAveragePooling2D(),
Activation('softmax')
])
lrg_model.summary()
lrg_model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
lrg_model.fit(conv_train_features, train_labels,
batch_size=64,
nb_epoch=2,
validation_data=(conv_val_features, val_labels),
verbose=2)
lrg_model.optimizer.lr=1e-5
lrg_model.fit(conv_train_features, train_labels,
batch_size=64,
nb_epoch=6,
validation_data=(conv_val_features, val_labels),
verbose=2)
import ujson as json
anno_classes = ['alb', 'bet', 'dol', 'lag', 'other', 'shark', 'yft']
bb_json = {}
for c in anno_classes:
j = json.load(open('{}annos/{}_labels.json'.format('data/fish/', c), 'r'))
for l in j:
if 'annotations' in l.keys() and len(l['annotations'])>0:
bb_json[l['filename'].split('/')[-1]] = sorted(
l['annotations'], key=lambda x: x['height']*x['width'])[-1]
bb_json['img_04908.jpg']
train_file2idx = {o:i for i,o in enumerate(raw_train_filenames)}
val_file2idx = {o:i for i,o in enumerate(raw_val_filenames)}
train_file2idx
# for any images that have no annotations, we'll create an empty bounding box
empty_bbox = {'height': 0., 'width': 0., 'x':0., 'y': 0.}
for x in raw_train_filenames:
if not x in bb_json.keys(): bb_json[x] = empty_bbox
for x in raw_val_filenames:
if not x in bb_json.keys(): bb_json[x] = empty_bbox
# convert the coordinates to our resized 224x224 images
def convert_bb(bb, size):
bb = [bb[p] for p in ['height', 'width', 'x', 'y']]
conv_x = (224. / size[0])
conv_y = (224. / size[1])
bb[0] = bb[0]*conv_y
bb[1] = bb[1]*conv_x
bb[2] = max(bb[2]*conv_x, 0)
bb[3] = max(bb[3]*conv_y, 0)
return bb
raw_train_sizes = [PIL.Image.open('data/fish/train/' + x).size for x in train_filenames]
raw_val_sizes = [PIL.Image.open('data/fish/valid/' + x).size for x in val_filenames]
train_bbox = np.stack([convert_bb(bb_json[f], s) for f,s in zip(raw_train_filenames, raw_train_sizes)])
val_bbox = np.stack([convert_bb(bb_json[f], s) for f,s in zip(raw_val_filenames, raw_val_sizes)])
def plot_bb(i):
bb = val_bbox[i]
plot(val_data[i])
plt.gca().add_patch(
plt.Rectangle((bb[2], bb[3]), bb[1], bb[0], color='red', fill=False, lw=3)
)
%matplotlib inline
plot_bb(0)
# functional api
inp = Input(model.output_shape[1:])
x = MaxPooling2D()(inp)
x = BatchNormalization(axis=1)(x)
x = Dropout(0.6/4)(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.6)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.6/2)(x)
y_bb= Dense(4, name='bb')(x)
y_c = Dense(8, activation='softmax', name='class')(x)
# multi-output
model = Model([inp], [y_bb, y_c])
model.compile(optimizer=Adam(lr=0.001),
loss=['mse', 'categorical_crossentropy'],
metrics=['accuracy'],
loss_weights=[0.001, 1.])
model.fit(conv_train_features,
[train_bbox, train_labels],
batch_size=64,
nb_epoch=3,
validation_data=(conv_val_features, [val_bbox, val_labels]),
verbose=2)
model.optimizer.lr = 1e-5
model.fit(conv_train_features,
[train_bbox, train_labels],
batch_size=64,
nb_epoch=10,
validation_data=(conv_val_features, [val_bbox, val_labels]),
verbose=2)
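The model above optimizes a weighted sum of the two outputs' losses: `0.001 * MSE(bounding box) + 1.0 * cross-entropy(class)`, per the `loss_weights` argument. A hedged numpy sketch of that combined objective, on made-up values:

```python
import numpy as np

def multi_task_loss(bb_true, bb_pred, y_true, y_pred, w_bb=0.001, w_cls=1.0):
    mse = np.mean((bb_true - bb_pred) ** 2)                       # bbox branch
    cce = -np.mean(np.sum(y_true * np.log(y_pred + 1e-9), axis=1))  # class branch
    return w_bb * mse + w_cls * cce

bb_t = np.array([[50., 60., 10., 20.]])
bb_p = np.array([[48., 63., 12., 18.]])
y_t = np.eye(8)[[2]]                      # true class 2, one-hot over 8 classes
y_p = np.full((1, 8), 0.05); y_p[0, 2] = 0.65
loss = multi_task_loss(bb_t, bb_p, y_t, y_p)
print(round(float(loss), 4))   # 0.436
```

The small bounding-box weight keeps the (large-magnitude) regression error from swamping the classification loss.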
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and visualize data
Step2: Extract Features
Step3: Train and evaluate classifier
Step4: Compare against naive classification accuracy
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pylab as plt
from tsfresh.examples.har_dataset import download_har_dataset, load_har_dataset, load_har_classes
import seaborn as sns
from tsfresh import extract_features, extract_relevant_features, select_features
from tsfresh.utilities.dataframe_functions import impute
from tsfresh.feature_extraction import FeatureExtractionSettings
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import pandas as pd
import numpy as np
# fetch dataset from uci
download_har_dataset()
df = load_har_dataset()
df.head()
df.shape
plt.title('accelerometer reading')
plt.plot(df.iloc[0, :])
plt.show()
extraction_settings = FeatureExtractionSettings()
extraction_settings.IMPUTE = impute # Fill in Infs and NaNs
# transpose since tsfresh reads times series data column-wise, not row-wise
df_t = df.copy().transpose()
df_t.shape
# rearrange sensor readings column-wise, not row-wise
master_df = pd.DataFrame(df_t[0])
master_df['id'] = 0
# grab first 500 readings to save time
for i in range(1, 500):
temp_df = pd.DataFrame(df_t[i])
temp_df['id'] = i
master_df = pd.DataFrame(np.vstack([master_df, temp_df]))
print(master_df.shape)
master_df.head()
%time X = extract_features(master_df, column_id=1, feature_extraction_settings=extraction_settings);
# 206 features are extracted for each reading
X.shape
y = load_har_classes()[:500]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2)
cl = DecisionTreeClassifier()
cl.fit(X_train, y_train)
print(classification_report(y_test, cl.predict(X_test)))
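tsfresh extracts hundreds of such summaries automatically; as a hand-rolled miniature of the same idea, the sketch below maps each raw time-series row to five illustrative features (the choice of features is ours, not tsfresh's).

```python
import numpy as np

def simple_features(X):
    # one fixed-length feature row per time series
    return np.column_stack([
        X.mean(axis=1), X.std(axis=1),
        X.min(axis=1), X.max(axis=1),
        np.abs(np.diff(X, axis=1)).mean(axis=1),   # mean absolute change
    ])

rng = np.random.default_rng(4)
X = rng.normal(size=(5, 128))                      # 5 toy readings
F = simple_features(X)
print(F.shape)   # (5, 5)
```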
X_1 = df.iloc[:500, :]
X_1.shape
X_train, X_test, y_train, y_test = train_test_split(X_1, y, test_size=.2)
cl = DecisionTreeClassifier()
cl.fit(X_train, y_train)
print(classification_report(y_test, cl.predict(X_test)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Input Parameter
Step2: Preparation
Step3: Create space and time vector
Step4: Source signal - Ricker-wavelet
Step5: Time stepping
Step6: Save seismograms
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import time as tm
import matplotlib.pyplot as plt
# Discretization
c1=20 # Number of grid points per dominant wavelength
c2=0.5 # CFL-Number
nx=2000 # Number of grid points
T=10 # Total propagation time
# Source Signal
f0= 10 # Center frequency Ricker-wavelet
q0= 1 # Maximum amplitude Ricker-Wavelet
xscr = 100 # Source position (in grid points)
# Receiver
xrec1=400 # Position Receiver 1 (in grid points)
xrec2=800 # Position Receiver 2 (in grid points)
xrec3=1800 # Position Receiver 3 (in grid points)
# Velocity and density
modell_v = np.hstack((1000*np.ones((int(nx/2))),1500*np.ones((int(nx/2)))))
rho=np.hstack((1*np.ones((int(nx/2))),1.5*np.ones((int(nx/2)))))
# Init wavefields
vx=np.zeros(nx)
p=np.zeros(nx)
# Calculate first Lame-Paramter
l=rho * modell_v * modell_v
cmin=min(modell_v.flatten()) # Lowest P-wave velocity
cmax=max(modell_v.flatten()) # Highest P-wave velocity
fmax=2*f0 # Maximum frequency
dx=cmin/(fmax*c1) # Spatial discretization (in m)
dt=dx/(cmax)*c2 # Temporal discretization (in s)
lambda_min=cmin/fmax # Smallest wavelength
# Output model parameter:
print("Model size: x:",dx*nx,"in m")
print("Temporal discretization: ",dt," s")
print("Spatial discretization: ",dx," m")
print("Number of grid points per minimum wavelength: ",lambda_min/dx)
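A quick standalone arithmetic check of the discretization rules above (same parameter values assumed: cmin=1000, cmax=1500, f0=10, c1=20, c2=0.5): the grid spacing comes from resolving the shortest wavelength, and the time step from the CFL criterion.

```python
cmin, cmax, f0, c1, c2 = 1000.0, 1500.0, 10.0, 20, 0.5
fmax = 2 * f0
dx = cmin / (fmax * c1)   # spatial step from the shortest wavelength
dt = dx / cmax * c2       # time step bounded by the CFL criterion
print(dx, round(dt, 6))   # 2.5 0.000833
```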
x=np.arange(0,dx*nx,dx) # Space vector
t=np.arange(0,T,dt) # Time vector
nt=np.size(t) # Number of time steps
# Plotting model
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.subplots_adjust(wspace=0.4,right=1.6)
ax1.plot(x,modell_v)
ax1.set_ylabel('VP in m/s')
ax1.set_xlabel('Depth in m')
ax1.set_title('P-wave velocity')
ax2.plot(x,rho)
ax2.set_ylabel('Density in g/cm^3')
ax2.set_xlabel('Depth in m')
ax2.set_title('Density');
tau=np.pi*f0*(t-1.5/f0)
q=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2)
# Plotting source signal
plt.figure(3)
plt.plot(t,q)
plt.title('Source signal Ricker-Wavelet')
plt.ylabel('Amplitude')
plt.xlabel('Time in s')
plt.draw()
# Init Seismograms
Seismogramm=np.zeros((3,nt)); # Three seismograms
# Calculation of some coefficients
i_dx=1.0/(dx)
kx=np.arange(5,nx-4)
print("Starting time stepping...")
## Time stepping
for n in range(2,nt):
# Inject source wavelet
p[xscr]=p[xscr]+q[n]
# Update velocity
for kx in range(6,nx-5):
# Calculating spatial derivative
p_x=i_dx*(1225.0/1024.0)*(p[kx+1]-p[kx])+i_dx*(-245.0/3072.0)*(p[kx+2]-p[kx-1])+i_dx*(49.0/5120.0)*(p[kx+3]-p[kx-2])+i_dx*(-5.0/7168.0)*(p[kx+4]-p[kx-3])
# Update velocity
vx[kx]=vx[kx]-dt/rho[kx]*p_x
# Update pressure
for kx in range(6,nx-5):
# Calculating spatial derivative
vx_x=i_dx*(1225.0/1024.0)*(vx[kx]-vx[kx-1])+i_dx*(-245.0/3072.0)*(vx[kx+1]-vx[kx-2])+i_dx*(49.0/5120.0)*(vx[kx+2]-vx[kx-3])+i_dx*(-5.0/7168.0)*(vx[kx+3]-vx[kx-4])
# Update pressure
p[kx]=p[kx]-l[kx]*dt*(vx_x);
# Save seismograms
Seismogramm[0,n]=p[xrec1]
Seismogramm[1,n]=p[xrec2]
Seismogramm[2,n]=p[xrec3]
print("Finished time stepping!")
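The stencil weights used in the time-stepping loop can be sanity-checked exactly: applied to f(x) = x, the 8th-order staggered-grid derivative must return the slope 1. Each coefficient multiplies a difference spanning (2k+1) grid spacings.

```python
from fractions import Fraction as Fr

# Coefficients from the p_x / vx_x stencils above, kept as exact rationals.
coeffs = [Fr(1225, 1024), Fr(-245, 3072), Fr(49, 5120), Fr(-5, 7168)]
slope = sum(c * (2 * k + 1) for k, c in enumerate(coeffs))
print(slope)   # 1
```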
## Save seismograms
np.save("Seismograms/FD_1D_DX8_DT2",Seismogramm)
## Plot seismograms
fig, (ax1, ax2, ax3) = plt.subplots(3, 1)
fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )
ax1.plot(t,Seismogramm[0,:])
ax1.set_title('Seismogram 1')
ax1.set_ylabel('Amplitude')
ax1.set_xlabel('Time in s')
ax1.set_xlim(0, T)
ax2.plot(t,Seismogramm[1,:])
ax2.set_title('Seismogram 2')
ax2.set_ylabel('Amplitude')
ax2.set_xlabel('Time in s')
ax2.set_xlim(0, T)
ax3.plot(t,Seismogramm[2,:])
ax3.set_title('Seismogram 3')
ax3.set_ylabel('Amplitude')
ax3.set_xlabel('Time in s')
ax3.set_xlim(0, T);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Display the same with Python (matplotlib)
Step2: Apply moveout with incorrect velocity
Step3: Slope estimation
Step4: Non-physical flattening by predictive painting
Step5: Velocity estimation by time warping
Step6: We now want to fit the warped times with a least-squares velocity estimate
Step7: Last step - physical flattening
|
<ASSISTANT_TASK:>
Python Code:
%%file data.scons
Flow('trace',None,'spike n1=2001 d1=0.001 k1=1001 | ricker1 frequency=30')
Flow('gather','trace','spray axis=2 n=49 d=25 o=0 label=Offset unit=m | nmostretch inv=y half=n v0=2000')
Result('gather','window f1=888 n1=392 | grey title=Gather')
from m8r import view
view('gather')
import m8r
gather = m8r.File('gather.rsf')
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
plt.imshow(np.transpose(gather[:,888:1280]),aspect='auto')
%%file nmo.scons
Flow('nmo','gather','nmostretch half=n v0=1800')
Result('nmo','window f1=888 n1=200 | grey title=NMO')
view('nmo')
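The physics being undone here, sketched with illustrative numbers (v0=2000 as used for the gather): reflection traveltime on a gather follows t(x) = sqrt(t0² + (x/v)²), so NMO with the correct velocity maps every offset back to t0.

```python
import math

def moveout(t0, x, v):
    # hyperbolic reflection traveltime at offset x
    return math.sqrt(t0 ** 2 + (x / v) ** 2)

t0, v = 1.0, 2000.0
offsets = [0, 400, 800, 1200]
times = [moveout(t0, x, v) for x in offsets]
# NMO correction with the correct velocity recovers t0 at every offset
flattened = [math.sqrt(t ** 2 - (x / v) ** 2) for t, x in zip(times, offsets)]
print([round(t, 6) for t in flattened])   # [1.0, 1.0, 1.0, 1.0]
```

With the wrong velocity (as in the 1800 m/s NMO above), the subtraction under-corrects and the event stays curved.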
%%file slope.scons
Flow('slope','nmo','dip rect1=100 rect2=5 order=2')
Result('slope','grey color=linearlfb mean=y scalebar=y title=Slope')
view('slope')
%%file flat.scons
Flow('paint','slope','pwpaint order=2')
Result('paint','window f1=888 n1=200 | contour title=Painting')
Flow('flat','nmo paint','iwarp warp=${SOURCES[1]}')
Result('flat','window f1=888 n1=200 | grey title=Flattening')
view('paint')
view('flat')
%%file twarp.scons
Flow('twarp','paint','math output=x1 | iwarp warp=$SOURCE')
Result('twarp','window j1=20 | transp | graph yreverse=y min2=0.888 max2=1.088 pad=n title="Time Warping" ')
view('twarp')
%%file lsfit.scons
Flow('num','twarp','math output="(input*input-x1*x1)*x2^2" | stack norm=n')
Flow('den','twarp','math output="x2^4" | stack norm=n')
Flow('vel','num den','div ${SOURCES[1]} | math output="1800/sqrt(1800*1800*input+1)" ')
Result('vel',
'''
window f1=888 n1=200 |
graph yreverse=y transp=y title="Estimated Velocity" label2=Velocity unit2=m/s grid2=y pad=n min2=1950 max2=2050
''')
view('vel')
%%file nmo2.scons
Flow('nmo2','gather vel','nmo half=n velocity=${SOURCES[1]}')
Result('nmo2','window f1=888 n1=200 | grey title="Physical Flattening" ')
view('nmo2')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We read the anonymised data into a pandas dataframe and make a second dataframe of only commits to OHW20 repos
Step2: Commits in time
Step3: How about zooming in on hackweek?
Step4: Zoom closer?
Step5: Hopefully an indication of Oceanhackweek's global participation, not some sorely sleep-deprived coders!
Step6: What is the distribution of commits amongst participants like?
Step7: Commit mesages and extensions
Step8: This needs a bit of cleaning
Step9: Jupyter notebooks (ipynb) overwhelmingly popular
Step10: How about git commit messages?
Step11: That's it! (for now)
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('ohw_anonymised.csv', index_col='datetime', parse_dates=True)
df_ohw20 = df[df.ohw20_repo]
df
weekly_commits = df.author.groupby(df.index.week).count()
df_hw = df[df.index.week==33]
hw_commits = df_hw.author.groupby(df_hw.index.week).count()
plt.rcParams.update({'font.size': 18})
fig, ax = plt.subplots(figsize=(15,10))
ax.bar(weekly_commits.index, weekly_commits.values, label='Weeks of 2020')
ax.bar(hw_commits.index, hw_commits.values, label='OHW20')
ax.legend()
ax.set(xlabel='week of 2020', ylabel='Commits by OHW participants')
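A miniature of the weekly grouping above, on four made-up dates. Note that the `Index.week` accessor used in this notebook was deprecated in later pandas; `isocalendar().week` is the current spelling of the same thing.

```python
import pandas as pd

idx = pd.to_datetime(["2020-08-10", "2020-08-11", "2020-08-14", "2020-08-20"])
s = pd.Series(1, index=idx)
# count events per ISO week (weeks 33 and 34 of 2020 here)
weekly = s.groupby(s.index.isocalendar().week).count()
print(weekly)
```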
daily_commits = df_hw.author.groupby(df_hw.index.day).count()
fig, ax = plt.subplots(figsize=(12,8))
ax.bar(daily_commits.index, daily_commits.values)
ax.set(xlabel='day of hackweek', ylabel='Commits by OHW participants')
ax.set_xticks(np.arange(10,17))
ax.set_xticklabels(['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']);
df_friday = df_hw[df_hw.index.day==14]
friday_commits = df_friday.author.groupby(df_friday.index.hour).count()
fig, ax = plt.subplots(figsize=(12,8))
ax.bar(friday_commits.index, friday_commits.values)
ax.set(xlabel='Hour of Friday (UTC)', ylabel='Commits by OHW participants')
# severely unpythonic but it works
unique_users = []
for week in weekly_commits.index:
sub_df = df[df.index.week==week]
unique_users.append(len(sub_df.groupby('author').count()))
fig, ax = plt.subplots(figsize=(15,10))
ax.bar(weekly_commits.index, unique_users)
ax.bar(33, max(unique_users))
ax.set(xlabel='week of 2020', ylabel='Unique OHW20 participants with commits')
from collections import Counter
c = Counter(df_ohw20[df_ohw20.ohw20_repo].author)
MVP = c.most_common()
mvp_names, mvp_nums = [], []
for tup in MVP:
mvp_names.append(tup[0])
mvp_nums.append(tup[1])
fig, ax = plt.subplots(figsize=(14,8))
ax.bar(mvp_names, mvp_nums)
ax.set(xticks=[], ylabel='Number of commits per participant');
from collections import Counter
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
all_commit_strings = ''.join(df_ohw20.message)
extensions = ''.join(str(df_ohw20.extensions.values))
extensions
extensions_no_newline = extensions.replace('\n','')
extensions_no_quotes = extensions_no_newline.replace("'", "")
extensions_no_commas = extensions_no_quotes.replace(",", "")
extensions_no_commas
extensions_list = extensions_no_commas.split(" ")
c = Counter(extensions_list)
top16 = c.most_common(16)
word, counts = [], []
for tup in top16:
word.append(tup[0])
counts.append(tup[1])
fig, ax = plt.subplots(figsize=(12,8))
ax.bar(word,counts)
plt.setp(ax.get_xticklabels(), ha="right", rotation=45);
ax.set_title("Occurrence of filetypes in commits to OHW20 repos")
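The tallying logic behind the bar chart, on a toy list of extensions:

```python
from collections import Counter

# most_common sorts by count, descending
exts = "ipynb py ipynb md py ipynb".split()
top = Counter(exts).most_common(2)
print(top)   # [('ipynb', 3), ('py', 2)]
```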
stopwords = set(STOPWORDS)
wc = WordCloud(background_color="white", max_words=16,collocations=False,
stopwords=stopwords, contour_width=3, contour_color='steelblue')
wordcloud = wc.generate(extensions_no_commas)
fig, ax = plt.subplots(figsize=(12,8))
ax.imshow(wordcloud, interpolation='bilinear')
ax.axis("off")
wc = WordCloud(background_color="white", max_words=33,collocations=False,
stopwords=stopwords, contour_width=3, contour_color='steelblue')
wordcloud = wc.generate(all_commit_strings)
fig, ax = plt.subplots(figsize=(12,8))
ax.imshow(wordcloud, interpolation='bilinear')
ax.axis("off")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There are a number of different features here. We'll focus on the first two
Step2: Standardization scaling
Step3: We've accomplished what we set out to do
Step4: It looks like we've re-scaled the data without changing the basic shape or the relationships between the points. That's good. The standardized data can now be used as inputs for the machine learning algorithms.
Step5: As you can see, we've really shifted things around. The means are not very pretty (because the dataset has been shrunk to fit between 0 and 1). Additionally the standard deviation has changed and isn't very pretty, either. Let's plot it to see if the shape has changed.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('Class10_wine_data.csv')
df.head()
# Plot the first two feature columns
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
#plt.figure(figsize=(8,6))
plt.scatter(df['Alcohol'], df['Malic acid'])
plt.xlabel('Alcohol (%/L)')
plt.ylabel('Malic Acid (g/L)')
plt.xlim(0,16)
plt.ylim(0,16)
plt.axes().set_aspect('equal')
from sklearn.preprocessing import StandardScaler
std_scaler = StandardScaler().fit(df[['Alcohol', 'Malic acid']])
df_std = std_scaler.transform(df[['Alcohol', 'Malic acid']])
print('Mean before standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df['Alcohol'].mean(), df['Malic acid'].mean()))
print('\nMean after standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df_std[:,0].mean(), df_std[:,1].mean()))
print('\nStandard deviation before standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df['Alcohol'].std(), df['Malic acid'].std()))
print('\nStandard deviation after standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df_std[:,0].std(), df_std[:,1].std()))
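What `StandardScaler` computes, written out by hand on illustrative values: z = (x − mean) / std, which leaves the rescaled data with mean 0 and standard deviation 1.

```python
import numpy as np

x = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
z = (x - x.mean()) / x.std()   # the standardization formula
print(float(z.mean()), float(z.std()))
```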
fig, ax = plt.subplots(1,2)
for a,d,l in zip(range(len(ax)),
(df[['Alcohol', 'Malic acid']].values, df_std),
('Input scale',
'Standardized [$N (\mu=0, \; \sigma=1)$]')
):
for i,c in zip(range(1,4), ('red', 'blue', 'green')):
ax[a].scatter(d[df['Wine Class'].values == i, 0],
d[df['Wine Class'].values == i, 1],
alpha=0.5,
color=c,
label='Class %s' %i
)
ax[a].set_aspect('equal')
ax[a].set_title(l)
ax[a].set_xlabel('Alcohol')
ax[a].set_ylabel('Malic Acid')
ax[a].legend(loc='upper left')
ax[a].grid()
plt.tight_layout()
from sklearn.preprocessing import MinMaxScaler
minmax_scaler = MinMaxScaler().fit(df[['Alcohol', 'Malic acid']])
df_minmax = minmax_scaler.transform(df[['Alcohol', 'Malic acid']])
print('Mean before min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df['Alcohol'].mean(), df['Malic acid'].mean()))
print('\nMean after min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df_minmax[:,0].mean(), df_minmax[:,1].mean()))
print('\nStandard deviation before min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df['Alcohol'].std(), df['Malic acid'].std()))
print('\nStandard deviation after min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df_minmax[:,0].std(), df_minmax[:,1].std()))
from sklearn.model_selection import train_test_split
train, test = train_test_split(df, test_size=0.2, random_state=23)
minmax_scaler = MinMaxScaler().fit(train[['Alcohol', 'Malic acid']])
train_features = minmax_scaler.transform(train[['Alcohol', 'Malic acid']])
train_features[0:2]
test[['Alcohol' ,'Malic acid']].head(2)
test_features = minmax_scaler.transform(test[['Alcohol', 'Malic acid']])
test_features[0:2]
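Why the scaler is fit on the training split only: the test data is then transformed with the *training* min/max, so transformed test values can legitimately fall outside [0, 1]. A tiny illustration with made-up numbers:

```python
import numpy as np

train = np.array([2.0, 4.0, 6.0])
test = np.array([1.0, 7.0])
lo, span = train.min(), train.max() - train.min()
scaled = (test - lo) / span    # min-max using training statistics only
print(scaled)   # [-0.25  1.25]
```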
fig, ax = plt.subplots(1,2)
for a,d,l in zip(range(len(ax)),
(df[['Alcohol', 'Malic acid']].values, df_minmax),
('Input scale',
'Min-max scale')
):
for i,c in zip(range(1,4), ('red', 'blue', 'green')):
ax[a].scatter(d[df['Wine Class'].values == i, 0],
d[df['Wine Class'].values == i, 1],
alpha=0.5,
color=c,
label='Class %s' %i
)
ax[a].set_aspect('equal')
ax[a].set_title(l)
ax[a].set_xlabel('Alcohol')
ax[a].set_ylabel('Malic Acid')
ax[a].legend(loc='upper left')
ax[a].grid()
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0.0 Basic parameters
Step4: Set up basic functions
Step6: Set up functions for RSA analysis (instead of SVM decoding)
Step7: Run statistics with FDR-correction
Step8: Show results as dataframe
Step9: Run statistics with FWER-correction (Permutation testing)
Step10: Show results as dataframe
|
<ASSISTANT_TASK:>
Python Code:
import sys
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import statsmodels.sandbox.stats.multicomp as mc
import multiprocessing as mp
%matplotlib inline
import os
os.environ['OMP_NUM_THREADS'] = str(1)
import warnings
warnings.filterwarnings('ignore')
import pandas
import networkinformationtransfer as n2n
from IPython.display import display, HTML
# Set basic parameters
datadir = './data/'
runLength = 4648
subjNums = ['032', '033', '037', '038', '039', '045',
'013', '014', '016', '017', '018', '021',
'023', '024', '025', '026', '027', '031',
'035', '046', '042', '028', '048', '053',
'040', '049', '057', '062', '050', '030', '047', '034']
# Load in network array
networkdef = np.loadtxt(datadir + 'network_array.csv', delimiter=',')
# Load in network keys (each network associated with a number in network array)
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud1':8, 'aud2':9, 'dan':11}
# Force aud2 key to be the same as aud1 (merging two auditory networks)
aud2_ind = np.where(networkdef==networkmappings['aud2'])[0]
networkdef[aud2_ind] = networkmappings['aud1']
# Redefine new network mappings with no aud1/aud2 distinction
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'dan':11}
def loadBetas(subj, net='all'):
Loads in task betas
datafile = datadir + 'MiniblockActivations/' + subj + '_miniblock_taskBetas_Glasser.csv'
betas = np.loadtxt(datafile, delimiter=',')
betas = betas[:,17:]
if net == 'all':
return betas
else:
net_ind = np.where(networkdef==net)[0]
return betas[net_ind,:].T
def setupMatrix(subj,ruledim,net):
    """Sets up basic SVM Matrix for a classification of a particular rule dimension and network"""
svm_mat = loadBetas(subj,net=net)
# Subtract miniblock indices by 1, since they were originally created for Matlab array indices
labels = np.loadtxt('./data/CPROTaskIdentifiers/' + subj + '_' + ruledim + '_miniblocksPerRule_MatlabIndices.csv',delimiter=',') - 1
labels = np.asarray(labels,dtype=int)
return svm_mat, labels
def rsaCV(svm_mat,labels,ruledim):
    """Runs a leave-4-out CV for a 4 way classification"""
# 32 folds, if we do a leave 4 out for 128 total miniblocks
# Want to leave a single block from each rule from each CV
# labels is a sample x rule matrix, so 32 samples x 4 rules
# Number of CVs is columns
ncvs = labels.shape[0]
nrules = labels.shape[1]
corr_rho_cvs = []
err_rho_cvs = []
for cv in range(ncvs):
# Select a test set from the CV Fold matrix
test_ind = labels[cv,:].copy()
# Delete the CV included from the train set
train_ind = np.delete(labels,cv,axis=0)
# Identify the train and test sets
svm_train = svm_mat[np.reshape(train_ind,-1),:]
svm_test = svm_mat[test_ind,:]
prototype = {}
# Construct RSA prototypes
for rule in range(nrules):
prototype_ind = np.reshape(train_ind[:,rule],-1)
prototype[rule] = np.mean(svm_mat[prototype_ind],axis=0)
corr_rho = []
err_rho = []
for rule1 in range(nrules):
for rule2 in range(nrules):
r = stats.spearmanr(prototype[rule1],svm_test[rule2,:])[0]
r = np.arctanh(r)
if rule1==rule2:
corr_rho.append(r)
else:
err_rho.append(r)
corr_rho_cvs.append(np.mean(corr_rho))
err_rho_cvs.append(np.mean(err_rho))
return np.mean(corr_rho_cvs), np.mean(err_rho_cvs)
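A toy check of the prototype-matching logic in `rsaCV`, using synthetic activation patterns (not real betas): a held-out pattern should have a higher Spearman rho with its own rule's prototype than with the other rules'.

```python
import numpy as np

def spearman(a, b):
    # Spearman = Pearson correlation of ranks (values here are distinct floats)
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(5)
protos = rng.normal(size=(4, 50))                  # one prototype per rule
test_pat = protos[2] + 0.3 * rng.normal(size=50)   # noisy sample of rule 2
rhos = [spearman(p, test_pat) for p in protos]
print(int(np.argmax(rhos)))
```

The matched-versus-mismatched rho difference averaged over folds is exactly the `diff_rho` quantity tested against chance below.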
def subjRSACV((subj,ruledim,net)):
svm_mat, labels = setupMatrix(subj,ruledim,net)
# Demean each sample
svmmean = np.mean(svm_mat,axis=1)
svmmean.shape = (len(svmmean),1)
svm_mat = svm_mat - svmmean
# svm_mat = preprocessing.scale(svm_mat,axis=0)
corr_rho, err_rho = rsaCV(svm_mat, labels, ruledim)
diff_rho = corr_rho - err_rho
return corr_rho, err_rho, diff_rho
netkeys = {0:'fpn', 1:'dan', 2:'con', 3:'dmn', 4:'vis', 5:'aud', 6:'smn'}
ruledims = ['logic','sensory','motor']
corr_rho = {}
err_rho = {}
diff_rho = {}
avg_acc = {}
for ruledim in ruledims:
avg_acc[ruledim] = {}
corr_rho[ruledim] = np.zeros((len(netkeys),len(subjNums)))
err_rho[ruledim] = np.zeros((len(netkeys),len(subjNums)))
diff_rho[ruledim] = np.zeros((len(netkeys),len(subjNums)))
print 'Running', ruledim
for net in netkeys.keys():
# print 'Running network', net
inputs = []
for subj in subjNums: inputs.append((subj,ruledim,networkmappings[netkeys[net]]))
pool = mp.Pool(processes=3)
results = pool.map_async(subjRSACV,inputs).get()
pool.close()
pool.join()
scount = 0
for result in results:
tmp_corr, tmp_err, tmp_diff = result
corr_rho[ruledim][net,scount] = tmp_corr
err_rho[ruledim][net,scount] = tmp_err
diff_rho[ruledim][net,scount] = tmp_diff
scount += 1
avg_acc[ruledim][net] = np.mean(diff_rho[ruledim][net])
# Compute group stats
chance = 0.0
results_dict_fdr = {}
for ruledim in ruledims:
results_dict_fdr[ruledim] = {}
for net in netkeys.keys(): results_dict_fdr[ruledim][netkeys[net]] = {}
pvals = []
for net in netkeys.keys():
results_dict_fdr[ruledim][netkeys[net]]['Accuracy'] = str(round(np.mean(avg_acc[ruledim][net]),3))
t, p = stats.ttest_1samp(diff_rho[ruledim][net],chance)
results_dict_fdr[ruledim][netkeys[net]]['T-stats'] = t
results_dict_fdr[ruledim][netkeys[net]]['P-values'] = p
pvals.append(p)
qvals = mc.fdrcorrection0(pvals)[1]
qcount = 0
for net in netkeys.keys():
results_dict_fdr[ruledim][netkeys[net]]['Q-values'] = qvals[qcount]
qcount += 1
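mc.fdrcorrection0 here is the Benjamini–Hochberg procedure from an older statsmodels release; the q-value computation it performs reduces to a few NumPy lines, shown as a standalone sketch for intuition:

```python
import numpy as np

def bh_fdr(pvals):
    """Benjamini-Hochberg q-values (what mc.fdrcorrection0 returns)."""
    pvals = np.asarray(pvals, dtype=float)
    n = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order] * n / np.arange(1, n + 1)   # p * n / rank
    # enforce monotonicity from the largest p-value downward
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    qvals = np.empty(n)
    qvals[order] = np.minimum(ranked, 1.0)
    return qvals

print(np.round(bh_fdr([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]), 4))
```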
results_dframe_fdr = {}
for ruledim in ruledims:
print 'Dataframe for', ruledim, 'classification'
results_dframe_fdr[ruledim] = pandas.DataFrame(data=results_dict_fdr[ruledim])
display(results_dframe_fdr[ruledim])
#### Compute statistics for bar plot
bar_avgs = {}
bar_sems = {}
bar_avg_all = {}
for net in netkeys.keys(): bar_avg_all[net] = np.zeros((len(ruledims),len(subjNums)))
rulecount = 0
for ruledim in ruledims:
bar_avgs[ruledim] = {}
bar_sems[ruledim] = {}
for net in netkeys.keys():
bar_avgs[ruledim][net] = np.mean(diff_rho[ruledim][net])
bar_sems[ruledim][net] = np.std(diff_rho[ruledim][net])/np.sqrt(len(subjNums))
bar_avg_all[net][rulecount,:] = diff_rho[ruledim][net]
rulecount += 1
bar_sem_all = {}
for net in netkeys.keys():
meanacc = np.mean(bar_avg_all[net],axis=0)
bar_avg_all[net] = np.mean(meanacc)
bar_sem_all[net] = np.std(meanacc)/np.sqrt(len(subjNums))
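One detail worth knowing about the standard-error lines above: np.std defaults to the population formula (ddof=0), while scipy.stats.sem uses the sample formula (ddof=1), so the two conventions give slightly different error bars:

```python
import numpy as np
from scipy import stats

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
sem_pop = np.std(x) / np.sqrt(len(x))              # ddof=0, as in the plotting code
sem_sample = np.std(x, ddof=1) / np.sqrt(len(x))   # ddof=1
print(sem_pop, sem_sample, stats.sem(x))           # stats.sem matches the ddof=1 value
```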
##### Generate figures
width=0.25
width=.265
networks = netkeys.keys()
nbars = len(networks)
fig = plt.figure()
ax = fig.add_subplot(111)
rects = {}
widthcount = 0
colors = ['b','g','r']
colorcount = 0
for ruledim in ruledims:
rects[ruledim] = ax.bar(np.arange(nbars)+widthcount, bar_avgs[ruledim].values(), width,align='center',
yerr=bar_sems[ruledim].values(), color=colors[colorcount], error_kw=dict(ecolor='black'))
widthcount += width
colorcount += 1
ax.set_title('Network RSA of CPRO rules (FDR)',
y=1.04, fontsize=16)
ax.set_ylabel('Match V. Mismatch Difference (Rho)',fontsize=12)
ax.set_xlabel('Rule types by Networks', fontsize=12)
ax.set_xticks(np.arange(nbars)+width)
ax.set_xticklabels(netkeys.values(),rotation=-45)
vmax=0.07
ax.set_ylim([0,vmax])
plt.legend((rects['logic'], rects['sensory'], rects['motor']),
('Logic', 'Sensory', 'Motor'), loc=((1.08,.65)))
## Add asterisks
def autolabel(rects,df,ruledim):
# attach some text labels
netcount = 0
for rect in rects:
height = rect.get_height()
# Decide where to put the asterisk
if height > vmax:
yax = vmax - .005
else:
yax = height + .01
# Slightly move this asterisk since it's in the way
if ruledim=='sensory' and netkeys[netcount]=='dan': yax -= .0025
        # Retrieve q-value and assign asterisk accordingly
q = df[netkeys[netcount]]['Q-values']
if q > .05: asterisk=''
if q < .05: asterisk='*'
if q < .01: asterisk='**'
if q < .001: asterisk='***'
# Label bar
ax.text(rect.get_x() + rect.get_width()/2., yax,
asterisk, ha='center', va='bottom', fontsize=8)
# Go to next network
netcount += 1
for ruledim in ruledims:
autolabel(rects[ruledim],results_dframe_fdr[ruledim],ruledim)
# autolabel(rects2)
plt.tight_layout()
# plt.savefig('FigS1a_NetworkRSA_InformationEstimate.pdf')
import sys
pathToFWEPackage = '../../'
sys.path.append(pathToFWEPackage + 'MultipleComparisonsPermutationTesting/pythonCode/')
import permutationTesting as pt
# Compute group stats
chance = 0.0
results_dict_fwe = {}
for ruledim in ruledims:
results_dict_fwe[ruledim] = {}
for net in netkeys.keys(): results_dict_fwe[ruledim][netkeys[net]] = {}
pvals = []
for net in netkeys.keys():
results_dict_fwe[ruledim][netkeys[net]]['Accuracy'] = str(round(np.mean(avg_acc[ruledim][net]),3))
t, maxT, p = pt.maxT(diff_rho[ruledim],nullmean=0,permutations=10000,nproc=3,pvals=True)
# t, p = stats.ttest_1samp(diff_rho[ruledim][net],chance)
results_dict_fwe[ruledim][netkeys[net]]['T-stats'] = t[net]
results_dict_fwe[ruledim][netkeys[net]]['Q-values'] = p[net]
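pt.maxT comes from the external MultipleComparisonsPermutationTesting package, so its exact interface is an assumption here; the sign-flip max-statistic idea behind it can be sketched self-containedly in NumPy (toy data, fixed seed):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
data = rng.randn(7, 30) * 0.5      # 7 variables x 30 subjects, all null...
data[0] += 1.0                     # ...except variable 0, which has a real effect

t_obs = stats.ttest_1samp(data, 0.0, axis=1)[0]

n_perm = 2000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=data.shape[1])  # flip each subject's sign
    max_null[i] = np.abs(stats.ttest_1samp(data * signs, 0.0, axis=1)[0]).max()

# FWE-corrected p-value: fraction of permutations whose max |t| beats observed |t|
p_fwe = (max_null[None, :] >= np.abs(t_obs)[:, None]).mean(axis=1)
print(p_fwe)
```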
results_dframe_fwe = {}
for ruledim in ruledims:
print 'Dataframe for', ruledim, 'classification'
results_dframe_fwe[ruledim] = pandas.DataFrame(data=results_dict_fwe[ruledim])
display(results_dframe_fwe[ruledim])
#### Compute statistics for bar plot
bar_avgs = {}
bar_sems = {}
bar_avg_all = {}
for net in netkeys.keys(): bar_avg_all[net] = np.zeros((len(ruledims),len(subjNums)))
rulecount = 0
for ruledim in ruledims:
bar_avgs[ruledim] = {}
bar_sems[ruledim] = {}
for net in netkeys.keys():
bar_avgs[ruledim][net] = np.mean(diff_rho[ruledim][net])
bar_sems[ruledim][net] = np.std(diff_rho[ruledim][net])/np.sqrt(len(subjNums))
bar_avg_all[net][rulecount,:] = diff_rho[ruledim][net]
rulecount += 1
bar_sem_all = {}
for net in netkeys.keys():
meanacc = np.mean(bar_avg_all[net],axis=0)
bar_avg_all[net] = np.mean(meanacc)
bar_sem_all[net] = np.std(meanacc)/np.sqrt(len(subjNums))
##### Generate figures
width=0.25
width=.265
networks = netkeys.keys()
nbars = len(networks)
fig = plt.figure()
ax = fig.add_subplot(111)
rects = {}
widthcount = 0
colors = ['b','g','r']
colorcount = 0
for ruledim in ruledims:
rects[ruledim] = ax.bar(np.arange(nbars)+widthcount, bar_avgs[ruledim].values(), width,align='center',
yerr=bar_sems[ruledim].values(), color=colors[colorcount], error_kw=dict(ecolor='black'))
widthcount += width
colorcount += 1
ax.set_title('Network RSA of CPRO rules (FWER)',
y=1.04, fontsize=16)
ax.set_ylabel('Match V. Mismatch Difference (Rho)',fontsize=12)
ax.set_xlabel('Rule types by Networks', fontsize=12)
ax.set_xticks(np.arange(nbars)+width)
ax.set_xticklabels(netkeys.values(),rotation=-45)
vmax=0.07
ax.set_ylim([0,vmax])
plt.legend((rects['logic'], rects['sensory'], rects['motor']),
('Logic', 'Sensory', 'Motor'), loc=((1.08,.65)))
## Add asterisks
def autolabel(rects,df,ruledim):
# attach some text labels
netcount = 0
for rect in rects:
height = rect.get_height()
# Decide where to put the asterisk
if height > vmax:
yax = vmax - .005
else:
yax = height + .01
# Slightly move this asterisk since it's in the way
if ruledim=='sensory' and netkeys[netcount]=='dan': yax -= .0025
        # Retrieve q-value and assign asterisk accordingly
q = df[netkeys[netcount]]['Q-values']
if q > .05: asterisk=''
if q < .05: asterisk='*'
if q < .01: asterisk='**'
if q < .001: asterisk='***'
# Label bar
ax.text(rect.get_x() + rect.get_width()/2., yax,
asterisk, ha='center', va='bottom', fontsize=8)
# Go to next network
netcount += 1
for ruledim in ruledims:
autolabel(rects[ruledim],results_dframe_fwe[ruledim],ruledim)
# autolabel(rects2)
plt.tight_layout()
# plt.savefig('FigS1a_NetworkRSA_InformationEstimate.pdf')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2:
Step3:
Step4:
Step5:
Step6:
Step7:
Step8:
Step9:
Step10:
Step11:
Step12:
Step13:
Step14:
Step15:
Step16:
Step17:
Step18:
Step19:
Step20:
Step21:
Step22:
Step23:
Step24:
Step25:
Step27:
Step29:
|
<ASSISTANT_TASK:>
Python Code:
items = ['banana', 'apple', 'carrot']
stock = [2, 3, 4]
def get_stock(item_name, items, stock):
item_index = items.index(item_name)
how_many = stock[item_index]
return how_many
print(get_stock('apple', items, stock))
items = [('banana', 2), ('apple', 3), ('carrot', 4)]
def get_stock(item_name_to_find, items_with_stock):
for item_to_stock in items_with_stock:
item_name = item_to_stock[0]
stock = item_to_stock[1]
if item_name == item_name_to_find:
return stock
get_stock('apple', items)
ages = {'Yam': 27, 'Methuselah': 969, 'Baby Groot': 3}
age_of_my_elephants = {}
names = ['Yam', 'Mathuselah', 'Baby Groot']
names[2]
items = {'banana': 2, 'apple': 3, 'carrot': 4}
items['banana']
items['melon'] = 1
items['melon'] = items['melon'] + 4
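Incrementing a count that may not exist yet (as with 'melon' above) is common enough that the standard library covers it — two idioms worth knowing:

```python
from collections import Counter

stock_counts = Counter({'banana': 2, 'apple': 3, 'carrot': 4})
stock_counts['melon'] += 1     # a missing key counts as 0, so no KeyError
stock_counts['melon'] += 4
print(stock_counts['melon'])   # -> 5

# With a plain dict, dict.get with a default does the same job
plain = {'banana': 2}
plain['melon'] = plain.get('melon', 0) + 5
print(plain['melon'])          # -> 5
```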
favorite_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
for something in favorite_animals:
print(something)
favorite_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print('favorite_animals items:')
for key in favorite_animals:
value = favorite_animals[key]
    print(f"{key:10} -----> {value}.")  # Small exercise: spot the trick that makes this print so neatly
print(list(favorite_animals.items()))
print('favorite_animals items:')
for key, value in favorite_animals.items():
print(f"{key:10} -----> {value}.")
print('favorite_animals items:')
for character, animal in favorite_animals.items():
print(f"{character:10} -----> {animal}.")
empty_dict = {}
empty_dict['DannyDin']
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print('Achiles' in loved_animals)
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
if 'Achiles' in loved_animals:
value = loved_animals['Achiles']
else:
value = 'Pony'
print(value)
def get_value(dictionary, key, default_value):
if key in dictionary:
return dictionary[key]
else:
return default_value
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print("Mad hatter: " + get_value(loved_animals, 'Mad hatter', 'Pony'))
print("Queen of hearts: " + get_value(loved_animals, 'Queen of hearts', 'Pony'))
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print("Mad hatter: " + loved_animals.get('Mad hatter', 'Pony'))
print("Queen of hearts: " + loved_animals.get('Queen of hearts', 'Pony'))
loved_animals = {'Alice': 'Cat', 'Mad hatter': 'Hare', 'Achiles': 'Tortoise'}
print(loved_animals.get('Mad hatter'))
print(loved_animals.get('Queen of hearts'))
decryption_key = {
'O': 'A', 'D': 'B', 'F': 'C', 'I': 'D', 'H': 'E',
'G': 'F', 'L': 'G', 'C': 'H', 'K': 'I', 'Q': 'J',
'B': 'K', 'J': 'L', 'Z': 'M', 'V': 'N', 'S': 'O',
'R': 'P', 'M': 'Q', 'X': 'R', 'E': 'S', 'P': 'T',
'A': 'U', 'Y': 'V', 'W': 'W', 'T': 'X', 'U': 'Y',
'N': 'Z',
}
SONG = """
sc, kg pchxh'e svh pckvl k covl svps
pcop lhpe zh pcxsalc pch vklcp
k okv'p lsvvo is wcop k isv'p wovp ps
k'z lsvvo jkyh zu jkgh
eckvkvl jkbh o ikozsvi, xsjjkvl wkpc pch ikfh
epovikvl sv pch jhilh, k ecsw pch wkvi csw ps gju
wchv pch wsxji lhpe kv zu gofh
k eou, coyh o vkfh iou
coyh o vkfh iou
"""
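The exercise asks you to decode the song with decryption_key; one compact approach maps each character through the dictionary, leaving characters that are not in the key (spaces, punctuation) unchanged — shown here on a tiny cipher of its own so it is self-contained:

```python
tiny_key = {'O': 'A', 'D': 'B', 'F': 'C'}

def decrypt(text, key):
    # dict.get(ch, ch) falls back to the character itself for spaces etc.
    return ''.join(key.get(ch, ch) for ch in text.upper())

print(decrypt('fod fod', tiny_key))   # -> 'CAB CAB'
```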
encryption_key = {
'T': '1', 'F': '6', 'W': 'c', 'Y': 'h', 'B': 'k',
'P': '~', 'H': 'q', 'S': 's', 'E': 'w', 'Q': '@',
'U': '$', 'M': 'i', 'I': 'l', 'N': 'o', 'J': 'y',
'Z': 'z', 'G': '!', 'L': '#', 'A': '&', 'O': '+',
'D': ',', 'R': '-', 'C': ':', 'V': '?', 'X': '^',
'K': '|',
}
SONG = """
l1's ih #l6w
l1's o+c +- ow?w-
l &lo'1 !+oo& #l?w 6+-w?w-
l y$s1 c&o1 1+ #l?w cql#w l'i &#l?w
(l1's ih #l6w)
ih qw&-1 ls #l|w &o +~wo ql!qc&h
#l|w 6-&o|lw s&l,
l ,l, l1 ih c&h
l y$s1 c&o1 1+ #l?w cql#w l'i &#l?w
l1's ih #l6w
"""
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2: Write a program that reads two integers m and n (n not equal to 0) from the keyboard and asks the user's intent: if a sum is requested, compute and print the sum from m to n; if a product, compute and print the product from m to n; if a remainder, compute and print the remainder of m divided by n; otherwise compute and print m floor-divided by n.
Step2: Exercise 3: Write a program that gives protective advice based on Beijing's PM2.5 smog reading. For example, when the PM2.5 value exceeds 500, you should turn on an air purifier, wear an anti-smog mask, and so on.
Step3: Warm-up exercise: write a program that can display a blank line on the screen.
Step4: Exercise 4: Singular-to-plural conversion for English words — given an English word in singular form, output its plural form, or give advice on how to pluralize it.
|
<ASSISTANT_TASK:>
Python Code:
name = input('Please enter your name')
print('Hello', name)
print('Please enter your birth month and day')
month = int(input('Month: '))
date = int(input('Day: '))
if month == 4:
    if date < 20:
        print(name, 'you are an Aries')
    else:
        print(name, 'you are a Taurus with real character')
if month == 5:
    if date < 21:
        print(name, 'you are a Taurus with real character')
    else:
        print(name, 'you are a Gemini')
if month == 6:
    if date < 22:
        print(name, 'you are a Gemini')
    else:
        print(name, 'you are a Cancer')
if month == 7:
    if date < 23:
        print(name, 'you are a Cancer')
    else:
        print(name, 'you are a Leo')
if month == 8:
    if date < 23:
        print(name, 'you are a Leo')
    else:
        print(name, 'you are a Virgo')
if month == 9:
    if date < 24:
        print(name, 'you are a Virgo')
    else:
        print(name, 'you are a Libra')
if month == 10:
    if date < 24:
        print(name, 'you are a Libra')
    else:
        print(name, 'you are a Scorpio')
if month == 11:
    if date < 23:
        print(name, 'you are a Scorpio')
    else:
        print(name, 'you are a Sagittarius')
if month == 12:
    if date < 22:
        print(name, 'you are a Sagittarius')
    else:
        print(name, 'you are a Capricorn')
if month == 1:
    if date < 20:
        print(name, 'you are a Capricorn')
    else:
        print(name, 'you are an Aquarius')
if month == 2:
    if date < 19:
        print(name, 'you are an Aquarius')
    else:
        print(name, 'you are a Pisces')
if month == 3:
    if date < 22:
        print(name, 'you are a Pisces')
    else:
        print(name, 'you are an Aries')
m = int(input('Please enter an integer, then press Enter'))
n = int(input('Please enter a nonzero integer'))
intend = input('Please enter the operation you want, e.g. + * %')
if m<n:
min_number = m
else:
min_number = n
total = min_number
if intend == '+':
if m<n:
while m<n:
m = m + 1
total = total + m
print(total)
else:
while m > n:
n = n + 1
total = total + n
print(total)
elif intend == '*':
if m<n:
while m<n:
m = m + 1
total = total * m
print(total)
else:
while m > n:
n = n + 1
total = total * n
print(total)
elif intend == '%':
print(m % n)
else:
print(m // n)
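The accumulation loops above can be cross-checked against Python's built-ins — sum over a range, and math.prod (available from Python 3.8 on) for the product:

```python
import math

m, n = 2, 5
print(sum(range(m, n + 1)))        # 2+3+4+5 -> 14
print(math.prod(range(m, n + 1)))  # 2*3*4*5 -> 120
print(m % n, m // n)               # remainder and integer division
```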
number = int(input("What is Beijing's current PM2.5 reading? Please enter an integer"))
if number > 500:
    print('You should turn on an air purifier and wear an anti-smog mask')
elif number > 300:
    print('Stay indoors as much as possible; wear an anti-smog mask when going out')
elif number > 200:
    print('Avoid outdoor activities')
elif number > 100:
    print('Light pollution; outdoor activities are fine and a mask is optional')
else:
    print('No special precautions needed')
print('The blank line is me')
print('The blank line is me')
print('The blank line is me')
print()
print('I am the blank line')
word = input('Please enter a word, then press Enter')
if word.endswith('s') or word.endswith('sh') or word.endswith('ch') or word.endswith('x'):
print(word,'es',sep = '')
elif word.endswith('y'):
if word.endswith('ay') or word.endswith('ey') or word.endswith('iy') or word.endswith('oy') or word.endswith('uy'):
print(word,'s',sep = '')
else:
word = word[:-1]
print(word,'ies',sep = '')
elif word.endswith('f'):
word = word[:-1]
print(word,'ves',sep = '')
elif word.endswith('fe'):
word = word[:-2]
print(word,'ves',sep = '')
elif word.endswith('o'):
    print('Add s or es to the end of the word')
else:
print(word,'s',sep = '')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following cell checks that all expected packages and correct versions are installed. SimpleITK may possibly work with other versions of these packages, but these are the ones we rely on.
Step2: We expect that you have an external image viewer installed.
Step3: Download all of the data in advance if you expect to be working offline (may take a couple of minutes).
|
<ASSISTANT_TASK:>
Python Code:
import SimpleITK as sitk
from downloaddata import fetch_data, fetch_data_all
print(sitk.Version())
from __future__ import print_function
import importlib
from distutils.version import LooseVersion
required_packages = {'IPython' : '4.0.0',
'numpy' : '1.9.2',
'matplotlib' : '1.4.2',
'ipywidgets' : '4.0.x'
}
problem_packages = list()
# Iterate over the required packages: If the package is not installed
# ignore the exception. If it is installed check the version and remove
# from dictionary. In the end the dictionary contains the packages
# that are not installed.
for package, required_version in required_packages.items():
try:
p = importlib.import_module(package)
# Current release of ipywidgets has a bug with the __version__
# attribute. This was fixed in master, so for now we do not
# check ipywidgets version.
if package != 'ipywidgets':
installed_version = LooseVersion(p.__version__)
required_version = LooseVersion(required_version)
if installed_version < required_version:
print('{0} - required version: {1} installed version: {2}'.format(
p.__name__, required_version, installed_version))
problem_packages.append(package)
except ImportError:
problem_packages.append(package)
if len(problem_packages) == 0:
print('All is well.')
else:
print('The following packages are required but not installed: ' \
+ ', '.join(problem_packages))
# Uncomment the line below to change the default external viewer to your viewer of choice and test that it works.
#%env SITK_SHOW_COMMAND /Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP
# Retrieve an image from the network, read it, and display it using the external viewer
sitk.Show(sitk.ReadImage(fetch_data("SimpleITK.jpg")))
fetch_data_all("Data", "Data/manifest.json")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Attribute Information
Step2: Calculating Mutual Information Score Directly
Step3: Mutual Information is defined by
Step4: Next step is to calculate $H(Y \mid X)$.
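For reference, the standard identities these steps implement, in the same natural-log units the code uses:

```latex
I(X;Y) = H(Y) - H(Y \mid X), \qquad
H(Y) = -\sum_y p(y)\,\log p(y), \qquad
H(Y \mid X) = -\sum_x p(x)\sum_y p(y \mid x)\,\log p(y \mid x)
```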
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
lenses_data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/lenses/lenses.data', sep=r'\s+', header=None)
lenses_data.columns= ['index', 'age', 'spec_type', 'astigmatic', 'tear_prod_rate', 'lens_type']
lenses_data = lenses_data.set_index('index')
lenses_data.head()
lens_type_names = {1: 'hard', 2: 'soft', 3: 'no_lense'}
lenses_data = lenses_data.assign(lens_type=lenses_data.lens_type.map(lambda n: lens_type_names[n]))
type_names = {1: 'no', 2: 'yes'}
lenses_data = lenses_data.assign(astigmatic=lenses_data.astigmatic.map(lambda n: type_names[n]))
lenses_data.head()
from sklearn.metrics import mutual_info_score
mutual_info_score(lenses_data['lens_type'], lenses_data['astigmatic'])
y_counts = lenses_data['astigmatic'].value_counts()
y_counts
P_y = lenses_data['astigmatic'].value_counts(normalize=True)
P_y
H_y = - P_y.dot(np.log(P_y))
H_y
# cont_table = pd.crosstab(lenses_data['astigmatic'], lenses_data['lens_type'])
cont_table
n_elems = cont_table.sum(axis=0)
n_elems
P_cond = cont_table / n_elems
P_cond
P_x = lenses_data['lens_type'].value_counts(normalize=True)
P_x
P_cond_aug = P_cond.where(P_cond != 0, other=1.0)
P_cond_aug
H_temp = P_cond.mul(np.log(P_cond_aug)).sum(axis=0)
H_temp
H_Y_given_X = - H_temp.dot(P_x)
H_Y_given_X
mutual_info = H_y - H_Y_given_X
mutual_info
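As a cross-check on the hand computation, the same quantity can be evaluated on a tiny, hand-checkable example (sklearn's mutual_info_score also reports MI in nats, matching the natural logs used above):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

x = ['a', 'a', 'b', 'b']
y = ['u', 'u', 'v', 'v']   # y is fully determined by x

# With a 50/50 split, H(Y) = log 2 and H(Y|X) = 0, so I(X;Y) = log 2 nats
print(mutual_info_score(x, y), np.log(2))
```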
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: That seems to indicate this identity
Step2: Hmm... now that is a very interesting structure. I'm even more convinced that there's a nice compact way of writing that, but I have no idea what it is, and probably involves a lot more Pure Math than I know. Also, this approach works fine, but starts taking longer and longer as q gets bigger. But we should be able to just compute the coefficients on each entry in the $M$ matrix, giving us $W$, a 6x6x36 tensor where $\mathcal{M} = W vec(M)$.
Step3: Or we can do the same thing a little more efficiently by doing slices through the $\mathcal{C}$ matrix, since that's all the $M$ matrix is doing.
Step4: Either way, we get a very pretty pattern
Step5: (each $i,j$ sub-square shows the coefficients of $m_{i,j}$ for the final $\mathcal{M}$ matrix)
Step6: This explains all the interesting symmetries in the matrix, but is still pretty slow to compute. Mind you, it does indicate pretty clearly that we can do a lot of caching, since the order of the arguments doesn't matter at all. Indeed, the result doesn't even depend on q!
|
<ASSISTANT_TASK:>
Python Code:
from scipy.special import legendre
q = 20
n_steps = 100000
t = np.linspace(0, 1, n_steps)
P = np.asarray([legendre(i)(2*t - 1) for i in range(q)]).T
total = np.zeros((q,q))
for Pt in P:
Ct = np.outer(Pt, Pt)
total += Ct / n_steps
plt.figure(figsize=(12,6))
plt.subplot(1, 2, 1)
plt.imshow(total)
plt.colorbar()
plt.subplot(1, 2, 2)
plt.plot(1.0/(1+np.arange(q)*2), c='g', lw=8, ls='--', label='$1/(1+2i)$')
plt.plot(np.diag(total), label=r'$\mathcal{C}[i,i]$', c='k')
plt.legend()
plt.show()
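The identity the plot suggests — that the shifted Legendre polynomials satisfy the integral of P_i(2t-1)**2 over [0, 1] equal to 1/(2i+1) — can be confirmed symbolically for the first few orders:

```python
import sympy

t = sympy.Symbol('t')
for i in range(6):
    Pi = sympy.legendre(i, 2*t - 1)          # shifted Legendre polynomial
    val = sympy.integrate(Pi**2, (t, 0, 1))  # exact rational integration
    assert val == sympy.Rational(1, 2*i + 1)
print('identity holds for i = 0..5')
```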
import sympy
t = sympy.Symbol('t')
def leg_poly(n):
# from https://en.wikipedia.org/wiki/Legendre_polynomials#Rodrigues'_formula_and_other_explicit_formulas
    return 1 / (sympy.Rational(2**n) * sympy.factorial(n)) * sympy.diff((t**2-1)**n, t, n)
#return sympy.sqrt((sympy.Rational(2*n+1))/2) /(sympy.Rational(2**n * np.math.factorial(n))) * sympy.diff((t**2-1)**n,t,n)
def compute_M(q):
P = sympy.Matrix([leg_poly(i) for i in range(q)])
C = P*P.T
M = sympy.Matrix([[sympy.Symbol('m_{%d,%d}'%(i,j)) for i in range(q)] for j in range(q)])
# note that we integrate from -1 to 1 and then divide by 2 as these are the non-shifted Legendre Polynomials
return sympy.integrate(C*M*C, (t, -1, 1))/2
compute_M(q=1)
compute_M(q=2)
compute_M(q=3)
compute_M(q=4)
def basis_matrix(q, i, j):
P = sympy.Matrix([leg_poly(i) for i in range(q)])
C = P*P.T
m = sympy.Matrix([[1 if (i==ii and j==jj) else 0 for ii in range(q)] for jj in range(q)])
return sympy.integrate(C*sympy.Matrix(m)*C, (t, -1, 1))/2
q = 6
w = []
for i in range(q):
for j in range(q):
w.append(np.array(basis_matrix(q=q, i=i, j=j)))
w = np.array(w)
w
def basis_matrix(q, i, j):
P = sympy.Matrix([leg_poly(i) for i in range(q)])
C = P*P.T
m = C[:,i]*C[j,:]
return sympy.integrate(m, (t, -1, 1))/2
q = 6
w = []
for i in range(q):
for j in range(q):
w.append(np.array(basis_matrix(q=q, i=i, j=j)))
w = np.array(w)
plt.figure(figsize=(12,12))
plt.imshow(w.astype(float).reshape(q**2, q**2), cmap='gray_r')
for i in range(q):
plt.axvline(i*q-0.5)
plt.axhline(i*q-0.5)
plt.colorbar()
plt.show()
def m_element_method_1(q, i, j, n, m):
P = sympy.Matrix([leg_poly(i) for i in range(q)])
C = P*P.T
m = (C[:,i]*C[j,:])[n,m]
return sympy.integrate(m, (t, -1, 1))/2
# these are identical!
def m_element_method_2(q, i, j, n, m):
return sympy.integrate(leg_poly(i)*leg_poly(j)*leg_poly(n)*leg_poly(m), (t, -1, 1))/2
import functools
@functools.lru_cache()
def m_element(i, j, n, m):
return sympy.integrate(leg_poly(i)*leg_poly(j)*leg_poly(n)*leg_poly(m), (t, -1, 1))/2
def m_element_fast(i, j, n, m):
args = tuple(sorted([i, j, n, m]))
return m_element(*args)
q=6
w = np.zeros((q,q,q,q))
for i in range(q):
for j in range(q):
for m in range(q):
for n in range(q):
w[i,j,n,m] = m_element_fast(i,j,m,n)
plt.figure(figsize=(12,12))
plt.imshow(w.astype(float).reshape(q**2, q**2), cmap='gray_r')
for i in range(q):
plt.axvline(i*q-0.5)
plt.axhline(i*q-0.5)
plt.colorbar()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interpolated SiGe band structure
Step2: Plotting misc parts of Brillouin zones
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import eigvalsh
from collections import namedtuple
import TB
TB.band(TB.Si)
TB.band(TB.GaAs)
TB.band(TB.Ge)
def SiGe_band(x=0.2):
Si_data = TB.bandpts(TB.Si)
Ge_data = TB.bandpts(TB.Ge)
data = (1-x)*Si_data + x*Ge_data
TB.bandplt("SiGe, %%Ge=%.2f" % x,data)
return
SiGe_band(0)
SiGe_band(0.1)
SiGe_band(0.25)
SiGe_band(0.37)
Ge_CB = TB.bandpts(TB.Ge)[:,4]
Si_CB = TB.bandpts(TB.Si)[:,4]
nk = len(Si_CB)
n = (nk-2)//3
plt.plot(Si_CB)
plt.plot(Ge_CB)
TB.band_labels(n)
plt.axis(xmax=3*n+1)
plt.plot(Si_CB,label='Si')
plt.plot(Ge_CB,label='Ge')
plt.plot(0.9*Si_CB + 0.1*Ge_CB,label='Si_0.9 Ge_0.1')
TB.band_labels(n)
plt.axis(xmax=3*n+1)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
min_Si = min(Si_CB)
min_Ge = min(Ge_CB)
print min_Si, min_Ge
# min_Si - min_Ge = 0.12
Si_CB_shifted = Si_CB - min_Si + min_Ge + 0.12
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: for btn, sconc is passed as a normal util_3d-compatible argument, no need for the list anymore
Step2: or we pass in the sconc2 kwarg explicitly
Step3: error raised for unrecognized kwargs
Step4: same behavior for SSM
Step5: DSP does not need the multiDiff flag for multiple species. But in this case, arrays are by layer, not by layer, row, column.
Step6: If we set multiDiff to True, then there will be mcomp (not ncomp, according to the MT3D supplemental guide) 3-D dmcoef arrays.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import flopy
reload(flopy)
from flopy.modflow import *
from flopy.mt3d import *
nlay, nrow, ncol = 3, 10, 10
ml = Modflow("test")
dis = ModflowDis(ml,nlay=nlay, nrow=nrow, ncol=ncol)
mt = Mt3dms(modflowmodel=ml)
btn = Mt3dBtn(mt, sconc=1.0,ncomp=2)
mt = Mt3dms(modflowmodel=ml)
btn = Mt3dBtn(mt,sconc=1.0,ncomp=2,sconc2=1.5)
print btn.sconc[1].name_base
mt = Mt3dms(modflowmodel=ml)
btn = Mt3dBtn(mt,sconc=1.0,ncomp=2,sconc3=1.5)
mt = Mt3dms(modflowmodel=ml)
btn = Mt3dBtn(mt,sconc=1.0,ncomp=2, mcomp=2, sconc2=1.5)
ssm = Mt3dSsm(mt,crch=1.0,crch2=1.0,cevt=0.5,cevt2=1.5)
print ssm.crch,ssm.cevt
dsp = Mt3dDsp(mt,dmcoef2=1.0)
dsp = Mt3dDsp(mt,dmcoef2=1.0,multiDiff=True)
print dsp.dmcoef
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load EEGLAB example data (a small EEG dataset)
Step2: Create Epochs
Step3: Plot using
Step4: Plot using median
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Jona Sassenhagen <jona.sassenhagen@gmail.com>
#
# License: BSD-3-Clause
import mne
from mne.event import define_target_events
from mne.channels import make_1020_channel_selections
print(__doc__)
data_path = mne.datasets.testing.data_path()
fname = data_path / 'EEGLAB' / 'test_raw.set'
event_id = {"rt": 1, "square": 2} # must be specified for str events
raw = mne.io.read_raw_eeglab(fname)
mapping = {
'EEG 000': 'Fpz', 'EEG 001': 'EOG1', 'EEG 002': 'F3', 'EEG 003': 'Fz',
'EEG 004': 'F4', 'EEG 005': 'EOG2', 'EEG 006': 'FC5', 'EEG 007': 'FC1',
'EEG 008': 'FC2', 'EEG 009': 'FC6', 'EEG 010': 'T7', 'EEG 011': 'C3',
'EEG 012': 'C4', 'EEG 013': 'Cz', 'EEG 014': 'T8', 'EEG 015': 'CP5',
'EEG 016': 'CP1', 'EEG 017': 'CP2', 'EEG 018': 'CP6', 'EEG 019': 'P7',
'EEG 020': 'P3', 'EEG 021': 'Pz', 'EEG 022': 'P4', 'EEG 023': 'P8',
'EEG 024': 'PO7', 'EEG 025': 'PO3', 'EEG 026': 'POz', 'EEG 027': 'PO4',
'EEG 028': 'PO8', 'EEG 029': 'O1', 'EEG 030': 'Oz', 'EEG 031': 'O2'
}
raw.rename_channels(mapping)
raw.set_channel_types({"EOG1": 'eog', "EOG2": 'eog'})
raw.set_montage('standard_1020')
events = mne.events_from_annotations(raw, event_id)[0]
# define target events:
# 1. find response times: distance between "square" and "rt" events
# 2. extract A. "square" events B. followed by a button press within 700 msec
tmax = 0.7
sfreq = raw.info["sfreq"]
reference_id, target_id = 2, 1
new_events, rts = define_target_events(events, reference_id, target_id, sfreq,
tmin=0., tmax=tmax, new_id=2)
epochs = mne.Epochs(raw, events=new_events, tmax=tmax + 0.1,
event_id={"square": 2})
# Parameters for plotting
order = rts.argsort() # sorting from fast to slow trials
selections = make_1020_channel_selections(epochs.info, midline="12z")
# The actual plots (GFP)
epochs.plot_image(group_by=selections, order=order, sigma=1.5,
overlay_times=rts / 1000., combine='gfp',
ts_args=dict(vlines=[0, rts.mean() / 1000.]))
epochs.plot_image(group_by=selections, order=order, sigma=1.5,
overlay_times=rts / 1000., combine='median',
ts_args=dict(vlines=[0, rts.mean() / 1000.]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: To merge these two lists together we will use the zip function.
Step3: We are basically done creating the data set. We now will use the pandas library to export this data set into a csv file.
Step4: Export the dataframe to a csv file. We can name the file births1880.csv. The function to_csv will be used to export the file. The file will be saved in the same location of the notebook unless specified otherwise.
Step5: The only parameters we will use are index and header. Setting these parameters to False will prevent the index and header names from being exported. Change the values of these parameters to get a better understanding of their use.
Step6: Get Data
Step7: Even though this function has many parameters, we will simply pass it the location of the text file.
Step8: Notice the r before the string. Since backslashes are special characters, prefixing the string with an r makes it a raw string, so the backslashes are taken literally.
Step9: This brings us to our first problem of the exercise. The read_csv function treated the first record in the csv file as the header names. This is obviously not correct since the text file did not provide us with header names.
Step10: If we wanted to give the columns specific names, we would have to pass another parameter called names. We can also omit the header parameter.
Step11: You can think of the numbers [0,1,2,3,4] as the row numbers in an Excel file. In pandas these are part of the index of the dataframe. You can think of the index as the primary key of a SQL table, with the exception that an index is allowed to have duplicates.
Step12: Prepare Data
Step13: As you can see, the Births column is of type int64, so no floats (decimal numbers) or alphanumeric characters will be present in this column.
Step14: Present Data
|
<ASSISTANT_TASK:>
Python Code:
# General syntax to import specific functions in a library:
##from (library) import (specific library function)
from pandas import DataFrame, read_csv
# General syntax to import a library but no functions:
##import (library) as (give the library a nickname/alias)
import matplotlib.pyplot as plt
import pandas as pd #this is how I usually import pandas
import sys #only needed to determine Python version number
import matplotlib #only needed to determine Matplotlib version number
# Enable inline plotting
%matplotlib inline
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
print('Matplotlib version ' + matplotlib.__version__)
# The initial set of baby names and birth rates
names = ['Bob','Jessica','Mary','John','Mel']
births = [968, 155, 77, 578, 973]
zip?
BabyDataSet = list(zip(names,births))
BabyDataSet
df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])
df
df.to_csv?
df.to_csv('births1880.csv',index=False,header=False)
read_csv?
Location = r'C:\Users\david\notebooks\update\births1880.csv'
df = pd.read_csv(Location)
df
df = pd.read_csv(Location, header=None)
df
df = pd.read_csv(Location, names=['Names','Births'])
df
import os
os.remove(Location)
# Check data type of the columns
df.dtypes
# Check data type of Births column
df.Births.dtype
# Method 1:
Sorted = df.sort_values(['Births'], ascending=False)
Sorted.head(1)
# Method 2:
df['Births'].max()
# Create graph
df['Births'].plot()
# Maximum value in the data set
MaxValue = df['Births'].max()
# Name associated with the maximum value
MaxName = df['Names'][df['Births'] == df['Births'].max()].values
# Text to display on graph
Text = str(MaxValue) + " - " + MaxName
# Add text to graph
plt.annotate(Text, xy=(1, MaxValue), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
print("The most popular name")
df[df['Births'] == df['Births'].max()]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preprocess the database
Step2: Begin preprocessing the language data
Step3: Train the language detection model and save as an object.
Step4: Test the model trained during the last step.
|
<ASSISTANT_TASK:>
Python Code:
import LangByWord as lbw
import BuildTrainingDataFiles as btdf
# Set the input directory for preprocessing here:
base_input_dir = '/Users/frank/data/LanguageDetectionModel/txt'
# Set the output directory for the preprocessing here:
base_output_dir = '/Users/frank/data/LanguageDetectionModel/exp_data_test'
build_obj = btdf.BuildTrainingDataFiles()
build_obj.start_building(base_input_dir, base_output_dir)
lo = lbw.LangByWord()
lo.train(base_output_dir, max_words_per_lang=0, report_freq=0)
lo.print_most_prob_words()
object_file = 'LbyW_obj.pck'
lo.save_object_to_file(object_file)
test_file_name = '/Users/frank/data/LanguageDetectionModel/europarl.test'
lo2 = lbw.LangByWord.load_object_from_file(object_file)
lo2.test_on_test(test_file_name, report_freq=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Whenever people compare Bayesian inference with conventional approaches, one of the questions that comes up most often is something like, "What about p-values?"
Step2: I'll use Pandas to load the data into a DataFrame.
Step3: The Treatment column indicates whether each student was in the treated or control group.
Step4: Here are CDFs of the scores for the two groups and summary statistics.
Step6: There is overlap between the distributions, but it looks like the scores are higher in the treated group.
Step7: make_uniform takes as parameters
Step8: I chose the lower and upper bounds by trial and error.
Step9: Now we can use make_joint to make the joint prior distribution.
Step10: And we'll start by working with the data from the control group.
Step11: In the next section we'll compute the likelihood of this data for each pair of parameters in the prior distribution.
Step12: Now we can use norm.pdf to compute the probability density of each score for each hypothetical pair of parameters.
Step13: The result is a 3-D array. To compute likelihoods, I'll multiply these densities along axis=2, which is the axis of the data
Step14: The result is a 2-D array that contains the likelihood of the entire dataset for each hypothetical pair of parameters.
Step16: The result is a DataFrame that represents the joint posterior distribution.
Step17: Here are the updates for the control and treatment groups
Step18: And here's what they look like
Step19: Along the $x$-axis, it looks like the mean score for the treated group is higher.
Step20: Here's what they look like
Step21: In both cases the posterior probabilities at the ends of the range are near zero, which means that the bounds we chose for the prior distribution are wide enough.
Step22: There is a 98% chance that the mean in the treated group is higher.
Step23: There are two things to be careful about when you use methods like sub_dist.
Step24: In the worst case, the size of the result can be the product of the sizes of the originals.
Step25: There are two ways to work around that limitation. One is to plot the CDF, which smooths out the noise
Step27: The other option is to use kernel density estimation (KDE) to make a smooth approximation of the PDF on an equally-spaced grid, which is what this function does
Step28: kde_from_pmf takes as parameters a Pmf and the number of places to evaluate the KDE.
Step29: The mean of this distribution is almost 10 points on a test where the mean is around 45, so the effect of the treatment seems to be substantial.
Step30: We can use credible_interval to compute a 90% credible interval.
Step31: Based on this interval, we are pretty sure the treatment improves test scores by 2 to 17 points.
Step32: Now suppose we draw a sample from this distribution with sample size n=20, and compute the mean of the sample, which I'll call m, and the standard deviation of the sample, which I'll call s.
Step33: The summary statistics, m and s, are not too far from the parameters $\mu$ and $\sigma$, so it seems like they are not too unlikely.
Step34: This is the "sampling distribution of the mean".
Step35: Now let's compute the likelihood of the observed value of s, which is 18.
Step36: Then we create a chi2 object to represent the distribution of t
Step37: Now we can compute the likelihood of t
Step38: Finally, because m and s are independent, their joint likelihood is the product of their likelihoods
Step39: Now we can compute the likelihood of the data for any values of $\mu$ and $\sigma$, which we'll use in the next section to do the update.
Step40: Update with Summary Statistics
Step41: The result is a dictionary that maps from group name to a tuple that contains the sample size, n, the sample mean, m, and the sample standard deviation s, for each group.
Step42: I'll make a mesh with hypothetical values of mu on the x axis and values of sigma on the y axis.
Step43: Now we can compute the likelihood of seeing the sample mean, m, for each pair of parameters.
Step44: And we can compute the likelihood of the sample standard deviation, s, for each pair of parameters.
Step45: Finally, we can do the update with both likelihoods
Step47: To compute the posterior distribution for the treatment group, I'll put the previous steps in a function
Step48: Here's the update for the treatment group
Step49: And here are the results.
Step50: Visually, these posterior joint distributions are similar to the ones we computed using the entire dataset, not just the summary statistics.
Step51: And compare them to results we got using the entire dataset (the dashed lines).
Step52: The posterior distributions based on summary statistics are similar to the posteriors we computed using the entire dataset, but in both cases they are shorter and a little wider.
Step53: I'll create a norm object to represent this distribution.
Step54: norm provides rvs, which generates random values from the distribution.
Step55: The result is an array with 1000 rows, each containing a sample of 20 simulated test scores.
Step57: Now, let's compare the distribution of these means to dist_m.
Step58: pmf_from_dist takes an object representing a continuous distribution, evaluates its probability density function at equally spaced points between low and high, and returns a normalized Pmf that approximates the distribution.
Step59: Now let's compare this theoretical distribution to the means of the samples.
Step60: The following figure shows the two distributions.
Step61: The theoretical distribution and the distribution of sample means are in accord.
Step62: Now we'll compute the transformed values, $t = n s^2 / \sigma^2$.
Step63: We expect the transformed values to follow a chi-square distribution with parameter $n-1$.
Step64: We can use pmf_from_dist again to make a discrete approximation.
Step65: And we'll use kde_from_sample to estimate the distribution of the sample standard deviations.
Step66: Now we can compare the theoretical distribution to the distribution of the standard deviations.
Step67: The distribution of transformed standard deviations agrees with the theoretical distribution.
Step68: Their correlation is near zero, which is consistent with their being independent.
Step69: It looks like the axes of the ellipses are aligned with the axes, which indicates that the variables are independent.
Step71: Exercise
Step72: Here's how we can use it to sample pairs from the posterior distributions for the two groups.
Step74: The result is an array of tuples, where each tuple contains a possible pair of values for $\mu$ and $\sigma$.
Step75: Exercise
Step77: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
    !pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
    filename = basename(url)
    if not exists(filename):
        from urllib.request import urlretrieve
        local, _ = urlretrieve(url, filename)
        print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/drp_scores.csv')
import pandas as pd
df = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\t')
df.head(3)
grouped = df.groupby('Treatment')
responses = {}
for name, group in grouped:
    responses[name] = group['Response']
from empiricaldist import Cdf
from utils import decorate
for name, response in responses.items():
    cdf = Cdf.from_seq(response)
    cdf.plot(label=name)
decorate(xlabel='Score',
ylabel='CDF',
title='Distributions of test scores')
from empiricaldist import Pmf
def make_uniform(qs, name=None, **options):
    """Make a Pmf that represents a uniform distribution."""
    pmf = Pmf(1.0, qs, **options)
    pmf.normalize()
    if name:
        pmf.index.name = name
    return pmf
import numpy as np
qs = np.linspace(20, 80, num=101)
prior_mu = make_uniform(qs, name='mean')
qs = np.linspace(5, 30, num=101)
prior_sigma = make_uniform(qs, name='std')
from utils import make_joint
prior = make_joint(prior_mu, prior_sigma)
data = responses['Control']
data.shape
mu_mesh, sigma_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
mu_mesh.shape
from scipy.stats import norm
densities = norm(mu_mesh, sigma_mesh).pdf(data_mesh)
densities.shape
likelihood = densities.prod(axis=2)
likelihood.shape
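An illustrative aside (toy numbers, not from the dataset above) showing how `prod(axis=2)` collapses the data axis into a single likelihood per parameter pair:

```python
import numpy as np

# 2 sigma values x 3 mu values x 4 data points, each with density 0.5
toy_densities = np.full((2, 3, 4), 0.5)

# multiplying along axis=2 leaves one likelihood per (sigma, mu) pair
toy_likelihood = toy_densities.prod(axis=2)
print(toy_likelihood.shape)   # (2, 3)
print(toy_likelihood[0, 0])   # 0.5**4 == 0.0625
```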
from utils import normalize
posterior = prior * likelihood
normalize(posterior)
posterior.shape
def update_norm(prior, data):
    """Update the prior based on data."""
    mu_mesh, sigma_mesh, data_mesh = np.meshgrid(
        prior.columns, prior.index, data)
    densities = norm(mu_mesh, sigma_mesh).pdf(data_mesh)
    likelihood = densities.prod(axis=2)
    posterior = prior * likelihood
    normalize(posterior)
    return posterior
data = responses['Control']
posterior_control = update_norm(prior, data)
data = responses['Treated']
posterior_treated = update_norm(prior, data)
import matplotlib.pyplot as plt
from utils import plot_contour
plot_contour(posterior_control, cmap='Blues')
plt.text(49.5, 18, 'Control', color='C0')
cs = plot_contour(posterior_treated, cmap='Oranges')
plt.text(57, 12, 'Treated', color='C1')
decorate(xlabel='Mean (mu)',
ylabel='Standard deviation (sigma)',
title='Joint posterior distributions of mu and sigma')
from utils import marginal
pmf_mean_control = marginal(posterior_control, 0)
pmf_mean_treated = marginal(posterior_treated, 0)
pmf_mean_control.plot(label='Control')
pmf_mean_treated.plot(label='Treated')
decorate(xlabel='Population mean (mu)',
ylabel='PDF',
title='Posterior distributions of mu')
Pmf.prob_gt(pmf_mean_treated, pmf_mean_control)
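What `Pmf.prob_gt` computes can be sketched by hand with plain NumPy (toy distributions, assuming the two variables are independent):

```python
import numpy as np

# two toy discrete distributions: quantities and probabilities
qs1, ps1 = np.array([1, 2, 3]), np.array([1/3, 1/3, 1/3])
qs2, ps2 = np.array([1, 3]), np.array([1/2, 1/2])

# P(X1 > X2) = sum of p1 * p2 over all pairs with q1 > q2
outer = np.outer(ps1, ps2)          # joint probability of each pair
gt = qs1[:, None] > qs2[None, :]    # which pairs satisfy q1 > q2
print(outer[gt].sum())              # (2>1) and (3>1) each contribute 1/6
```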
pmf_diff = Pmf.sub_dist(pmf_mean_treated, pmf_mean_control)
len(pmf_mean_treated), len(pmf_mean_control), len(pmf_diff)
pmf_diff.plot()
decorate(xlabel='Difference in population means',
ylabel='PDF',
title='Posterior distribution of difference in mu')
cdf_diff = pmf_diff.make_cdf()
cdf_diff.plot()
decorate(xlabel='Difference in population means',
ylabel='CDF',
title='Posterior distribution of difference in mu')
from scipy.stats import gaussian_kde
def kde_from_pmf(pmf, n=101):
    """Make a kernel density estimate for a PMF."""
    kde = gaussian_kde(pmf.qs, weights=pmf.ps)
    qs = np.linspace(pmf.qs.min(), pmf.qs.max(), n)
    ps = kde.evaluate(qs)
    pmf = Pmf(ps, qs)
    pmf.normalize()
    return pmf
kde_diff = kde_from_pmf(pmf_diff)
kde_diff.plot()
decorate(xlabel='Difference in means',
ylabel='PDF',
title='Posterior distribution of difference in mu')
pmf_diff.mean()
pmf_diff.credible_interval(0.9)
mu = 42
sigma = 17
n = 20
m = 41
s = 18
dist_m = norm(mu, sigma/np.sqrt(n))
like1 = dist_m.pdf(m)
like1
t = n * s**2 / sigma**2
t
from scipy.stats import chi2
dist_s = chi2(n-1)
like2 = dist_s.pdf(t)
like2
like = like1 * like2
like
samples = norm(mu, sigma).rvs((100000, n))
samples.shape
sample_s = samples.std(axis=1)
sample_s.shape
sample_t = n * sample_s**2 / sigma**2
from empiricaldist import Cdf
xs = np.linspace(1, 60, 101)
ys = dist_s.cdf(xs)
plt.plot(xs, ys, lw=1)
Cdf.from_seq(sample_t).plot(lw=1)
summary = {}
for name, response in responses.items():
    summary[name] = len(response), response.mean(), response.std()
summary
n, m, s = summary['Control']
mus, sigmas = np.meshgrid(prior.columns, prior.index)
mus.shape
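A quick check (hypothetical values) of the `np.meshgrid` convention used here: every column value is paired with every row value, so both outputs have shape `(rows, cols)`:

```python
import numpy as np

# 3 candidate means x 2 candidate standard deviations
mus_demo, sigmas_demo = np.meshgrid([40, 50, 60], [10, 20])
print(mus_demo.shape)              # (2, 3)
print(mus_demo[0].tolist())        # [40, 50, 60] -- means vary along columns
print(sigmas_demo[:, 0].tolist())  # [10, 20]     -- sigmas vary along rows
```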
like1 = norm(mus, sigmas/np.sqrt(n)).pdf(m)
like1.shape
ts = n * s**2 / sigmas**2
like2 = chi2(n-1).pdf(ts)
like2.shape
posterior_control2 = prior * like1 * like2
normalize(posterior_control2)
def update_norm_summary(prior, data):
    """Update a normal distribution using summary statistics."""
    n, m, s = data
    mu_mesh, sigma_mesh = np.meshgrid(prior.columns, prior.index)
    like1 = norm(mu_mesh, sigma_mesh/np.sqrt(n)).pdf(m)
    like2 = chi2(n-1).pdf(n * s**2 / sigma_mesh**2)
    posterior = prior * like1 * like2
    normalize(posterior)
    return posterior
data = summary['Treated']
posterior_treated2 = update_norm_summary(prior, data)
plot_contour(posterior_control2, cmap='Blues')
plt.text(49.5, 18, 'Control', color='C0')
cs = plot_contour(posterior_treated2, cmap='Oranges')
plt.text(57, 12, 'Treated', color='C1')
decorate(xlabel='Mean (mu)',
ylabel='Standard deviation (sigma)',
title='Joint posterior distributions of mu and sigma')
from utils import marginal
pmf_mean_control2 = marginal(posterior_control2, 0)
pmf_mean_treated2 = marginal(posterior_treated2, 0)
pmf_mean_control.plot(color='C5', linestyle='dashed')
pmf_mean_control2.plot(label='Control')
pmf_mean_treated.plot(color='C5', linestyle='dashed')
pmf_mean_treated2.plot(label='Treated')
decorate(xlabel='Population mean',
ylabel='PDF',
title='Posterior distributions of mu')
mu = 42
sigma = 17
dist = norm(mu, sigma)
n = 20
samples = dist.rvs((1000, n))
samples.shape
sample_means = samples.mean(axis=1)
sample_means.shape
def pmf_from_dist(dist, low, high):
    """Make a discrete approximation of a continuous distribution.

    dist: SciPy dist object
    low: low end of range
    high: high end of range

    returns: normalized Pmf
    """
    qs = np.linspace(low, high, 101)
    ps = dist.pdf(qs)
    pmf = Pmf(ps, qs)
    pmf.normalize()
    return pmf
low = dist_m.mean() - dist_m.std() * 3
high = dist_m.mean() + dist_m.std() * 3
pmf_m = pmf_from_dist(dist_m, low, high)
from utils import kde_from_sample
qs = pmf_m.qs
pmf_sample_means = kde_from_sample(sample_means, qs)
pmf_m.plot(label='Theoretical distribution',
style=':', color='C5')
pmf_sample_means.plot(label='KDE of sample means')
decorate(xlabel='Mean score',
ylabel='PDF',
title='Distribution of the mean')
sample_stds = samples.std(axis=1)
sample_stds.shape
transformed = n * sample_stds**2 / sigma**2
from scipy.stats import chi2
dist_s = chi2(n-1)
low = 0
high = dist_s.mean() + dist_s.std() * 4
pmf_s = pmf_from_dist(dist_s, low, high)
qs = pmf_s.qs
pmf_sample_stds = kde_from_sample(transformed, qs)
pmf_s.plot(label='Theoretical distribution',
style=':', color='C5')
pmf_sample_stds.plot(label='KDE of sample std',
color='C1')
decorate(xlabel='Standard deviation of scores',
ylabel='PDF',
title='Distribution of standard deviation')
np.corrcoef(sample_means, sample_stds)[0][1]
import seaborn as sns
sns.kdeplot(x=sample_means, y=sample_stds)
decorate(xlabel='Mean (mu)',
ylabel='Standard deviation (sigma)',
title='Joint distribution of mu and sigma')
# Solution
pmf_std_control = marginal(posterior_control, 1)
pmf_std_treated = marginal(posterior_treated, 1)
# Solution
pmf_std_control.plot(label='Control')
pmf_std_treated.plot(label='Treated')
decorate(xlabel='Population standard deviation',
ylabel='PDF',
title='Posterior distributions of sigma')
# Solution
Pmf.prob_gt(pmf_std_control, pmf_std_treated)
# Solution
pmf_diff2 = Pmf.sub_dist(pmf_std_control, pmf_std_treated)
# Solution
pmf_diff2.mean()
# Solution
pmf_diff2.credible_interval(0.9)
# Solution
kde_from_pmf(pmf_diff2).plot()
decorate(xlabel='Difference in population standard deviation',
ylabel='PDF',
title='Posterior distributions of difference in sigma')
def sample_joint(joint, size):
    """Draw a sample from a joint distribution.

    joint: DataFrame representing a joint distribution
    size: sample size
    """
    pmf = Pmf(joint.transpose().stack())
    return pmf.choice(size)
sample_treated = sample_joint(posterior_treated, 1000)
sample_treated.shape
sample_control = sample_joint(posterior_control, 1000)
sample_control.shape
# Solution
def cohen_effect(pair1, pair2):
    """Compute Cohen's effect size for difference in means.

    pair1: tuple of (mu1, sigma1)
    pair2: tuple of (mu2, sigma2)

    returns: float
    """
    mu1, sigma1 = pair1
    mu2, sigma2 = pair2
    sigma = (sigma1 + sigma2) / 2
    return (mu1 - mu2) / sigma
# Solution
cohen_effect(sample_treated[0], sample_control[0])
# Solution
ds = []
for pair1, pair2 in zip(sample_treated, sample_control):
    d = cohen_effect(pair1, pair2)
    ds.append(d)
# Solution
cdf = Cdf.from_seq(ds)
cdf.plot()
decorate(xlabel='Cohen effect size',
ylabel='CDF',
title='Posterior distributions of effect size')
# Solution
cdf.mean()
# Solution
cdf.credible_interval(0.9)
# Solution
# Based on trial and error, here's a range of
# values for the prior
hypos = np.linspace(1, 51, 101)
# Solution
# Here are the probabilities of a score greater than 90
# for each hypothetical value of sigma.
from scipy.stats import norm
pgt90 = norm(81, hypos).sf(90)
pgt90.shape
# Solution
# And here's the chance that 5 out of 25 people
# get a score greater than 90
from scipy.stats import binom
likelihood1 = binom(25, pgt90).pmf(5)
likelihood1.shape
# Solution
# Here's the first update
prior = Pmf(1, hypos)
posterior = prior * likelihood1
posterior.normalize()
# Solution
# Here's the first posterior.
posterior.plot()
decorate(xlabel='Standard deviation (sigma)',
ylabel='PMF',
title='Posterior distribution of sigma')
# Solution
# Here's the probability of a score greater than 60
pgt60s = norm(81, hypos).sf(60)
# Solution
# And here's the probability that all 25 students exceed 60
likelihood2 = pgt60s ** 25
# Solution
plt.plot(hypos, likelihood2)
decorate(xlabel='Standard deviation (sigma)',
ylabel='Likelihood',
title='Likelihood function')
# Solution
# Here's the posterior after both updates
prior = Pmf(1, hypos)
prior.normalize()
posterior2 = prior * likelihood1 * likelihood2
posterior2.normalize()
# Solution
posterior.plot(label='Posterior 1')
posterior2.plot(label='Posterior 2')
decorate(xlabel='Standard deviation (sigma)',
ylabel='PMF',
title='Posterior distribution of sigma')
# Solution
posterior.mean(), posterior2.mean()
# Solution
posterior2.credible_interval(0.9)
def get_posterior_cv(joint):
    """Get the posterior distribution of CV.

    joint: joint distribution of mu and sigma

    returns: Pmf representing the smoothed posterior distribution
    """
    pmf_mu = marginal(joint, 0)
    pmf_sigma = marginal(joint, 1)
    pmf_cv = Pmf.div_dist(pmf_sigma, pmf_mu)
    return kde_from_pmf(pmf_cv)
# Solution
n = 154407
mean = 178
std = 8.27
# Solution
qs = np.linspace(mean-0.1, mean+0.1, num=101)
prior_mu = make_uniform(qs, name='mean')
qs = np.linspace(std-0.1, std+0.1, num=101)
prior_sigma = make_uniform(qs, name='std')
prior = make_joint(prior_mu, prior_sigma)
# Solution
data = n, mean, std
posterior_male = update_norm_summary(prior, data)
plot_contour(posterior_male, cmap='Blues')
decorate(xlabel='Mean (mu)',
ylabel='Standard deviation (sigma)',
title='Joint distribution of mu and sigma')
# Solution
n = 254722
mean = 163
std = 7.75
# Solution
qs = np.linspace(mean-0.1, mean+0.1, num=101)
prior_mu = make_uniform(qs, name='mean')
qs = np.linspace(std-0.1, std+0.1, num=101)
prior_sigma = make_uniform(qs, name='std')
prior = make_joint(prior_mu, prior_sigma)
# Solution
data = n, mean, std
posterior_female = update_norm_summary(prior, data)
plot_contour(posterior_female, cmap='Oranges');
# Solution
pmf_cv_male = get_posterior_cv(posterior_male)
kde_from_pmf(pmf_cv_male).plot()
pmf_cv_female = get_posterior_cv(posterior_female)
kde_from_pmf(pmf_cv_female).plot()
decorate(xlabel='Coefficient of variation',
ylabel='PDF',
title='Posterior distributions of CV')
# Solution
ratio_cv = Pmf.div_dist(pmf_cv_female, pmf_cv_male)
ratio_cv.max_prob()
# Solution
ratio_cv.credible_interval(0.9)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Show datasets in collection
Step2: Another graph example
Step3: Social graph drawing 101
Step4: Create a graph using Python iterations
Step5: Fast graph construction
|
<ASSISTANT_TASK:>
Python Code:
import graph_tool.all as gt
import pandas as pd
import numpy as np
from IPython.display import display
%matplotlib inline
print("graph-tool version: {}".format(gt.__version__.split(' ')[0]))
with pd.option_context('display.max_colwidth', -1):
    display(pd.DataFrame.from_records(gt.collection.descriptions,
                                      index=['description']).transpose())
g = gt.collection.data['karate']
g
# construct a simple drawing of this graph
# If you run this for the first time, download the data with the command below
#!wget https://git.skewed.de/count0/graph-tool/raw/2c8c9899dd05549eaef728dabd93dc0759a2d4e0/doc/search_example.xml
gs = gt.load_graph("search_example.xml")
# TODO: print available edge and vertex properties
# TODO: visualize edge weight and names
X_knows = {
'Mary': ['Peter', 'Albert', 'DavidF', 'Peter'],
'Judy': ['Bob', 'Alan'],
'Peter': ['Mary', 'DavidF', 'Jon'],
'DavidF': ['Albert', 'Joseph', 'Peter', 'Mary'],
'Jon': ['Peter', 'Joseph', 'DavidE'],
'DavidE': ['Jon', 'Joseph', 'Albert'],
'Joseph': ['DavidE', 'Jon', 'DavidF'],
'Bob': ['Judy', 'Alan'],
'Alan': ['Bob', 'Mary', 'Judy'],
'Albert': ['DavidF', 'Mary', 'DavidE'],
}
g = gt.Graph(directed=True)
# Create edge tuples and list of unique names
X_edges = list((n,k) for n in X_knows for k in X_knows[n])
from functools import reduce
X_names = reduce(lambda a,b: set(a).union(b),
(X_knows[n] for n in X_knows)
).union(X_knows.keys())
X_names = list(X_names)
# Construct a 'StringIndexer' to convert strings to integers
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
lem = le.fit(list(X_names))
X_edges = list(map(lem.transform, X_edges))
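The encoder's behavior can be sketched without scikit-learn (illustrative only): it assigns each sorted unique label its index:

```python
# toy labels; 'classes' plays the role of LabelEncoder.classes_
names = ['Mary', 'Bob', 'Alan', 'Bob']
classes = sorted(set(names))                         # ['Alan', 'Bob', 'Mary']
index = {name: i for i, name in enumerate(classes)}  # label -> integer id
print([index[n] for n in names])                     # [2, 1, 0, 1]
```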
# Create Graph object and add a string property for names
g2 = gt.Graph()
v_name = g2.new_vertex_property('string')
g2.vertex_properties['name'] = v_name
for vn in lem.classes_:
    v = g2.add_vertex()
    v_name[v] = vn
for f, t in X_edges:
    g2.add_edge(f, t)
# TODO: Same as above, make a tidy, undirectional drawing of this graph
# TODO: find one-line call to g.add_edge_list that constructs the X_knows graph
# hint: use nested list comprehension to reshape the dictionary
# TODO: Create an undirected view of this graph
# Tidy up parallel edges
# Try two different layouts presented in the tutorial
# Produce a tidy drawing of the undirected graph
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To create our model, we need to specify the geometry and material properties of the system. The units and purpose of each property are defined below.
Step2: Simulate
Step3: The x,y,z components of the magnetisation can be visualised with the .plot() function.
Step4: We can also access this data directly and plot it however we like! In this example, we normalise the magnetisation and plot it in 3d space.
|
<ASSISTANT_TASK:>
Python Code:
import magpy as mp
single_particle = mp.Model(
radius = [12e-9],
anisotropy = [4e4],
anisotropy_axis = [[0., 0., 1.]],
magnetisation_direction = [[1., 0., 0.]],
location = [[0., 0., 0.]],
damping = 0.1,
temperature = 300.,
magnetisation = 400e3
)
results = single_particle.simulate(
end_time = 5e-9,
time_step = 1e-14,
max_samples=1000,
seed = 1001
)
results.plot()
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
Ms = 400e3
time = results.time
mx = results.x[0] / Ms # particle 0
my = results.y[0] / Ms # particle 0
mz = results.z[0] / Ms # particle 0
fg = plt.figure()
ax = fg.add_subplot(111, projection='3d')
ax.plot3D(mx, my, mz)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2>02.Excellent or Not</h2>
Step2: <h2>03.Even or Odd</h2>
Step3: <h2>04.Greater Number</h2>
Step4: <h2>06.Bonus Score</h2>
Step5: <h2>07.Sum Seconds</h2>
Step6: <h2>08.Metric Converter</h2>
Step7: <h2>09.Password Guess</h2>
Step8: <h2>10.Number 100...200</h2>
Step9: <h2>11.Equal Words</h2>
Step10: <h2>12.Speed Info</h2>
Step11: <h2>13.Area of Figures</h2>
Step12: <h2>14.Time + 15 Minutes</h2>
Step13: <h2>15.3 Equal Numbers</h2>
|
<ASSISTANT_TASK:>
Python Code:
num = float(input())
if num >= 5.50:
    print("Excellent!")
grade = float(input())
if grade >= 5.50:
    print("Excellent!")
else:
    print("Not excellent.")
num = int(input())
if num % 2 == 0:
    print("even")
else:
    print("odd")
num = int(input())
if num == 0:
    print("zero")
elif num == 1:
    print("one")
elif num == 2:
    print("two")
elif num == 3:
    print("three")
elif num == 4:
    print("four")
elif num == 5:
    print("five")
elif num == 6:
    print("six")
elif num == 7:
    print("seven")
elif num == 8:
    print("eight")
elif num == 9:
    print("nine")
else:
    print("number too big")
num = int(input())
bonus = 0
if num <= 100:
    bonus += 5
elif num > 100 and num < 1000:
    bonus += (num * 0.2)
elif num >= 1000:
    bonus += (num * 0.1)
if num % 2 == 0:
    bonus += 1
if num % 10 == 5:
    bonus += 2
print(bonus)
print(bonus + num)
first_Time = int(input())
second_Time = int(input())
third_Time = int(input())
total_Time = first_Time + second_Time + third_Time
# minutes:seconds, with the seconds always zero-padded to two digits
print(f'{total_Time // 60}:{total_Time % 60:02d}')
input_num = float(input())
input_unit = input()
output_unit = input()
# Route every conversion through meters: first scale the input to meters,
# then scale meters to the output unit (constants match the original chain).
to_meters = {"mm": 1 / 1000, "cm": 1 / 100, "m": 1.0, "mi": 1609.344,
             "in": 0.0254, "km": 1000.0, "ft": 1 / 3.2808399,
             "yd": 1 / 1.0936133}
from_meters = {"mm": 1000.0, "cm": 100.0, "m": 1.0, "mi": 0.000621371192,
               "in": 39.3700787, "km": 0.001, "ft": 3.2808399,
               "yd": 1.0936133}
meters = input_num * to_meters[input_unit]
print(meters * from_meters[output_unit], output_unit)
password = input()
if password == "s3cr3t!P@ssw0rd":
    print("Welcome")
else:
    print("Wrong password!")
num = int(input())
if num < 100:
    print("Less than 100")
elif num >= 100 and num <= 200:
    print("Between 100 and 200")
elif num > 200:
    print("Greater than 200")
first_Word = input().lower()
second_Word = input().lower()
if first_Word == second_Word:
    print("yes")
else:
    print("no")
speed = float(input())
if speed <= 10:
    print("slow")
elif speed > 10 and speed <= 50:
    print("average")
elif speed > 50 and speed <= 150:
    print("fast")
elif speed > 150 and speed <= 1000:
    print("ultra fast")
else:
    print("extremely fast")
import math
figure = input()
if figure == "square":
    side = float(input())
    area = side ** 2
    print(format(area, '.3f'))
elif figure == "rectangle":
    side_a = float(input())
    side_b = float(input())
    area = side_a * side_b
    print(format(area, '.3f'))
elif figure == "circle":
    radius = float(input())
    area = radius ** 2 * math.pi
    print(format(area, '.3f'))
elif figure == "triangle":
    side = float(input())
    height = float(input())
    area = (side * height) / 2
    print(format(area, '.3f'))
hours = int(input())
minutes = int(input()) + 15
if minutes >= 60:
    minutes -= 60
    hours += 1
if hours >= 24:
    hours -= 24
print(f'{hours}:{minutes:02d}')
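A side note on formatting: the `:02d` format spec zero-pads an integer to two digits, which covers minutes/seconds padding in a single expression:

```python
# the 02d spec left-pads an int with zeros to width 2
print(f'7:{5:02d}')    # 7:05
print(f'7:{45:02d}')   # 7:45
```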
first_num = int(input())
second_num = int(input())
third_num = int(input())
# check that all three numbers are equal (testing average == first_num
# wrongly accepts inputs like 2, 1, 3)
if first_num == second_num == third_num:
    print("yes")
else:
    print("no")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving the whole-genome metabolic model from path2models
Step2: reading path2models SBML
Step3: construct regex patterns
Step4: create pandas dataframe for reactions
Step5: Integrating path2models dataframe with KEGG pathway
|
<ASSISTANT_TASK:>
Python Code:
import libsbml
import pandas as pd
import re
!curl -o BMID000000140222.xml http://www.ebi.ac.uk/biomodels-main/download?mid=BMID000000140222
document = libsbml.readSBML('BMID000000140222.xml')
model = document.getModel()
bigg = re.compile(r'BIGG:.*</p>')
brenda = re.compile(r'BRENDA:.*</p>')
gene_association = re.compile(r'GENE_ASSOCIATION:.*</p>')
kegg = re.compile(r'KEGG:.*</p>')
metacyc = re.compile(r'METACYC:.*</p>')
reactome = re.compile(r'REACTOME:.*</p>')
rhea = re.compile(r'RHEA:.*</p>')
seed = re.compile(r'SEED:.*</p>')
upa = re.compile(r'UPA:.*</p>')
id_ary = []
bigg_ary = []
brenda_ary = []
gene_association_ary = []
kegg_ary = []
metacyc_ary =[]
for i in range(0,model.getNumReactions()):
r = model.getReaction(i)
notes = r.getNotesString()
bigg_re = bigg.search(notes)
brenda_re = brenda.search(notes)
gene_association_re = gene_association.search(notes)
kegg_re = kegg.search(notes)
metacyc_re = metacyc.search(notes)
id_ary.append(r.id)
bigg_ary.append(None)
brenda_ary.append(None)
gene_association_ary.append(None)
kegg_ary.append(None)
metacyc_ary.append(None)
if bigg_re:
bigg_ary.pop()
big = bigg_re.group()[6:-4]
bigg_ary.append(big)
if brenda_re:
brenda_ary.pop()
br = brenda_re.group()[8:-4]
brenda_ary.append(br)
if gene_association_re:
gene_association_ary.pop()
gene = gene_association_re.group()[18:-4]
gene_association_ary.append(gene)
if kegg_re:
kegg_ary.pop()
keg = kegg_re.group()[6:-4]
kegg_ary.append(keg)
if metacyc_re:
metacyc_ary.pop()
meta = metacyc_re.group()[9:-4]
metacyc_ary.append(meta)
id_se = pd.Series(id_ary, name='ID')
bigg_se = pd.Series(bigg_ary, name='BIGG')
brenda_se = pd.Series(brenda_ary, name='BRENDA')
gene_association_se = pd.Series(gene_association_ary, name='GENE_ASSOCIATION')
kegg_se = pd.Series(kegg_ary, name='KEGG')
metacyc_se = pd.Series(metacyc_ary, name='METACYC')
df = pd.concat([id_se, bigg_se, brenda_se, gene_association_se, kegg_se, metacyc_se], axis=1)
df.head()
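The five near-identical search-and-append blocks above can be driven by a table of prefixes instead; a sketch assuming each annotation appears in the notes as `PREFIX: value</p>`:

```python
import re

PREFIXES = ["BIGG", "BRENDA", "GENE_ASSOCIATION", "KEGG", "METACYC"]
PATTERNS = {p: re.compile(p + r":(.*?)</p>") for p in PREFIXES}

def extract_annotations(notes):
    # return one value (or None) per annotation database
    row = {}
    for prefix, pattern in PATTERNS.items():
        m = pattern.search(notes)
        row[prefix] = m.group(1).strip() if m else None
    return row

print(extract_annotations("<p>KEGG: R00710</p>"))
```

One such dict per reaction could then feed `pd.DataFrame` directly, replacing the parallel arrays.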
import requests
import json
# Basic Setup
PORT_NUMBER = 1234
BASE_URL = "http://localhost:" + str(PORT_NUMBER) + "/v1/"
# Header for posting data to the server as JSON
HEADERS = {'Content-Type': 'application/json'}
requests.get(BASE_URL)
pathway_location = "http://rest.kegg.jp/get/eco00010/kgml"
res1 = requests.post(BASE_URL + "networks?source=url", data=json.dumps([pathway_location]), headers=HEADERS)
result = json.loads(res1.content)
pathway_suid = result[0]["networkSUID"][0]
print("Pathway SUID = " + str(pathway_suid))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here is the plot of the average firing rate across all the neurons
Step2: But that's across all the neurons. In the empirical data we're comparing to, it looks like they only selected neurons that are preferentially active for that stimulus. This corresponds to the dot product between the encoder and the actual value being greater than zero.
|
<ASSISTANT_TASK:>
Python Code:
dimensions = 10
input_scale = 1
n_neurons_per_dim = 50
intercept_low = -0.5
intercept_high = 1.0
tau_input = 0.01
tau_recurrent = 0.1
tau_reset = 0.2
max_rate_high = 200
max_rate_low = 150
sensory_delay = 0.05
reset_scale = 0.3
model = nengo.Network()
with model:
vocab = spa.Vocabulary(dimensions)
value = vocab.parse('A').v
def stim(t):
if 0.5 < t - sensory_delay < 0.75:
return value
else:
return [0]*dimensions
stim = nengo.Node(stim)
a = nengo.Ensemble(n_neurons=n_neurons_per_dim * dimensions,
dimensions=dimensions,
max_rates=nengo.dists.Uniform(max_rate_low, max_rate_high),
intercepts=nengo.dists.Uniform(intercept_low, intercept_high))
b = nengo.Ensemble(n_neurons=n_neurons_per_dim * dimensions,
dimensions=dimensions,
max_rates=nengo.dists.Uniform(max_rate_low, max_rate_high),
intercepts=nengo.dists.Uniform(intercept_low, intercept_high))
nengo.Connection(stim, a, synapse=None)
nengo.Connection(a, b, synapse=tau_input, transform=input_scale)
nengo.Connection(b, b, synapse=tau_recurrent)
def reset(t):
if t - sensory_delay > 1.75:
return 1
else:
return 0
reset_stim = nengo.Node(reset)
reset_value = vocab.parse('B').v
reset_value.shape = dimensions, 1
nengo.Connection(reset_stim, b.neurons, transform=np.ones((b.n_neurons, 1))*-reset_scale, synapse=tau_reset)
#nengo.Connection(reset_stim, b, transform=reset_value*reset_scale, synapse=tau_reset)
p_value = nengo.Probe(b, synapse=0.01)
p_neurons = nengo.Probe(b.neurons)
sim = nengo.Simulator(model)
sim.run(2.5)
rates = sim.data[p_neurons]
ratesf = nengo.synapses.Lowpass(0.05).filt(rates)
pylab.plot(sim.trange(), np.mean(ratesf, axis=1))
pylab.axvline(0.5)
pylab.axvline(0.75)
pylab.axvline(1.75)
pylab.show()
encs = sim.data[b].encoders
similarity = np.dot(encs, value)
items = np.where(similarity>0.0) # TODO: What is this threshold in the real data?
print 'N= %d' % len(items[0])
pylab.plot(sim.trange(), np.mean(ratesf[:,items[0]], axis=1))
pylab.axvline(0.5)
pylab.axvline(0.75)
pylab.axvline(1.75)
pylab.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make sure you have the geoserver VM running
Step2: Display geoserver status
Step3: Get the data from S3
Step4: Adding an RGB layer to the map
Step5: To add the layer we call M.add_layer, passing in a subset of the raster data set's bands. In this case we index into rd with the list [1, 2, 3]. This actually returns a new RasterData object with only three bands available (in this case bands 1, 2 and 3 correspond to Red, Green and Blue). When adding layers you can only add a layer with either 3 bands (R,G,B) or one band (we'll see a one-band example in a moment).
Step6: This should have added an RGB dataset to the map for visualization. You can also see what layers are available via the M.layers attribute.
Step7: The dataset may appear alarmingly dark. This is because the data itself is not well formated. We can see this by looking at band min and max values
Step8: R,G,B values should be between 0 and 1. We can remedy this by changing some of the styling options that are available on the layers including setting an interval for scaling our data, and setting a gamma to brighten the image.
Step9: Then we can re-add the layer with a color interval of 0 to 1.
Step10: We can also brighten this up by changing the gamma.
Step11: Finally, let's add a little opacity to layer so we can see some of the underlying base map features.
Step12: Adding a single band Layer
Step13: You may find this colormap a little aggressive, in which case you can replace the colormap with any of the built in matplotlib colormaps
Step14: Including custom color maps as in this example. Here we create a linear segmented colormap that transitions from Blue to Beige to Green. When mapped to our NDVI band data -1 will appear blue, 0 will appear beige and 1 will appear green.
Step15: What can I do with this data?
Step16: Go ahead and start a rectangular annotation (Second button to the right of the 'CellToolbar' button - with the square icon).
Step17: As a sanity check we can prove the data is the region we've selected by plotting the data with matplotlib's imshow function
Step18: NDVI Segmentation analysis
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
!cd ../devops/geoserver && vagrant status
from IPython.core.display import display, HTML
from geonotebook.config import Config
geoserver = Config().vis_server
display(HTML(geoserver.c.get("/about/status").text))
!curl -o /tmp/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff http://golden-tile-geotiffs.s3.amazonaws.com/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff
# Set the center of the map to the location the data
M.set_center(-120.32, 47.84, 7)
from geonotebook.wrappers import RasterData
rd = RasterData('data/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff')
rd
M.add_layer(rd[1, 2, 3], opacity=1.0)
next(M.layers.annotation.points[0].data)
from geonotebook.vis.ktile.utils import get_layer_vrt
print(get_layer_vrt(M.layers[0]))
M.layers
print("Color Min Max")
print("Red: {}, {}".format(rd[1].min, rd[1].max))
print("Green: {}, {}".format(rd[2].min, rd[2].max))
print("Blue: {}, {}".format(rd[3].min, rd[3].max))
M.remove_layer(M.layers[0])
M.add_layer(rd[1, 2, 3], interval=(0,1))
M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5)
M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5, opacity=0.75)
# Remove the layer before moving on to the next section
M.remove_layer(M.layers[0])
M.add_layer(rd[4])
cmap = plt.get_cmap('winter', 10)
M.add_layer(rd[4], colormap=cmap, opacity=0.8)
from matplotlib.colors import LinearSegmentedColormap
# Divergent Blue to Beige to Green colormap
cmap =LinearSegmentedColormap.from_list(
'ndvi', ['blue', 'beige', 'green'], 20)
# Add layer with custom colormap
M.add_layer(rd[4], colormap=cmap, opacity=0.8, min=-1.0, max=1.0)
M.set_center(-119.25618502500376, 47.349300631765104, 11)
layer, data = next(M.layers.annotation.rectangles[0].data)
data
import numpy as np
fig, ax = plt.subplots(figsize=(16, 16))
ax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))
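Band 4 here is already an NDVI product, but it helps to recall the formula behind those values: NDVI = (NIR − Red) / (NIR + Red), which is bounded in [−1, 1]. A sketch on toy reflectance arrays (the numbers are invented):

```python
import numpy as np

def ndvi(nir, red):
    # guard against division by zero where both bands are dark
    denom = nir + red
    out = np.zeros_like(denom, dtype=float)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

nir = np.array([0.5, 0.4, 0.1])
red = np.array([0.1, 0.4, 0.3])
print(ndvi(nir, red))  # vegetation high, bare soil near 0, water-ish negative
```

Values near +1 map to green in the custom colormap above, near −1 to blue.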
# Adapted from the scikit-image segmentation tutorial
# See: http://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html
import numpy as np
from skimage import measure
from skimage.filters import sobel
from skimage.morphology import watershed
from scipy import ndimage as ndi
THRESHOLD = 20
WATER_MIN = 0.2
WATER_MAX = 0.6
fig, ax = plt.subplots(figsize=(16, 16))
edges = sobel(data)
markers = np.zeros_like(data)
markers[data > WATER_MIN] = 2
markers[data > WATER_MAX] = 1
mask = (watershed(edges, markers) - 1).astype(bool)
seg = np.zeros_like(mask, dtype=int)
seg[~mask] = 1
# Fill holes
seg = ndi.binary_fill_holes(seg)
# Ignore entities smaller than THRESHOLD
label_objects, _ = ndi.label(seg)
sizes = np.bincount(label_objects.ravel())
mask_sizes = sizes > THRESHOLD
mask_sizes[0] = 0
clean_segs = mask_sizes[label_objects]
# Find contours of the segmented data
contours = measure.find_contours(clean_segs, 0)
ax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))
ax.axis('tight')
for n, contour in enumerate(contours):
ax.plot(contour[:, 1], contour[:, 0], linewidth=4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Motion Data
Step2: for more details on plotting options see also
Step3: try it with the gyroscope data ... what's the difference?
Step4: Feature Calculation and Selection
Step5: Only for advanced students
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%pylab inline
#use a nicer plotting style
plt.style.use(u'seaborn-notebook')
print(plt.style.available)
data = pd.read_csv('./data/assembly.dat',delimiter='\t',skiprows=11,names=['s','usec','ax','ay','az','gx','gy','gz','mx','my','mz','label'])
#to get an overview of the data you can use describe in pandas
data.describe()
data[['gx','gy','gz']].plot()
data[['ax','ay','az']].plot()
#simple line plot
plt.plot(data['ax'])
plt.xlim(4000,7000)
fig= figure()
plot(data['ax'], label='acceleration x')
plot(data['gx'], label='gyrscope x')
legend(loc='upper right')
#to explore the hist command a bit more ... use this list
alist =[1,2,3,4,5,5,5,4,4,4,4,4,4,5,6,4,4,4,4,4,4,4,4,3,2,2,8,8,8]
hist(alist)
#if you want to plot the density function with it ... pandas is nice:
d = pd.Series(alist)
d.hist()
d.plot(kind='kde', style='k--')
#try it with the gryo and accelerometer data
data['ax'].plot(kind='kde', style='b--')
data['ay'].plot(kind='kde', style='g--')
data['az'].plot(kind='kde', style='r--')
y = [3, 10, 7, 5, 3, 4.5, 6, 8.1]
N = len(y)
x = range(N)
width = 1/1.5
plt.bar(x, y, width, color="blue")
plt.barh(x, y, width, color="blue")
gyro_hammer = data[data['label']==1][['gx','gy','gz']]
gyro_screw = data[data['label']==2][['gx','gy','gz']]
gyro_sand = data[data['label']==3][['gx','gy','gz']]
gyro_saw = data[data['label']==4][['gx','gy','gz']]
gyro_hammer.plot()
gyro_screw.plot()
gyro_sand.plot()
gyro_saw.plot()
#calculating a sliding window ...
from pandas.stats.moments import rolling_apply
print size(gyro_screw)/10.0
method = median
wsize = 10
feat1 = rolling_apply(gyro_screw, wsize, method).dropna()
feat2 = rolling_apply(gyro_hammer,wsize, method).dropna()
feat3 = rolling_apply(gyro_sand, wsize, method).dropna()
feat4 = rolling_apply(gyro_saw, wsize, method).dropna()
scatter(feat1['gz'], feat1['gy'])
scatter(feat2['gz'], feat2['gy'],color='red')
scatter(feat3['gz'], feat3['gy'],color='green')
scatter(feat4['gz'], feat4['gy'],color='yellow')
#3d is usually worse but you can do also 3d plotd
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = figure(figsize=(14,6))
ax = axes(projection='3d')
ax.scatter(feat1['gx'], feat1['gy'], feat1['gz'])
ax.scatter(feat2['gx'], feat2['gy'], feat2['gz'],c='red')
#don't look here ... I split the data in test and training set
#bad code :)
l_1 = len(feat1)
l_2 = len(feat2)
l_3 = len(feat3)
l_4 = len(feat4)
#X = feat1.append(feat2).append(feat3).append(feat4)
#Y = [1.0] * l_1 + [2.0]* l_2 + [3] * l_3 + [4] * l_4
X = feat1[0:l_1//2].append(feat2[0:l_2//2]).append(feat3[0:l_3//2]).append(feat4[0:l_4//2])
Y = [1]*(l_1//2) + [2]*(l_2//2) + [3]*(l_3//2) + [4]*(l_4//2)
T = feat1[l_1//2:l_1].append(feat2[l_2//2:l_2]).append(feat3[l_3//2:l_3]).append(feat4[l_4//2:l_4])
t_gg = [1]*len(feat1[l_1//2:l_1]) + [2]*len(feat2[l_2//2:l_2]) + [3]*len(feat3[l_3//2:l_3]) + [4]*len(feat4[l_4//2:l_4])
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
dt = DecisionTreeClassifier()
dt.fit(X,Y)
res = dt.predict(T)
plot(res+0.2, 'r.')
plot(t_gg, 'b.')
print(dt.score(T, t_gg))
from sklearn.utils import shuffle
from sklearn.cross_validation import StratifiedKFold, cross_val_score
Xn, yn = shuffle(T, t_gg)
skf = StratifiedKFold(yn, 10)
print(cross_val_score(dt, Xn, yn, cv=skf))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This should have taken about 20 seconds to read in 50 files. A bit slow so hopefully you don't have to read thousands of s-files very often.
Step2: There are a few important attributes of the class
Step3: Names that start with bla indicate a blank field is expected (will raise value error if it isn't).
|
<ASSISTANT_TASK:>
Python Code:
import seisobs
sfile_directory = 'TEST_'
cat = seisobs.seis2cat(sfile_directory)
cat
spec1 = seisobs.specs.specs['1']
print ('colspecs\n')
print (spec1.colspec)
print ('\n')
print ('colnames\n')
print (spec1.colname)
print('\n')
print ('colformat\n')
print (spec1.colformat)
example_line_1 = ' 1996 625 0337 31.0 L 61.689 3.259 15.0 TES 35 3.0 3.3LTES 3.0CTES 3.2LNAO1'
sline = seisobs.core.Sline(example_line_1)
sline
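Under the hood, fixed-width s-file parsing amounts to slicing the line at the colspec boundaries and naming each slice with colname. A generic sketch of that mechanism (the column boundaries here are invented, not seisan's real spec):

```python
COLSPEC = [(1, 5), (6, 8), (9, 16)]   # invented (start, end) character positions
COLNAME = ["year", "month", "agency"]

def parse_fixed_width(line, colspec=COLSPEC, colname=COLNAME):
    # slice each named field out of a fixed-width record
    return {name: line[a:b].strip() for name, (a, b) in zip(colname, colspec)}

print(parse_fixed_width(" 1996 06    TES "))
```

Blank-expected columns would simply assert that the stripped slice is empty before accepting the line.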
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The labels in iris are sorted, which means that if we split the data as illustrated above, the first fold will only have the label 0 in it, while the last one will only have the label 2
Step2: To avoid this problem in evaluation, we first shuffle our data
Step3: Now implementing cross-validation is easy
Step4: Let's check that our test mask does the right thing
Step5: And now let's look a the scores we computed
Step6: As you can see, there is a rather wide spectrum of scores from 90% correct to 100% correct. If we only did a single split, we might have gotten either answer.
Step7: As you can see, the function uses three folds by default. You can change the number of folds using the cv argument
Step8: There are also helper objects in the cross-validation module that will generate indices for you for all kinds of different cross-validation methods, including k-fold
Step9: By default, cross_val_score will use StrafifiedKFold for classification, which ensures that the class proportions in the dataset are reflected in each fold. If you have a binary classification dataset with 90% of data point belonging to class 0, that would mean that in each fold, 90% of datapoints would belong to class 0.
Step10: As you can see, there are a couple of samples from the beginning, then from the middle, and then from the end, in each of the folds.
Step11: For comparison, again the standard KFold, which ignores the labels
Step12: Keep in mind that increasing the number of folds will give you a larger training dataset, but will lead to more repetitions, and therefore a slower evaluation
Step13: Another helpful cross-validation generator is ShuffleSplit. This generator simply splits off a random portion of the data repeatedly. This allows the user to specify the number of repetitions and the training set size independently
Step14: If you want a more robust estimate, you can just increase the number of iterations
Step15: You can use all of these cross-validation generators with the cross_val_score method
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
iris = load_iris()
X, y = iris.data, iris.target
classifier = KNeighborsClassifier()
y
import numpy as np
rng = np.random.RandomState(0)
permutation = rng.permutation(len(X))
X, y = X[permutation], y[permutation]
print(y)
k = 5
n_samples = len(X)
fold_size = n_samples // k
scores = []
masks = []
for fold in range(k):
# generate a boolean mask for the test set in this fold
test_mask = np.zeros(n_samples, dtype=bool)
test_mask[fold * fold_size : (fold + 1) * fold_size] = True
# store the mask for visualization
masks.append(test_mask)
# create training and test sets using this mask
X_test, y_test = X[test_mask], y[test_mask]
X_train, y_train = X[~test_mask], y[~test_mask]
# fit the classifier
classifier.fit(X_train, y_train)
# compute the score and record it
scores.append(classifier.score(X_test, y_test))
import matplotlib.pyplot as plt
%matplotlib inline
plt.matshow(masks)
print(scores)
print(np.mean(scores))
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(classifier, X, y)
print(scores)
print(np.mean(scores))
cross_val_score(classifier, X, y, cv=5)
from sklearn.cross_validation import KFold, StratifiedKFold, ShuffleSplit, LeavePLabelOut
cv = StratifiedKFold(iris.target, n_folds=5)
for train, test in cv:
print(test)
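StratifiedKFold's guarantee — each fold mirrors the overall class balance — can be verified by hand. A minimal pure-Python check (the helper and the example fold indices are mine):

```python
from collections import Counter

def fold_class_counts(labels, fold_indices):
    # count how many samples of each class land in a given fold
    return Counter(labels[i] for i in fold_indices)

labels = [0] * 9 + [1] * 3                             # 75% / 25% class balance
folds = [[0, 1, 2, 9], [3, 4, 5, 10], [6, 7, 8, 11]]   # a stratified 3-fold split
for fold in folds:
    print(fold_class_counts(labels, fold))  # each fold: 3 of class 0, 1 of class 1
```

Running the same check on a plain KFold split of sorted labels would show wildly unbalanced folds.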
def plot_cv(cv, n_samples):
masks = []
for train, test in cv:
mask = np.zeros(n_samples, dtype=bool)
mask[test] = 1
masks.append(mask)
plt.matshow(masks)
plot_cv(StratifiedKFold(iris.target, n_folds=5), len(iris.target))
plot_cv(KFold(len(iris.target), n_folds=5), len(iris.target))
plot_cv(KFold(len(iris.target), n_folds=10), len(iris.target))
plot_cv(ShuffleSplit(len(iris.target), n_iter=5, test_size=.2), len(iris.target))
plot_cv(ShuffleSplit(len(iris.target), n_iter=20, test_size=.2), len(iris.target))
cv = ShuffleSplit(len(iris.target), n_iter=5, test_size=.2)
cross_val_score(classifier, X, y, cv=cv)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualizations
Step2: Daily average of number of upvotes per post
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import seaborn as sns
%matplotlib inline
posts_df = pd.read_csv("reddit_posts_the_donald_201604.csv", index_col=0)  # DataFrame.from_csv was removed; read_csv with index_col=0 matches its default
posts_df[0:5]
posts_df['created'] = pd.to_datetime(posts_df.created_utc, unit='s')
posts_df['created_date'] = posts_df.created.dt.date
posts_df['downs'] = posts_df.score - posts_df.ups
posts_time_ups = posts_df.set_index('created_date').ups.sort_index()
posts_time_ups[0:5]
posts_date_df = posts_df.set_index('created').sort_index()
posts_date_df[0:5]
posts_groupby = posts_date_df.groupby(pd.Grouper(freq='1D', closed='left'))  # pd.TimeGrouper was removed in favor of pd.Grouper
posts_groupby.mean().num_comments.plot(kind='barh', figsize=[8,8])
posts_groupby.mean().ups.plot(kind='barh', figsize=[8,8])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we establish a link with the database. We know that the database created by tweetharvester is called tweets_db and within it is a collection of tweets that goes by the name of the project, in this example
Step2: We now have an object, coll, that offers full access to the MongoDB API where we can analyse the data in the collected tweets. For instance, in our small example collection, we can count the number of tweets
Step3: Or we can count the number of tweets that are geolocated with a field containing the latitude and longitude of the user when they sent the tweet. We construct a MongoDB query that looks for a non-empty field called coordinates.
Step4: Or how many tweets had the hashtag #happy in them?
Step5: Pre-requisites for Analysis
Step6: This JSON data structure is documented on the Twitter API website where each field is described in detail. It is recommended that this description is studied in order to understand how to construct valid queries.
Step7: Establish a Link to the Dataset as a MongoDB Collection
Step8: Descriptive Statistics
Step9: Number of Geolocated Tweets
Step10: Range of Creation Times for Tweets
Step11: Plot Tweets per Hour
Step12: More Complex Query
Step13: Build a Network of Hashtags
Step14: Generate list of all pairs of hashtags
Step15: Build graph with weighted edges between hashtags
Step16: Remove rarer edges
Step17: View as Table
Step18: Plot the Network
Step19: Repeat for #sad
Step20: Graph is drawn with a spring layout to bring out more clearly the disconnected sub-graphs.
|
<ASSISTANT_TASK:>
Python Code:
import pymongo
db = pymongo.MongoClient().tweets_db
coll = db.emotweets
coll
coll.count()
query = {'coordinates': {'$ne': None}}
coll.find(query).count()
query = {'hashtags': {'$in': ['happy']}}
coll.find(query).count()
coll.find_one()
%matplotlib inline
import pymongo # in case we have run Part 1 above
import pandas as pd # for data manipulation and analysis
import matplotlib.pyplot as plt
db = pymongo.MongoClient().tweets_db
COLL = db.emotweets
COLL
COLL.count()
def count_by_tag(coll, hashtag):
query = {'hashtags': {'$in': [hashtag]}}
count = coll.find(query).count()
return count
print('Number of #happy tweets: {}'.format(count_by_tag(COLL, 'happy')))
print('Number of #sad tweets: {}'.format(count_by_tag(COLL, 'sad')))
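count_by_tag runs one query per tag; to tally every hashtag at once, a single pass is enough (the same grouping could also be pushed into MongoDB with an $unwind/$group aggregation). A sketch over plain tweet dicts with made-up data:

```python
from collections import Counter

def hashtag_counts(tweets):
    # tally every hashtag across all tweets in one pass
    counts = Counter()
    for tweet in tweets:
        counts.update(tweet.get("hashtags", []))
    return counts

sample = [{"hashtags": ["happy", "friday"]},
          {"hashtags": ["happy"]}]
print(hashtag_counts(sample).most_common(1))  # [('happy', 2)]
```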
query = {'coordinates': {'$ne': None}}
COLL.find(query).count()
# return a cursor that iterates over all documents and returns the creation date
cursor = COLL.find({}, {'created_at': 1, '_id': 0})
# list all the creation times and convert to Pandas DataFrame
times = pd.DataFrame(list(cursor))
times = pd.to_datetime(times.created_at)
earliest_timestamp = min(times)
latest_timestamp = max(times)
print('Creation time for EARLIEST tweet in dataset: {}'.format(earliest_timestamp))
print('Creation time for LATEST tweet in dataset: {}'.format(latest_timestamp))
query = {} # empty query means find all documents
# return just two columns, the date of creation and the id of each document
projection = {'created_at': 1}
df = pd.DataFrame(list(COLL.find(query, projection)))
times = pd.to_datetime(df.created_at)
df.set_index(times, inplace=True)
df.drop('created_at', axis=1, inplace=True)
tweets_all = df.resample('60Min').count()  # resample(..., how=) was removed; chain the aggregation instead
tweets_all.plot(figsize=[12, 7], title='Number of Tweets per Hour', legend=None);
query = { # find all documents that:
'hashtags': {'$in': ['happy']}, # contain #happy hashtag
'retweeted_status': None, # are not retweets
'hashtags.1': {'$exists': True}, # and have more than 1 hashtag
'lang': 'en' # written in English
}
projection = {'hashtags': 1, '_id': 0}
cursor = COLL.find(query, projection)
for tags in cursor[:10]:
    print(tags['hashtags'])
from itertools import combinations
import networkx as nx
def gen_edges(coll, hashtag):
query = { # find all documents that:
'hashtags': {'$in': [hashtag]}, # contain hashtag of interest
'retweeted_status': None, # are not retweets
'hashtags.1': {'$exists': True}, # and have more than 1 hashtag
'lang': 'en' # written in English
}
projection = {'hashtags': 1, '_id': 0}
cursor = coll.find(query, projection)
for tags in cursor:
hashtags = tags['hashtags']
for edge in combinations(hashtags, 2):
yield edge
def build_graph(coll, hashtag, remove_node=True):
g = nx.Graph()
for u,v in gen_edges(coll, hashtag):
if g.has_edge(u,v):
# add 1 to weight attribute of this edge
g[u][v]['weight'] = g[u][v]['weight'] + 1
else:
# create new edge of weight 1
g.add_edge(u, v, weight=1)
if remove_node:
# since hashtag is connected to every other node,
# it adds no information to this graph; remove it.
g.remove_node(hashtag)
return g
G = build_graph(COLL, 'happy')
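The edge-weight bookkeeping in build_graph can be sanity-checked without networkx: counting sorted co-occurring pairs with a Counter yields the same weights. A sketch on made-up tag lists:

```python
from collections import Counter
from itertools import combinations

def pair_weights(tag_lists):
    # weight of an edge = number of tweets in which both tags appear together
    weights = Counter()
    for tags in tag_lists:
        for pair in combinations(sorted(tags), 2):
            weights[pair] += 1
    return weights

tweets = [["happy", "friday"], ["happy", "friday", "sun"], ["sun"]]
print(pair_weights(tweets)[("friday", "happy")])  # 2
```

Sorting each tag list first makes (u, v) and (v, u) count toward the same undirected edge.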
def trim_edges(g, weight=1):
# function from http://shop.oreilly.com/product/0636920020424.do
g2 = nx.Graph()
for u, v, edata in g.edges(data=True):
if edata['weight'] > weight:
            g2.add_edge(u, v, **edata)  # dict expansion works across networkx versions
return g2
G2 = trim_edges(G, weight=25)
df = pd.DataFrame([(u, v, edata['weight'])
for u, v, edata in G2.edges(data=True)],
columns = ['from', 'to', 'weight'])
df.sort_values('weight', ascending=False, inplace=True)  # DataFrame.sort was removed in favor of sort_values
df
G3 = trim_edges(G, weight=35)
pos=nx.circular_layout(G3) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G3, pos, node_size=700,
linewidths=0, node_color='#cccccc')
edge_list = [(u, v) for u, v in G3.edges()]
weight_list = [edata['weight']/5.0 for u, v, edata in G3.edges(data=True)]
# edges
nx.draw_networkx_edges(G3, pos,
edgelist=edge_list,
width=weight_list,
alpha=0.4,edge_color='b')
# labels
nx.draw_networkx_labels(G3, pos, font_size=20,
font_family='sans-serif', font_weight='bold')
fig = plt.gcf()
fig.set_size_inches(10, 10)
plt.axis('off');
G_SAD = build_graph(COLL, 'sad')
G2S = trim_edges(G_SAD, weight=5)
df = pd.DataFrame([(u, v, edata['weight'])
for u, v, edata in G2S.edges(data=True)],
columns = ['from', 'to', 'weight'])
df.sort_values('weight', ascending=False, inplace=True)
df
G3S = trim_edges(G_SAD, weight=5)
pos=nx.spring_layout(G3S) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G3S, pos, node_size=700,
linewidths=0, node_color='#cccccc')
edge_list = [(u, v) for u, v in G3S.edges()]
weight_list = [edata['weight'] for u, v, edata in G3S.edges(data=True)]
# edges
nx.draw_networkx_edges(G3S, pos,
edgelist=edge_list,
width=weight_list,
alpha=0.4,edge_color='b')
# labels
nx.draw_networkx_labels(G3S, pos, font_size=12,
font_family='sans-serif', font_weight='bold')
fig = plt.gcf()
fig.set_size_inches(13, 13)
plt.axis('off');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get/Set the filenames required
Step2: Below we set the field to column mappings. The number on the right of the '=' refers to the column in the template in which the field on the left occurs.
|
<ASSISTANT_TASK:>
Python Code:
#Import modules
import sys, os
import pandas as pd
from openpyxl import load_workbook
#Set the location of the data directory
dataDir = '../../Data'
#Get the water balance input csv file
inDataFN = dataDir + os.sep + 'StateData' + os.sep + 'la_2010.csv'
#Get the template
inXlsxFN = dataDir + os.sep + 'Templates' + os.sep + 'StatePSUTTemplate.xlsx' #The template that will be filled in
#Load the water balance data into a pandas dataframe
dfData = pd.read_csv(inDataFN)
#Row and column indices
#--Columns--
Aq = 5 #Aquaculture
Do = 19 #Domestic
In = 16 #Industrial
Ic = 2 #Irrigation-cropland
Ig = 17 #Irrigation-golf courses
Li = 3 #Livestock
Mi = 7 #Mining
PS = 14 #Public supply
TC = 10 #Thermoelectric-once thru
TR = 9 #Thermoelectric-recirculated
Supply = 20 #Environment
#--Rows--
Sf = 21 #Surface
Gw = 22 #Groundwater
#Create the dictionary of GroupNames and cell locations
celLocs = {'Aquaculture_Surface':(Sf,Aq),
'Aquaculture_Groundwater':(Gw,Aq),
'Domestic_Surface':(Sf,Do),
'Domestic_Groundwater':(Gw,Do),
'Industrial_Surface':(Sf,In),
'Industrial_Groundwater':(Gw,In),
'Irrigation_Crop_Surface':(Sf,Ic),
'Irrigation_Crop_Groundwater':(Gw,Ic),
'Irrigation_Golf_Surface':(Sf,Ig),
'Irrigation_Golf_Groundwater':(Gw,Ig),
'Livestock_Surface':(Sf,Li),
'Livestock_Groundwater':(Gw,Li),
'Mining_Surface':(Sf,Mi),
'Mining_Groundwater':(Gw,Mi),
'PublicSupply_Surface':(Sf,PS),
'PublicSupply_Groundwater':(Gw,PS),
'ThermoElec_OnceThru_Surface':(Sf,TC),
'ThermoElec_OnceThru_Groundwater':(Gw,TC),
'ThermoElec_Recirc_Surface':(Sf,TR),
'ThermoElec_Recirc_Groundwater':(Gw,TR),
'Supply':(4,12)
}
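With this many hand-typed (row, column) pairs it is easy to accidentally map two groups onto the same cell; a quick sanity check along these lines can catch that (the helper name is mine):

```python
def find_collisions(cell_locs):
    # report group names that share a (row, column) target in the template
    seen = {}
    collisions = []
    for name, loc in cell_locs.items():
        if loc in seen:
            collisions.append((seen[loc], name))
        seen[loc] = name
    return collisions

print(find_collisions({"a": (1, 2), "b": (1, 2), "c": (3, 4)}))  # [('a', 'b')]
```

Running it on celLocs before writing the workbook guards against silently overwriting a value.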
#Create the workbook object
wb = load_workbook(filename = inXlsxFN)
for year in (2000,2005, 2010):
#Get the year worksheet in the workbook
ws = wb.get_sheet_by_name(str(year))
#Label the sheet
ws.cell(column=1,row=1,value="US Water Balance: {}. Values in MGal/Year".format(year))
#use the dictionary to insert values
for name, cellLoc in celLocs.items():
#Get the value for selected year
val = dfData[(dfData.Group == name) & (dfData.YEAR == year)]['MGal'].iloc[0]
#insert it into the Excel file
ws.cell(column = cellLoc[1],row = cellLoc[0],value = val)
wb.save(dataDir+os.sep+'BalanceSheet.xlsx')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: This module has a special section, starting with
Step4: Item
Step5: Here are some tests your code should pass
Step6: How do they behave in a list?
Step11: ShoppingCart
Step12: Here are some tests
|
<ASSISTANT_TASK:>
Python Code:
import vector2d
u = vector2d.Vector(1,2)
u
INVENTORY_TEXT = """
apple, 0.60
banana, 0.20
grapefruit, 0.75
grapes, 1.99
kiwi, 0.50
lemon, 0.20
lime, 0.25
mango, 1.50
papaya, 2.95
pineapple, 3.50
blueberries, 1.99
blackberries, 2.50
peach, 0.50
plum, 0.33
clementine, 0.25
cantaloupe, 3.25
pear, 1.25
quince, 0.45
orange, 0.60
"""
INVENTORY = {}
for line in INVENTORY_TEXT.splitlines():
if line.strip() == "":
continue
item, price = line.split(",")
INVENTORY[item] = float(price)
INVENTORY
class Item(object):
    """an item to buy"""
    def __init__(self, name, quantity=1):
        if name not in INVENTORY:
            raise ValueError("{} is not in the inventory".format(name))
        self.name = name
        self.quantity = quantity
    def __repr__(self):
        return "{}: {}".format(self.name, self.quantity)
    def __eq__(self, other):
        # items are compared by name, not by quantity
        return self.name == other.name
    def __add__(self, other):
        # only like items can be combined
        if self.name != other.name:
            raise ValueError("names don't match")
        return Item(self.name, self.quantity + other.quantity)
a = Item("apple", 10)
b = Item("banana", 20)
c = Item("apple", 20)
# won't work
a + b
# will work
a += c
a
a == b
a == c
items = []
items.append(a)
items.append(b)
items
c in items
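The `c in items` membership test works because `in` on a list compares with `==` element by element, i.e. it calls `__eq__`. A minimal demonstration with a throwaway class:

```python
class Tag(object):
    # minimal class showing that `in` on a list is driven by __eq__
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return self.name == other.name

tags = [Tag("apple"), Tag("banana")]
print(Tag("apple") in tags)   # True: a *different* object, but __eq__ matches
print(Tag("kiwi") in tags)    # False
```

Without a custom `__eq__`, membership would fall back to identity, and a freshly constructed object would never be found.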
class ShoppingCart(object):
    def __init__(self):
        self.items = []
    def subtotal(self):
        """return a subtotal of our items"""
        return sum(INVENTORY[item.name] * item.quantity for item in self.items)
    def add(self, name, quantity):
        """add an item to our cart"""
        # merge with an existing line for the same item if there is one
        for item in self.items:
            if item.name == name:
                item.quantity += quantity
                return
        self.items.append(Item(name, quantity))
    def remove(self, name):
        """remove all of item name from the cart"""
        self.items = [item for item in self.items if item.name != name]
    def report(self):
        """print a summary of the cart"""
        for item in self.items:
            print("{:12s} : {}".format(item.name, item.quantity))
sc = ShoppingCart()
sc.add("orange", 19)
sc.add("apple", 2)
sc.report()
sc.add("apple", 9)
sc.report()
sc.subtotal()
sc.remove("apple")
sc.report()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: At this point, we've got the list of patients by their IDs, and their associated labels stored in a dataframe. Now, we can begin to iterate through the patients and gather their respective data. We're almost certainly going to need to do some preprocessing of this data, but we'll see.
Step2: Above, we iterate through each patient, we grab their label, we get the full path to that specific patient (inside THAT path contains ~200ish scans which we also iterate over, BUT also want to sort, since they wont necessarily be in proper order).
Step3: Alright, so above we just went ahead and grabbed the pixel_array attribute, which is what I assume to be the scan slice itself (we will confirm this soon), but immediately I am surprised by this non-uniformity of slices. This isn't quite ideal and will cause a problem later. All of our images are the same size, but the slices arent. In terms of a 3D rendering, these actually are not the same size.
Step4: Oh.
Step5: Now, I am not a doctor, but I'm going to claim a mini-victory and say that's our first CT scan slice.
Step7: Alright, so we're resizing our images from 512x512 to 150x150. 150 is still going to wind up likely being waaaaaaay too big. That's fine, we can play with that constant more later, we just want to know how to do it.
Step8: The struggle is real. Okay, what you're about to see you shouldn't attempt if anyone else is watching, like if you're going to show your code to the public...
Step9: Okay, the Python gods are really not happy with me for that hacky solution. If any of you would like to improve this chunking/averaging code, feel free. Really, any of this code...if you have improvements, share them! This is going to stay pretty messy. But hey, we did it! We figured out a way to make sure our 3 dimensional data can be at any resolution we want or need. Awesome!
Step11: Section 3
Step12: Section 4
Step13: Now we're ready for the network itself
Step14: Why the 54080 magic number? To get this, I simply run the script once and see what the error yells at me for the expected size multiple. This is certainly not the right way to go about it, but that's my 100% honest method, and my first time working in a 3D convnet. AFAIK, it's the padding that causes this to not be EXACTLY 50,000 (50 x 50 x 20 is the size of our actual input data, which is 50,000 total).
Step15: Example output that I got
Step16: So, actually, our dataset has 1035 non-cancer examples and 362 cancerous examples. Thus, an algorithm that always predicted no-cancer with our model would be ~ 74% accurate (1035/1397).
|
<ASSISTANT_TASK:>
Python Code:
import dicom # for reading dicom files
import os # for doing directory operations
import pandas as pd # for some simple data analysis (right now, just to load in the labels data and quickly reference it)
# Change this to wherever you are storing your data:
# IF YOU ARE FOLLOWING ON KAGGLE, YOU CAN ONLY PLAY WITH THE SAMPLE DATA, WHICH IS MUCH SMALLER
data_dir = '../input/sample_images/'
patients = os.listdir(data_dir)
labels_df = pd.read_csv('../input/stage1_labels.csv', index_col=0)
labels_df.head()
for patient in patients[:1]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
# a couple great 1-liners from: https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
print(len(slices),label)
print(slices[0])
for patient in patients[:3]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
# a couple great 1-liners from: https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
print(slices[0].pixel_array.shape, len(slices))
len(patients)
import matplotlib.pyplot as plt
for patient in patients[:1]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
# the first slice
plt.imshow(slices[0].pixel_array)
plt.show()
import cv2
import numpy as np
IMG_PX_SIZE = 150
for patient in patients[:1]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
fig = plt.figure()
for num,each_slice in enumerate(slices[:12]):
y = fig.add_subplot(3,4,num+1)
new_img = cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE))
y.imshow(new_img)
plt.show()
import math
def chunks(l, n):
# Credit: Ned Batchelder
# Link: http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks
"""Yield successive n-sized chunks from l."""
for i in range(0, len(l), n):
yield l[i:i + n]
def mean(l):
return sum(l) / len(l)
IMG_PX_SIZE = 150
HM_SLICES = 20
data_dir = '../input/sample_images/'
patients = os.listdir(data_dir)
labels_df = pd.read_csv('../input/stage1_labels.csv', index_col=0)
for patient in patients[:10]:
try:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
new_slices = []
slices = [cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) for each_slice in slices]
chunk_sizes = math.ceil(len(slices) / HM_SLICES)
for slice_chunk in chunks(slices, chunk_sizes):
slice_chunk = list(map(mean, zip(*slice_chunk)))
new_slices.append(slice_chunk)
print(len(slices), len(new_slices))
except:
# some patients don't have labels, so we'll just pass on this for now
pass
for patient in patients[:10]:
try:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
new_slices = []
slices = [cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) for each_slice in slices]
chunk_sizes = math.ceil(len(slices) / HM_SLICES)
for slice_chunk in chunks(slices, chunk_sizes):
slice_chunk = list(map(mean, zip(*slice_chunk)))
new_slices.append(slice_chunk)
if len(new_slices) == HM_SLICES-1:
new_slices.append(new_slices[-1])
if len(new_slices) == HM_SLICES-2:
new_slices.append(new_slices[-1])
new_slices.append(new_slices[-1])
if len(new_slices) == HM_SLICES+2:
new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],])))
del new_slices[HM_SLICES]
new_slices[HM_SLICES-1] = new_val
if len(new_slices) == HM_SLICES+1:
new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],])))
del new_slices[HM_SLICES]
new_slices[HM_SLICES-1] = new_val
print(len(slices), len(new_slices))
except Exception as e:
# again, some patients are not labeled, but JIC we still want the error if something
# else is wrong with our code
print(str(e))
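As a possible improvement to the chunk-and-average hack above (this is my own suggestion, not the approach the original code takes), the depth resampling can be done in one shot with linear interpolation along the slice axis, which always produces exactly the target number of slices with no special-case padding:

```python
import numpy as np

def resample_depth(stack, target_slices):
    # stack: array of shape (n_slices, height, width); returns an array of
    # shape (target_slices, height, width) by linearly interpolating
    # between neighbouring slices along axis 0.
    n = stack.shape[0]
    positions = np.linspace(0, n - 1, target_slices)
    lo = np.floor(positions).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = (positions - lo)[:, None, None]
    return stack[lo] * (1 - frac) + stack[hi] * frac

# e.g. squeeze a 137-slice scan down to exactly 20 slices
demo = resample_depth(np.random.rand(137, 150, 150), 20)
```

Because the output depth is exact by construction, none of the `HM_SLICES-1` / `HM_SLICES+2` fix-up branches below would be needed with this approach.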
for patient in patients[:1]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
new_slices = []
slices = [cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) for each_slice in slices]
chunk_sizes = math.ceil(len(slices) / HM_SLICES)
for slice_chunk in chunks(slices, chunk_sizes):
slice_chunk = list(map(mean, zip(*slice_chunk)))
new_slices.append(slice_chunk)
if len(new_slices) == HM_SLICES-1:
new_slices.append(new_slices[-1])
if len(new_slices) == HM_SLICES-2:
new_slices.append(new_slices[-1])
new_slices.append(new_slices[-1])
if len(new_slices) == HM_SLICES+2:
new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],])))
del new_slices[HM_SLICES]
new_slices[HM_SLICES-1] = new_val
if len(new_slices) == HM_SLICES+1:
new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],])))
del new_slices[HM_SLICES]
new_slices[HM_SLICES-1] = new_val
fig = plt.figure()
for num,each_slice in enumerate(new_slices):
y = fig.add_subplot(4,5,num+1)
y.imshow(each_slice, cmap='gray')
plt.show()
import numpy as np
import pandas as pd
import dicom
import os
import matplotlib.pyplot as plt
import cv2
import math
IMG_SIZE_PX = 50
SLICE_COUNT = 20
def chunks(l, n):
# Credit: Ned Batchelder
# Link: http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks
"""Yield successive n-sized chunks from l."""
for i in range(0, len(l), n):
yield l[i:i + n]
def mean(a):
return sum(a) / len(a)
def process_data(patient,labels_df,img_px_size=50, hm_slices=20, visualize=False):
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
new_slices = []
slices = [cv2.resize(np.array(each_slice.pixel_array),(img_px_size,img_px_size)) for each_slice in slices]
chunk_sizes = math.ceil(len(slices) / hm_slices)
for slice_chunk in chunks(slices, chunk_sizes):
slice_chunk = list(map(mean, zip(*slice_chunk)))
new_slices.append(slice_chunk)
if len(new_slices) == hm_slices-1:
new_slices.append(new_slices[-1])
if len(new_slices) == hm_slices-2:
new_slices.append(new_slices[-1])
new_slices.append(new_slices[-1])
if len(new_slices) == hm_slices+2:
new_val = list(map(mean, zip(*[new_slices[hm_slices-1],new_slices[hm_slices],])))
del new_slices[hm_slices]
new_slices[hm_slices-1] = new_val
if len(new_slices) == hm_slices+1:
new_val = list(map(mean, zip(*[new_slices[hm_slices-1],new_slices[hm_slices],])))
del new_slices[hm_slices]
new_slices[hm_slices-1] = new_val
if visualize:
fig = plt.figure()
for num,each_slice in enumerate(new_slices):
y = fig.add_subplot(4,5,num+1)
y.imshow(each_slice, cmap='gray')
plt.show()
if label == 1: label=np.array([0,1])
elif label == 0: label=np.array([1,0])
return np.array(new_slices),label
# stage 1 for real.
data_dir = '../input/sample_images/'
patients = os.listdir(data_dir)
labels = pd.read_csv('../input/stage1_labels.csv', index_col=0)
much_data = []
for num,patient in enumerate(patients):
if num % 100 == 0:
print(num)
try:
img_data,label = process_data(patient,labels,img_px_size=IMG_SIZE_PX, hm_slices=SLICE_COUNT)
#print(img_data.shape,label)
much_data.append([img_data,label])
except KeyError as e:
print('This is unlabeled data!')
np.save('muchdata-{}-{}-{}.npy'.format(IMG_SIZE_PX,IMG_SIZE_PX,SLICE_COUNT), much_data)
import tensorflow as tf
import numpy as np
IMG_SIZE_PX = 50
SLICE_COUNT = 20
n_classes = 2
batch_size = 10
x = tf.placeholder('float')
y = tf.placeholder('float')
keep_rate = 0.8
def conv3d(x, W):
return tf.nn.conv3d(x, W, strides=[1,1,1,1,1], padding='SAME')
def maxpool3d(x):
# size of window movement of window as you slide about
return tf.nn.max_pool3d(x, ksize=[1,2,2,2,1], strides=[1,2,2,2,1], padding='SAME')
def convolutional_neural_network(x):
# # 5 x 5 x 5 patches, 1 channel, 32 features to compute.
weights = {'W_conv1':tf.Variable(tf.random_normal([3,3,3,1,32])),
# 5 x 5 x 5 patches, 32 channels, 64 features to compute.
'W_conv2':tf.Variable(tf.random_normal([3,3,3,32,64])),
# 64 features
'W_fc':tf.Variable(tf.random_normal([54080,1024])),
'out':tf.Variable(tf.random_normal([1024, n_classes]))}
biases = {'b_conv1':tf.Variable(tf.random_normal([32])),
'b_conv2':tf.Variable(tf.random_normal([64])),
'b_fc':tf.Variable(tf.random_normal([1024])),
'out':tf.Variable(tf.random_normal([n_classes]))}
# image X image Y image Z
x = tf.reshape(x, shape=[-1, IMG_SIZE_PX, IMG_SIZE_PX, SLICE_COUNT, 1])
conv1 = tf.nn.relu(conv3d(x, weights['W_conv1']) + biases['b_conv1'])
conv1 = maxpool3d(conv1)
conv2 = tf.nn.relu(conv3d(conv1, weights['W_conv2']) + biases['b_conv2'])
conv2 = maxpool3d(conv2)
fc = tf.reshape(conv2,[-1, 54080])
fc = tf.nn.relu(tf.matmul(fc, weights['W_fc'])+biases['b_fc'])
fc = tf.nn.dropout(fc, keep_rate)
output = tf.matmul(fc, weights['out'])+biases['out']
return output
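Rather than reading the 54080 off an error message, the flattened size can be derived directly. This is a small sketch under the assumption that 'SAME' padding with two 2x2x2 max-pools (stride 2) rounds each spatial dimension up, as in the network above:

```python
import math

def pooled(size, n_pools=2, stride=2):
    # 'SAME' padding rounds each pooled dimension up at every pooling step
    for _ in range(n_pools):
        size = math.ceil(size / stride)
    return size

# 50 -> 25 -> 13 in x and y, 20 -> 10 -> 5 in z, times conv2's 64 features
flat_size = pooled(50) * pooled(50) * pooled(20) * 64
# flat_size == 54080
```

That ceiling-rounding of the odd dimensions (25 -> 13 rather than 12) is exactly why the result is not the "clean" 50,000 the input shape might suggest.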
much_data = np.load('muchdata-50-50-20.npy')
# If you are working with the basic sample data, use maybe 2 instead of 100 here... you don't have enough data to really do this
train_data = much_data[:-100]
validation_data = much_data[-100:]
def train_neural_network(x):
prediction = convolutional_neural_network(x)
cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(prediction,y) )
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)
hm_epochs = 10
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
successful_runs = 0
total_runs = 0
for epoch in range(hm_epochs):
epoch_loss = 0
for data in train_data:
total_runs += 1
try:
X = data[0]
Y = data[1]
_, c = sess.run([optimizer, cost], feed_dict={x: X, y: Y})
epoch_loss += c
successful_runs += 1
except Exception as e:
# I am passing for the sake of notebook space, but we are getting 1 shaping issue from one
# input tensor. Not sure why, will have to look into it. Guessing it's
# one of the depths that doesn't come to 20.
pass
#print(str(e))
print('Epoch', epoch+1, 'completed out of',hm_epochs,'loss:',epoch_loss)
correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
print('Accuracy:',accuracy.eval({x:[i[0] for i in validation_data], y:[i[1] for i in validation_data]}))
print('Done. Finishing accuracy:')
print('Accuracy:',accuracy.eval({x:[i[0] for i in validation_data], y:[i[1] for i in validation_data]}))
print('fitment percent:',successful_runs/total_runs)
# Run this locally:
# train_neural_network(x)
labels_df.cancer.value_counts()
labels_df.ix[-100:].cancer.value_counts()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Size distribution of bacterial genomes
Step3: Distribution of 16S gene copies per genome
Step4: Fitting distribution
Step5: Distribution of 16S gene copies per genome
Step6: Fitting distribution
Step7: Notes
|
<ASSISTANT_TASK:>
Python Code:
import os
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/SSU_genes_per_ng_DNA/'
rnammerDir = os.path.join(workDir + 'rnammer')
genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
import glob
import pyfasta
import numpy as np
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt
import scipy.stats as ss
from fitter import Fitter
from functools import partial
%matplotlib inline
%load_ext rpy2.ipython
%%R
library(dplyr)
library(tidyr)
library(ggplot2)
if not os.path.isdir(workDir):
os.makedirs(workDir)
if not os.path.isdir(rnammerDir):
os.makedirs(rnammerDir)
p = os.path.join(genomeDir, '*.fasta')
genomeFiles = glob.glob(p)
print 'Number of genome files: {}'.format(len(genomeFiles))
total_seq_len = lambda x: sum([len(y) for y in x.values()])
def total_genome_lens(genome_files):
genome_lens = {}
for fasta in genome_files:
name = os.path.split(fasta)[-1]
name = os.path.splitext(name)[0]
pyf = pyfasta.Fasta(fasta)
genome_lens[name] = [total_seq_len(pyf)]
return genome_lens
genome_lens = total_genome_lens(genomeFiles)
df_genome_len = pd.DataFrame(genome_lens).transpose()
df_genome_len
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(df_genome_len.ix[:,0], bins=20)
fo = Fitter(df_genome_len.ix[:,0])
fo.fit()
fo.summary()
genome_len_best_fit = fo.fitted_param['rayleigh']
genome_len_best_fit
# test of distribution
x = ss.rayleigh.rvs(*genome_len_best_fit, size=10000)
fig = plt.figure()
ax = plt.subplot(111)
ax.hist(x, bins=50)
fig.show()
%%bash -s "$genomeDir" "$rnammerDir"
find $1 -name "*fasta" | \
perl -pe 's/.+\/|\.fasta//g' | \
xargs -n 1 -I % -P 30 bash -c \
"rnammer -S bac -m ssu -gff $2/%_rrn.gff -f $2/%_rrn.fna -xml $2/%_rrn.xml < $1/%.fasta"
## Summarizing the results
!cd $rnammerDir; \
egrep -v "^#" *.gff | \
grep "16s_rRNA" | \
perl -pe 's/:/\t/' > ssu_summary.txt
inFile = os.path.join(rnammerDir, 'ssu_summary.txt')
inFH = open(inFile, 'rb')
df_ssu = pd.read_csv(inFH, sep='\t', header=None)
df_ssu.head()
fig = plt.figure()
ax = plt.subplot(111)
ax.hist(df_ssu.ix[:,6], bins=50)
fig.show()
# filtering by gene length of >= 1000 bp
df_ssu_f = df_ssu.loc[df_ssu[6] >= 1000]
df_ssu_f.head()
# counting number of 16S genes per genome
ssu_count = Counter(df_ssu_f[1])
ssu_max = max(ssu_count.values())
# plotting distribution
fig = plt.figure()
ax = plt.subplot(111)
ax.hist(ssu_count.values(), bins=ssu_max)
fig.show()
fo = Fitter(ssu_count.values())
fo.fit()
fo.summary()
ssu_ray_fit = fo.fitted_param['rayleigh']
ssu_ray_fit
# test of distribution
x = ss.rayleigh.rvs(*ssu_ray_fit, size=10000)
fig = plt.figure()
ax = plt.subplot(111)
ax.hist(x, bins=50)
fig.show()
ssu_beta_fit = fo.fitted_param['beta']
ssu_beta_fit
# test of distribution
x = ss.beta.rvs(*ssu_beta_fit, size=10000)
fig = plt.figure()
ax = plt.subplot(111)
ax.hist(x, bins=50)
fig.show()
# example of calculations
gradient_DNA_conc = 1e-9 # g of DNA
avogadro = 6.022e23 # molecules/mole
genome_len = 4000000
mw_genome = genome_len * 607.4 + 157.9
n_genomes = gradient_DNA_conc / mw_genome * avogadro
ssu_copy_per_genome = 4
n_genomes * ssu_copy_per_genome
def SSU_copies_in_ng_DNA(DNA_conc, genome_len, ssu_copy_per_genome):
DNA_conc__g = DNA_conc * 1e-9 # ng --> g of DNA
avogadros = 6.022e23 # molecules/mole
mw_genome = genome_len * 607.4 + 157.9
n_genomes = DNA_conc__g / mw_genome * avogadros
ssu_copies = n_genomes * ssu_copy_per_genome
return ssu_copies
# run
SSU_copies_in_ng_DNA(1, 4000000, 4)
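As a rough sanity check of the unit conversion (assuming, as in the formula above, ~607.4 g/mol per base pair plus a 157.9 g/mol terminal correction), 1 ng of DNA from 4 Mbp genomes carrying 4 SSU copies each should give on the order of one million copies:

```python
# dimensional walk-through of SSU_copies_in_ng_DNA(1, 4000000, 4)
mw = 4000000 * 607.4 + 157.9          # g/mol for one 4 Mbp genome
genomes = 1e-9 / mw * 6.022e23        # genome molecules in 1 ng of DNA
copies = genomes * 4                  # SSU copies at 4 per genome
# copies comes out to roughly 9.9e5
```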
def SSU_copies_MC(DNA_conc, genome_len_dist, ssu_copy_dist, n=100000):
n_copy_dist = []
for i in range(n):
genome_len = genome_len_dist(size=1)[0]
ssu_copy_per_genome = ssu_copy_dist(size=1)[0]
n_copies = SSU_copies_in_ng_DNA(DNA_conc, genome_len, ssu_copy_per_genome)
n_copy_dist.append(n_copies)
return n_copy_dist
# distribution functions
genome_len_dist = partial(ss.rayleigh.rvs, *genome_len_best_fit)
ssu_copy_dist = partial(ss.rayleigh.rvs, *ssu_ray_fit)
# monte carlo estimation of ssu copies in a gradient
gradient_dna_conc__ng = 5000
n_copy_dist = SSU_copies_MC(gradient_dna_conc__ng, genome_len_dist, ssu_copy_dist, n=10000)
fig = plt.figure()
ax = plt.subplot(111)
ax.hist(n_copy_dist, bins=50)
fig.show()
median_copy = int(np.median(n_copy_dist))
std_copy = int(np.std(n_copy_dist))
print 'Number of SSU copies in {} ng of DNA: {} +/- {}'.format(gradient_dna_conc__ng, median_copy, std_copy)
def median_confidence_interval(data, confidence=0.95):
a = 1.0*np.array(data)
n = len(a)
m, se = np.median(a), ss.sem(a)
h = se * ss.t._ppf((1+confidence)/2., n-1)
return m, m-h, m+h
mci = median_confidence_interval(n_copy_dist)
mci = map(int, mci)
# lci,hci = ss.norm.interval(0.05, loc=np.mean(n_copy_dist), scale=np.std(n_copy_dist))
# copy_median = np.median(n_copy_dist)
# mci = [copy_median, copy_median - lci, copy_median + hci]
print 'Number of SSU copies in {} ng of DNA: {:,d} (low:{:,d}, high:{:,d})'.format(gradient_dna_conc__ng, *mci)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Experimenting with a sample
Step2: Preparing full data set
Step3: Model
Step4: (10th place in the competition was 0.108)
Step5: (10th place in the competition was 0.108)
|
<ASSISTANT_TASK:>
Python Code:
path = Config().data/'rossmann'
train_df = pd.read_pickle(path/'train_clean')
train_df.head().T
n = len(train_df); n
idx = np.random.permutation(range(n))[:2000]
idx.sort()
small_df = train_df.iloc[idx]
small_cont_vars = ['CompetitionDistance', 'Mean_Humidity']
small_cat_vars = ['Store', 'DayOfWeek', 'PromoInterval']
small_df = small_df[small_cat_vars + small_cont_vars + ['Sales']].reset_index(drop=True)
small_df.head()
small_df.iloc[1000:].head()
splits = [list(range(1000)),list(range(1000,2000))]
to = TabularPandas(small_df.copy(), Categorify, cat_names=small_cat_vars, cont_names=small_cont_vars, splits=splits)
to.train.items.head()
to.valid.items.head()
to.classes['DayOfWeek']
splits = [list(range(1000)),list(range(1000,2000))]
to = TabularPandas(small_df.copy(), FillMissing, cat_names=small_cat_vars, cont_names=small_cont_vars, splits=splits)
to.train.items[to.train.items['CompetitionDistance_na'] == True]
train_df = pd.read_pickle(path/'train_clean')
test_df = pd.read_pickle(path/'test_clean')
len(train_df),len(test_df)
procs=[FillMissing, Categorify, Normalize]
dep_var = 'Sales'
cat_names = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'StoreType', 'Assortment',
'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear', 'State', 'Week', 'Events', 'Promo_fw',
'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw', 'SchoolHoliday_fw', 'SchoolHoliday_bw']
cont_names = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h', 'Mean_Wind_SpeedKm_h',
'CloudCover', 'trend', 'trend_DE', 'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
dep_var = 'Sales'
df = train_df[cat_names + cont_names + [dep_var,'Date']].copy()
test_df['Date'].min(), test_df['Date'].max()
cut = train_df['Date'][(train_df['Date'] == train_df['Date'][len(test_df)])].index.max()
cut
splits = (list(range(cut, len(train_df))),list(range(cut)))
train_df[dep_var].head()
train_df[dep_var] = np.log(train_df[dep_var])
#train_df = train_df.iloc[:100000]
#cut = 20000
splits = (list(range(cut, len(train_df))),list(range(cut)))
%time to = TabularPandas(train_df, procs, cat_names, cont_names, dep_var, y_block=TransformBlock(), splits=splits)
dls = to.dataloaders(bs=512, path=path)
dls.show_batch()
max_log_y = np.log(1.2) + np.max(train_df['Sales'])
y_range = (0, max_log_y)
dls.c = 1
learn = tabular_learner(dls, layers=[1000,500], loss_func=MSELossFlat(),
config=tabular_config(ps=[0.001,0.01], embed_p=0.04, y_range=y_range),
metrics=exp_rmspe)
learn.model
len(dls.train_ds.cont_names)
learn.lr_find()
learn.fit_one_cycle(5, 3e-3, wd=0.2)
learn.recorder.plot_loss(skip_start=1000)
test_to = to.new(test_df)
test_to.process()
test_dls = test_to.dataloaders(bs=512, path=path, shuffle_train=False)
learn.metrics=[]
tst_preds,_ = learn.get_preds(dl=test_dls.train)
np.exp(tst_preds.numpy()).T.shape
test_df["Sales"]=np.exp(tst_preds.numpy()).T[0]
test_df[["Id","Sales"]] = test_df[["Id","Sales"]].astype("int")
test_df[["Id","Sales"]].to_csv("rossmann_submission.csv",index=False)
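For reference, the exp_rmspe metric used during training can be written in plain NumPy. This is a sketch assuming fastai's definition: predictions and targets arrive in log space (since we trained on log(Sales)), and the percentage error is taken after exponentiating back:

```python
import numpy as np

def exp_rmspe_np(pred_log, targ_log):
    # root mean squared percentage error after undoing the log transform
    pred, targ = np.exp(pred_log), np.exp(targ_log)
    pct_err = (targ - pred) / targ
    return float(np.sqrt(np.mean(pct_err ** 2)))

# perfect predictions give zero error; a uniform 10% over-prediction gives 0.1
exp_rmspe_np(np.log([100., 200.]), np.log([100., 200.]))
```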
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The call to spot.setup() in the first cells has installed a default style for the graphviz output. If you want to change this style temporarily, you can call the show(style) method explicitly. For instance here is a vertical layout with the default font of GraphViz.
Step2: If you want to add some style options to the existing one, pass a dot to the show() function in addition to your own style options
Step3: The translate() function can also be called with a formula object. Either as a function, or as a method.
Step4: When used as a method, all the arguments are translation options. Here is a monitor
Step5: The following three cells show a formulas for which it makes a difference to select 'small' or 'deterministic'.
Step6: Here is how to build an unambiguous automaton
Step7: Compare with the standard translation
Step8: And here is the automaton above with state-based acceptance
Step9: Some example of running the self-loopization algorithm on an automaton
Step10: Reading from file (see automaton-io.ipynb for more examples).
Step11: Explicit determinization after translation
Step12: Determinization by translate(). The generic option allows any acceptance condition to be used instead of the default generalized Büchi.
Step13: Adding an automatic proposition to all edges
Step14: Adding an atomic proposition to the edge between 0 and 1
|
<ASSISTANT_TASK:>
Python Code:
a = spot.translate('(a U b) & GFc & GFd', 'BA', 'complete'); a
a.show("v")
a.show(".ast")
f = spot.formula('a U b'); f
spot.translate(f)
f.translate()
f.translate('mon')
f = spot.formula('Ga | Gb | Gc'); f
f.translate('ba', 'small').show('.v')
f.translate('ba', 'det').show('v.')
spot.translate('GFa -> GFb', 'unambig')
spot.translate('GFa -> GFb')
spot.translate('GFa -> GFb', 'sbacc')
a = spot.translate('F(a & X(!a &Xb))', "any"); a
spot.sl(a)
a.is_empty()
%%file example1.aut
HOA: v1
States: 3
Start: 0
AP: 2 "a" "b"
acc-name: Buchi
Acceptance: 4 Inf(0)&Fin(1)&Fin(3) | Inf(2)&Inf(3) | Inf(1)
--BODY--
State: 0 {3}
[t] 0
[0] 1 {1}
[!0] 2 {0}
State: 1 {3}
[1] 0
[0&1] 1 {0}
[!0&1] 2 {2}
State: 2
[!1] 0
[0&!1] 1 {0}
[!0&!1] 2 {0}
--END--
a = spot.automaton('example1.aut')
display(a.show('.a'))
display(spot.remove_fin(a).show('.a'))
display(a.postprocess('TGBA', 'complete').show('.a'))
display(a.postprocess('BA'))
!rm example1.aut
spot.complete(a)
spot.complete(spot.translate('Ga'))
# Using +1 in the display options is a convenient way to shift the
# set numbers in the output, as an aid in reading the product.
a1 = spot.translate('a W c'); display(a1.show('.bat'))
a2 = spot.translate('a U b'); display(a2.show('.bat+1'))
# the product should display pairs of states, unless asked not to (using 1).
p = spot.product(a1, a2); display(p.show('.bat')); display(p.show('.bat1'))
a = spot.translate('FGa')
display(a)
display(a.is_deterministic())
spot.tgba_determinize(a).show('.ba')
aut = spot.translate('FGa', 'generic', 'deterministic'); aut
import buddy
b = buddy.bdd_ithvar(aut.register_ap('b'))
for e in aut.edges():
e.cond &= b
aut
c = buddy.bdd_ithvar(aut.register_ap('c'))
for e in aut.out(0):
if e.dst == 1:
e.cond &= c
aut
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For EOH, we would like to evolve some initial state (e.g. the uniform superposition state) with evoOp and do a measurement using qubitOp. Below, we illustrate how such an example dynamics process can be easily prepared.
Step2: With all the necessary pieces prepared, we can then proceed to run the algorithm and examine the result.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from qiskit_aqua.operator import Operator
num_qubits = 2
temp = np.random.random((2 ** num_qubits, 2 ** num_qubits))
qubitOp = Operator(matrix=temp + temp.T)
temp = np.random.random((2 ** num_qubits, 2 ** num_qubits))
evoOp = Operator(matrix=temp + temp.T)
from qiskit_aqua.input import get_input_instance
params = {
'problem': {
'name': 'eoh'
},
'algorithm': {
'name': 'EOH',
'num_time_slices': 1
},
'initial_state': {
'name': 'CUSTOM',
'state': 'uniform'
},
'backend': {
'name': 'statevector_simulator'
}
}
algo_input = get_input_instance('EnergyInput')
algo_input.qubit_op = qubitOp
algo_input.add_aux_op(evoOp)
from qiskit_aqua import run_algorithm
ret = run_algorithm(params, algo_input)
print('The result is\n{}'.format(ret))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Step12: End of Project 1.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
Step39: Project 4
Step40: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step41: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step42: End of Project 4.
Step46: Project 5
Step47: Run the following cell to recreate the network and train it once again.
Step48: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step49: End of Project 5.
Step53: Project 6
Step54: Run the following cell to train your network with a small polarity cutoff.
Step55: And run the following cell to test its performance. It should be
Step56: Run the following cell to train your network with a much larger polarity cutoff.
Step57: And run the following cell to test its performance.
Step58: End of Project 6.
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
pos_words = []
neg_words = []
for review, label in zip(reviews,labels):
if label == 'POSITIVE':
pos_words.append(review.split())
else:
neg_words.append(review.split())
positive_counts = Counter([item for sublist in pos_words for item in sublist])
negative_counts = Counter([item for sublist in neg_words for item in sublist])
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
#list(positive_counts.keys())
positive_counts['the']
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
for word in list(positive_counts):
pos_neg_ratios[word] = positive_counts[word]/float(negative_counts[word]+1)
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# TODO: Convert ratios to logs
for word in list(pos_neg_ratios):
if pos_neg_ratios[word] >= 1:
pos_neg_ratios[word] = np.log(pos_neg_ratios[word])
else:
pos_neg_ratios[word] = -np.log(1/(pos_neg_ratios[word] + 0.01))
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
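The reason for taking the logarithm is that it maps the ratio scale (0, ∞) onto (−∞, ∞), centered at 0: a word seen equally often in both classes lands near 0, and a word with mirrored positive/negative counts gets roughly the negated score. A small sketch with hypothetical counts (the +1 smoothing shifts things slightly):

```python
import numpy as np

# Hypothetical counts: a word used 100 times in positive reviews, 25 in negative
pos, neg = 100, 25
log_ratio = np.log(pos / float(neg + 1))

# A word with the mirrored counts gets (almost) the negated score
mirrored = np.log(neg / float(pos + 1))

assert log_ratio > 0 and mirrored < 0
assert abs(log_ratio + mirrored) < 0.1  # near-symmetric around zero
```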
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set()
for review in reviews:
for i in review.split(" "):
vocab.add(i)
vocab_size = len(vocab)
print(vocab_size)
from IPython.display import Image
Image(filename='sentiment_network_2.png')
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1, vocab_size))
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
def update_input_layer(review):
"""Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
# TODO: Your code here
return 1 if label == "POSITIVE" else 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for i in review.split(" "):
review_vocab.add(i)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i,word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i,label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, output_nodes**-0.5, (hidden_nodes, output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1, input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
# clear out previous state by resetting the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 1 if label == "POSITIVE" else 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
# input layer
self.update_input_layer(review)
# hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
output_error = self.get_target_for_label(label) - layer_2
output_error_delta = output_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(self.weights_1_2, output_error_delta)
layer_1_error_delta = layer_1_error
self.weights_1_2 += output_error_delta * layer_1.T * self.learning_rate / len(training_reviews)
self.weights_0_1 += self.layer_0.T.dot(layer_1_error_delta.T) * self.learning_rate / len(training_reviews)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(np.abs(output_error) < 0.5):
correct_so_far += 1
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# input layer
self.update_input_layer(review)
# hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if (layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.test(reviews[-1000:],labels[-1000:])
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for i in review.split(" "):
review_vocab.add(i)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i,word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i,label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, output_nodes**-0.5, (hidden_nodes, output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1, input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
# clear out previous state by resetting the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 1 if label == "POSITIVE" else 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
# input layer
self.update_input_layer(review)
# hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
output_error = self.get_target_for_label(label) - layer_2
output_error_delta = output_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(self.weights_1_2, output_error_delta)
layer_1_error_delta = layer_1_error
self.weights_1_2 += output_error_delta * layer_1.T * self.learning_rate #/ len(training_reviews)
self.weights_0_1 += self.layer_0.T.dot(layer_1_error_delta.T) * self.learning_rate #/ len(training_reviews)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(np.abs(output_error) < 0.5):
correct_so_far += 1
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# input layer
self.update_input_layer(review)
# hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if (layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
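The two cells above show that, for a binary input vector, multiplying by the weight matrix is the same as summing the weight rows for the active indices. A minimal sketch verifying that equivalence (the sizes and seed here are arbitrary):

```python
import numpy as np

np.random.seed(0)
vocab_size, hidden = 10, 5
weights_0_1 = np.random.randn(vocab_size, hidden)

# Dense path: a binary input vector times the full weight matrix
layer_0 = np.zeros(vocab_size)
indices = [4, 9]
for index in indices:
    layer_0[index] = 1
dense = layer_0.dot(weights_0_1)

# Sparse path: skip the multiply and just sum the rows for the active words
sparse = weights_0_1[indices].sum(axis=0)

assert np.allclose(dense, sparse)
```

This is why the Project 5 network can drop `layer_0` entirely and index into `weights_0_1` directly.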
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for i in review.split(" "):
review_vocab.add(i)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i,word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i,label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, output_nodes**-0.5, (hidden_nodes, output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x hidden_nodes, with all values initialized to zero
self.layer_1 = np.zeros((1, hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 1 if label == "POSITIVE" else 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
training_reviews = []
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
# hidden layer
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
output_error = self.get_target_for_label(label) - layer_2
output_error_delta = output_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(self.weights_1_2, output_error_delta)
layer_1_error_delta = layer_1_error
self.weights_1_2 += self.layer_1.T.dot(output_error_delta) * self.learning_rate #/ len(training_reviews)
for index in review:
self.weights_0_1[index] += layer_1_error_delta[0] * self.learning_rate #/ len(training_reviews)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(np.abs(output_error) < 0.5):
correct_so_far += 1
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values above greater-than-or-equal-to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
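The positive-to-negative ratio and its log transform used in this project can be sketched in isolation; the word counts below are made up for illustration:

```python
from math import log

positive_counts = {'great': 90, 'bad': 5, 'movie': 50}
negative_counts = {'great': 10, 'bad': 80, 'movie': 50}

pos_neg_ratios = {}
for word in positive_counts:
    ratio = positive_counts[word] / float(negative_counts[word] + 1)
    # log-transform so positive and negative affinities are symmetric around 0
    pos_neg_ratios[word] = log(ratio) if ratio > 1 else -log(1 / (ratio + 0.01))

ranked = sorted(pos_neg_ratios, key=pos_neg_ratios.get, reverse=True)
print(ranked)  # ['great', 'movie', 'bad']
```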
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, min_count = 10, polarity_cutoff=0.5, learning_rate = 0.1):
'''Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
'''
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels, min_count, polarity_cutoff)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels, min_count, polarity_cutoff):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if cnt >= 50:
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i,word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i,label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes, hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, output_nodes**-0.5, (hidden_nodes, output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x hidden_nodes, with all values initialized to zero
self.layer_1 = np.zeros((1, hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 1 if label == "POSITIVE" else 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
training_reviews = []
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
# hidden layer
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
output_error = self.get_target_for_label(label) - layer_2
output_error_delta = output_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = output_error_delta.dot(self.weights_1_2.T) # propagate the error back to the hidden layer
layer_1_error_delta = layer_1_error
self.weights_1_2 += self.layer_1.T.dot(output_error_delta) * self.learning_rate #/ len(training_reviews)
for index in review:
self.weights_0_1[index] += layer_1_error_delta[0] * self.learning_rate #/ len(training_reviews)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(np.abs(output_error) < 0.5):
correct_so_far += 1
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
'''Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
'''
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
'''Returns a POSITIVE or NEGATIVE prediction for the given review.'''
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize))
p.scatter(x="x1", y="x2", size=8, source=source,color=colors_list)
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation
Step2: Preparing the Data
Step3: Preprocess Feature Columns
Step4: Implementation
Step5: Training and Evaluating Models
Step6: Implementation
Step7: Tabular Results
Step8: Choosing the Best Model
Step9: Question 5 - Final F<sub>1</sub> Score
Step10: Predictions for a sample of students
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
# TODO: Calculate number of students
n_students = len(student_data)
# TODO: Calculate number of features
n_features = len(student_data.columns[:-1])
# TODO: Calculate passing students
n_passed = len(student_data[student_data['passed'] == 'yes'])
# TODO: Calculate failing students
n_failed = n_students - n_passed
# TODO: Calculate graduation rate
grad_rate = (float(n_passed) / n_students) * 100
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
from IPython.display import display
# Show the feature information by printing the first five rows
print "\nFeature values:"
# Added pretty table display
display(X_all.head())
def preprocess_features(X):
''' Preprocesses the student data and converts non-numeric binary variables into
binary (0/1) variables. Converts categorical variables into dummy variables. '''
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
%matplotlib inline
import seaborn as sns
sns.factorplot("failures", col="goout", data=student_data, hue='passed', kind="count");
# TODO: Import any additional functionality you may need here
# TODO: Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# TODO: Shuffle and split the dataset into the number of training and testing points above
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, stratify=y_all,train_size=num_train, random_state=37)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
print "\nTrain set 'yes' pct = {:.2f}%".format(100 * (y_train == 'yes').mean())
print "Test set 'yes' pct = {:.2f}%".format(100 * (y_test == 'yes').mean())
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifier based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
f1_training = predict_labels(clf, X_train, y_train)
f1_test = predict_labels(clf, X_test, y_test)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(f1_training)
print "F1 score for test set: {:.4f}.".format(f1_test)
return [f1_training, f1_test]
# TODO: Import the three supervised learning models from sklearn
from sklearn import tree
from sklearn import svm
from sklearn.ensemble import GradientBoostingClassifier
#from sklearn.ensemble import RandomForestClassifier
# TODO: Initialize the three models
rand_state = 37
clf_A = tree.DecisionTreeClassifier(random_state=rand_state)
clf_B = svm.SVC(random_state=rand_state)
clf_C = GradientBoostingClassifier(random_state=rand_state)
models = [clf_A, clf_B, clf_C]
# TODO: Execute the 'train_predict' function for each classifier and each training set size
# train_predict(clf, X_train, y_train, X_test, y_test)
results = []
for model in models:
# TODO: Set up the training set sizes
print '************************************************************\n'
train = []
test = []
for set_size in (100,200,300):
print '------------------------------------------------------------'
train_values = train_predict(model, X_train[:set_size], y_train[:set_size], X_test, y_test)
train.append(train_values[0])
test.append(train_values[1])
results.append(train)
results.append(test)
import matplotlib.pyplot as plt
%matplotlib inline
for a in range(3):
plt.plot([100,200,300], results[a+a], '-o')
plt.legend(['y = Tree_train', 'y = SVM_train', 'y = EGB_train'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0)
plt.ylabel('Score')
plt.title('Training Scores')
plt.show()
for a in range(1,6,2):
plt.plot([100,200,300], results[a], '-o')
plt.legend(['y = Tree_test', 'y = SVM_test', 'y = EGB_test'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0)
plt.ylabel('Score')
plt.title('Testing Scores')
plt.show()
# TODO: Import 'GridSearchCV' and 'make_scorer'
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
# TODO: Create the parameters list you wish to tune
parameters = {'kernel':['linear', 'rbf', 'poly','sigmoid'],
'C': [0.6, 1, 1.5, 3],
'probability': [True, False],
'tol': [1e-6,1e-5, 1e-4],
'random_state': [37]
}
# TODO: Initialize the classifier
clf = svm.SVC()
# TODO: Make an f1 scoring function using 'make_scorer'
def f1_metrics(y_true, y_pred):
f1 = f1_score(y_true, y_pred, pos_label='yes')
return f1
f1_scorer = make_scorer(f1_metrics)
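For intuition about what make_scorer is wrapping here, the F1 score can be computed by hand; the toy labels below are illustrative, not from the student dataset:

```python
def f1(y_true, y_pred, pos_label='yes'):
    # F1 = harmonic mean of precision and recall for the positive class
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == pos_label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != pos_label and p == pos_label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == pos_label and p != pos_label)
    precision = tp / float(tp + fp) if tp + fp else 0.0
    recall = tp / float(tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = ['yes', 'yes', 'no', 'yes', 'no']
y_pred = ['yes', 'no', 'no', 'yes', 'yes']
score = f1(y_true, y_pred)
print(round(score, 3))  # 0.667 (precision = recall = 2/3)
```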
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(clf, param_grid=parameters, scoring=f1_scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
print clf
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
from IPython.display import display
#display(pd.DataFrame(grid_obj.grid_scores_))
grid_results = pd.DataFrame(grid_obj.grid_scores_)
grid_ = [[x[0]['C'], x[0]['tol'], x[1]] for index, x in list(grid_results.iterrows())]
#sns.heatmap( grid_, annot=True, fmt="d", linewidths=.5)
# tried but couldn't make it a heatmap
students = X_train[:10]
display(students)
for i, pred in enumerate(clf.predict(students)):
print "Is Student {} predicted to pass the year: {}".format(i+1, pred)
X_all.iloc[164]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SimpleITK has a built-in Show method which saves the image to disk and launches a user-configurable program (defaults to ImageJ) to display the image.
Step2: This image does not appear bigger.
Step3: Tips and Tricks for Visualizing Segmentations
Step4: We can also map the scalar label image to a color image as shown below.
Step5: Most filters which take multiple images as arguments require that the images occupy the same physical space. That is, the pixels you are operating on must refer to the same location. Luckily for us, our image and labels do occupy the same physical space, allowing us to overlay the segmentation onto the original image.
Step6: We can also overlay the labels as contours.
Step7: Tips and Tricks for 3D Image Visualization
Step8: Let's create a version of the show method that allows selecting which slices to display.
Step9: We next visualize the T1 image with an overlay of the labels.
Step10: Two ways to solve our problem
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import SimpleITK as sitk
# Download data to work on
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
img1 = sitk.ReadImage(fdata("cthead1.png"))
sitk.Show(img1, title="cthead1")
img2 = sitk.ReadImage(fdata("VM1111Shrink-RGB.png"))
sitk.Show(img2, title="Visible Human Head")
nda = sitk.GetArrayViewFromImage(img1)
plt.imshow(nda)
nda = sitk.GetArrayViewFromImage(img2)
ax = plt.imshow(nda)
def myshow(img):
nda = sitk.GetArrayViewFromImage(img)
plt.imshow(nda)
myshow(img2)
myshow(sitk.Expand(img2, [10] * 5))
def myshow(img, title=None, margin=0.05, dpi=80):
nda = sitk.GetArrayViewFromImage(img)
spacing = img.GetSpacing()
if nda.ndim == 3:
# fastest dim, either component or x
c = nda.shape[-1]
# if the number of components is 3 or 4, consider it an RGB image
if not c in (3, 4):
nda = nda[nda.shape[0] // 2, :, :]
elif nda.ndim == 4:
c = nda.shape[-1]
if not c in (3, 4):
raise RuntimeError("Unable to show 3D-vector Image")
# take a z-slice
nda = nda[nda.shape[0] // 2, :, :, :]
ysize = nda.shape[0]
xsize = nda.shape[1]
# Make a figure big enough to accommodate an axis of xpixels by ypixels
# as well as the ticklabels, etc...
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
fig = plt.figure(figsize=figsize, dpi=dpi)
# Make the axis the right size...
ax = fig.add_axes([margin, margin, 1 - 2 * margin, 1 - 2 * margin])
extent = (0, xsize * spacing[1], ysize * spacing[0], 0)
t = ax.imshow(nda, extent=extent, interpolation=None)
if nda.ndim == 2:
t.set_cmap("gray")
if title:
plt.title(title)
myshow(sitk.Expand(img2, [2, 2]), title="Big Visible Human Head")
img1_seg = sitk.ReadImage(fdata("2th_cthead1.png"))
myshow(img1_seg, "Label Image as Grayscale")
myshow(sitk.LabelToRGB(img1_seg), title="Label Image as RGB")
myshow(sitk.LabelOverlay(img1, img1_seg), title="Label Overlayed")
myshow(sitk.LabelOverlay(img1, sitk.LabelContour(img1_seg), 1.0))
img_T1 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT1.nrrd"))
img_T2 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT2.nrrd"))
img_labels = sitk.ReadImage(
fdata("nac-hncma-atlas2013-Slicer4Version/Data/hncma-atlas.nrrd")
)
myshow(img_T1)
myshow(img_T2)
myshow(sitk.LabelToRGB(img_labels))
size = img_T1.GetSize()
myshow(img_T1[:, size[1] // 2, :])
slices = [
img_T1[size[0] // 2, :, :],
img_T1[:, size[1] // 2, :],
img_T1[:, :, size[2] // 2],
]
myshow(sitk.Tile(slices, [3, 1]), dpi=20)
nslices = 5
slices = [img_T1[:, :, s] for s in range(0, size[2], size[0] // (nslices + 1))]
myshow(sitk.Tile(slices, [1, 0]))
def myshow3d(img, xslices=[], yslices=[], zslices=[], title=None, margin=0.05, dpi=80):
size = img.GetSize()
img_xslices = [img[s, :, :] for s in xslices]
img_yslices = [img[:, s, :] for s in yslices]
img_zslices = [img[:, :, s] for s in zslices]
maxlen = max(len(img_xslices), len(img_yslices), len(img_zslices))
img_null = sitk.Image([0, 0], img.GetPixelID(), img.GetNumberOfComponentsPerPixel())
img_slices = []
d = 0
if len(img_xslices):
img_slices += img_xslices + [img_null] * (maxlen - len(img_xslices))
d += 1
if len(img_yslices):
img_slices += img_yslices + [img_null] * (maxlen - len(img_yslices))
d += 1
if len(img_zslices):
img_slices += img_zslices + [img_null] * (maxlen - len(img_zslices))
d += 1
if maxlen != 0:
if img.GetNumberOfComponentsPerPixel() == 1:
img = sitk.Tile(img_slices, [maxlen, d])
# TO DO check in code to get Tile Filter working with vector images
else:
img_comps = []
for i in range(0, img.GetNumberOfComponentsPerPixel()):
img_slices_c = [sitk.VectorIndexSelectionCast(s, i) for s in img_slices]
img_comps.append(sitk.Tile(img_slices_c, [maxlen, d]))
img = sitk.Compose(img_comps)
myshow(img, title, margin, dpi)
myshow3d(
img_T1,
yslices=range(50, size[1] - 50, 20),
zslices=range(50, size[2] - 50, 20),
dpi=30,
)
myshow3d(
img_T2,
yslices=range(50, size[1] - 50, 30),
zslices=range(50, size[2] - 50, 20),
dpi=30,
)
myshow3d(
sitk.LabelToRGB(img_labels),
yslices=range(50, size[1] - 50, 20),
zslices=range(50, size[2] - 50, 20),
dpi=30,
)
# Why doesn't this work? The images do overlap in physical space.
myshow3d(
sitk.LabelOverlay(img_T1, img_labels),
yslices=range(50, size[1] - 50, 20),
zslices=range(50, size[2] - 50, 20),
dpi=30,
)
# Option 1: Resample the label image using the identity transformation
resampled_img_labels = sitk.Resample(
img_labels,
img_T1,
sitk.Transform(),
sitk.sitkNearestNeighbor,
0.0,
img_labels.GetPixelID(),
)
# Overlay onto the T1 image, requires us to rescale the intensity of the T1 image to [0,255] and cast it so that it can
# be combined with the color overlay (we use an alpha blending of 0.5).
myshow3d(
sitk.LabelOverlay(
sitk.Cast(sitk.RescaleIntensity(img_T1), sitk.sitkUInt8),
resampled_img_labels,
0.5,
),
yslices=range(50, size[1] - 50, 20),
zslices=range(50, size[2] - 50, 20),
dpi=30,
)
# Option 2: Resample the T1 image using the identity transformation
resampled_T1 = sitk.Resample(
img_T1, img_labels, sitk.Transform(), sitk.sitkLinear, 0.0, img_T1.GetPixelID()
)
# Overlay onto the T1 image, requires us to rescale the intensity of the T1 image to [0,255] and cast it so that it can
# be combined with the color overlay (we use an alpha blending of 0.5).
myshow3d(
sitk.LabelOverlay(
sitk.Cast(sitk.RescaleIntensity(resampled_T1), sitk.sitkUInt8), img_labels, 0.5
),
yslices=range(50, size[1] - 50, 20),
zslices=range(50, size[2] - 50, 20),
dpi=30,
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How many nodes?
Step2: This means that node 0 leads to node 1079 with cost 113, and so on.
Step3: Geolocation using geopy
Step5: We need a function that provides the index of the closest node in the graph of Paris. The distance between two (latitude, longitude) pairs is given by the following haversine function
Step6: Visualization using Folium
Step7: Pathfinding using tryalgo
Step8: To finish, let's display the path.
|
<ASSISTANT_TASK:>
Python Code:
with open('paris.txt') as f:
lines = f.read().splitlines()
N, M, T, C, S = map(int, lines[0].split())
paris_coords = []
for i in range(1, N + 1):
paris_coords.append(list(map(float, lines[i].split()))) # Read coords
paris = {node: {} for node in range(N)}
for i in range(N + 1, N + M + 1):
start, end, nb_directions, duration, length = map(int, lines[i].split())
paris[start][end] = length
if nb_directions == 2:
paris[end][start] = length
len(paris)
paris[0]
%matplotlib inline
from matplotlib import pyplot as plt
x = [point[0] for point in paris_coords]
y = [point[1] for point in paris_coords]
plt.scatter(x, y, marker='.', s=1)
from geopy.geocoders import Nominatim
geocoder = Nominatim(user_agent='tryalgo')
start = geocoder.geocode("Gare de Lyon, Paris")
end = geocoder.geocode("Porte d'Italie, Paris")
start.longitude, start.latitude
from math import radians, cos, sin, asin, sqrt
def haversine(lon1, lat1, lon2, lat2):
'''Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
'''
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
c = 2 * asin(sqrt(a))
r = 6371 # Radius of earth in kilometers. Use 3956 for miles
return c * r
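A standalone check of the haversine helper above; the coordinates are approximate values for Gare de Lyon and Porte d'Italie (assumed here, not taken from the dataset), whose separation should be on the order of 3 km:

```python
from math import radians, cos, sin, asin, sqrt

def haversine(lon1, lat1, lon2, lat2):
    # great-circle distance in kilometres between two (lon, lat) points
    lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * asin(sqrt(a)) * 6371

d = haversine(2.3735, 48.8443, 2.3600, 48.8188)
print(round(d, 2))
```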
def closest_node(coords, location):
dmin = float('inf')
closest = None
for i in range(len(coords)):
d = haversine(coords[i][1], coords[i][0], location.longitude, location.latitude)
if d < dmin:
closest = i
dmin = d
return closest
import folium
paris_viz = folium.Map(location=(48.8330293, 2.3618845), tiles='Stamen Watercolor', zoom_start=13)
paris_viz
from tryalgo.dijkstra import dijkstra
source = closest_node(paris_coords, start)
target = closest_node(paris_coords, end)
dist, prec = dijkstra(paris, paris, source, target)
# Let's build the path
path = [target]
node = target
while prec[node] is not None:
node = prec[node]
path.append(node)
print('Path found with', len(path), 'nodes:', path[::-1])
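tryalgo's dijkstra does the heavy lifting above; for reference, a minimal heap-based Dijkstra with the same path-reconstruction idea is sketched below on a toy adjacency dict (note that tryalgo's actual dijkstra takes separate graph and weight arguments):

```python
import heapq

def dijkstra(graph, source, target):
    # graph: {node: {neighbour: edge_weight}}
    dist = {source: 0}
    prec = {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prec[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # rebuild the path by walking predecessors back from the target
    path, node = [], target
    while node is not None:
        path.append(node)
        node = prec[node]
    return dist[target], path[::-1]

toy = {0: {1: 1, 2: 4}, 1: {2: 1, 3: 5}, 2: {3: 1}, 3: {}}
print(dijkstra(toy, 0, 3))  # (3, [0, 1, 2, 3])
```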
from folium.features import PolyLine
paris_viz.add_child(PolyLine(map(lambda node: paris_coords[node], path)))
paris_viz
# We can also save it to a file
# paris_viz.save('pathfinding_in_paris.html')
# from IPython.display import IFrame
# IFrame('pathfinding_in_paris.html', width='100%', height=510)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing the rather heavy DataFrame with the voting fields and the results. We drop a useless column and create a Name field containing both the first and last name of each person, so that we can then build a model for each unique deputy in the parliament.
Step2: The first reduction of our DataFrame is to drop every entry whose Decision field contains a 4, 5, 6 or 7. These codes mean that the person did not take part in the vote, so the rows are not useful for our purpose.
Step4: We now want to slice the DataFrame into multiple smaller DataFrames, each containing all the entries for a single person, so that we can apply machine learning to each person individually. The function split_df below splits the DataFrame into a dictionary keyed by the unique entries of a given field.
Step5: 1.1 Machine Learning on a single deputee
Step6: 1.1.1 Preparing the Features
Step7: We see far fewer abstentions than yes and no votes, which is why we choose to ignore them at first. We rescale the decision output to 0 and 1; otherwise the algorithm will not understand that it is a classification problem.
Step8: We will now format the data, keep only the relevant columns, and split them into a training set. The X DataFrame will contain the probabilities we got from the NLP step, X_text will hold the textual data that we store for visualising the results later on, and the Y vector contains the Decision taken by the person, which is what we want to predict. We will use the Random Forest Classifier, as in homework 4 of the course, as our prediction algorithm.
Step9: 1.1.2 Classification of our data
Step10: The plot describes the classification into binary output. The default RandomForestClassifier clearly overfits our data, notably because the tree depth is unbounded. We therefore iterate over different depths and fix the depth to a value that yields a good result; capping the maximum depth mitigates overfitting.
Step11: Now, we plot the learning curve having a max depth of 3
Step12: 1.1.3 Results
Step13: We format the output DataFrame in a useful way, storing the name of the deputee who voted, their actual Decision, our Predicted Decision and the fields that were used in the learning.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import glob
import os
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.ensemble
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split, cross_val_predict, learning_curve
from ML_helpers import *
import sklearn.metrics
%matplotlib inline
%load_ext autoreload
%autoreload 2
# There's a lot of columns in the DF.
# Therefore, we add this option so that we can see more columns
pd.options.display.max_columns = 100
path = '../datas/nlp_results/'
voting_df = pd.read_csv(path+'voting_with_topics_sentiment.csv')
print('Entries in the DataFrame',voting_df.shape)
#Dropping the useless column
#voting_df = voting_df.drop('Unnamed: 0',1)
#Putting numerical values into the columns that should have numerical values
num_cols = ['BillTitle', 'BusinessTitle','text','text_eng','FirstName','LastName']
voting = voting_df.drop(num_cols,axis=1).apply(pd.to_numeric)
voting['text'] = voting_df.text
#Inserting the full name at the second position
voting.insert(1,'Name', voting_df['FirstName'] + ' ' + voting_df['LastName'])
voting.head(10)
voting_df = voting[((voting.Decision != 4) & (voting.Decision != 5) & (voting.Decision != 6) & (voting.Decision != 7))]
print(voting_df.shape)
#print('Top number of entries in the df :\n', voting_df.Name.value_counts())
def split_df(df, field):
    """Split the input df along a given field into a dictionary which maps each
    unique value of that field to the matching entries of the DataFrame."""
# Retrieve first all the unique Name entries
unique_field = df[field].unique()
print('Number of unique entries in',field,':',len(unique_field))
#Create a dictionary of DataFrames which stores all the info relative to a single deputee
df_dict = {elem : pd.DataFrame for elem in unique_field}
for key in df_dict.keys():
df_dict[key] = df.loc[df[field] == key]
return df_dict
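The split_df helper above does the grouping by hand; pandas groupby achieves the same split in one dict comprehension. A small sketch on toy data (the column names here are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'Name': ['A', 'B', 'A', 'C'], 'Decision': [1, 2, 2, 1]})

# One sub-DataFrame per unique Name, equivalent to split_df(df, 'Name')
df_dict = {name: group for name, group in df.groupby('Name')}

print(sorted(df_dict))    # → ['A', 'B', 'C']
print(len(df_dict['A']))  # → 2
```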
voting_dict = split_df(voting_df, 'Name')
#df_deputee = voting_dict['Guy Parmelin'].drop_duplicates('text', keep = 'last')
df_deputee = voting_dict['Silvia Schenker'].drop_duplicates(['text','Name'], keep = 'last')#
print(df_deputee.shape)
df_deputee.head(10)
df_deputee['sentiment'] = 1*(df_deputee['positive']>df_deputee['negative'])-1*(df_deputee['negative']>df_deputee['positive'])
print(df_deputee.Decision.value_counts())
df_deputee = df_deputee[df_deputee.Decision!=3]
df_deputee.shape
no_pred_field = ['Decision','ParlGroupCode', 'text', 'Name','sentiment','compound']
no_scaled = ['positive','negative','neutral','compound']
df_deputee = df_deputee[df_deputee['compound'] !=0]
pred_field = df_deputee.columns.difference(no_pred_field)
X = df_deputee.drop(no_pred_field,axis=1)
X[X[pred_field.difference(no_scaled)]>0.2]=1
X[X[pred_field.difference(no_scaled)]<0.2]=0
X = X[X[' assurances']==1]
X = X[['positive','negative','neutral']]
#X[pred_field-no_scaled]=X[pred_field-no_scaled].multiply(df_deputee.sentiment,axis=0)
X_text = df_deputee[['Name','text']]
Y = df_deputee[df_deputee[' assurances']>0.2]['Decision'] -1
estimator = RandomForestClassifier()
title = "Learning Curves (Random Forest Classifier with 2 classes)"
plot_learning_curve(estimator,X,Y,title,20)
cv_score = np.zeros(20)
tr_score = np.zeros(20)
cv_param = 20
for i in range(1,21):
forest = RandomForestClassifier(max_depth = i)
cv_score[i-1] = cross_val_score(forest,X,Y,cv = cv_param).mean()
forest.fit(X,Y)
tr_score[i-1] = forest.score(X,Y)
plot_fig(cv_score,tr_score,
"Cross validation score against the depth of the random forest",
"Max depth of the random forest","Cross validation score")
estimator = RandomForestClassifier(max_depth = 10)
title = "Learning Curves (Random Forest Classifier with 2 classes)"
plot_learning_curve(estimator,X,Y,title,20)
Y_predicted = cross_validation(X, Y, cv_param=20, max_depth=10)
features_2 = plot_feature_importances(X,Y,pred_field, max_depth=10)
df_out = df_deputee[np.r_[['Decision'],pred_field]]
df_out.insert(1,'Predicted Decision',Y_predicted + 1)
#Optional line to round the probabilities to 3 decimals (will not sum to one, but simpler to look at it.)
df_out = df_out.apply(np.around,decimals=3)
if not os.path.exists("../datas/treated_data/Voting_prediction"):
os.makedirs("../datas/treated_data/Voting_prediction")
df_out.to_csv('../datas/treated_data/Voting_prediction/'+'prediction_'+
df_deputee['Name'].unique()[0].lower().replace(' ','_')+'.csv',index=False)
df_out.head(100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can do a lot with the numpy module. Below is an example to jog your memory
Step2: Do you remember the Fibonacci sequence from yesterday's Lecture 1? Let's define our own function that will help us to write the Fibonacci sequence.
Step3: Remember loops too? Let's get the first 10 numbers in the Fibonacci sequence.
Step4: There's your quick review of numpy and functions along with a while loop thrown in. Now we can move on to the content of Lecture 3.
Step5: Let's generate a vector of length 5 populated with uniformly distributed random numbers. The function np.random.rand takes the array output size as an argument (in this case, 5).
Step6: Additionally, you are not limited to one-dimensional arrays! Let's make a 5x5, two-dimensional array
Step7: Great, so now you have a handle on generating uniform distributions. Let's quickly look at one more type of distribution.
Step8: So these numbers probably don't mean that much to you. Don't worry; they don't mean much to me either!
Step9: Now, let's plot a uniform distribution and take a look.
Step10: Now, let's use plt.hist to see what X looks like. First, run the cell below. Then, vary bins -- doing so will either increase or decrease the apparent effect of noise in your distribution.
Step11: Nice job! Do you see why the "uniform distribution" is referred to as such?
Step12: Nice job! You just plotted a Gaussian distribution with mean of 0 and a standard deviation of 1.
Step13: Before moving onto the next section, vary the values of mu and sigma in the above code to see how your histogram changes. You should find that changing mu (the mean) affects the center of the distribution while changing sigma (the standard deviation) affects the width of the distribution.
Step14: For simplicity's sake, we've used plt.hist without generating any return variables. Remember that plt.hist takes in your data (X) and the number of bins, and it makes histograms from it. In the process, plt.hist generates variables that you can store; we just haven't thus far. Run the cell below to see -- it should replot the Gaussian from above while also generating the output variables.
Step15: Something that might be useful to you is that you can make use of variables outputted by plt.hist -- particularly bins and N.
Step16: The plot above (red stars) should look like it overlays the histogram plot above it. If that's what you see, nice job! If not, let your instructor and/or TAs know before moving onto the next section.
Step17: So you've learned how to generate distributions of numbers, plot them, and generate statistics on them. This is a great starting point, but let's try working with some real data!
Step18: Next, plot a histogram of this data set (play around with the number of bins, too).
Step19: Now, calculate and print the mean and standard deviation of this distribution.
Step20: Nice job! Now that you're used to working with real data, we're going to try to fit some more real data to known functions to gain a better understanding of that data.
Step21: We will show you an example, and then you get to try it out for yourself!
Step22: Next, we will define a function $f(x) = \frac 1 3x^2+3$ that will square the elements in x and add an offset. Call this function f_scalar, and implement it (for scalar values) below.
Step23: We will then vectorize the function to allow it to act on all elements of an array at once. Magic!
Step24: Now we will add some noise to the array y using the np.random.rand() function and store it in a new variable called y_noisy.
Step25: Let's see what the y values look like now
Step26: It seems like there's still a rough parabolic shape, so let's see if we can recover the original y values without any noise.
Step27: Then, we want to find the optimal values of a, b, and c that will give a function that fits best to y_noisy.
Step28: Now that we have the fitted parameters, let's use quadratic to plot the fitted parabola alongside the noisy y values.
Step29: And we can also compare y_fitted to the original y values without any noise
Step30: Not a bad job for your first fit function!
Steps for using curve_fit
Step31: So you've imported your data and plotted it. It should look similar to the figure below. Run the next cell to see.
Step32: What type of function would you say this is? Think back to the distributions we've learned about today. Any ideas?
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.linspace(0,10,11)
def myFib(a,b):
return a+b
fibLength = 10 #the length we want for our Fibonacci sequence
fibSeq = np.zeros(fibLength) #make a numpy array of 10 zeros
# Let's define the first 2 elements of the Fibonacci sequence
fibSeq[0] = 0
fibSeq[1] = 1
i = 2 #with the first 2 elements defined, we can calculate the rest of the sequence beginning with the 3rd element
while i < fibLength:
nextFib = myFib(fibSeq[i-2],fibSeq[i-1])
fibSeq[i] = nextFib
i = i + 1
print(fibSeq)
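The pre-allocated array plus while loop above works; the same sequence is often written more compactly with tuple unpacking. A small self-contained sketch:

```python
def fib_sequence(n):
    """Return the first n Fibonacci numbers as a list."""
    seq, a, b = [], 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b  # advance the pair (F_k, F_{k+1})
    return seq

print(fib_sequence(10))  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```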
import numpy as np
np.random.rand(5)
np.random.rand(5,5)
np.random.randn(5)
import matplotlib.pyplot as plt
%matplotlib inline
#your code here
X = np.random.rand(5000)  # 5000 uniform samples in [0, 1)
plt.hist(X, bins=20);
#your code here
mu = 5 #the mean of the distribution
sigma = 3 #the standard deviation
X = sigma * np.random.randn(5000) + mu
plt.hist(X,bins=50);
#write your observations here
N,bins,patches = plt.hist(X, bins=50)
bin_avg = (bins[1:]+bins[:-1])/2
plt.plot(bin_avg, N, 'r*')
plt.show()
mean = np.mean(X)
std = np.std(X)
print('mean: '+ repr(mean) )
print('standard deviation: ' + repr(std))
lifetimes = np.loadtxt('Data/LifetimeData.txt')
# plot a histogram of the lifetime data (try varying the number of bins)
plt.hist(lifetimes, bins=30);
# calculate and print the mean and standard deviation of the distribution
print('mean:', np.mean(lifetimes))
print('standard deviation:', np.std(lifetimes))
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
%matplotlib inline
# x values to evaluate the function over (the range is an arbitrary choice)
x = np.linspace(-10, 10, 101)
# f(x) = (1/3) x^2 + 3 for a scalar x
def f_scalar(x):
    return x**2 / 3 + 3
f = np.vectorize(f_scalar)
y = f(x)
# add uniform noise to y (the noise amplitude is an arbitrary choice)
y_noisy = y + 30 * np.random.rand(len(y))
plt.plot(x,y_noisy)
def quadratic(x,a,b,c):
return a*x**2 + b*x + c
optimal_values, _ = curve_fit(quadratic,x,y_noisy)
a = optimal_values[0]
b = optimal_values[1]
c = optimal_values[2]
print(a, b, c)
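For a polynomial model, curve_fit solves an ordinary least-squares problem, so the closed-form np.polyfit must agree with it. A self-contained cross-check on noise-free synthetic data (the coefficients are chosen arbitrarily), where both routines recover the exact parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def quadratic(x, a, b, c):
    return a * x**2 + b * x + c

x = np.linspace(-5, 5, 50)
y = quadratic(x, 1 / 3, -2.0, 3.0)  # exact data, no noise

popt, _ = curve_fit(quadratic, x, y)
coefs = np.polyfit(x, y, 2)  # closed-form least squares, highest degree first

print(np.allclose(popt, [1 / 3, -2.0, 3.0], atol=1e-6))  # → True
print(np.allclose(coefs, popt, atol=1e-6))               # → True
```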
y_fitted = quadratic(x,a,b,c)
plt.plot(x,y_fitted)
plt.plot(x,y_noisy)
plt.plot(x,y_fitted)
plt.plot(x,y)
# Step 1: Import the data
# Step 2: Plot the data to see what it looks like
data = np.loadtxt("Data/photopeak.txt")
x_data = data[:,0]
y_data = data[:,1]
plt.scatter(x_data, y_data)
from IPython.display import display, Image
display(Image(filename='Data/photopeak.png'))
# Step 3: Define your function here
def gaussian(x,A,mu,sigma):
pi = np.pi
    return A / (sigma * np.sqrt(2 * pi)) * np.exp(-(x - mu)**2 / (2 * sigma**2))
# Step 3.5: SANITY CHECK! Use this step as a way to check that the function you defined above is mathematically correct.
x = np.linspace(-8,12,100)
A = 1
mu = 2
sigma = 3
y = gaussian(x,A,mu,sigma)
plt.plot(x,y)
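With the standard normalization A/(sigma*sqrt(2*pi)), the Gaussian curve integrates to exactly A, which is a useful sanity check for the amplitude parameter. A quick numerical check (self-contained copy, no plotting):

```python
import numpy as np

def gaussian(x, A, mu, sigma):
    return A / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - mu)**2 / (2 * sigma**2))

x = np.linspace(-30, 34, 20001)  # roughly ±10 sigma around mu = 2
y = gaussian(x, A=1.0, mu=2.0, sigma=3.0)

dx = x[1] - x[0]
area = np.sum((y[:-1] + y[1:]) / 2) * dx  # trapezoidal rule
print(round(area, 4))  # → 1.0
```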
# Step 4: Use curve_fit to generate your output parameters
popt,pcov = curve_fit(gaussian,x_data,y_data)
# Step 5: Generate your model fit
y_fitted = gaussian(x_data,popt[0],popt[1],popt[2])
# Step 6: Plot the best fit function and the scatter plot of data
plt.scatter(x_data, y_fitted)
plt.scatter(x_data, y_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using Google Cloud Platform's Machine Learning APIs
Step2: Import the libraries we will need:
Step3: GCP Machine Learning API Key
Step4: Base64 encoding of multimedia binaries (Define image pre-processing functions)
Step5: Control parameters for the ML APIs (Define control parameters for API)
Step6: * Convert message text to speech (Speech synthesis
Step7: * Convert speech to message text (Speech recognition
Step8: * Multilingual translation of message text (Text based language translation)
Step9: Scan the QR code image with the WeChat app to log in automatically
Step10: Combined
|
<ASSISTANT_TASK:>
Python Code:
parm_runtime_env_GCP = True
# parm_runtime_env_GCP = False
# Copyright 2016 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# import subprocess
# retcode = subprocess.call(['pip', 'install', '-U', 'google-api-python-client'])
# retcode = subprocess.call(['pip', 'install', '-U', 'gTTS'])
# Below is for GCP only: install audio conversion tool
# retcode = subprocess.call(['apt-get', 'update', '-y'])
# retcode = subprocess.call(['apt-get', 'install', 'libav-tools', '-y'])
import io, os, subprocess, sys, re, codecs, time, datetime, requests, itchat
from itchat.content import *
from googleapiclient.discovery import build
# Here I read in my own API_KEY from a file, which is not shared in Github repository:
# with io.open('../../API_KEY.txt') as fp:
# for line in fp: APIKEY = line
# You need to un-comment below line and replace 'APIKEY' variable with your own GCP API key:
APIKEY='AIzaSyCvxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# Below is for Google Speech synthesis: text to voice API
from gtts import gTTS
# Below is for Google Speech recognition: voice to text API
speech_service = build('speech', 'v1', developerKey=APIKEY)
# Below is for Google Language Tranlation API
service = build('translate', 'v2', developerKey=APIKEY)
# Import the base64 encoding library.
import base64
# Pass the image data to an encoding function.
def encode_image(image_file):
with io.open(image_file, "rb") as image_file:
image_content = image_file.read()
# Python 2
if sys.version_info[0] < 3:
return base64.b64encode(image_content)
# Python 3
else:
return base64.b64encode(image_content).decode('utf-8')
# Pass the audio data to an encoding function.
def encode_audio(audio_file):
with io.open(audio_file, 'rb') as audio_file:
audio_content = audio_file.read()
# Python 2
if sys.version_info[0] < 3:
return base64.b64encode(audio_content)
# Python 3
else:
return base64.b64encode(audio_content).decode('utf-8')
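The helpers above return str rather than bytes under Python 3 because the JSON request body sent to the APIs needs text. A minimal round-trip check of that encode/decode pattern on in-memory bytes:

```python
import base64
import io

payload = bytes(range(16))  # stand-in for image/audio bytes
encoded = base64.b64encode(io.BytesIO(payload).read()).decode('utf-8')

print(type(encoded).__name__)               # → str (JSON-serialisable)
print(base64.b64decode(encoded) == payload) # → True
```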
# API control parameter for Image API:
parm_image_maxResults = 10 # max objects or faces to be extracted from image analysis
# API control parameter for Language Translation API:
parm_translation_origin_language = 'zh' # original language in text: to be overwriten by TEXT_DETECTION
parm_translation_target_language = 'zh' # target language for translation: Chinese
# API control parameter for 消息文字转成语音 (Speech synthesis: text to voice)
parm_speech_synthesis_language = 'zh' # speech synthesis API 'text to voice' language
# parm_speech_synthesis_language = 'zh-tw' # speech synthesis API 'text to voice' language
# parm_speech_synthesis_language = 'zh-yue' # speech synthesis API 'text to voice' language
# API control parameter for 语音转换成消息文字 (Speech recognition: voice to text)
# parm_speech_recognition_language = 'en' # speech API 'voice to text' language
parm_speech_recognition_language = 'cmn-Hans-CN' # speech API 'voice to text' language
# Running Speech API
def KudosData_text_to_voice(text2voice):
# Python 2
if sys.version_info[0] < 3:
tts = gTTS(text=text2voice.encode('utf-8'), lang=parm_speech_synthesis_language, slow=False)
# Python 3
else:
tts = gTTS(text=text2voice, lang=parm_speech_synthesis_language, slow=False)
text2voiceMP3name = 'Voice_For_You.mp3'
tts.save(text2voiceMP3name)
print('\nCompeleted: Speech synthesis API ( Text -> Voice)')
print(text2voice)
return text2voiceMP3name
# msg.download(msg.fileName)
# print('\nDownloaded image file name is: %s' % msg['FileName'])
# audio_file_input = msg['FileName']
# audio_type = ['flac', 'wav']
# Running Speech API
def KudosData_voice_to_text(audio_file_input, audio_type):
audio_file_output = str(audio_file_input) + '.' + str(audio_type)
# print('audio_file_input : %s' % audio_file_input)
print('Converted audio file for API: %s' % audio_file_output)
# convert mp3 file to target GCP audio file:
# remove audio_file_output, is exist
retcode = subprocess.call(['rm', audio_file_output])
# print(retcode)
if parm_runtime_env_GCP: # using Datalab in Google Cloud Platform
# GCP: use avconv to convert audio
retcode = subprocess.call(['avconv', '-i', audio_file_input, '-ac', '1', audio_file_output])
else: # using a Kudos Data Virtual Machine, or local machine
# VM : use ffmpeg to convert audio
retcode = subprocess.call(['ffmpeg', '-i', audio_file_input, '-ac', '1', audio_file_output])
# print(retcode)
# Call GCP Speech API:
# response = speech_service.speech().syncrecognize(
response = speech_service.speech().recognize(
body={
'config': {
# 'encoding': 'LINEAR16',
# 'sampleRateHertz': 16000,
'languageCode': parm_speech_recognition_language
},
'audio': {
'content': encode_audio(audio_file_output) # base64 of converted audio file, for speech recognition
}
}).execute()
print('Compeleted: Speech recognition API ( Voice -> Text )')
return response
def KudosData_TEXT_TRANSLATION(text, origin_language_code, target_language_code):
# Call translation if parm_translation_origin_language is not parm_translation_target_language
if origin_language_code != target_language_code:
outputs = service.translations().list(source=origin_language_code,
target=target_language_code, q=text).execute()
translated_text = ''
translated_text += u'---- Start Translation ----\n'
translated_text += u'( Origin Lang 原文: ' + origin_language_code + ' )\n'
translated_text += u'( Target Lang 译文: ' + target_language_code + ' )\n'
translated_text += outputs['translations'][0]['translatedText'] + '\n' + '----- End Translation -----\n'
print('Compeleted: Translation API : From Language \'%s\' to \'%s\''
% (origin_language_code, target_language_code))
else:
translated_text = text
return translated_text
itchat.auto_login(hotReload=True) # hotReload=True: cache the login state on exit, so restarting within a short time does not require scanning the QR code again.
# Obtain my own Nick Name
MySelf = itchat.search_friends()
NickName4RegEx = '@' + MySelf['NickName'] + r'\s*'
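NickName4RegEx is later used with re.sub to strip the @-mention prefix before processing a message. A tiny illustration of that pattern with a made-up nickname:

```python
import re

nickname_pattern = '@' + re.escape('Kudos Bot') + r'\s*'  # hypothetical nickname
message = '@Kudos Bot please translate this'
print(re.sub(nickname_pattern, '', message))  # → 'please translate this'
```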
# 1. 消息文字转成语音 (Speech synthesis: text to voice)
# In a group chat, when a text message @-mentions me, automatically convert the text to speech and reply with an mp3 file:
@itchat.msg_register(TEXT, isGroupChat=True)
def text_to_voice_reply(msg):
if msg['isAt']:
# Remove my own Nick Name from message:
text2voice = re.sub(NickName4RegEx, '', msg['Content'])
text2voiceMP3name = KudosData_text_to_voice(text2voice)
itchat.send('@%s@%s' % ('fil', text2voiceMP3name), msg['FromUserName'])
# 2. 语音转换成消息文字 (Speech recognition: voice to text)
@itchat.msg_register([RECORDING], isGroupChat=True)
@itchat.msg_register([RECORDING])
def download_files(msg):
parm_translation_origin_language = 'zh' # will be overwriten by TEXT_DETECTION
msg.download(msg.fileName)
print('\nDownloaded audio file name is: %s' % msg['FileName'])
##############################################################################################################
# call audio analysis APIs #
##############################################################################################################
audio_analysis_reply = u'[ Audio Analysis 音频处理结果 ]\n'
# Voice to Text:
audio_analysis_reply += u'\n[ Voice -> Text 语音识别 ]\n'
response = KudosData_voice_to_text(msg['FileName'], 'flac')
# response = KudosData_voice_to_text(msg['FileName'], 'wav')
if response != {}:
print (response['results'][0]['alternatives'][0]['transcript'])
print ('( confidence %f )' % response['results'][0]['alternatives'][0]['confidence'])
audio_analysis_reply += response['results'][0]['alternatives'][0]['transcript'] + '\n'
audio_analysis_reply += '( confidence ' + str(response['results'][0]['alternatives'][0]['confidence']) + ' )\n'
# Translate recognised text to another language:
parm_translation_origin_language = 'zh'
parm_translation_target_language = 'en'
translated_text_reply = KudosData_TEXT_TRANSLATION(response['results'][0]['alternatives'][0]['transcript'],
parm_translation_origin_language, parm_translation_target_language)
print(translated_text_reply)
audio_analysis_reply += translated_text_reply
return audio_analysis_reply
# 3. 消息文字的多语言互译 (Text based language translation)
# In a group chat, when a text message @-mentions me, automatically translate the text and send the translation back:
@itchat.msg_register(TEXT, isGroupChat=True)
def text_to_translation_reply(msg):
if msg['isAt']:
text4translation = re.sub(NickName4RegEx, '', msg['Content'])
parm_translation_origin_language = 'zh'
parm_translation_target_language = 'en'
translated_text_reply = KudosData_TEXT_TRANSLATION(text4translation,
parm_translation_origin_language, parm_translation_target_language)
print(translated_text_reply)
return translated_text_reply
@itchat.msg_register(TEXT, isGroupChat=True)
def text_reply(msg):
if msg['isAt']:
# 1. 消息文字转成语音 (Speech synthesis: text to voice)
text2voice = re.sub(NickName4RegEx, '', msg['Content']) # Remove my own Nick Name from message
text2voiceMP3name = KudosData_text_to_voice(text2voice)
itchat.send('@%s@%s' % ('fil', text2voiceMP3name), msg['FromUserName'])
# 3. 消息文字的多语言互译 (Text based language translation)
text4translation = re.sub(NickName4RegEx, '', msg['Content'])
parm_translation_origin_language = 'zh'
parm_translation_target_language = 'en'
translated_text_reply = KudosData_TEXT_TRANSLATION(text4translation,
parm_translation_origin_language, parm_translation_target_language)
print(translated_text_reply)
return translated_text_reply
itchat.run()
# interrupt the kernel, then log out
itchat.logout() # safe logout
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the following cell to create your Cloud Storage bucket if it does not already exist.
Step2: Import libraries
Step3: Download and preprocess the data
Step4: The following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model.
Step5: Read images and labels from TFRecords
Step6: Use the visualization utility function provided earlier to preview flower images with their labels.
Step7: Create training and validation datasets
Step8: Build, train, and evaluate the model
Step9: Train the model
Step10: Visualize local predictions
Step11: Export the model as a TF 2.3 SavedModel
Step12: Get input and output signatures
Step13: You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
Step14: Generate explanation metadata
Step15: Deploy model to AI Explanations
Step16: Create explainable model versions
Step17: Deploy an XRAI model
Step18: Get predictions and explanations
Step19: Format your explanation request
Step20: Send the explanations request and visualize
Step21: Check explanations and baselines
Step22: Explain the baseline image
Step23: Send the explanation request for the baseline image. (To check a baseline image for XRAI, change IG_VERSION to XRAI_VERSION below.)
Step24: Visualize the explanation for your random baseline image, highlighting the pixels that contributed to the prediction
Step25: The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a difference between these two values, try increasing the number of integral steps used when you deploy your model.
Step26: Cleaning Up
|
<ASSISTANT_TASK:>
Python Code:
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
import os
PROJECT_ID = "" # TODO: your PROJECT_ID here.
os.environ["PROJECT_ID"] = PROJECT_ID
BUCKET_NAME = PROJECT_ID # TODO: replace your BUCKET_NAME, if needed
REGION = "us-central1"
os.environ["BUCKET_NAME"] = BUCKET_NAME
os.environ["REGION"] = REGION
%%bash
exists=$(gsutil ls -d | grep -w gs://${BUCKET_NAME}/)
if [ -n "$exists" ]; then
echo -e "Bucket gs://${BUCKET_NAME} already exists."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET_NAME}
echo -e "\nHere are your current buckets:"
gsutil ls
fi
import io
import os
import random
from base64 import b64encode
import numpy as np
import PIL
import tensorflow as tf
from matplotlib import pyplot as plt
AUTO = tf.data.experimental.AUTOTUNE
print("AUTO", AUTO)
import explainable_ai_sdk
GCS_PATTERN = "gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec"
IMAGE_SIZE = [192, 192]
BATCH_SIZE = 32
VALIDATION_SPLIT = 0.19
CLASSES = [
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips",
] # do not change, maps to the labels in the data (folder names)
# Split data files between training and validation
filenames = tf.io.gfile.glob(GCS_PATTERN)
random.shuffle(filenames)
split = int(len(filenames) * VALIDATION_SPLIT)
training_filenames = filenames[split:]
validation_filenames = filenames[:split]
print(
"Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format(
len(filenames), len(training_filenames), len(validation_filenames)
)
)
validation_steps = (
int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE
)
steps_per_epoch = (
int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE
)
print(
"With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(
BATCH_SIZE, steps_per_epoch, validation_steps
)
)
def dataset_to_numpy_util(dataset, N):
dataset = dataset.batch(N)
if tf.executing_eagerly():
# In eager mode, iterate in the Dataset directly.
for images, labels in dataset:
numpy_images = images.numpy()
numpy_labels = labels.numpy()
break
else:
        # In non-eager mode, get the TF node that yields
        # the next item and run it in a tf.Session.
get_next_item = dataset.make_one_shot_iterator().get_next()
with tf.Session() as ses:
numpy_images, numpy_labels = ses.run(get_next_item)
return numpy_images, numpy_labels
def title_from_label_and_target(label, correct_label):
label = np.argmax(label, axis=-1) # one-hot to class number
correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number
correct = label == correct_label
return (
"{} [{}{}{}]".format(
CLASSES[label],
str(correct),
", shoud be " if not correct else "",
CLASSES[correct_label] if not correct else "",
),
correct,
)
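title_from_label_and_target converts one-hot vectors back to class indices with np.argmax. The mapping in isolation, using the same CLASSES list:

```python
import numpy as np

CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
one_hot = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # encodes class index 2

idx = np.argmax(one_hot, axis=-1)
print(CLASSES[idx])  # → roses
```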
def display_one_flower(image, title, subplot, red=False):
plt.subplot(subplot)
plt.axis("off")
plt.imshow(image)
plt.title(title, fontsize=16, color="red" if red else "black")
return subplot + 1
def display_9_images_from_dataset(dataset):
subplot = 331
plt.figure(figsize=(13, 13))
images, labels = dataset_to_numpy_util(dataset, 9)
for i, image in enumerate(images):
title = CLASSES[np.argmax(labels[i], axis=-1)]
subplot = display_one_flower(image, title, subplot)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_9_images_with_predictions(images, predictions, labels):
subplot = 331
plt.figure(figsize=(13, 13))
for i, image in enumerate(images):
title, correct = title_from_label_and_target(predictions[i], labels[i])
subplot = display_one_flower(image, title, subplot, not correct)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_training_curves(training, validation, title, subplot):
if subplot % 10 == 1: # set up the subplots on the first call
plt.subplots(figsize=(10, 10), facecolor="#F0F0F0")
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor("#F8F8F8")
ax.plot(training)
ax.plot(validation)
ax.set_title("model " + title)
ax.set_ylabel(title)
ax.set_xlabel("epoch")
ax.legend(["train", "valid."])
def read_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature(
[], tf.string
), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
"one_hot_class": tf.io.VarLenFeature(tf.float32),
}
example = tf.io.parse_single_example(example, features)
image = tf.image.decode_jpeg(example["image"], channels=3)
image = (
tf.cast(image, tf.float32) / 255.0
) # convert image to floats in [0, 1] range
image = tf.reshape(
image, [*IMAGE_SIZE, 3]
) # explicit size will be needed for TPU
one_hot_class = tf.sparse.to_dense(example["one_hot_class"])
one_hot_class = tf.reshape(one_hot_class, [5])
return image, one_hot_class
def load_dataset(filenames):
    # Read data from TFRecords in parallel; ordering does not matter for training
    dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO)
    dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO)
    return dataset
display_9_images_from_dataset(load_dataset(training_filenames))
def get_batched_dataset(filenames):
dataset = load_dataset(filenames)
dataset = dataset.cache() # This dataset fits in RAM
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(
AUTO
) # prefetch next batch while training (autotune prefetch buffer size)
# For proper ordering of map/batch/repeat/prefetch, see Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets
return dataset
def get_training_dataset():
return get_batched_dataset(training_filenames)
def get_validation_dataset():
return get_batched_dataset(validation_filenames)
some_flowers, some_labels = dataset_to_numpy_util(
load_dataset(validation_filenames), 8 * 20
)
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (
BatchNormalization,
Conv2D,
Dense,
GlobalAveragePooling2D,
MaxPooling2D,
)
from tensorflow.keras.optimizers import Adam
model = Sequential(
[
# Stem
Conv2D(
kernel_size=3,
filters=16,
padding="same",
activation="relu",
input_shape=[*IMAGE_SIZE, 3],
),
BatchNormalization(),
Conv2D(kernel_size=3, filters=32, padding="same", activation="relu"),
BatchNormalization(),
MaxPooling2D(pool_size=2),
# Conv Group
Conv2D(kernel_size=3, filters=64, padding="same", activation="relu"),
BatchNormalization(),
MaxPooling2D(pool_size=2),
Conv2D(kernel_size=3, filters=96, padding="same", activation="relu"),
BatchNormalization(),
MaxPooling2D(pool_size=2),
# Conv Group
Conv2D(kernel_size=3, filters=128, padding="same", activation="relu"),
BatchNormalization(),
MaxPooling2D(pool_size=2),
Conv2D(kernel_size=3, filters=128, padding="same", activation="relu"),
BatchNormalization(),
# 1x1 Reduction
Conv2D(kernel_size=1, filters=32, padding="same", activation="relu"),
BatchNormalization(),
# Classifier
GlobalAveragePooling2D(),
Dense(5, activation="softmax"),
]
)
model.compile(
optimizer=Adam(lr=0.005, decay=0.98),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
model.summary()
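As a quick sanity check on the architecture above (assuming 192x192 inputs, which matches the resize used later in this notebook), only the four `MaxPooling2D(pool_size=2)` layers change the spatial size; the `'same'`-padded, stride-1 convolutions leave it alone:

```python
# Track the feature-map side length through the four 2x2 poolings.
side = 192
for _ in range(4):  # four MaxPooling2D(pool_size=2) layers in the model above
    side //= 2
print(side)  # 12 -> GlobalAveragePooling2D then averages each 12x12 map
```

So the classifier head sees a 32-channel 12x12 volume before global pooling.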
EPOCHS = 20 # Train for 60 epochs for higher accuracy, 20 should get you ~75%
# TODO: Using the GPU train the model for 20 to 60 epochs
# Randomize the input so that you can execute multiple times to change results
permutation = np.random.permutation(8 * 20)
some_flowers, some_labels = (
some_flowers[permutation],
some_labels[permutation],
)
predictions = model.predict(some_flowers, batch_size=16)
evaluations = model.evaluate(some_flowers, some_labels, batch_size=16)
print(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist())
print("[val_loss, val_acc]", evaluations)
display_9_images_with_predictions(some_flowers, predictions, some_labels)
export_path = "gs://" + BUCKET_NAME + "/explanations/mymodel"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(192, 192))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
with tf.device("cpu:0"):
decoded_images = tf.map_fn(_preprocess, bytes_inputs, dtype=tf.float32)
return {
"numpy_inputs": decoded_images
} # User needs to make sure the key matches model's input
m_call = tf.function(model.call).get_concrete_function(
[
tf.TensorSpec(
shape=[None, 192, 192, 3], dtype=tf.float32, name="numpy_inputs"
)
]
)
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
# TODO: Complete the function
return prob
tf.saved_model.save(
model,
export_path,
signatures={
"serving_default": serving_fn,
"xai_preprocess": preprocess_fn, # Required for XAI
"xai_model": m_call, # Required for XAI
},
)
! saved_model_cli show --dir $export_path --all
loaded = tf.saved_model.load(export_path)
input_name = list(
loaded.signatures["xai_model"].structured_input_signature[1].keys()
)[0]
print(input_name)
output_name = list(loaded.signatures["xai_model"].structured_outputs.keys())[0]
print(output_name)
preprocess_name = list(
loaded.signatures["xai_preprocess"].structured_input_signature[1].keys()
)[0]
print(preprocess_name)
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
# We want to explain 'xai_model' signature.
builder = SavedModelMetadataBuilder(export_path, signature_name="xai_model")
random_baseline = np.random.rand(192, 192, 3)
builder.set_image_metadata(
"numpy_inputs", input_baselines=[random_baseline.tolist()]
)
builder.save_metadata(export_path)
import datetime
MODEL = "flowers" + TIMESTAMP
print(MODEL)
# Create the model if it doesn't exist yet (you only need to run this once)
! gcloud ai-platform models create $MODEL --enable-logging --region=$REGION
# Each time you create a version the name should be unique
IG_VERSION = "v_ig"
! gcloud beta ai-platform versions create $IG_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 2.3 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method integrated-gradients \
--num-integral-steps 25 \
--region $REGION
# Make sure the IG model deployed correctly. State should be `READY` in the following log
! gcloud ai-platform versions describe $IG_VERSION --model $MODEL --region $REGION
# Each time you create a version the name should be unique
XRAI_VERSION = "v_xrai"
# Create the XRAI version with gcloud
! gcloud beta ai-platform versions create $XRAI_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 2.3 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method xrai \
--num-integral-steps 25 \
--region $REGION
# Make sure the XRAI model deployed correctly. State should be `READY` in the following log
! gcloud ai-platform versions describe $XRAI_VERSION --model $MODEL --region=$REGION
# Resize the images to what your model is expecting (192,192)
test_filenames = []
for i in os.listdir("../assets/flowers"):
img_path = "../assets/flowers/" + i
with PIL.Image.open(img_path) as ex_img:
resize_img = ex_img.resize([192, 192])
resize_img.save(img_path)
test_filenames.append(img_path)
# Prepare your images to send to your Cloud model
instances = []
for image_path in test_filenames:
img_bytes = tf.io.read_file(image_path)
b64str = b64encode(img_bytes.numpy()).decode("utf-8")
instances.append({preprocess_name: {"b64": b64str}})
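The instance payload built above wraps raw JPEG bytes in standard base64 under a `"b64"` key, which is the JSON convention AI Platform prediction uses for binary inputs. A minimal stdlib sketch of that encoding step (the key name `"bytes_inputs"` here is only a placeholder; the notebook uses the signature's real `preprocess_name`):

```python
from base64 import b64encode

def make_instance(img_bytes, input_key):
    # Encode raw image bytes as a base64 string inside the {"b64": ...} wrapper.
    return {input_key: {"b64": b64encode(img_bytes).decode("utf-8")}}

print(make_instance(b"abc", "bytes_inputs"))
# {'bytes_inputs': {'b64': 'YWJj'}}
```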
# IG EXPLANATIONS
remote_ig_model = explainable_ai_sdk.load_model_from_ai_platform(project=PROJECT_ID,
model=MODEL,
version=IG_VERSION,
region=REGION)
ig_response = #TODO
for response in ig_response:
response.visualize_attributions()
# XRAI EXPLANATIONS
remote_xrai_model = #TODO: Similar to above, load the XRAI model
xrai_response = #TODO
for response in xrai_response:
response.visualize_attributions()
for i, response in enumerate(ig_response):
attr = response.get_attribution()
baseline_score = attr.baseline_score
predicted_score = attr.example_score
print("Baseline score: ", baseline_score)
print("Predicted score: ", predicted_score)
print("Predicted - Baseline: ", predicted_score - baseline_score, "\n")
# Convert your baseline from above to a base64 string
rand_test_img = PIL.Image.fromarray((random_baseline * 255).astype("uint8"))
buffer = io.BytesIO()
rand_test_img.save(buffer, format="PNG")
new_image_string = b64encode(np.asarray(buffer.getvalue())).decode("utf-8")
# Preview it
plt.imshow(rand_test_img)
sanity_check_img = {preprocess_name: {"b64": new_image_string}}
# Sanity check explanations
sanity_check_response = remote_ig_model.explain([sanity_check_img])
sanity_check_response[0].visualize_attributions()
attr = sanity_check_response[0].get_attribution()
baseline_score = attr.baseline_score
example_score = attr.example_score
print(abs(baseline_score - example_score))
# Delete model version resource
! gcloud ai-platform versions delete $IG_VERSION --quiet --model $MODEL
! gcloud ai-platform versions delete $XRAI_VERSION --quiet --model $MODEL
# Delete model resource
! gcloud ai-platform models delete $MODEL --quiet
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I created a function that takes the observation, mean and standard deviation and returns the z-score. Notice it's just a little bit of arithmetic.
Step2: Calculate the Probability
Step3: Here, I use p for probability, or the probability that an observation will be lower than the value provided. I pass in the observation, the mean and the standard deviation. The function looks up my z-score for me and then calls SciPy's CDF function on the normal distribution.
Step4: This code creates a pandas data frame by first setting a few sample test scores from 60 to 160, then calculating their z-scores and the proportion of the population estimated to be less intelligent than each score.
Step5: The blue line shows us an approximation of the distribution. I used 1,000 random observations to get my data. I could have used 10,000 or 100,000 and the curve would look really slick. However, that would hide a little of what we actually mean when we talk about distributions. If I took 1,000 students and gave them an IQ test, I would expect scores that were kind of blotchy like the red histogram in the plot. Some categories would be a little above the curve, some below.
Step6: That is, the z-score for a 7 test score is -6.2, or 6.2 standard deviations from the mean. That's a very low score. The probability that someone gets a lower score? 2.8e-10. How small is that number?
Step7: Or, if the snarky comment were accurate, there would be 2 people on the planet who have an IQ lower than 7. Maybe both of us could have chilled out a little and come up with funnier ways to tease.
Step8: Or, from 15 hypothetical elections against various Republican candidates, about 46.8% would vote for former Secretary Clinton over her potential Republican rivals at this point. It's interesting to point out that the standard deviation in all these polls is only about 2%. Or, of all the Republican candidates, at this point very few people are thinking differently from party politics. Either the particular candidates are not well known, or people are just that polarized that they'll vote for their party's candidate no matter who they run.
Step9: So, the z-score is just about 1/10th of one standard deviation below the mean. That is, it's going to be pretty close to 100.
|
<ASSISTANT_TASK:>
Python Code:
def z_score(x, m, s):
return (x - m) / s
print(z_score(95, 100, 15), z_score(130, 100, 15), z_score(7, 100, 15))
# We should see -0.3333333333333333 2.0 -6.2 or 1/3 deviation below average, 2 above and 6.2 below.
import scipy.stats as st
def p(x, m, s):
z = z_score(x, m, s)
return st.norm.cdf(z)
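As a cross-check (not part of the original notebook), the standard normal CDF that `st.norm.cdf` computes can also be written with only the stdlib error function, since Phi(z) = 0.5 * (1 + erf(z / sqrt(2))):

```python
import math

def phi(z):
    # Standard normal CDF via the stdlib error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(phi(0.0))            # 0.5 exactly at the mean
print(round(phi(2.0), 4))  # ~0.9772, matching st.norm.cdf(2.0)
```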
import numpy as np
import pandas as pd
scores = np.arange(60, 161, 20)
z_scores = list(map(lambda x: z_score(x, 100, 15), scores))
less_intelligent = list(map(lambda x: p(x, 100, 15), scores))
df = pd.DataFrame()
df['test_score'] = scores
df['z_score'] = z_scores
df['less_intelligent'] = less_intelligent
df
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
mu, sigma = 100, 15. # mean and standard deviation
s = sorted(np.random.normal(mu, sigma, 1000))
count, bins, ignored = plt.hist(s, 30, normed=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
linewidth=2)
plt.show()
z = z_score(7, 100, 15)
prob = p(7, 100, 15)
rounded_prob = round(prob, 15)
print("The z-score {0} and probability {1} of a test score of 7.".format(z, rounded_prob))
instances_per_billion = round((1/prob) / 1000000000, 2)
people_on_the_planet = 7.125 # billion
instances_on_the_planet = people_on_the_planet / instances_per_billion
instances_on_the_planet
votes = pd.Series([46.3, 45.3, 46.3, 46.3, 49.4, 47.8, 42.7, 43.3, 49.0, 47.7, 48.3, 46.5, 46.5, 49.0, 48.0])
# I thought it was easier to read percentages as 46.3, but I'm converting those numbers here to fit
# in the set [0,1] as well-behaved probabilities do.
votes = votes.apply(lambda x: x / 100)
votes.describe()
hillary_z_score = st.norm.ppf(votes.mean())
hillary_z_score
iq = 15 * hillary_z_score + 100
iq
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font color = 'blue'> get a list of the column names </font>
Step2: <font color='blue'> function
Step3: codingDF is intended to be used as a lookup table for recoding. <br />
Step4: The 2 cells below are my attempts to recode the ageData column names. <br />
|
<ASSISTANT_TASK:>
Python Code:
import pandas
import numpy
import os
import ijson
path = os.chdir('/Users/superuser/Documents/projects/SDRegionalDataLib/age friendly community/acs2015_1yr_B01001/')
ageData = pandas.read_csv('acs2015_1yr_B01001.csv');
ageData.head()
colNames = list(ageData.columns.values)
#show the first ten coded column names
colNames[0:9]
#open the json file
jsonFile = 'metadata.json';
with open(jsonFile, 'r') as f:
objects = ijson.items(f, 'tables.B01001.columns')
columnAttr = list(objects)
#show the values of columnAttr
columnAttr[:]
def getRecodingKeys(element):
if ('Error' not in element) and ('name' != element) and ('geoid' != element):
return element
return False
#filter out the original column names that don't require recoding
codingDF = pandas.DataFrame({'origColNames': list(filter(getRecodingKeys, colNames))})
#add a new column to codingDF that contains the recoded ageDF column names
codingDF['recodeColName'] = ''
#append the recodedColNames to codingDF.recodeColName
for idx, origColName in enumerate(codingDF.origColNames):
codingDF.recodeColName[idx] = columnAttr[0][origColName]['name']
codingDF.head()
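As an aside before the recode attempts below, the same lookup can be expressed as a plain dict from coded to readable names. A pure-Python sketch with made-up codes (the real codes come from `metadata.json`, and the exact format of the "Error" columns in the ACS CSV is an assumption here):

```python
# Hypothetical coded->readable mapping; the real one is built from metadata.json.
mapping = {"B01001001": "Total", "B01001002": "Male"}

def recode(col, mapping):
    # Plain columns map directly; error columns (assumed to contain the base
    # code plus an "Error" marker) get an "_Error" suffix on the readable name.
    for code, name in mapping.items():
        if col == code:
            return name
        if col.startswith(code) and "Error" in col:
            return name + "_Error"
    return col  # e.g. "geoid"/"name" pass through unchanged

print([recode(c, mapping) for c in ["geoid", "B01001001", "B01001002 Error"]])
# ['geoid', 'Total', 'Male_Error']
```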
#recode ageData with the actual column names -- first attempt
#(note: this fails because a pandas Index is immutable, so assigning to
# ageData.columns[idx] raises, and a Series has no single truth value)
for idx, col in enumerate(ageData.columns):
if codingDF.origColNames.str.contains(col): #codingDF.origColNames.str.contains(ageData.columns[idx]):
colMatchIDX = codingDF.origColNames.get_loc(col)
if ageData.columns[idx].str.contains('Error'):
tempColName = codingDF.recodeColName[colMatchIDX] + '_Error'
ageData.columns[idx] = tempColName
else:
tempColName = codingDF.recodeColName[colMatchIDX]
#recode ageData with the actual column names -- fixed version
#(a Series can't be used directly as a boolean; 'Error' columns, which were
# filtered out of codingDF, share a base code with their estimate column --
# the exact error-suffix format in the CSV is assumed here)
for col in list(ageData.columns):
    base = col.split(',')[0].replace(' Error', '').strip()
    match = codingDF.origColNames == base
    if match.any():
        tempColName = codingDF.recodeColName[match.idxmax()]
        if 'Error' in col:
            tempColName += '_Error'
        print(tempColName)
        ageData.rename(columns={col: tempColName}, inplace=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Implement a custom PyTorch Dataset to load image frames from a remote DICOM VL Whole Slide Microscopy Image instance
Step6: Implement a simple binary image segmentation model
Step7: Instantiate the Dataset, pass inputs to the model and receive back outputs in form of NumPy arrays
Step8: Plot an overview of model inputs and outputs
Step9: Encode model outputs in form of a DICOM Segmentation instance
|
<ASSISTANT_TASK:>
Python Code:
class Dataset(torch.utils.data.Dataset):
"""Class for getting individual Pixel Data element frame items of a DICOM VL Whole Slide Microscopy Image data set stored on a remote server."""
def __init__(self, url: str, study_id: str, series_id: str, instance_id: str):
"""
Parameters
----------
url: str
Address of a DICOMweb origin server
study_id: str
Study Instance UID
series_id: str
Series Instance UID of a Slide Microscopy series
instance_id: str
SOP Instance UID of a VL Whole Slide Microscopy Image instance
"""
self.client = dicomweb_client.api.DICOMwebClient(url)
metadata = self.client.retrieve_instance_metadata(
study_instance_uid=study_id,
series_instance_uid=series_id,
sop_instance_uid=instance_id
)
self.meta = pydicom.dataset.Dataset.from_json(metadata)
def __len__(self) -> int:
"""int: number of frames"""
return int(self.meta.NumberOfFrames)
def __getitem__(self, idx: int) -> numpy.ndarray:
"""Retrieves an individual frame.
Parameters
----------
idx: int
Zero-based frame index
Returns
-------
numpy.ndarray
Pixels of the frame
"""
frames = self.client.retrieve_instance_frames(
study_instance_uid=self.meta.StudyInstanceUID,
series_instance_uid=self.meta.SeriesInstanceUID,
sop_instance_uid=self.meta.SOPInstanceUID,
frame_numbers=[idx+1],
media_types=('image/jpeg', 'image/jp2', )
)
buf = io.BytesIO(frames[0])
return numpy.array(PIL.Image.open(buf))
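One detail in `__getitem__` above deserves a note: PyTorch Dataset indices are 0-based while DICOM Pixel Data frame numbers are 1-based, which is why the request passes `frame_numbers=[idx+1]`. A trivial sketch of that mapping:

```python
def frame_numbers_for(indices):
    # DICOMweb frame numbers start at 1; Dataset indices start at 0.
    return [idx + 1 for idx in indices]

print(frame_numbers_for(range(3)))  # [1, 2, 3]
```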
def model(image: numpy.ndarray) -> numpy.ndarray:
"""Segments a microscopy image into regions representing tissue foreground and slide background.
Parameters
----------
image: numpy.ndarray
Pixel matrix of an image or image frame
Returns
-------
numpy.ndarray
Binary mask where tissue foreground is ``True`` and slide background is ``False``
"""
return numpy.max(image < 225, 2).astype(numpy.bool)
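On concrete pixels (a toy 1x2 RGB tile, not real slide data), the threshold above marks any pixel with some channel below 225 as tissue and leaves near-white background `False`. (This sketch uses the builtin `bool`; `numpy.bool` as written in the original is deprecated in recent NumPy.)

```python
import numpy as np

# One white background pixel and one dark tissue pixel.
tile = np.array([[[255, 255, 255], [30, 40, 50]]])
mask = np.max(tile < 225, 2).astype(bool)  # any channel below the threshold
print(mask.tolist())  # [[False, True]]
```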
dataset = Dataset(
url='https://server.dcmjs.org/dcm4chee-arc/aets/DCM4CHEE/rs',
study_id='1.2.392.200140.2.1.1.1.2.799008771.3960.1519719403.819',
series_id='1.2.392.200140.2.1.1.1.3.799008771.3960.1519719403.820',
instance_id='1.2.392.200140.2.1.1.1.4.799008771.3960.1519719570.834'
)
inputs = []
outputs = []
for i in range(len(dataset)):
image_frame = dataset[i]
mask_frame = model(image_frame)
inputs.append(image_frame)
outputs.append(mask_frame)
image = numpy.stack(inputs)
mask = numpy.stack(outputs)
print('Expected input shape : ', (int(dataset.meta.NumberOfFrames), dataset.meta.Rows, dataset.meta.Columns, dataset.meta.SamplesPerPixel))
print('Actual input shape : ', image.shape)
print('Expected output shape : ', (int(dataset.meta.NumberOfFrames), dataset.meta.Rows, dataset.meta.Columns))
print('Actual output shape : ', mask.shape)
fig, axs = matplotlib.pyplot.subplots(
nrows=10,
ncols=len(dataset) // 5,
figsize=(10, 10),
subplot_kw={'xticks': [], 'yticks': []}
)
for i, ax in enumerate(axs.flat[:(len(axs.flat) // 2)]):
ax.imshow(image[i])
for i, ax in enumerate(axs.flat[(len(axs.flat) // 2):]):
ax.imshow(mask[i].astype(numpy.uint8) * 255)
matplotlib.pyplot.tight_layout()
algorithm = highdicom.content.AlgorithmIdentificationSequence(
name='Binary Image Segmentation Example',
family=pydicom.sr.codedict.codes.cid7162.ArtificialIntelligence,
version='v0.1.0'
)
segment_description = highdicom.seg.content.SegmentDescription(
segment_number=1,
segment_label='ROI #1',
segmented_property_category=pydicom.sr.codedict.codes.cid7150.Tissue,
segmented_property_type=pydicom.sr.codedict.codes.SCT.BodyFat,
algorithm_type=highdicom.seg.enum.SegmentAlgorithmTypeValues.AUTOMATIC,
algorithm_identification=algorithm
)
segmentation = highdicom.seg.sop.Segmentation(
source_images=[dataset.meta],
pixel_array=mask,
segmentation_type=highdicom.seg.enum.SegmentationTypeValues.FRACTIONAL,
segment_descriptions=[segment_description],
series_instance_uid=highdicom.uid.UID(),
series_number=2,
sop_instance_uid=highdicom.uid.UID(),
instance_number=1,
manufacturer='MGH Computational Pathology',
manufacturer_model_name='Example Jupyter Notebook',
software_versions=highdicom.version.__version__,
device_serial_number='XXX'
)
print(segmentation)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here is what the simulated data look like. We use the pylab module from the plotting library matplotlib.
Step2: Model Specification
Step3: Now we build our model, which we will present in full first, then explain each part line-by-line.
Step4: The first line,
Step5: Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship
Step6: By default, find_MAP uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP.
Step7: It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together.
Step8: The sample function runs the step method(s) assigned (or passed) to it for the given number of iterations and returns a Trace object containing the samples collected, in the order they were collected. The trace object can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can see the last 5 values for the alpha variable as follows
Step9: If we wanted to use the slice sampling algorithm to sigma instead of NUTS (which was assigned automatically), we could have specified this as the step argument for sample.
Step10: Posterior analysis
Step11: The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients.
Step12: Case study 1
Step13: Model Specification
Step14: Notice that we transform the log volatility process s into the volatility process by exp(-2*s). Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.
Step15: We can check our samples by looking at the traceplot for nu and sigma.
Step16: Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph. Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly.
Step17: As you can see, the model correctly infers the increase in volatility during the 2008 financial crash. Moreover, note that this model is quite complex because of its high dimensionality and dependency-structure in the random walk distribution. NUTS as implemented in PyMC3, however, correctly infers the posterior distribution with ease.
Step18: Occurrences of disasters in the time series is thought to follow a Poisson process with a large rate parameter in the early part of the time series, and from one with a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations.
Step19: The logic for the rate random variable,
Step20: In the trace plot below we can see that there's about a 10 year span that's plausible for a significant change in safety, but a 5 year span that contains most of the probability mass. The distribution is jagged because of the jumpy relationship between the year switchpoint and the likelihood and not due to sampling error.
Step21: Arbitrary deterministics
Step22: An important drawback of this approach is that it is not possible for theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op.
Step23: Generalized Linear Models
Step24: The model can then be very concisely specified in one line of code.
Step25: The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.
Step26: Backends
Step27: The stored trace can then later be loaded using the load command
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
# Initialize random number generator
np.random.seed(123)
# True parameter values
alpha, sigma = 1, 1
beta = [1, 2.5]
# Size of dataset
size = 100
# Predictor variable
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
# Simulate outcome variable
Y = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma
%matplotlib inline
fig, axes = plt.subplots(1, 2, sharex=True, figsize=(10,4))
axes[0].scatter(X1, Y)
axes[1].scatter(X2, Y)
axes[0].set_ylabel('Y'); axes[0].set_xlabel('X1'); axes[1].set_xlabel('X2');
from pymc3 import Model, Normal, HalfNormal
basic_model = Model()
with basic_model:
# Priors for unknown model parameters
alpha = Normal('alpha', mu=0, sd=10)
beta = Normal('beta', mu=0, sd=10, shape=2)
sigma = HalfNormal('sigma', sd=1)
# Expected value of outcome
mu = alpha + beta[0]*X1 + beta[1]*X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
help(Normal) #try help(Model), help(Uniform) or help(basic_model)
from pymc3 import find_MAP
map_estimate = find_MAP(model=basic_model)
print(map_estimate)
from scipy import optimize
map_estimate = find_MAP(model=basic_model, fmin=optimize.fmin_powell)
print(map_estimate)
from pymc3 import NUTS, sample
from scipy import optimize
with basic_model:
# obtain starting values via MAP
start = find_MAP(fmin=optimize.fmin_powell)
# draw 2000 posterior samples
trace = sample(2000, start=start)
trace['alpha'][-5:]
from pymc3 import Slice
with basic_model:
# obtain starting values via MAP
start = find_MAP(fmin=optimize.fmin_powell)
# instantiate sampler
step = Slice(vars=[sigma])
# draw 5000 posterior samples
trace = sample(5000, step=step, start=start)
from pymc3 import traceplot
traceplot(trace);
from pymc3 import summary
summary(trace)
try:
from pandas_datareader import data
except ImportError:
!pip install pandas-datareader
from pandas_datareader import data
import pandas as pd
returns = data.get_data_yahoo('SPY', start='2008-5-1', end='2009-12-1')['Adj Close'].pct_change()
print(len(returns))
returns.plot(figsize=(10, 6))
plt.ylabel('daily returns in %');
from pymc3 import Exponential, StudentT, Deterministic
from pymc3.math import exp
from pymc3.distributions.timeseries import GaussianRandomWalk
with Model() as sp500_model:
nu = Exponential('nu', 1./10, testval=5.)
sigma = Exponential('sigma', 1./.02, testval=.1)
s = GaussianRandomWalk('s', sigma**-2, shape=len(returns))
volatility_process = Deterministic('volatility_process', exp(-2*s))
r = StudentT('r', nu, lam=1/volatility_process, observed=returns)
from pymc3 import variational
import scipy
with sp500_model:
mu, sds, elbo = variational.advi(n=100000)
step = NUTS(scaling=sp500_model.dict_to_array(sds)**2, is_cov=True)
trace = sample(2000, step, start=mu, progressbar=True)
traceplot(trace[200:], [nu, sigma]);
fig, ax = plt.subplots(figsize=(15, 8))
returns.plot(ax=ax)
ax.plot(returns.index, 1/np.exp(trace['s',::5].T), 'r', alpha=.03);
ax.set(title='volatility_process', xlabel='time', ylabel='volatility');
ax.legend(['S&P500', 'stochastic volatility process'])
disaster_data = np.ma.masked_values([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, -999, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, -999, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1], value=-999)
year = np.arange(1851, 1962)
plt.plot(year, disaster_data, 'o', markersize=8);
plt.ylabel("Disaster count")
plt.xlabel("Year")
from pymc3 import DiscreteUniform, Poisson
from pymc3.math import switch
with Model() as disaster_model:
switchpoint = DiscreteUniform('switchpoint', lower=year.min(), upper=year.max(), testval=1900)
# Priors for pre- and post-switch rates number of disasters
early_rate = Exponential('early_rate', 1)
late_rate = Exponential('late_rate', 1)
# Allocate appropriate Poisson rates to years before and after current
rate = switch(switchpoint >= year, early_rate, late_rate)
disasters = Poisson('disasters', rate, observed=disaster_data)
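The `switch(...)` in the model above is just an elementwise if/else over the year vector. A plain-Python sketch of the same piecewise-rate idea with made-up numbers:

```python
def piecewise_rate(years, switchpoint, early_rate, late_rate):
    # Elementwise: early rate while switchpoint >= year, late rate afterwards,
    # mirroring switch(switchpoint >= year, early_rate, late_rate).
    return [early_rate if switchpoint >= y else late_rate for y in years]

print(piecewise_rate([1888, 1890, 1892], 1890, 3.0, 1.0))  # [3.0, 3.0, 1.0]
```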
from pymc3 import Metropolis
with disaster_model:
step1 = NUTS([early_rate, late_rate])
# Use Metropolis for switchpoint, and missing values since it accommodates discrete variables
step2 = Metropolis([switchpoint, disasters.missing_values[0]])
trace = sample(10000, step=[step1, step2])
traceplot(trace);
import theano.tensor as T
from theano.compile.ops import as_op
@as_op(itypes=[T.lscalar], otypes=[T.lscalar])
def crazy_modulo3(value):
if value > 0:
return value % 3
else :
return (-value + 1) % 3
with Model() as model_deterministic:
a = Poisson('a', 1)
b = crazy_modulo3(a)
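Because `as_op` hides the Python body from Theano's graph, it helps to check the wrapped function's behavior directly; the same logic in plain Python:

```python
def crazy_modulo3_py(value):
    # Mirrors the as_op-wrapped function above, minus the Theano tensor types.
    if value > 0:
        return value % 3
    return (-value + 1) % 3

print([crazy_modulo3_py(v) for v in (4, 0, -2)])  # [1, 1, 0]
```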
from pymc3.distributions import Continuous
class Beta(Continuous):
def __init__(self, mu, *args, **kwargs):
super(Beta, self).__init__(*args, **kwargs)
self.mu = mu
self.mode = mu
def logp(self, value):
mu = self.mu
return beta_logp(value - mu)
@as_op(itypes=[T.dscalar], otypes=[T.dscalar])
def beta_logp(value):
return -1.5 * np.log(1 + (value)**2)
with Model() as model:
beta = Beta('slope', mu=0, testval=0)
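Up to an additive constant, the custom logp above is the log-density of a heavy-tailed, Cauchy-like distribution centered at `mu`: -1.5 * log(1 + x^2). Evaluating it in plain Python shows it peaks at the center:

```python
import math

def beta_logp_py(value):
    # Same expression as the as_op-wrapped logp, with value already shifted by mu.
    return -1.5 * math.log(1.0 + value ** 2)

print(beta_logp_py(0.0))            # 0.0, the maximum
print(round(beta_logp_py(1.0), 4))  # -1.5*log(2) ~= -1.0397
```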
# Convert X and Y to a pandas DataFrame
import pandas
df = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y})
from pymc3.glm import glm
with Model() as model_glm:
glm('y ~ x1 + x2', df)
trace = sample(5000)
from pymc3.glm.families import Binomial
df_logistic = pandas.DataFrame({'x1': X1, 'y': Y > np.median(Y)})
with Model() as model_glm_logistic:
glm('y ~ x1', df_logistic, family=Binomial())
from pymc3.backends import SQLite
with Model() as model_glm_logistic:
glm('y ~ x1', df_logistic, family=Binomial())
backend = SQLite('trace.sqlite')
start = find_MAP()
step = NUTS(scaling=start)
trace = sample(5000, step=step, start=start, trace=backend)
summary(trace, varnames=['x1'])
from pymc3.backends.sqlite import load
with basic_model:
trace_loaded = load('trace.sqlite')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: That's over 10 million measurements from over 42,000 profiles from 301 floats. The load_biofloat_cache.py script examined 559 floats for valid oxygen data. All of the profile data are in this file and the data from any float can be explored as demonstrated in the Notebooks explore_cached_biofloat_data.ipynb and explore_surface_oxygen_and_WOA.ipynb.
Step2: The inf and NaN values in the statistics of the gain indicate bad data, let's restrict data to a reasonably valid range of the measured o2sat.
Step3: Plot the distribution of gains from this minimally qc'ed data — comparible to Figure 4 in Takeshita et al. (2013).
Step4: Plot time series of gain for all the floats and fit an ordinary least squares regression to detect any drift in time of the gain for all the floats.
Step5: Make a list of all the floats in qdf and assign a color to each.
Step6: Make scatter plot of float oxygen saturation vs. World Ocean Atlas oxygen saturation.
Step7: This first look at WOA oxygen calibration for all the Argo floats reveals that there are some issues with the data that warrant further exploration.
|
<ASSISTANT_TASK:>
Python Code:
from biofloat import ArgoData
from os.path import join, expanduser
ad = ArgoData(cache_file = join(expanduser('~'),
'biofloat_fixed_cache_age365_variablesDOXY_ADJUSTED-PSAL_ADJUSTED-TEMP_ADJUSTED.hdf'))
ocdf = ad.get_cache_file_oxy_count_df()
print ocdf.groupby('wmo').sum().sum()
print 'Floats having DOXY_ADJUSTED data:', ocdf.wmo.count()
acdf = ad.get_cache_file_all_wmo_list()
print 'Number of floats examined:', len(acdf)
import pandas as pd
df = pd.DataFrame()
with pd.HDFStore(join(expanduser('~'), 'woa_lookup_age365.hdf')) as s:
wmo_list = ocdf.wmo
for wmo in wmo_list:
try:
fdf = s.get(('/WOA_WMO_{}').format(wmo))
except KeyError:
pass
if not fdf.dropna().empty:
df = df.append(fdf)
print df.head()
print df.describe()
print 'Number of floats with corresponding WOA data:', len(df.index.get_level_values('wmo').unique())
qdf = df.query('(o2sat > 50 ) & (o2sat < 200)')
qdf.describe()
%pylab inline
import pylab as plt
plt.rcParams['figure.figsize'] = (18.0, 4.0)
plt.style.use('ggplot')
ax = qdf.groupby('wmo').mean().gain.hist(bins=100)
ax.set_xlabel('Gain')
ax.set_ylabel('Count')
floats = qdf.index.get_level_values('wmo').unique()
ax.set_title(('Distribution of WOA calibrated gains from {} floats').format(len(floats)))
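The gain here follows Takeshita et al. (2013); assuming the cached HDF file computes it as the ratio of the WOA climatological saturation to the float-measured saturation (an assumption about the upstream `load_biofloat_cache.py` step, not shown in this chunk), the correction is simply:

```python
def woa_gain(woa_o2sat, float_o2sat):
    # Multiplicative correction factor: a gain > 1 means the optode reads low
    # relative to the WOA climatology at that location.
    return woa_o2sat / float_o2sat

print(round(woa_gain(100.0, 95.0), 4))  # 1.0526
```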
qdf.head()
plt.rcParams['figure.figsize'] = (18.0, 8.0)
ax = qdf.unstack(level='wmo').gain.plot()
ax.set_ylabel('Gain')
ax.set_title(('Calculated gain factor for {} floats').format(len(floats)))
ax.legend_.remove()
wmo_list = qdf.index.get_level_values('wmo').unique()
colors = cm.spectral(np.linspace(0, 1, len(wmo_list)))
print 'Number of floats with reasonable oxygen saturation values:', len(wmo_list)
plt.rcParams['figure.figsize'] = (18.0, 8.0)
fig, ax = plt.subplots(1, 1)
for wmo, c in zip(wmo_list, colors):
ax.scatter(qdf.xs(wmo, level='wmo')['o2sat'], qdf.xs(wmo, level='wmo')['woa_o2sat'], c=c)
ax.set_xlim([40, 200])
ax.set_ylim([40, 200])
ax.set_xlabel('Float o2sat (%)')
ax.set_ylabel('WOA o2sat (%)')
%%time
ad.set_verbosity(0)
df1 = ad.get_float_dataframe(wmo_list, update_cache=False, max_profiles=4)
from mpl_toolkits.basemap import Basemap
m = Basemap(llcrnrlon=15, llcrnrlat=-90, urcrnrlon=390, urcrnrlat=90, projection='cyl')
m.fillcontinents(color='0.8')
df1m = df1.groupby(level=['wmo','lon','lat']).mean()
for wmo, c in zip(wmo_list, colors):
try:
lons = df1m.xs(wmo, level='wmo').index.get_level_values('lon')
lats = df1m.xs(wmo, level='wmo').index.get_level_values('lat')
try:
m.scatter(lons, lats, latlon=True, color=c)
except IndexError:
# Some floats have too few points
pass
lon, lat = lons[0], lats[0]
if lon < 0:
lon += 360
plt.text(lon, lat, wmo)
except KeyError:
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recall that the Dirichlet Process (DP) (Ferguson, 1973) is essentially a distribution over distributions, where each draw from a DP is itself a distribution and importantly for clustering applications it serves as a natural prior that lets the number of clusters grow as the data grows. The DP has a base distribution parameter $\beta$ and a strength or concentration parameter $\alpha$.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename='lda_plate.png')
vocabulary = []
for cdn1 in sim.codons:
for cdn2 in sim.codons:
if cdn1 == cdn2:
continue
vocabulary.append(cdn1+"-"+cdn2)
print 'vocabulary: ', len(vocabulary)
transitions = np.zeros((N,M-1),).astype(str)
# transition relative to root
for i in range(sequences.shape[0]):
for j in range(sequences.shape[1]):
if j == sequences.shape[1] - 1:
continue
if sequences[i,0] == sequences[i,j+1]:
transitions[i,j] = '-'
else:
transitions[i,j] = sequences[i,0]+"-"+sequences[i,j+1]
print transitions
# convert words into vector
vocab = set([])
for w in range(transitions.shape[0]):
posTransitions = transitions[w,:]
for t in posTransitions:
if t != '-':
vocab.update([t])
vocab = list(vocab)
print vocab
## documents are positions in alignment
data = []
for w in range(transitions.shape[0]):
posTransitions = transitions[w,:]
document = []
for v in vocab:
document.append(len(np.where(posTransitions == v)[0]))
data.append(document)
print document
data = np.array(data)
import numpy as np
import pymc as pm
K = 3 # number of topics
V = len(vocab) # number of words
D = 5 # number of documents
#data = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0]])
alpha = np.ones(K)
beta = np.ones(V)
theta = pm.Container([pm.CompletedDirichlet("theta_%s" % i, pm.Dirichlet("ptheta_%s" % i, theta=alpha)) for i in range(D)])
phi = pm.Container([pm.CompletedDirichlet("phi_%s" % k, pm.Dirichlet("pphi_%s" % k, theta=beta)) for k in range(K)])
Wd = [len(doc) for doc in data]
z = pm.Container([pm.Categorical('z_%i' % d,
p = theta[d],
size=Wd[d],
value=np.random.randint(K, size=Wd[d]))
for d in range(D)])
# cannot use p=phi[z[d][i]] here since phi is an ordinary list while z[d][i] is stochastic
w = pm.Container([pm.Categorical("w_%i_%i" % (d,i),
p = pm.Lambda('phi_z_%i_%i' % (d,i),
lambda z=z[d][i], phi=phi: phi[z]),
value=data[d][i],
observed=True)
for d in range(D) for i in range(Wd[d])])
model = pm.Model([theta, phi, z, w])
#mcmc = pm.MCMC(model)
#mcmc.sample(100)
M = pm.MCMC(model)
M.sample(5000,burn=500)
pm.Matplot.plot(M)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see above, by default, BTE will query all integrated APIs.
Step2: The Registry class stores all APIs used in BTE.
Step3: check the current api list
Step4: So now, the "biolink" and "dgidb" are removed from the registry
Step5: If you look through the query log above, the "biolink" and "dgidb" APIs are no longer queried.
|
<ASSISTANT_TASK:>
Python Code:
from biothings_explorer.user_query_dispatcher import FindConnection
from biothings_explorer.hint import Hint
ht = Hint()
# find all potential representations of CML
cml_hint = ht.query("MONDO:0011996")
# select the correct representation of CML
cml = cml_hint['Disease'][0]
cml
# find all potential representations of imatinib
imatinib_hint = ht.query("imatinib")
# select the correct representation of imatinib
imatinib = imatinib_hint['ChemicalSubstance'][0]
imatinib
fc = FindConnection(input_obj=cml, output_obj=imatinib, intermediate_nodes='Gene')
fc.connect(verbose=True)
from biothings_explorer.registry import Registry
reg = Registry()
reg.show_all_apis()
reg.remove_apis(['biolink', 'dgidb'])
reg.show_all_apis()
fc = FindConnection(input_obj=cml, output_obj=imatinib, intermediate_nodes='Gene', registry=reg)
fc.connect(verbose=True)
reg.refine_api_list(["semmed_chemical", "semmed_disease"])
reg.show_all_apis()
fc = FindConnection(input_obj=cml, output_obj=imatinib, intermediate_nodes='Gene', registry=reg)
fc.connect(verbose=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: data containers
Step2: Indexing starts with zero.
Step3: tuples
Step4: sets
Step5: dictionaries
Step7: Functions
Step8: function is just another object (like almost everything in python)
Step9: Guidence on how to create meaningful docstring
Step10: lambda evaluation
Step11: Don't assign lambda expressions to variables. If you need a named instance, create a standard function with def
Step12: vs
Step13: Numpy - scientific computing
Step14: Basic manipulations
Step15: broadcasting
Step16: forcing dtype
Step17: converting dtypes
Step18: shapes (singletons)
Step19: adding new dimension
Step20: Indexing, slicing
Step21: Guess what is the output
Step22: Guess what is the output
Step23: Reshaping
Step24: reshape returns a view whenever possible!
Step25: Boolean indexing
Step26: Useful numpy functions
Step27: reducers
Step28: numpy math functions
Step29: managing output
Step30: Meshes
Step31: Scipy - scientific computing 2
Step32: How does scipy represent sparse matrix?
Step33: Sparse matrix stores only non-zero elements (and their indices)
Step34: Restoring full matrix
Step35: Popular (not sparse) matrices
Step36: Timing - measuring performance
Step37: You can also use %%timeit magic to measure run time of the whole cell
Step38: Storing timings in a separate variable
Step39: Let's make the code less redundant
Step40: timeit with -o parameter
Step41: Our new benchmark procedure
Step42: Matplotlib - plotting in python
Step43: %matplotlib inline ensures all graphs are plotted inside your notebook
Step44: Combined plot
Step45: Think, why
Step46: Even simpler way - also gives you granular control on plot objects
Step47: Plot formatting
Step48: Subplots
Step49: Manual control of subplots
Step50: Task
Step51: method 1
Step52: method 2
Step53: method 3
Step54: method 4
Step55: Task 2
Step56: Hankel matrix
|
<ASSISTANT_TASK:>
Python Code:
greeting = 'Hello'
guest = "John"
my_string = 'Hello "John"'
named_greeting = 'Hello, {name}'.format(name=guest)
named_greeting2 = '{}, {}'.format(greeting, guest)
print named_greeting
print named_greeting2
fruit_list = ['apple', 'orange', 'peach', 'mango', 'bananas', 'pineapple']
name_length = [len(fruit) for fruit in fruit_list]
print name_length
name_with_p = [fruit for fruit in fruit_list if fruit[0]=='p'] #even better: fruit.startswith('p')
numbered_fruits = []
for i, fruit in enumerate(fruit_list):
numbered_fruits.append('{}.{}'.format(i, fruit))
numbered_fruits
numbered_fruits[0] = None
numbered_fruits[1:4]
numbered_fruits[1:-1:2]
numbered_fruits[::-1]
p_fruits = (name_with_p[1], name_with_p[0])
p_fruits[1] = 'mango'
single_number_tuple = 3,
single_number_tuple
single_number_tuple + (2,) + (1, 0)
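The assignment into a tuple above actually fails, because tuples are immutable. A minimal sketch (plain Python, written Python-3 style, with hypothetical fruit values) of what happens and of the non-mutating alternative:

```python
# Item assignment on a tuple raises TypeError; values here are hypothetical.
p_fruits = ('peach', 'pineapple')

try:
    p_fruits[1] = 'mango'
except TypeError as exc:
    error_message = str(exc)

# Concatenation leaves the original untouched and builds a new tuple.
extended = p_fruits + ('mango',)
print(error_message)
print(extended)
```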
set([0, 1, 2, 1, 1, 1, 3])
fruit_list = ['apple', 'orange', 'mango', 'banana', 'pineapple']
quantities = [3, 5, 2, 3, 4]
order_fruits = {fruit: num \
for fruit, num in zip(fruit_list, quantities)}
order_fruits
order_fruits['pineapple'] = 2
order_fruits
print order_fruits.keys()
print order_fruits.values()
for fruit, amount in order_fruits.iteritems():
print 'Buy {num} {entity}s'.format(num=amount, entity=fruit)
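The same pairing and iteration, sketched for Python 3 with toy quantities (`iteritems()` no longer exists there, so `.items()` is used instead):

```python
fruit_list = ['apple', 'orange', 'mango']
quantities = [3, 5, 2]

# zip pairs the two lists element-wise; dict() turns the pairs into a mapping.
order = dict(zip(fruit_list, quantities))

# Python 3 iteration over key/value pairs uses .items().
lines = ['Buy {} {}s'.format(num, fruit) for fruit, num in order.items()]
print(lines)
```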
def my_func(var1, var2, default_var1=0, default_var2 = False):
    """This is a generic example of a python function.
    You can see this string when you call: my_func?
    """
#do something with vars
if not default_var2:
result = var1
elif default_var1 == 0:
result = var1
else:
result = var1 + var2
return result
print 'Function {} has the following docstring:\n{}'\
.format(my_func.func_name, my_func.func_doc)
def function_over_function(func, *args, **kwargs):
function_result = func(*args, **kwargs)
return function_result
function_over_function(my_func, 3, 5, default_var1=1, default_var2=True)
function_over_function(lambda x, y, factor=10: (x+y)*factor, 1, 2, 5)
my_simple_func = lambda x: x+1
def my_simple_func(x):
return x + 1
import numpy as np
matrix_from_list = np.array([[1, 3, 4],
[2, 0, 5],
[4, 4, 1],
[0, 1, 0]])
vector_from_list = np.array([2, 1, 3])
print 'The matrix is\n{matrix}\n\nthe vector is\n{vector}'\
.format(vector=vector_from_list, matrix=matrix_from_list)
matrix_from_list.dot(vector_from_list)
matrix_from_list + vector_from_list
single_precision_vector = np.array([1, 3, 5, 2], dtype=np.float32)
single_precision_vector.dtype
vector_from_list.dtype
vector_from_list.astype(np.int16)
row_vector = np.array([[1,2,3]])
print 'New vector {} has dimensionality {}'\
.format(row_vector, row_vector.shape)
print 'The dot-product is: ', matrix_from_list.dot(row_vector)
singleton_vector = row_vector.squeeze()
print 'Squeezed vector {} has shape {}'.format(singleton_vector, singleton_vector.shape)
matrix_from_list.dot(singleton_vector)
print singleton_vector[:, np.newaxis]
mat = np.arange(12)
mat.reshape(-1, 4)
mat
print singleton_vector[:, None]
vector12 = np.arange(12)
vector12
matrix43 = vector12.reshape(4, 3)
matrix43
matrix43_copy = matrix43[:]
matrix_to_reshape = np.random.randint(10, 99, size=(6, 4))
matrix_to_reshape
reshaped_matrix = matrix_to_reshape.reshape(8, 3)
reshaped_matrix
reshaped_matrix[-1, 0] = 1
np.set_printoptions(formatter={'all':lambda x: '_{}_'.format(x) if x < 10 else str(x)})
matrix_to_reshape[:]
np.set_printoptions()
idx = matrix43 > 4
matrix43[idx]
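Two common follow-ups to boolean indexing, shown as a small self-contained sketch (values chosen arbitrarily): assigning through the mask, and recovering the indices with `np.where`:

```python
import numpy as np

matrix = np.arange(12).reshape(4, 3)
mask = matrix > 4

# Assigning through the mask modifies only the selected elements.
clipped = matrix.copy()
clipped[mask] = 0

# np.where returns the (row, column) indices of the True entries.
rows, cols = np.where(mask)
print(clipped)
print(list(zip(rows, cols)))
```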
def three_diagonal(N):
A = np.zeros((N, N), dtype=np.int)
for i in range(N):
A[i, i] = -2
if i > 0:
A[i, i-1] = 1
if i < N-1:
A[i, i+1] = 1
return A
print three_diagonal(5)
def numpy_three_diagonal(N):
main_diagonal = -2 * np.eye(N)
suddiag_value = np.ones(N-1,)
lower_subdiag = np.diag(suddiag_value, k=-1)
upper_subdiag = np.diag(suddiag_value, k=1)
result = main_diagonal + lower_subdiag + upper_subdiag
return result.astype(np.int)
numpy_three_diagonal(5)
A = numpy_three_diagonal(5)
A[0, -1] = 5
A[-1, 0] = 3
print A
print A.sum()
print A.min()
print A.max(axis=0)
print A.sum(axis=0)
print A.mean(axis=1)
print (A > 4).any(axis=1)
print np.pi
args = np.arange(0, 2.5*np.pi, 0.5*np.pi)
print np.sin(args)
print np.round(np.sin(args), decimals=2)
'{}, {:.1%}, {:e}, {:.2f}, {:.0f}'.format(*np.sin(args))
np.set_printoptions(formatter={'all':lambda x: '{:.2f}'.format(x)})
print np.sin(args)
np.set_printoptions()
linear_index = np.linspace(0, np.pi, 10, endpoint=True)
mesh_x, mesh_y = np.meshgrid(linear_index, linear_index)
values_3D = np.sin(mesh_x + mesh_y)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
fig = plt.figure(figsize=(10,6))
ax = fig.gca(projection='3d')
ax.plot_wireframe(mesh_x, mesh_y, values_3D)
ax.view_init(azim=-45, elev=30)
plt.title('The plot of $f(x, y) = sin(x+y)$')
import scipy.sparse as sp
def scipy_three_diagonal(N):
main_diagonal = -2 * np.ones(N, )
suddiag_values = np.ones(N-1,)
diagonals = [main_diagonal, suddiag_values, suddiag_values]
# Another option: use sp.eye(N) and add subdiagonals
offsets = [0, 1, -1]
result = sp.diags(diagonals, offsets, shape=(N, N), format='coo')
return result
my_sparse_matrix = scipy_three_diagonal(5)
my_sparse_matrix
print my_sparse_matrix
my_sparse_matrix.toarray()
my_sparse_matrix.A
from scipy.linalg import toeplitz, hankel
hankel(xrange(4), [-1, -2, -3, -4])
toeplitz(xrange(4))
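Before timing the different constructors, it is worth a sanity check that the dense and sparse constructions really build the same matrix — a self-contained sketch (small N picked arbitrarily):

```python
import numpy as np
import scipy.sparse as sp

N = 6

# Dense: -2 on the main diagonal, 1 on both adjacent diagonals.
dense = (-2 * np.eye(N)
         + np.diag(np.ones(N - 1), k=1)
         + np.diag(np.ones(N - 1), k=-1))

# Sparse: the same three diagonals handed to sp.diags.
sparse = sp.diags([-2 * np.ones(N), np.ones(N - 1), np.ones(N - 1)],
                  [0, 1, -1], shape=(N, N), format='coo')

print(np.allclose(dense, sparse.toarray()))
```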
N = 1000
%timeit three_diagonal(N)
%timeit numpy_three_diagonal(N)
%timeit scipy_three_diagonal(N)
%%timeit
N = 1000
calc = three_diagonal(N)
calc = scipy_three_diagonal(N)
del calc
from timeit import default_timer as timer
dims = [300, 1000, 3000, 10000]
bench_names = ['loop', 'numpy', 'scipy']
timings = {bench:[] for bench in bench_names}
for n in dims:
start_time = timer()
calc = three_diagonal(n)
time_delta = timer() - start_time
timings['loop'].append(time_delta)
start_time = timer()
calc = numpy_three_diagonal(n)
time_delta = timer() - start_time
timings['numpy'].append(time_delta)
start_time = timer()
calc = scipy_three_diagonal(n)
time_delta = timer() - start_time
timings['scipy'].append(time_delta)
dims = [300, 1000, 3000, 10000]
bench_names = ['loop', 'numpy', 'scipy']
timings = {bench_name: [] for bench_name in bench_names}
def timing_machine(func, *args, **kwargs):
start_time = timer()
result = func(*args, **kwargs)
time_delta = timer() - start_time
return time_delta
for n in dims:
timings['loop'].append(timing_machine(three_diagonal, n))
timings['numpy'].append(timing_machine(numpy_three_diagonal, n))
timings['scipy'].append(timing_machine(scipy_three_diagonal, n))
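A small variant of the helper above that also hands back the function's result, not just the elapsed time — a sketch using only the standard library:

```python
from timeit import default_timer as timer

def timed_call(func, *args, **kwargs):
    # Run func once and return (result, elapsed seconds).
    start = timer()
    result = func(*args, **kwargs)
    return result, timer() - start

value, elapsed = timed_call(sum, range(1000))
print(value, elapsed)
```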
timeit_result = %timeit -q -r 5 -o three_diagonal(10)
print 'Best of {} runs: {:.8f}s'.format(timeit_result.repeat,
timeit_result.best)
dims = [300, 1000, 3000, 10000]
bench_names = ['loop', 'numpy', 'scipy']
bench_funcs = [three_diagonal, numpy_three_diagonal, scipy_three_diagonal]
timings_best = {bench_name: [] for bench_name in bench_names}
for bench_name, bench_func in zip(bench_names, bench_funcs):
print '\nMeasuring {}'.format(bench_func.func_name)
for n in dims:
print n,
time_result = %timeit -q -o bench_func(n)
timings_best[bench_name].append(time_result.best)
import matplotlib.pyplot as plt
%matplotlib inline
# plt.rcParams.update({'axes.labelsize': 'large'})
plt.rcParams.update({'font.size': 14})
plt.figure(figsize=(10,8))
for bench_name, values in timings_best.iteritems():
plt.semilogy(dims, values, label=bench_name)
plt.legend(loc='best')
plt.title('Benchmarking results with best of timeit', y=1.03)
plt.xlabel('Matrix dimension size')
plt.ylabel('Time, s')
plt.figure(figsize=(10,8))
for bench_name, values in timings.iteritems():
plt.semilogy(dims, values, label=bench_name)
plt.legend(loc='best')
plt.title('Benchmarking results with default_timer', y=1.03)
plt.xlabel('Matrix dimension size')
plt.ylabel('Time, s')
k = len(timings_best)
iter_xyf = [item for sublist in zip([dims]*k,
timings_best.values(),
list('rgb'))\
for item in sublist]
plt.figure(figsize=(10, 8))
plt.semilogy(*iter_xyf)
plt.legend(timings_best.keys(), loc=2, frameon=False)
plt.title('Benchmarking results - "one-liner"', y=1.03)
plt.xlabel('Matrix dimension size')
plt.ylabel('Time, s')
plt.figure(figsize=(10, 8))
figs = [plt.semilogy(dims, values, label=bench_name)\
for bench_name, values in timings.iteritems()];
ax0, = figs[0]
ax0.set_dashes([5, 10, 20, 10, 5, 10])
ax1, = figs[1]
ax1.set_marker('s')
ax1.set_markerfacecolor('r')
ax2, = figs[2]
ax2.set_linewidth(6)
ax2.set_alpha(0.3)
ax2.set_color('m')
all_markers = [
'.', # point
',', # pixel
'o', # circle
'v', # triangle down
'^', # triangle up
'<', # triangle_left
'>', # triangle_right
'1', # tri_down
'2', # tri_up
'3', # tri_left
'4', # tri_right
'8', # octagon
's', # square
'p', # pentagon
'*', # star
'h', # hexagon1
'H', # hexagon2
'+', # plus
'x', # x
'D', # diamond
'd', # thin_diamond
'|', # vline
]
all_linestyles = [
'-', # solid line style
'--', # dashed line style
'-.', # dash-dot line style
':', # dotted line style
'None'# no line
]
all_colors = [
'b', # blue
'g', # green
'r', # red
'c', # cyan
'm', # magenta
'y', # yellow
'k', # black
'w', # white
]
n = len(timings)
experiment_names = timings.keys()
fig, axes = plt.subplots(1, n, sharey=True, figsize=(16,4))
colors = np.random.choice(list('rgbcmyk'), n, replace=False)
markers = np.random.choice(all_markers, n, replace=False)
lines = np.random.choice(all_linestyles, n, replace=False)
for ax_num, ax in enumerate(axes):
key = experiment_names[ax_num]
ax.semilogy(dims, timings[key], label=key,
color=colors[ax_num],
marker=markers[ax_num],
markersize=8,
linestyle=lines[ax_num],
lw=3)
ax.set_xlabel('matrix dimension')
ax.set_title(key)
axes[0].set_ylabel('Time, s')
plt.suptitle('Benchmarking results', fontsize=16, y=1.03)
plt.figure()
plt.subplot(211)
plt.plot([1,2,3])
plt.subplot(212)
plt.plot([2,5,4])
items = ['foo', 'bar', 'baz', 'foo', 'baz', 'bar']
from collections import defaultdict
item_ids = defaultdict(lambda: len(item_ids))
map(item_ids.__getitem__, items)
import pandas as pd
pd.DataFrame({'items': items}).groupby('items', sort=False).grouper.group_info[0]
import numpy as np
np.unique(items, return_inverse=True)[1]
last = 0
counts = {}
result = []
for item in items:
try:
count = counts[item]
except KeyError:
counts[item] = count = last
last += 1
result.append(count)
result
N = 1000
from itertools import permutations
%timeit list(permutations(xrange(N), 2))
import numpy as np
from scipy.linalg import hankel
def pairs_idx(n):
return np.vstack((np.repeat(xrange(n), n-1), hankel(xrange(1, n), xrange(-1, n-1)).ravel()))
%timeit pairs_idx(N)
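A quick correctness check for the vectorized pair generator — a self-contained sketch (small n) confirming it yields exactly the same index pairs as `itertools.permutations`:

```python
import numpy as np
from itertools import permutations
from scipy.linalg import hankel

def pairs_idx(n):
    # Row 0: each index repeated n-1 times; row 1: the hankel trick
    # enumerates, for each row, every other index exactly once.
    return np.vstack((np.repeat(range(n), n - 1),
                      hankel(range(1, n), range(-1, n - 1)).ravel()))

n = 5
vectorized = set(map(tuple, pairs_idx(n).T))
reference = set(permutations(range(n), 2))
print(vectorized == reference)
```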
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image captioning with visual attention
Step2: Downloading and preparing the MS-COCO dataset
Step3: Optional
Step4: Preprocessing the images with InceptionV3
Step5: Initialize InceptionV3 and load the pretrained Imagenet weights
Step6: Caching the features extracted from InceptionV3
Step7: Preprocessing and tokenizing the captions
Step8: Split the data into training and testing sets
Step9: Create a tf.data dataset for training
Step10: Model
Step11: Checkpoint
Step12: Training
Step13: Caption!
Step14: Try it on your own images
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
# You'll generate plots of attention in order to see which parts of an image
# your model focuses on during captioning
import matplotlib.pyplot as plt
import collections
import random
import numpy as np
import os
import time
import json
from PIL import Image
# Download caption annotation files
annotation_folder = '/annotations/'
if not os.path.exists(os.path.abspath('.') + annotation_folder):
annotation_zip = tf.keras.utils.get_file('captions.zip',
cache_subdir=os.path.abspath('.'),
origin='http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
extract=True)
annotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'
os.remove(annotation_zip)
# Download image files
image_folder = '/train2014/'
if not os.path.exists(os.path.abspath('.') + image_folder):
image_zip = tf.keras.utils.get_file('train2014.zip',
cache_subdir=os.path.abspath('.'),
origin='http://images.cocodataset.org/zips/train2014.zip',
extract=True)
PATH = os.path.dirname(image_zip) + image_folder
os.remove(image_zip)
else:
PATH = os.path.abspath('.') + image_folder
with open(annotation_file, 'r') as f:
annotations = json.load(f)
# Group all captions together having the same image ID.
image_path_to_caption = collections.defaultdict(list)
for val in annotations['annotations']:
caption = f"<start> {val['caption']} <end>"
image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (val['image_id'])
image_path_to_caption[image_path].append(caption)
image_paths = list(image_path_to_caption.keys())
random.shuffle(image_paths)
# Select the first 6000 image_paths from the shuffled set.
# Approximately each image id has 5 captions associated with it, so that will
# lead to 30,000 examples.
train_image_paths = image_paths[:6000]
print(len(train_image_paths))
train_captions = []
img_name_vector = []
for image_path in train_image_paths:
caption_list = image_path_to_caption[image_path]
train_captions.extend(caption_list)
img_name_vector.extend([image_path] * len(caption_list))
print(train_captions[0])
Image.open(img_name_vector[0])
def load_image(image_path):
img = tf.io.read_file(image_path)
img = tf.io.decode_jpeg(img, channels=3)
img = tf.keras.layers.Resizing(299, 299)(img)
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path
image_model = tf.keras.applications.InceptionV3(include_top=False,
weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
# Get unique images
encode_train = sorted(set(img_name_vector))
# Feel free to change batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(
load_image, num_parallel_calls=tf.data.AUTOTUNE).batch(16)
for img, path in image_dataset:
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,
(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
caption_dataset = tf.data.Dataset.from_tensor_slices(train_captions)
# We will override the default standardization of TextVectorization to preserve
# "<>" characters, so we preserve the tokens for the <start> and <end>.
def standardize(inputs):
inputs = tf.strings.lower(inputs)
return tf.strings.regex_replace(inputs,
r"!\"#$%&\(\)\*\+.,-/:;=?@\[\\\]^_`{|}~", "")
# Max word count for a caption.
max_length = 50
# Use the top 5000 words for a vocabulary.
vocabulary_size = 5000
tokenizer = tf.keras.layers.TextVectorization(
max_tokens=vocabulary_size,
standardize=standardize,
output_sequence_length=max_length)
# Learn the vocabulary from the caption data.
tokenizer.adapt(caption_dataset)
# Create the tokenized vectors
cap_vector = caption_dataset.map(lambda x: tokenizer(x))
# Create mappings for words to indices and indicies to words.
word_to_index = tf.keras.layers.StringLookup(
mask_token="",
vocabulary=tokenizer.get_vocabulary())
index_to_word = tf.keras.layers.StringLookup(
mask_token="",
vocabulary=tokenizer.get_vocabulary(),
invert=True)
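Conceptually the two `StringLookup` layers are just inverse dictionaries. A plain-Python sketch with a toy vocabulary (not the Keras layers themselves; index 1 plays the role of the `[UNK]` token):

```python
vocab = ['', '[UNK]', '<start>', '<end>', 'surfing']

word_to_index = {w: i for i, w in enumerate(vocab)}
index_to_word = {i: w for i, w in enumerate(vocab)}

# Unknown words fall back to the [UNK] index.
tokens = [word_to_index.get(w, 1) for w in ['<start>', 'skiing', '<end>']]
print(tokens)
print([index_to_word[t] for t in tokens])
```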
img_to_cap_vector = collections.defaultdict(list)
for img, cap in zip(img_name_vector, cap_vector):
img_to_cap_vector[img].append(cap)
# Create training and validation sets using an 80-20 split randomly.
img_keys = list(img_to_cap_vector.keys())
random.shuffle(img_keys)
slice_index = int(len(img_keys)*0.8)
img_name_train_keys, img_name_val_keys = img_keys[:slice_index], img_keys[slice_index:]
img_name_train = []
cap_train = []
for imgt in img_name_train_keys:
capt_len = len(img_to_cap_vector[imgt])
img_name_train.extend([imgt] * capt_len)
cap_train.extend(img_to_cap_vector[imgt])
img_name_val = []
cap_val = []
for imgv in img_name_val_keys:
capv_len = len(img_to_cap_vector[imgv])
img_name_val.extend([imgv] * capv_len)
cap_val.extend(img_to_cap_vector[imgv])
len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
# Feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
num_steps = len(img_name_train) // BATCH_SIZE
# Shape of the vector extracted from InceptionV3 is (64, 2048)
# These two variables represent that vector shape
features_shape = 2048
attention_features_shape = 64
# Load the numpy files
def map_func(img_name, cap):
img_tensor = np.load(img_name.decode('utf-8')+'.npy')
return img_tensor, cap
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))
# Use map to load the numpy files in parallel
dataset = dataset.map(lambda item1, item2: tf.numpy_function(
map_func, [item1, item2], [tf.float32, tf.int64]),
num_parallel_calls=tf.data.AUTOTUNE)
# Shuffle and batch
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
dataset = dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, features, hidden):
# features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)
# hidden shape == (batch_size, hidden_size)
# hidden_with_time_axis shape == (batch_size, 1, hidden_size)
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# attention_hidden_layer shape == (batch_size, 64, units)
attention_hidden_layer = (tf.nn.tanh(self.W1(features) +
self.W2(hidden_with_time_axis)))
# score shape == (batch_size, 64, 1)
# This gives you an unnormalized score for each image feature.
score = self.V(attention_hidden_layer)
# attention_weights shape == (batch_size, 64, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * features
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class CNN_Encoder(tf.keras.Model):
# Since you have already extracted the features and dumped it
# This encoder passes those features through a Fully connected layer
def __init__(self, embedding_dim):
super(CNN_Encoder, self).__init__()
# shape after fc == (batch_size, 64, embedding_dim)
self.fc = tf.keras.layers.Dense(embedding_dim)
def call(self, x):
x = self.fc(x)
x = tf.nn.relu(x)
return x
class RNN_Decoder(tf.keras.Model):
def __init__(self, embedding_dim, units, vocab_size):
super(RNN_Decoder, self).__init__()
self.units = units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc1 = tf.keras.layers.Dense(self.units)
self.fc2 = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(self.units)
def call(self, x, features, hidden):
# defining attention as a separate model
context_vector, attention_weights = self.attention(features, hidden)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# shape == (batch_size, max_length, hidden_size)
x = self.fc1(output)
# x shape == (batch_size * max_length, hidden_size)
x = tf.reshape(x, (-1, x.shape[2]))
# output shape == (batch_size * max_length, vocab)
x = self.fc2(x)
return x, state, attention_weights
def reset_state(self, batch_size):
return tf.zeros((batch_size, self.units))
encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, tokenizer.vocabulary_size())
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
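To see what the padding mask in `loss_function` is doing, here is a minimal NumPy sketch of the same logic with toy numbers (not the TensorFlow graph itself): positions where the target token is 0, i.e. padding, contribute nothing to the mean.

```python
import numpy as np

real = np.array([5, 12, 0, 0])               # last two positions are padding
per_token_loss = np.array([0.7, 0.3, 0.9, 0.4])

mask = (real != 0).astype(per_token_loss.dtype)
masked = per_token_loss * mask

# Mean over all positions, mirroring tf.reduce_mean above.
mean_loss = masked.mean()
print(mean_loss)  # approximately (0.7 + 0.3) / 4 = 0.25
```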
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(encoder=encoder,
decoder=decoder,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
start_epoch = 0
if ckpt_manager.latest_checkpoint:
start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])
# restoring the latest checkpoint in checkpoint_path
ckpt.restore(ckpt_manager.latest_checkpoint)
# adding this in a separate cell because if you run the training cell
# many times, the loss_plot array will be reset
loss_plot = []
@tf.function
def train_step(img_tensor, target):
loss = 0
# initializing the hidden state for each batch
# because the captions are not related from image to image
hidden = decoder.reset_state(batch_size=target.shape[0])
dec_input = tf.expand_dims([word_to_index('<start>')] * target.shape[0], 1)
with tf.GradientTape() as tape:
features = encoder(img_tensor)
for i in range(1, target.shape[1]):
# passing the features through the decoder
predictions, hidden, _ = decoder(dec_input, features, hidden)
loss += loss_function(target[:, i], predictions)
# using teacher forcing
dec_input = tf.expand_dims(target[:, i], 1)
total_loss = (loss / int(target.shape[1]))
trainable_variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(gradients, trainable_variables))
return loss, total_loss
EPOCHS = 20
for epoch in range(start_epoch, EPOCHS):
start = time.time()
total_loss = 0
for (batch, (img_tensor, target)) in enumerate(dataset):
batch_loss, t_loss = train_step(img_tensor, target)
total_loss += t_loss
if batch % 100 == 0:
average_batch_loss = batch_loss.numpy()/int(target.shape[1])
print(f'Epoch {epoch+1} Batch {batch} Loss {average_batch_loss:.4f}')
# storing the epoch end loss value to plot later
loss_plot.append(total_loss / num_steps)
if epoch % 5 == 0:
ckpt_manager.save()
print(f'Epoch {epoch+1} Loss {total_loss/num_steps:.6f}')
print(f'Time taken for 1 epoch {time.time()-start:.2f} sec\n')
plt.plot(loss_plot)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss Plot')
plt.show()
def evaluate(image):
attention_plot = np.zeros((max_length, attention_features_shape))
hidden = decoder.reset_state(batch_size=1)
temp_input = tf.expand_dims(load_image(image)[0], 0)
img_tensor_val = image_features_extract_model(temp_input)
img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0],
-1,
img_tensor_val.shape[3]))
features = encoder(img_tensor_val)
dec_input = tf.expand_dims([word_to_index('<start>')], 0)
result = []
for i in range(max_length):
predictions, hidden, attention_weights = decoder(dec_input,
features,
hidden)
attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()
predicted_id = tf.random.categorical(predictions, 1)[0][0].numpy()
predicted_word = tf.compat.as_text(index_to_word(predicted_id).numpy())
result.append(predicted_word)
if predicted_word == '<end>':
return result, attention_plot
dec_input = tf.expand_dims([predicted_id], 0)
attention_plot = attention_plot[:len(result), :]
return result, attention_plot
def plot_attention(image, result, attention_plot):
temp_image = np.array(Image.open(image))
fig = plt.figure(figsize=(10, 10))
len_result = len(result)
for i in range(len_result):
temp_att = np.resize(attention_plot[i], (8, 8))
grid_size = max(int(np.ceil(len_result/2)), 2)
ax = fig.add_subplot(grid_size, grid_size, i+1)
ax.set_title(result[i])
img = ax.imshow(temp_image)
ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())
plt.tight_layout()
plt.show()
# captions on the validation set
rid = np.random.randint(0, len(img_name_val))
image = img_name_val[rid]
real_caption = ' '.join([tf.compat.as_text(index_to_word(i).numpy())
for i in cap_val[rid] if i not in [0]])
result, attention_plot = evaluate(image)
print('Real Caption:', real_caption)
print('Prediction Caption:', ' '.join(result))
plot_attention(image, result, attention_plot)
image_url = 'https://tensorflow.org/images/surf.jpg'
image_extension = image_url[-4:]
image_path = tf.keras.utils.get_file('image'+image_extension, origin=image_url)
result, attention_plot = evaluate(image_path)
print('Prediction Caption:', ' '.join(result))
plot_attention(image_path, result, attention_plot)
# opening the image
Image.open(image_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are [0.485, 0.456, 0.406] and the standard deviations are [0.229, 0.224, 0.225].
Step2: We can load in a model such as DenseNet. Let's print out the model architecture so we can see what's going on.
Step3: This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer (classifier)
Step4: With our model built, we need to train the classifier. However, now we're using a really deep neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.
Step5: You can write device agnostic code which will automatically use CUDA if it's enabled like so
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
data_dir = 'Cat_Dog_data'
# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
model = models.densenet121(pretrained=True)
model
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(1024, 500)),
('relu', nn.ReLU()),
('fc2', nn.Linear(500, 2)),
('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
import time
for device in ['cpu', 'cuda']:
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
model.to(device)
for ii, (inputs, labels) in enumerate(trainloader):
# Move input and label tensors to the GPU
inputs, labels = inputs.to(device), labels.to(device)
start = time.time()
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
if ii==3:
break
print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
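The per-batch timing above boils down to a simple, framework-free pattern. Here is a minimal sketch using only the standard library; the helper name `time_batches` and the toy batches are purely illustrative, not part of the notebook:

```python
import time

def time_batches(step_fn, batches):
    """Run step_fn on each batch and return the average seconds per batch."""
    start = time.perf_counter()
    for batch in batches:
        step_fn(batch)
    return (time.perf_counter() - start) / len(batches)

# toy "batches" and a cheap step function stand in for the real training step
avg = time_batches(lambda b: sum(b), [[1, 2, 3]] * 4)
print(f"Time per batch: {avg:.6f} seconds")
```

The same idea applies unchanged whether the step function runs on CPU or GPU; only the measured time differs.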
# Use GPU if it's available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.densenet121(pretrained=True)
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential(nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(256, 2),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)
model.to(device);
epochs = 1
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
for inputs, labels in trainloader:
steps += 1
# Move input and label tensors to the default device
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
logps = model.forward(inputs)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
test_loss = 0
accuracy = 0
model.eval()
with torch.no_grad():
for inputs, labels in testloader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
# Calculate accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(testloader):.3f}.. "
f"Test accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model.train()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We have to import the numpy module in order to get help on any of its functions which we attempt below. But if we ask for help on all of np (numpy), it blocks it due to simply too much output for the browser to handle. Besides, we won't be able to read it all in one sitting. So see the Help menu in the notebook web page for numpy which today points to
Step2: but we can execute commands from the Linux shell like this
Step3: We can count the number of lines of output of whatever command we want. Here it is the simple one-liner of python that is generating two lines of output
Step4: And here is why we got the IOPub data rate exceeded error above! There is a lot of output from the help on the np module, and we do not want to see it all, anyhow.
Step5: But we can explore some of it by using basic Linux shell commands
Step6: We can ignore the big traceback from python. All it is telling us that it failed to write to the stdout, because the head command is closing its side of the pipe after 20 lines of output received.
Step7: Getting help on numpy functions
Step8: Getting help on scipy functions
Step9: It seems that the np.info function is close to the same as the builtin help function
Step10: Finding numpy functions with keywords in doc strings
|
<ASSISTANT_TASK:>
Python Code:
help(abs)
import numpy as np
help(np)
!echo this is output from the echo command from the Linux shell
!python --version
!python -c 'print("foo"); print("bar")'
!python -c 'print("foo"); print("bar")' | wc -l
!python -c 'import numpy as np; help(np)' | wc -l
!python -c 'import numpy as np; help(np)' | head -20
!python -c 'import numpy as np; help(np)' 2>/dev/null | head -20
import numpy as np
help(np.linspace)
import numpy as np
import matplotlib as mpl
from scipy import linalg, optimize
# I don't know what difference there is between np.info and just Python's built-in help:
# np.info(optimize.fmin)
help(optimize.fmin)
help(np.info)
help(np.lookfor)
np.lookfor('root')
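`np.info` can also write its output to any file-like object via its `output` parameter, which is handy for capturing or post-processing the help text instead of printing it. A small sketch:

```python
import io
import numpy as np

# capture the help text for np.linspace in a string buffer
buf = io.StringIO()
np.info(np.linspace, output=buf)
help_text = buf.getvalue()

# the first line shows the call signature
print(help_text.splitlines()[0])
```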
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We must first nagivate to the directory containing the spreadsheets, which for this notebook is the subdirectory 'files'.
Step2: We must now open the workbook file.
Step3: Once the workbook is loaded, we can interact with specific sheets by loading them via workbook methods.
Step4: We can also use the .get_sheet_names() method to print all sheet names, in case we aren't sure.
Step5: We can now interact with specific cells by creating cell objects, referenced via a sheet method.
Step6: The .value method returns the actual value in the cell.
Step7: This particular cell returns a datetime reference from Excel via Python's own datetime module. A string value is available by passing into the str() function
Step8: All cell values inherit their data types from Excel.
Step9: You can also reference cells via rows and columns. Excel rows start at 1 and columns at A.
Step10: This can be useful for iterative or looping operations.
|
<ASSISTANT_TASK:>
Python Code:
import openpyxl
# Import OS module to navigate directories
import os
# Change the directory to the excel file location, using relative and absolute paths as previously discussed.
os.chdir('files')
os.listdir()
workbook = openpyxl.load_workbook('example.xlsx')
type(workbook)
sheet = workbook.get_sheet_by_name('Sheet1')
type(sheet)
workbook.get_sheet_names()
# Just references an object exists; requires an additional method to interact with
sheet['A1']
cell = sheet['A1']
cell.value
print(str(cell.value))
print(str(sheet['A1'].value))
print("The value in cell %s is '%s' and is type %s." %('A1', sheet['A1'].value, type(sheet['A1'].value)))
print("The value in cell %s is '%s' and is type %s." %('B1', sheet['B1'].value, type(sheet['B1'].value)))
print("The value in cell %s is '%s' and is type %s." %('C1', sheet['C1'].value, type(sheet['C1'].value)))
# B1 Cell
sheet.cell(row = 1, column = 2)
for i in range(1,8):
print(i, sheet.cell(row=i, column=2).value)
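openpyxl also reports the used extent of a sheet via `max_row` and `max_column`, and supports slicing a rectangular range of cells, which avoids hard-coding the loop bounds. A small sketch using an in-memory workbook (no file needed):

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
for r in range(1, 4):
    ws.cell(row=r, column=1, value=r * 10)

print(ws.max_row, ws.max_column)   # used extent of the sheet
for row in ws['A1':'A3']:          # slicing returns tuples of cells
    for cell in row:
        print(cell.coordinate, cell.value)
```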
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bungee jumping
Step3: Now here's a version of make_system that takes a Params object as a parameter.
Step4: Let's make a System
Step6: spring_force computes the force of the cord on the jumper.
Step7: The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
Step9: drag_force computes drag as a function of velocity
Step10: Here's the drag force at 60 meters per second.
Step11: Acceleration due to drag at 60 m/s is approximately g, which confirms that 60 m/s is terminal velocity.
Step13: Now here's the slope function
Step14: As always, let's test the slope function with the initial params.
Step15: And then run the simulation.
Step16: Here's the plot of position as a function of time.
Step17: After reaching the lowest point, the jumper springs back almost to almost 70 m, and oscillates several times. That looks like more osciallation that we expect from an actual jump, which suggests that there some dissipation of energy in the real world that is not captured in our model. To improve the model, that might be a good thing to investigate.
Step18: At the lowest point, the jumper is still too high, so we'll need to increase L or decrease k.
Step19: Although we compute acceleration inside the slope function, we don't get acceleration as a result from run_ode_solver.
Step20: And we can compute the maximum acceleration the jumper experiences
Step21: Relative to the acceleration of gravity, the jumper "pulls" about "1.7 g's".
Step22: Under the hood
Step24: Solving for length
Step25: As usual, we should test it with the initial conditions.
Step26: Now we can test it and confirm that it stops at the bottom of the jump.
Step27: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(y_attach = 80 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 75 * kg,
area = 1 * m**2,
rho = 1.2 * kg/m**3,
v_term = 60 * m / s,
L = 25 * m,
k = 40 * N / m,
zero_force = 0 * N)
def make_system(params):
    """Makes a System object for the given params.

    params: Params object

    returns: System object
    """
    area, mass = params.area, params.mass
g, rho = params.g, params.rho
v_init, v_term = params.v_init, params.v_term
y_attach = params.y_attach
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=y_attach, v=v_init)
t_end = 20 * s
return System(params, C_d=C_d,
init=init, t_end=t_end)
system = make_system(params)
def spring_force(y, system):
    """Computes the force of the bungee cord on the jumper.

    y: height of the jumper

    Uses these variables from system:
    y_attach: height of the attachment point
    L: resting length of the cord
    k: spring constant of the cord

    returns: force in N
    """
    y_attach, L, k = system.y_attach, system.L, system.k
distance_fallen = y_attach - y
if distance_fallen <= L:
return system.zero_force
extension = distance_fallen - L
f_spring = k * extension
return f_spring
spring_force(80*m, system)
spring_force(55*m, system)
spring_force(54*m, system)
def drag_force(v, system):
    """Computes drag force in the opposite direction of `v`.

    v: velocity
    system: System object

    returns: drag force
    """
    rho, C_d, area = system.rho, system.C_d, system.area
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
v = -60 * m/s
f_drag = drag_force(v, system)
a_drag = f_drag / system.mass
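Before wiring drag into the slope function, the numbers can be sanity-checked with plain floats: at terminal velocity the drag force must exactly balance gravity, i.e. $\rho v^2 C_d A / 2 = m g$. The sketch below uses the parameter values defined above:

```python
mass, g = 75.0, 9.8           # kg, m/s**2
rho, area, v_term = 1.2, 1.0, 60.0

# C_d chosen exactly as in make_system
C_d = 2 * mass * g / (rho * area * v_term**2)
f_drag = rho * v_term**2 * C_d * area / 2

# at v_term, drag equals the jumper's weight
print(C_d, f_drag, mass * g)
```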
def slope_func(state, t, system):
    """Compute derivatives of the state.

    state: position, velocity
    t: time
    system: System object containing g, rho, C_d, area, and mass

    returns: derivatives of y and v
    """
    y, v = state
mass, g = system.mass, system.g
a_drag = drag_force(v, system) / mass
a_spring = spring_force(y, system) / mass
dvdt = -g + a_drag + a_spring
return v, dvdt
slope_func(system.init, 0, system)
results, details = run_ode_solver(system, slope_func)
details
def plot_position(results):
plot(results.y)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
min(results.y)
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
max_acceleration = max(a) * m/s**2
max_acceleration / system.g
source_code(gradient)
def event_func(state, t, system):
    """Return velocity."""
    y, v = state
    return v
event_func(system.init, 0, system)
results, details = run_ode_solver(system, slope_func, events=event_func)
plot_position(results)
min(results.y)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The fit routine minimizes the $\chi^2$, which is computed from the squares of the residuals (= deviations of the model prediction from the measured values). The following function determines the corresponding residual function for an arbitrary model. It is not important to understand this source code; if in doubt, it is enough to simply run this cell so that the function is defined.
Step3: In addition, we change some global options for formatting the plots. It makes sense to do this at the beginning of the notebook as well, so that these changes take effect for all plots.
Step4: Now for the actual analysis. We start by importing the data from Exercise 1.
Step5: The data are now imported and we take a look at them.
Step6: These are just oscilloscope settings, which we do not need at all! We look at a few more of the first rows to find out where the data begin.
Step7: Apparently the data begin in row 16. We can therefore safely skip the first 16 rows on import (by default, the first row of the file is used for the column names).
Step8: Now we look again at the first rows of the imported data.
Step9: We also only need the fourth and fifth columns, so we import only these. Careful! Python always starts counting at 0, so the fourth column has index 3 and the fifth column has index 4. We also immediately look at the first rows of the imported data.
Step10: Now we add meaningful column headers in our import command and look at the data.
Step11: That looks good; we can work with this. Now we plot the data. At this point it makes sense to convert the time data to nanoseconds directly after import.
Fitting
Step13: Now we create a $\chi^2$ minimizer and perform the fit. If you always define the model in the variable modell, the parameters in params, and the measured values in (X, Y, Yerr), the next cell can be reused one-to-one for every fit.
Step14: Now we plot the result.
Step15: Now we fit the curve for the coil. The documentation is abbreviated.
Calculating with measurement uncertainties
Step17: Exercise 2
Step18: Determining the period, frequency, and angular frequency.
Step19: Determining the logarithmic decrement and the quality factor
Step20: Exercise 3
Step21: Computing the resonance amplification and quality factors
Step22: Determining the time constant of the transient build-up
Step23: Plotting the remaining transient build-up curves
Step24: Exercise 4
Step25: Exercise 5
Step26: Enlarged plot around the resonance
|
<ASSISTANT_TASK:>
Python Code:
# import matplotlib and numpy
%pylab nbagg
# working with data
import pandas as pd
# for fitting
import lmfit
# error propagation
import uncertainties as uct
from uncertainties.umath import *
def residual(userfcn):
    """Return, for a given model function, the corresponding residual
    function, which computes the residuals (model(x) - y) for given
    parameters and x/y value pairs. If uncertainties are given, the
    residuals are additionally weighted by the reciprocal of the
    uncertainty.

    Parameters
    ----------
    userfcn : callable
        The model function for which the residuals should be calculated.
        This should take two arguments params and x, where params
        denotes the parameters (lmfit.Parameters.parameter instance) to
        be used in the model and x denotes the values for which the model
        function should be evaluated.

    Returns
    -------
    residualfunction : callable
        This function calculates the residuals. It takes the model parameters and
        x/y-values as arguments and optionally the uncertainties of the y-values
        as a keyword argument 'eps='.

    Note
    ----
    In summary, the userfcn should be a callable of the form

        model(params, x)

    which returns a ndarray with the y-values corresponding to the x-values and the
    model with the given parameters. The residual function which is returned is of
    the form

        residual(params, x, y, eps=yerr)

    and returns a ndarray with the corresponding residuals for the parameters and the
    given x/y-value pairs.
    """
def out(*args, **kwargs):
p, x, y = args
assert isinstance(p, lmfit.parameter.Parameters)
if "eps" in kwargs:
eps = kwargs.pop("eps")
else:
eps = None
modell = userfcn(p.valuesdict(), x)
if eps is None:
residual = abs(modell-y)
else:
residual = abs((modell-y)/eps)
return residual
return out
plt.rcParams['font.size'] = 10
plt.rcParams['axes.grid'] = True
plt.rcParams['figure.figsize'] = (4,3)
plt.rcParams['figure.dpi'] = 144
data = pd.read_csv('dat/a1-100ohm.csv')
# look at the first 4 (= default) rows
data.head()
data.head(20)
data = pd.read_csv('dat/a1-100ohm.csv', skiprows=16)
data.head()
data = pd.read_csv('dat/a1-100ohm.csv', skiprows=16, usecols=[3, 4])
data.head()
data = pd.read_csv('dat/a1-100ohm.csv', skiprows=16, usecols=[3, 4], names=["t", "U"])
data.head()
data = pd.read_csv('dat/a1-100ohm.csv', skiprows=16, usecols=[3, 4], names=["t", "U"])
# convert time from seconds to nanoseconds
data.t *= 1e9
# plot the data to get an overview
data.plot(x="t", y="U")
# define the fit model
def aufladekurve_kondensator(params, t):
try :
p = params # needed for fitting with lmfit
except:
p = params.valuesdict() # needed later for plotting a fitresult
return p["U0"] * (1 - np.exp(-(t - p["t0"]) / p["tau"])) + p["offset"]
modell = aufladekurve_kondensator
# set the starting parameters
params = lmfit.Parameters()
params.add("U0", value=11.6, vary=True)
params.add("tau", 500)
params.add("t0", -50)
params.add("offset", -12)
# select the measured values for the fit
# The data for t<0 are ignored for the fit here, since it is apparent
# that the rise of the voltage is not described by our model.
# That is not a problem here, since enough data points remain.
# It is also sensible because the square-wave voltage is not perfect
# at the edges.
fitdata = data[(data.t > 0)]
# select the fit data
X, Y = fitdata.t, fitdata.U
# define the uncertainties of the y-values
Yerr = 0.03 * fitdata.U
# create the chi² minimizer
minimizer = lmfit.Minimizer(userfcn=residual(modell),
params=params,
fcn_args=(X,Y),
fcn_kws=dict(eps=Yerr),
nan_policy="omit"
)
# actually run the fit
result = minimizer.minimize()
# print the result
print(lmfit.fit_report(result))
# plot
fig, ax = plt.subplots()
data.plot(ax=ax, x="t", y="U", label="Messwerte")
ax.plot(data.t, modell(result.params, data.t), label="Fit", c="r")
ax.legend(loc="lower right", title='Kondensator ($R=100\,\Omega$)')
ax.set_ylim(-20,20)
ax.set_xlim(-500,None)
ax.set_xlabel('Zeit $t$ [ns]')
ax.set_ylabel('Spannung $U$ [V]')
plt.tight_layout()
plt.savefig('a1-kondensator-100ohm.pdf')
plt.show()
# import the data
data = pd.read_csv('data/a1-aufladekurve-spule-100ohm.csv', skiprows=16, usecols=[3,4], names=["t", "U"])
# convert time from seconds to microseconds
data.t = data.t * 1e6
# define the fit model
def aufladekurve_spule(params, t):
try :
p = params # needed for fitting with lmfit
except:
p = params.valuesdict() # needed later for plotting a fitresult
return p["U0"] * (np.exp(-(t - p["t0"]) / p["tau"])) + p["offset"]
modell = aufladekurve_spule
# set the starting parameters
params = lmfit.Parameters()
params.add("U0", 22.5, vary=False)
params.add("tau", 100)
params.add("t0", -50)
params.add("offset", -12)
# fit
fitdata = data[(data.t > 10)]
X, Y = fitdata.t, fitdata.U
Yerr = 0.03 * fitdata.U
minimizer = lmfit.Minimizer(userfcn=residual(modell),
params=params,
fcn_args=(X,Y),
fcn_kws=dict(eps=Yerr),
nan_policy="omit"
)
result = minimizer.minimize()
print(lmfit.fit_report(result))
# create the plot
fig, ax = plt.subplots()
data.plot(ax=ax, x="t", y="U", label="Messwerte")
ax.plot(data.t, aufladekurve_spule(result.params, data.t), label="Fit", c="r")
ax.legend(loc="upper right", title='Kondensator ($R=100\,\Omega$)')
ax.set_xlim(-50,200)
ax.set_ylim(-5,25)
ax.set_xlabel('Zeit $t$ [$\mu$s]')
ax.set_ylabel('Spannung $U$ [V]')
plt.tight_layout()
plt.savefig('a1-spule-100ohm.pdf')
plt.show()
# define the time constants and the resistance with uncertainties from the fits
tauC = uct.ufloat(242.68, 1.27)
tauL = uct.ufloat(50.67, 0.25)
R = uct.ufloat(100, 100*0.05)
# compute the capacitance and inductance as well as the resonance frequency
C = tauC/R * 1e-9
L = tauL * R * 1e-6
omega = 1 / (sqrt(L*C))
nu = omega / (2*pi)
# print the results
for i,j in zip(['C = ', 'L = ', 'omega = ', 'nu = '], [C, L, omega, nu]):
print('{}{}'.format(i,j))
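The `uncertainties` package handles the error propagation automatically; for a quotient such as $C = \tau_C / R$, the first-order Gaussian rule is $\sigma_C/C = \sqrt{(\sigma_\tau/\tau)^2 + (\sigma_R/R)^2}$, which can be checked by hand with the fit values above:

```python
from math import sqrt

tauC, dtauC = 242.68, 1.27   # ns, from the capacitor fit
R, dR = 100.0, 5.0           # ohm, 5 % tolerance

# relative uncertainty of a quotient: relative errors add in quadrature
rel_err = sqrt((dtauC / tauC)**2 + (dR / R)**2)
C = tauC / R * 1e-9          # in farads

print(C, C * rel_err)        # value and absolute uncertainty
```

The result should agree with the uncertainty reported by `uct.ufloat` arithmetic above.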
# plot the data
data = pd.read_csv('dat/a2-100ohm.csv', skiprows=15, usecols=[3,4], names=["t", "U"])
data.t *= 1e6
fig, ax = plt.subplots(dpi=144)
data.plot(ax=ax, x='t', legend=False)
ax.set_xlabel('Zeit $t$ [$\mu$s]')
ax.set_ylabel('Spannung $U$ [V]')
ax.set_ylim(-.3,.3)
plt.tight_layout()
plt.savefig('a2-100ohm.pdf')
plt.show()
# values read off for two points of equal phase after N periods
t1 = uct.ufloat(12.09, .5)
t2 = uct.ufloat(194.6, 1)
N = 8
# period
T = (t2 - t1) / N
# frequency
f = 1/T
# angular frequency
omega = 2*pi*f
# print the values
for i,j in zip(['T', 'f', 'omega'], [T,f,omega]):
print('{} = '.format(i), '{:f}'.format(j))
# read the values from the graph
U1 = uct.ufloat(0.25, 0.01)
U2 = uct.ufloat(0.05, 0.01)
N = 8
# compute the logarithmic decrement and the quality factor
delta = 1 / N * log(U1 / U2)
q = pi / delta
tau = q / omega
# print the values
for i,j in zip(['delta', 'q', 'tau'], [delta, q, tau]):
print('{} = '.format(i), '{:f}'.format(j))
# create the plot
data1 = pd.read_csv('dat/a3-a-kleinefrequenz-100ohm.csv', skiprows=16, usecols=[3,4], names=["t", "U"])
data1.t *= 1e6
data2 = pd.read_csv('dat/a3-a-resonanzfrequenz-100ohm.csv', skiprows=16, usecols=[3,4], names=["t", "U"])
data2.t *= 1e6
fig, (ax1, ax2) = plt.subplots(2)
data1.plot(ax=ax1, x='t', title='$\\omega \\ll \\omega_0$', legend=False)
data2.plot(ax=ax2, x='t', title='$\\omega = \\omega_0$', legend=False)
for ax in (ax1, ax2):
ax.set_xlabel('Zeit $t$ [$\mu$s]')
ax.set_ylabel('Spannung $U$ [V]')
ax1.set_yticks([-2,-1,0,1,2])
ax2.set_yticks([-40,-20,0,20,40])
plt.tight_layout()
plt.savefig('a3-a-100ohm.pdf')
plt.show()
# read off the values
Uklein = uct.ufloat(1.87, 0.1)
Ures = uct.ufloat(32.5, 1.2)
# determine the quality factor
q = Ures / Uklein
# output
print("q = ", q)
# Plot
data = pd.read_csv('dat/a3-b-resonanz.csv')
data.columns = ['t', 'U']
fig, ax = plt.subplots()
data.plot(ax=ax, x='t', legend=False)
ax.set_xlabel('Zeit $t$ [$\mu$s]')
ax.set_ylabel('Spannung $U$ [V]')
plt.tight_layout()
plt.savefig('a3-b-resonanz.pdf')
plt.show()
# read off the values
Umax = Ures
U1 = uct.ufloat(4, 2)
U2 = uct.ufloat(30,2)
N = 14
t1 = uct.ufloat(44e-6,1e-6)
t2 = uct.ufloat(469e-6, 1e-6)
Nperioden = 19
T = (t2 - t1) / Nperioden
omega = 2 * pi / T
# compute the time constant
delta_anschwing = 1 / N * log((Umax - U1) / (Umax - U2))
q = pi / delta_anschwing
tau = pi / (delta_anschwing * omega)
# output
for i,j, in zip(['delta_anschwing', 'q', 'T', 'omega', 'tau'], [delta_anschwing, q, T, omega, tau]):
print('{} = {}'.format(i,j))
data1 = pd.read_csv('dat/a3-b-schwebung+5kHz-100ohm.csv', skiprows=16, usecols=[3,4], names=['t', 'U'])
data2 = pd.read_csv('dat/a3-b-schwebung-5kHz-100ohm.csv', skiprows=16, usecols=[3,4], names=['t', 'U'])
data3 = pd.read_csv('dat/a3-b-schwebung+10kHz-100ohm.csv', skiprows=16, usecols=[3,4], names=['t', 'U'])
data4 = pd.read_csv('dat/a3-b-schwebung-10kHz-100ohm.csv', skiprows=16, usecols=[3,4], names=['t', 'U'])
for data in [data1, data2, data3, data4]:
data.t *= 1e6
fig, ((ax1, ax3), (ax2, ax4)) = plt.subplots(2,2, figsize=(8,6), dpi=100)
for data, ax, title in zip(
[data1, data2, data3, data4],
[ax1, ax2, ax3, ax4],
['$+5$ kHz', '$-5$ kHz', '$+10$ kHz', '$-10$ kHz']
):
data.plot(ax=ax, x='t', legend=False, title=title)
for ax in (ax1, ax2, ax3, ax4):
ax.set_xlim(0,400)
ax.set_ylim(-12, 12)
for ax in (ax1, ax2):
ax.set_ylabel('Spannung $U$ [V]')
for ax in (ax2, ax4):
ax.set_xlabel('Zeit $t$ [$\mu$s]')
for ax in (ax1, ax3):
ax.set_xlabel('')
plt.tight_layout()
plt.savefig('a3-b-schwebungen.pdf')
plt.show()
# read in the data
data = pd.read_csv('dat/a4-100ohm.csv', comment='#')
# convert the div values into voltages
data.U0 = data.U0 * data.skal
data.U1 = data.U1 * data.skal
# compute alpha
data['alpha'] = arcsin(np.sqrt(data.U1 / data.U0)) / pi
for i in arange(14,28):
    data.loc[i, 'alpha'] = 1 - data.loc[i, 'alpha']
data
# compute the uncertainties for alpha and for f
deltaU = 0.2 * data.skal
data['alpha_err'] = np.sqrt(
deltaU**2 / data.U0**2 / (1 - (data.U1**2 / data.U0**2)) \
+ deltaU**2 * data.U1**2 / data.U0**4 / (1 - (data.U1**2 / data.U0**2))
)
data['f_err'] = 0.1
# plot
fig, ax = plt.subplots()
data.plot(ax=ax, kind='scatter', x='f',y='alpha', yerr='alpha_err', legend=False, edgecolor='none', s=10)
ax.set_xlabel('Frequenz $f$ [kHz]')
ax.set_ylabel('Phasenverschiebung $\\alpha$')
ax.set_yticks([0,0.25,0.5,0.75,1])
ax.set_yticklabels(['$0$', '$\\frac{\\pi}{4}$', '$\\frac{\\pi}{2}$', '$\\frac{3\\pi}{4}$', '$\\pi$'])
ax.set_ylim(0,1)
plt.tight_layout()
plt.savefig('a4-100ohm.pdf')
plt.show()
data.U0.max()/sqrt(2)
fres = uct.ufloat(44.70, 0.01)
# read fmin and fmax from the graph
fmin = uct.ufloat(42.70, 0.01)
fmax = uct.ufloat(46.65, 0.01)
q = fres / (fmax - fmin)
tau = q / fres
for i, j in zip(['fres', 'fmin', 'fmax', 'q', 'tau'], [fres, fmin, fmax, q, tau]):
print('{} = {}'.format(i,j))
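The quality factor from the width of the resonance curve follows $Q = f_{\mathrm{res}}/(f_{\mathrm{max}} - f_{\mathrm{min}})$; with the values read off above, a quick plain-float cross-check (without uncertainty propagation) gives roughly 11:

```python
f_res = 44.70   # kHz, resonance frequency
f_min = 42.70   # kHz, lower -3 dB frequency
f_max = 46.65   # kHz, upper -3 dB frequency

q = f_res / (f_max - f_min)
tau = q / f_res               # same definition as in the cell above
print(q, tau)
```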
data = pd.read_csv('dat/a5-100ohm.csv', skiprows=16, usecols=[3,4,10], names=['t', 'CH1', 'CH2'])
# Determining the frequency scaling
# ---------------------------------
# You (theoretically) determined these frequencies with a square-wave voltage at the wobble generator
fmin = uct.ufloat(10, 2)
fmax = uct.ufloat(83, 2)
# You should (theoretically) have read this value off when applying the square-wave voltage.
# However, for every lab group it was identical to that of the triangular voltage
Urechteck = uct.ufloat(40, 0.5)
# Here, of course, twice the amplitude of the triangular voltage has to be read off,
# since the frequencies are swept from the minimum to the maximum of the triangular
# voltage
Udreieck = uct.ufloat(40, 0.5)
# The conversion factor between square-wave and triangular voltage, since the cutoff
# frequencies are (actually) determined with a square-wave voltage, but the measurement
# is performed with a triangular voltage
gamma = Urechteck / Udreieck
# The resonance voltage of CH1 is actually only important for drawing in the level
# at 1/sqrt(2) of its height
Ures = uct.ufloat(12.33, 0.5)
# The number of divisions on the x-axis
smin = uct.ufloat(0.069, 0.001)
smax = uct.ufloat(0.717, 0.001)
sres = uct.ufloat(0.393, 0.001)
s = smax - smin
# compute the scaling
skal = (fmax * gamma - fmin) / s
# determine delta omega
Ured = Ures / sqrt(2)
# sklein and sgross can of course only be read off with the help of the plot below,
# by adding the axhline at y=Ures/sqrt(2)
sklein = uct.ufloat(0.382, 0.001)
sgross = uct.ufloat(0.405, 0.001)
# determine the quality factor
delta_omega = (sgross - sklein) * skal
omega = fmin + (sres - smin) * skal
q = omega / delta_omega
for i,j in zip(['gamma', 's', 'skal', 'delta_omega', 'omega', 'q'], [gamma, s, skal, delta_omega, omega, q]):
print('{} = {}'.format(i,j))
# Plot
fig, ax = plt.subplots()
data.plot(ax=ax, x='t', y="CH2", color="g", legend=False)
ax2 = ax.twinx()
data.plot(ax=ax2, x='t', y="CH1", color="b", legend=False)
ax2.set_ylabel("Amplitude CH1 [div]")
ax.set_xlabel('Frequenzachse [div]')
ax.set_ylabel('Amplitude CH2 [div]')
ax.set_xlim(0.,0.8)
ax.set_ylim(-21,21)
ax.axhline(y=Ures.nominal_value / sqrt(2), xmin=0.2, xmax=0.8, c='r', linewidth=1.)
ax.axvline(x=smin.nominal_value, c='g', linewidth=1., alpha=0.5)
ax.axvline(x=smax.nominal_value, c='g', linewidth=1., alpha=0.5)
ax.axvline(x=sklein.nominal_value, c='y', linewidth=1., alpha=0.5)
ax.axvline(x=sgross.nominal_value, c='y', linewidth=1., alpha=0.5)
ax.axvline(x=sres.nominal_value, c='m', linewidth=1., alpha=0.5)
ax.set_title('Resonanzkurve, 100$\Omega$')
ax.annotate('$\\frac{U_{\\rm{res}}}{\\sqrt{2}}$',
xy=(0.655, Ures.nominal_value / sqrt(2)),
xycoords='data',
ha='left', va='center',
color="r")
handles, labels = ax.get_legend_handles_labels()
hndls2, lbls2 = ax2.get_legend_handles_labels()
handles.append(hndls2[0])
labels.append(lbls2[0])
ax.legend(reversed(handles), reversed(labels), loc="upper left", fontsize="small")
plt.grid()
plt.tight_layout()
plt.savefig('a5-100ohm.pdf')
plt.show()
# Plot
fig, ax = plt.subplots()
data.plot(ax=ax, x='t', y="CH2", color="g", legend=False)
ax2 = ax.twinx()
data.plot(ax=ax2, x='t', y="CH1", color="b", legend=False)
ax2.set_ylabel("Amplitude CH1 [div]")
ax.set_xlabel('Frequenzachse [div]')
ax.set_ylabel('Amplitude CH2 [div]')
ax.set_xlim(0.3,0.5)
ax.set_ylim(-21,21)
ax.axhline(y=Ures.nominal_value / sqrt(2), xmin=0.2, xmax=0.8, c='r', linewidth=1.)
ax.axvline(x=sklein.nominal_value, c='y', linewidth=1., alpha=0.5)
ax.axvline(x=sgross.nominal_value, c='y', linewidth=1., alpha=0.5)
ax.axvline(x=sres.nominal_value, c='m', linewidth=1., alpha=0.5)
ax.set_title('Resonanzkurve (vergrößert), 100$\Omega$')
ax.annotate('$\\frac{U_{\\rm{res}}}{\\sqrt{2}}$',
xy=(0.655, Ures.nominal_value / sqrt(2)),
xycoords='data',
ha='left', va='center',
color="r")
handles, labels = ax.get_legend_handles_labels()
hndls2, lbls2 = ax2.get_legend_handles_labels()
handles.append(hndls2[0])
labels.append(lbls2[0])
ax.legend(reversed(handles), reversed(labels), loc="upper left", fontsize="small")
plt.grid()
plt.tight_layout()
plt.savefig('a5-100ohm-zoom.pdf')  # separate filename so the first figure is not overwritten
plt.show()
<END_TASK>
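The horizontal line at $U_{\rm res}/\sqrt{2}$ drawn in both figures marks the half-power (−3 dB) criterion used to read the bandwidth off a resonance curve. A minimal sketch of that criterion, with purely illustrative axis values:

```python
from math import sqrt

u_res = 20.0                  # resonance amplitude [div] (illustrative)
half_power = u_res / sqrt(2)  # amplitude at the half-power (-3 dB) points

# The curve crosses half_power at two x positions that bracket the bandwidth;
# the quality factor is the resonance position divided by the bandwidth.
f_res, f_lo, f_hi = 0.40, 0.35, 0.45   # illustrative positions [div]
bandwidth = f_hi - f_lo
q_factor = f_res / bandwidth
```

All numbers here are assumptions for illustration, not values from the measured curve.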
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: get viewshed suffix (for filenames)
Step2: Import statements
Step3: GRASS import statements
Step4: Function declarations
Step5: extract point from vector map
Step6: create buffer around point
Step7: convert viewshed from raster to vector map
Step8: overlay a vector map on an underlying vector map using 'and' selection operator
Step9: add area column to vector map
Step10: create table showing total area by landcover type
Step11: Utility functions
Step12: get info about a vector map
Step13: Compute landcover areas
Step14: loop through sample points
Step15: Export naturalness area table
|
<ASSISTANT_TASK:>
Python Code:
naturalness = 'naturalness'
naturalness_value_field = 'value'
n_types = 7
sample_points = 'sample_points_field'
radius = 500
use_viewshed = False
def getViewshedSuffix():
if use_viewshed:
viewshed_suffix = '_viewshed'
else:
viewshed_suffix = ''
return viewshed_suffix
naturalness_area_table_filename = "/home/ubuntu/naturalness_areas_{0}m{1}.csv".format(radius, getViewshedSuffix())
import numpy as np
import pandas
import pyprind
import grass.script as gscript
from grass.pygrass.vector.geometry import Point
from grass.pygrass.vector import Vector
from grass.pygrass.vector import VectorTopo
from grass.pygrass.vector.table import DBlinks
def connectToAttributeTable(map):
vector = VectorTopo(map)
vector.open(mode='r')
dblinks = DBlinks(vector.c_mapinfo)
link = dblinks[0]
return link.table()
def extractPoint(input, ID, output):
where = 'ID = {0}'.format(ID)
type = 'point'
gscript.read_command('v.extract',
input=input,
where=where,
output=output,
type=type,
overwrite=True)
def bufferPoint(input, output, radius):
gscript.read_command('v.buffer',
input=input,
output=output,
type='point',
distance=radius,
overwrite=True)
def vectorizeViewshed(input, ID, output):
type = 'area'
column = 'visible'
gscript.read_command('r.to.vect',
input=input,
output=output,
type=type,
column=column,
overwrite=True)
def overlay(overlay, underlay, output):
operator='and'
gscript.read_command('v.overlay',
ainput=overlay,
binput=underlay,
operator=operator,
output=output,
overwrite=True)
def calculateAreas(map):
#add new area column
gscript.read_command('v.db.addcolumn',
map=map,
columns="area_square_meters DOUBLE PRECISION")
#compute area and insert into area column
gscript.read_command('v.to.db',
map=map,
type='centroid',
option='area',
columns='area_square_meters',
unit='meters')
def getNaturalnessAreaByValue(map):
#get area data
table = connectToAttributeTable(map=map)
table.filters.select()
columns = table.columns.names()
cursor = table.execute()
result = np.array(cursor.fetchall())
cursor.close()
data = pandas.DataFrame(result, columns=columns).set_index('cat')
#make sure naturalness_value_field is a numeric data type
data['b_' + naturalness_value_field] = pandas.to_numeric(data['b_' + naturalness_value_field])
#create naturalness column
data['naturalness'] = np.nan
#define naturalness value categories (round values)
for index, row in data.iterrows():
naturalness = np.round(row['b_' + naturalness_value_field])
data.set_value(index, 'naturalness', int(naturalness))
#sum areas by naturalness value
data['area_square_meters'] = pandas.to_numeric(data['area_square_meters'])
areas = data[['naturalness', 'area_square_meters']].groupby(by='naturalness').sum()
#calculate mean
total_area = data['area_square_meters'].sum()
percent_area = data['area_square_meters'] / total_area
weighted_area = percent_area * data['b_' + naturalness_value_field]
mean = weighted_area.sum()
#add to areas dataframe
areas = areas.set_value('mean', 'area_square_meters', mean)
return areas
def exportVectorToShapefile(map, output):
gscript.read_command('v.out.ogr',
input=map,
format='ESRI_Shapefile',
output=output,
flags='e',
overwrite=True)
def getVectorMapInfo(map):
return gscript.read_command('v.info', map=map)
point_table = connectToAttributeTable(map=sample_points)
point_table.filters.select()
columns = point_table.columns.names()
cursor = point_table.execute()
result = np.array(cursor.fetchall())
cursor.close()
point_data = pandas.DataFrame(result, columns=columns).set_index('cat')
with Vector(sample_points, mode='r') as points:
#setup progress bar
progress_bar = pyprind.ProgBar(points.n_lines, bar_char='█', title='Naturalness analysis progress', monitor=True, stream=1, width=50)
#iterate through points
for point in points:
#get point ID (SiteID)
ID = point_data['ID'][point.cat-1]
#update progress bar
progress_bar.update(item_id=ID)
#buffer current point
extractPoint(input='sample_points_field', ID=ID, output='tmp_buffer_point')
bufferPoint(input='tmp_buffer_point', output='tmp_point_buffer', radius=radius)
#set buffer as overlay
overlay_input = 'tmp_point_buffer'
#consider only visible naturalness if 'use_viewshed' = True
if use_viewshed:
viewshed = 'vect_{0}_viewshed'.format(ID)
visible_viewshed = 'vect_{0}_viewshed_{1}m'.format(ID, radius)
#vectorize viewshed
vectorizeViewshed(input='{0}_viewshed'.format(ID), ID=ID, output=viewshed)
#overlay buffer on viewshed
overlay(overlay=overlay_input,
underlay=viewshed,
output=visible_viewshed)
#set overlay to the visible viewshed
overlay_input = visible_viewshed
overlay_output = 'vect_{0}_naturalness_{1}m{2}'.format(ID, radius, getViewshedSuffix())
#overlay naturalness
overlay(overlay=overlay_input,
underlay=naturalness,
output=overlay_output)
#calculate naturalness area
calculateAreas(map=overlay_output)
#create table
index_start = 0
''' set the first index number,
allowing easier insertion into a database table that already contains
area calculations with other parameters
(i.e radius and use_viewshed)'''
columns = ['ID', 'SiteID', 'IncludedArea']
columns = columns + [ str(n) for n in range(1,n_types+1) ] + ['mean']
area_table = pandas.DataFrame(columns=columns)
#set naming variables
included_area = '{0}m{1}'.format(radius, getViewshedSuffix()) #0=radius, 1=viewshed_suffix
map_pattern = 'vect_{0}_naturalness_{1}m{2}' #0=ID, 1=radius, 2=viewshed_suffix
#iterate through points
for index, point in point_data.iterrows():
ID = point['ID']
map = map_pattern.format(ID, radius, getViewshedSuffix())
#initiate row
row = {'ID':"{0:.3g}".format(int(index) + index_start),
'SiteID': str(ID),
'IncludedArea': included_area}
#get naturalness areas
areas = getNaturalnessAreaByValue(map)
#iterate through area types
for index, area in areas.iterrows():
#add area to row
try:
row["{0:.3g}".format(int(index))] = area['area_square_meters']
except ValueError:
row[index] = area['area_square_meters']
#append row to table
area_table = area_table.append(row, ignore_index=True)
area_table.set_index('ID', inplace=True)
#export table to file
area_table.to_csv(naturalness_area_table_filename, header=False)
<END_TASK>
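The core of `getNaturalnessAreaByValue` above—summing areas per rounded naturalness value and appending an area-weighted mean—can be exercised without GRASS, using plain pandas on a toy frame (column names follow the function's conventions):

```python
import pandas as pd

def area_by_value(data, value_col="b_value", area_col="area_square_meters"):
    """Sum areas per rounded naturalness value; append an area-weighted mean."""
    data = data.copy()
    data["naturalness"] = data[value_col].round().astype(int)
    areas = data.groupby("naturalness")[area_col].sum()
    weights = data[area_col] / data[area_col].sum()
    areas.loc["mean"] = (weights * data[value_col]).sum()
    return areas

toy = pd.DataFrame({"b_value": [1.2, 1.4, 3.0, 3.0],
                    "area_square_meters": [10.0, 30.0, 40.0, 20.0]})
result = area_by_value(toy)
# naturalness 1 -> 40.0 m^2, naturalness 3 -> 60.0 m^2, mean -> 2.34
```

The toy data and `area_by_value` helper are illustrative stand-ins, not part of the pipeline above.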
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We load the file into an in-memory DataFrame
Step2: Let's take a look at the first 5 records
Step3: and at the last 10
Step4: What size is our DataFrame?
Step5: In Spanish?
Step6: Data cleaning
Step7: Normalize or clean the column names
Step8: The Salario column seems to have an extra space. This matters because, for convenience, we want to access each column's values by doing something like
Step9: Let's fix that in a way that ensures all column names are "clean". To do so
Step10: Now all the columns have simple, easy-to-use names.
Step11: The salario column has an object data type. This normally refers to free text. For our analysis we need to interpret the values as numbers.
Step12: Let's see how it turned out
Step13: Seven-number summary
Step14: Clean up the job titles
Step15: 271 different job titles? Let's make sure this count is not inflated because the same title is written with extra spaces, as happened with the Salario column (or without accents).
Step16: Surprise! Now there are 269 titles instead of 271.
Step17: And the months too
Step18: Start exploring a bit
Step19: High-rollers
Step20: If we select the columns first and then filter, does it also work?
Step21: Why?
Step22: It turns out the condition returns a series of booleans indicating which records satisfy the condition.
Step23: groupby gives us a collection that maps each group's name to the portion of the data belonging to that group.
Step24: But it also lets us ask questions of the different groups all at once, in effect carrying out the steps
Step25: We can group by as many columns as we want, forming a hierarchy of arbitrary depth.
Step26: Share of employment by gender
Step27: Share of payroll by gender
Step28: Did you look closely?
Step29: Let's improve the look a bit with Seaborn
Step30: Share of payroll assigned to women and men (reloaded)
Step31: Salary histograms
Step32: Histogram by gender (side by side)
Step33: Box plot (box & whiskers)
Step34: An (ab)normal tangent
Step35: Let's start focusing our exploration
Step36: It appears that the average salary of women exceeded that of men by 2601.05 DOP...
Step37: Without controlling for any other factor
Step38: In the first salary quartile the average woman earned 4.30% (783.97 DOP) less than the average man.
Step39: What happens when we look at the highest-earning groups?
Step40: In the last quartile the average woman earned 9% (7294.20 DOP) less than the average man. But do you notice anything strange?
Step41: What is the impact of the highest earner on the difference?
Step42: Now the difference shrank to 7.10% (5,632.26 DOP) less in average salary for women than for men.
Step43: Let's go deeper, from quartiles to deciles
Step44: Graphically it looks nicer
Step45: Is it just me, or are the lines overlapping in every decile except the first and the last?
Step46: Which job titles remained?
Step47: What does overall salary by gender look like for this subpopulation?
Step48: And the salary for each job title by gender?
Step49: Let's compute the difference between the averages for each job title
Step50: In relative terms (%)?
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# DataFrame, affectionately df
df = pd.read_csv('cs_DGII_Nomina_2016.csv',sep=';', encoding="ISO-8859-1") # encoding???
df.head()
df.tail(10)
df.shape
df.info()
df.isnull().any()
df.columns
df.Mes.head()
df.columns = [x.strip().lower() for x in df.columns.values]
df.columns
df.salario.head()
limpiar_salario = lambda x: pd.to_numeric(x.replace(',','').strip(), errors='coerce')
df.salario = df.salario.apply(limpiar_salario)
df.salario.head()
df.salario.describe()
df.puesto.describe()
from unidecode import unidecode
import re
def limpiar_puesto(pstr):
pstr = unidecode(pstr).strip().lower()
pstr = re.sub('\(.*?\)','', pstr)
pstr = pstr.translate({ord('.'): None, ord(','): None})
return pstr
# we can also access columns with this notation: df['columna']
df['puesto_clean'] = df.puesto.apply(limpiar_puesto)
df.puesto_clean.describe()
df.genero.value_counts()
df.genero.describe()
df.mes.value_counts()
df.mes.describe()
df.to_csv('dgii_clean.csv')
ene = df[df.mes == 'ene-16']
feb = df[df.mes == 'feb-16']
print(ene.shape)
print(feb.shape)
ene[ene.salario > 250000][['puesto_clean','salario','genero']]
ene[['puesto_clean','salario','genero']][ene.salario > 250000]
(ene.salario > 250000).head()
por_mes = df.groupby(by='mes')
por_mes
for mes, data_mes in por_mes:
print(mes, data_mes.shape)
por_mes.salario.describe()
df.groupby(by=['mes','genero']).salario.describe()
feb_por_gen = feb.groupby(by='genero')
total_empleados = feb.salario.count()
feb_por_gen.salario.count().apply(lambda x: x/total_empleados)
total_nomina = feb.salario.sum()
feb_por_gen.salario.sum().apply(lambda x: x/total_nomina)
# this is so the plots are generated inline
%matplotlib inline
feb_por_gen.salario.sum().apply(lambda x: x/total_nomina).plot(kind='bar')
import seaborn as sns
sns.countplot(x='genero',data=feb)
feb_por_gen.salario.sum().apply(lambda x: x/total_nomina).plot(kind='bar')
sns.distplot(feb.salario, kde=False)
sns.FacetGrid(data=feb, col='genero', size=8).map(sns.distplot,'salario', kde=False)
sns.factorplot(data=feb, x='genero',y='salario',hue='genero', kind='box',size=6)
import numpy as np
sns.distplot(np.random.normal(feb.salario.mean(),feb.salario.std(),feb.shape[0]))
feb_por_gen.salario.mean()
salario_promedio_f, salario_promedio_m = feb_por_gen.salario.mean()[['F','M']]
salario_promedio_f - salario_promedio_m
salario_promedio_f/salario_promedio_m
(salario_promedio_f/salario_promedio_m - 1)*100
bottom25 = feb[feb.salario <= feb.salario.quantile(.25)]
bottom25_by_gender = bottom25.groupby(by='genero')
bottom25_by_gender.salario.describe()
bottom25_by_gender.salario.mean()['F'] - bottom25_by_gender.salario.mean()['M']
(bottom25_by_gender.salario.mean()['F']/bottom25_by_gender.salario.mean()['M'] - 1) * 100
sns.factorplot(data=bottom25, x='genero',y='salario',hue='genero', kind='box',size=8)
top25 = feb[feb.salario >= feb.salario.quantile(.75)]
top25_by_gender = top25.groupby(by='genero')
top25_by_gender.salario.describe()
top25_by_gender.salario.mean()['F'] - top25_by_gender.salario.mean()['M']
(top25_by_gender.salario.mean()['F']/top25_by_gender.salario.mean()['M'] - 1) * 100
sns.factorplot(data=top25, x='genero',y='salario',hue='genero', kind='box',size=6)
trunc_top25 = feb[(feb.salario >= feb.salario.quantile(.75)) & (feb.salario < feb.salario.max())]
trunc_top25_by_gender = trunc_top25.groupby(by='genero')
trunc_top25_by_gender.salario.describe()
trunc_top25_by_gender.salario.mean()['F'] - trunc_top25_by_gender.salario.mean()['M']
(trunc_top25_by_gender.salario.mean()['F']/trunc_top25_by_gender.salario.mean()['M'] - 1) * 100
sns.factorplot(data=trunc_top25, x='genero',y='salario',hue='genero', kind='box',size=6)
pd.set_option('mode.chained_assignment',None) # arcane incantation we can ignore for now
feb['salario_bin'] = pd.qcut(feb.salario,10,precision=0)
# save this so we can plot it further down
epg_mean = pd.pivot_table(data=feb,index=['salario_bin'],columns=['genero'],values='salario',aggfunc=np.mean)
epg_mean
from matplotlib import pyplot as plt
# Pandas generates the initial plot for us
ax = epg_mean.plot(grid=True,figsize=(10,8))
# But with matplotlib we adjust the axis names, units, and tick marks
ax.set_xlabel('Deciles de Salario')
ax.set_ylabel('Salario Promedio')
ticks = ax.set_xticklabels(ax.get_xticklabels(),rotation=45)
# Finally, add the overall average as a dashed reference line
plt.axhline(feb.salario.mean(),linestyle='dashed')
spg = feb.groupby(by=['puesto_clean']).filter(\
lambda x: \
len(x) > 19 and \
len(x[x.genero == 'F']) > 0 and \
len(x[x.genero == 'M']) > 0)
print(spg.shape[0])
spg.puesto_clean.nunique()
spg.puesto_clean.value_counts()
spg.groupby(by='genero').salario.describe()
sns.factorplot(data=spg, x='genero',y='salario',hue='genero', kind='box',size=6)
spg_mean_pivot = pd.pivot_table(data=spg,values='salario',index=['puesto_clean'],columns=['genero'],aggfunc=np.mean)
spg_mean_pivot.sort_values('F',inplace=True)
spg_mean_pivot.head(10)
spg_mean_pivot.plot(kind='barh',figsize=(10,15))
spg_diff_pivot = spg_mean_pivot.apply(lambda x: x - x['M'], axis = 1)
spg_diff_pivot.sort_values('F',inplace=True)
spg_diff_pivot['F'].plot(kind='barh',figsize=(10,15))
spg_diff_pivot
spg_rel_pivot = spg_mean_pivot.apply(lambda x: (x/x['M'] - 1) * 100, axis=1)
spg_rel_pivot['mas'] = spg_diff_pivot.F.map(lambda x: x > 0)
spg_rel_pivot.sort_values('F',inplace=True)
f = plt.figure(figsize=(12,12))
ax = sns.barplot(data=spg_rel_pivot,y=spg_rel_pivot.index,x='F',hue='mas',orient='h')
ax.set_xlabel('% diferencia promedio')
spg_rel_pivot
print("In {:d} of {:d} job titles the average salary of women is higher than that of men".format(spg_rel_pivot.mas.sum(), spg_rel_pivot.shape[0]))
<END_TASK>
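The relative pay-gap step at the end (a row-wise `(x / x['M'] - 1) * 100` over a pivot of mean salaries) can be sketched with a tiny made-up frame:

```python
import pandas as pd

toy = pd.DataFrame({
    "puesto_clean": ["analista", "analista", "cajero", "cajero"],
    "genero": ["F", "M", "F", "M"],
    "salario": [45000.0, 50000.0, 22000.0, 20000.0],
})
pivot = toy.pivot_table(values="salario", index="puesto_clean",
                        columns="genero", aggfunc="mean")
# % difference of each gender's mean relative to the male mean, per job title
rel = pivot.apply(lambda row: (row / row["M"] - 1) * 100, axis=1)
```

The salaries here are invented for illustration; in `rel`, the `M` column is 0 by construction and the `F` column carries the signed gap.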
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Of the three parts of this app, part 2 should be very familiar by now -- load some taxi dropoff locations, declare a Points object, datashade them, and set some plot options.
Step 1 is new
Step2: Iteratively building a bokeh app in the notebook
Step3: Next we define a Counter stream which we will use to select taxi trips by hour.
Step4: Up to this point, we have a normal HoloViews notebook that we could display using Jupyter's rich display of overlay, as we would with an any notebook. But having come up with the objects we want interactively in this way, we can now display the result as a Bokeh app, without leaving the notebook. To do that, first edit the following cell to change "8888" to whatever port your jupyter session is using, in case your URL bar doesn't say "localhost
Step5: We could stop here, having launched an app, but so far the app will work just the same as in the normal Jupyter notebook, responding to user inputs as they occur. Having defined a Counter stream above, let's go one step further and add a series of periodic events that will let the visualization play on its own even without any user input
Step6: You can stop this ongoing process by clearing the cell displaying the app.
Step7: Combining HoloViews with bokeh models
|
<ASSISTANT_TASK:>
Python Code:
with open('./apps/server_app.py', 'r') as f:
print(f.read())
# Exercise: Modify the app to display the pickup locations and add a tilesource, then run the app with bokeh serve
# Tip: Refer to the previous notebook
import holoviews as hv
import geoviews as gv
import dask.dataframe as dd
from holoviews.operation.datashader import datashade, aggregate, shade
from bokeh.models import WMTSTileSource
hv.extension('bokeh', logo=False)
usecols = ['tpep_pickup_datetime', 'dropoff_x', 'dropoff_y']
ddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime'], usecols=usecols)
ddf['hour'] = ddf.tpep_pickup_datetime.dt.hour
ddf = ddf.persist()
stream = hv.streams.Counter()
points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])
dmap = hv.DynamicMap(lambda counter: points.select(hour=counter%24).relabel('Hour: %s' % (counter % 24)),
streams=[stream])
shaded = datashade(dmap)
hv.opts('RGB [width=800, height=600, xaxis=None, yaxis=None]')
url = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'
wmts = gv.WMTS(WMTSTileSource(url=url))
overlay = wmts * shaded
renderer = hv.renderer('bokeh')
server = renderer.app(overlay, show=True, websocket_origin='localhost:8888')
dmap.periodic(1)
# Exercise: Copy the example above into periodic_app.py and modify it so it can be run with bokeh serve
# Hint: Use hv.renderer and renderer.server_doc
# Note that you have to run periodic **after** creating the bokeh document
# Advanced Exercise: Add a histogram to the bokeh layout next to the datashaded plot
# Hint: Declare the histogram like this: hv.operation.histogram(aggregated, bin_range=(0, 20))
# then use renderer.get_plot and hist_plot.state and add it to the layout
<END_TASK>
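The Counter/periodic pattern driving the app—each event bumps a counter and every subscriber re-renders—can be sketched without HoloViews or Bokeh; `CounterStream` here is a hypothetical stand-in for `hv.streams.Counter`:

```python
class CounterStream:
    """Minimal analog of hv.streams.Counter: holds a counter, notifies subscribers."""
    def __init__(self):
        self.counter = 0
        self._subscribers = []

    def add_subscriber(self, fn):
        self._subscribers.append(fn)

    def event(self):
        self.counter += 1
        for fn in self._subscribers:
            fn(self.counter)

    def periodic(self, count):
        # Drive `count` events; a real stream would schedule these on a timer.
        for _ in range(count):
            self.event()

frames = []
stream = CounterStream()
# Subscriber mimics the DynamicMap callback: select trips for hour = counter % 24
stream.add_subscriber(lambda counter: frames.append("Hour: %s" % (counter % 24)))
stream.periodic(3)
# frames == ['Hour: 1', 'Hour: 2', 'Hour: 3']
```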
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's take a paragraph from the Wikipedia page of Ada Lovelace as an example. We need to put the text in triple quotes since the text itself contains quoting characters.
Step2: First we need to tokenize the text, and then we apply the NER tagger. Let's try both the 3-class and the 7-class versions.
Step3: We see that each word is tagged. Tags are for instance ORGANIZATION or PERSON. Very prominently, the O tag appears often. This is the other class (everything that is not an organisation or person, etc.).
|
<ASSISTANT_TASK:>
Python Code:
from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize
# Adapt those lines to your installation
jar_location = '/Users/sech/stanford-ner-2018-10-16/stanford-ner.jar'
model_location_3classes = '/Users/sech/stanford-ner-2018-10-16/classifiers/english.all.3class.distsim.crf.ser.gz'
model_location_7classes = '/Users/sech/stanford-ner-2018-10-16/classifiers/english.muc.7class.distsim.crf.ser.gz'
st3 = StanfordNERTagger(model_location_3classes,jar_location,encoding='utf-8')
st7 = StanfordNERTagger(model_location_7classes,jar_location,encoding='utf-8')
print(st3)
print(st7)
text = '''Lovelace became close friends with her tutor Mary Somerville, who introduced her to Charles Babbage in 1833. She had a strong respect and affection for Somerville, and they corresponded for many years. Other acquaintances included the scientists Andrew Crosse, Sir David Brewster, Charles Wheatstone, Michael Faraday and the author Charles Dickens. She was presented at Court at the age of seventeen "and became a popular belle of the season" in part because of her "brilliant mind." By 1834 Ada was a regular at Court and started attending various events. She danced often and was able to charm many people, and was described by most people as being dainty, although John Hobhouse, Byron's friend, described her as "a large, coarse-skinned young woman but with something of my friend's features, particularly the mouth". This description followed their meeting on 24 February 1834 in which Ada made it clear to Hobhouse that she did not like him, probably because of the influence of her mother, which led her to dislike all of her father's friends. This first impression was not to last, and they later became friends.'''
print(text)
tokenized_text = word_tokenize(text)
text_ner3 = st3.tag(tokenized_text)
text_ner7 = st7.tag(tokenized_text)
print(text_ner3)
print(text_ner7)
from itertools import groupby
print("**** 3 classes ****")
for tag, chunk in groupby(text_ner3, lambda x:x[1]):
if tag != "O":
print("%-12s"%tag, " ".join(w for w, t in chunk))
print("**** 7 classes ****")
for tag, chunk in groupby(text_ner7, lambda x:x[1]):
if tag != "O":
print("%-12s"%tag, " ".join(w for w, t in chunk))
<END_TASK>
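The final `groupby` step—collapsing consecutive tokens that share a tag into one entity chunk and skipping the `O` class—depends only on the (token, tag) sequence, so it can be checked without the Stanford jar on a hand-made tagging:

```python
from itertools import groupby

def chunk_entities(tagged):
    """Merge consecutive same-tag tokens into (tag, text) chunks, skipping 'O'."""
    chunks = []
    for tag, group in groupby(tagged, lambda x: x[1]):
        if tag != "O":
            chunks.append((tag, " ".join(word for word, _ in group)))
    return chunks

tagged = [("Mary", "PERSON"), ("Somerville", "PERSON"), ("met", "O"),
          ("Charles", "PERSON"), ("Babbage", "PERSON"), ("in", "O"),
          ("1833", "DATE")]
entities = chunk_entities(tagged)
# [('PERSON', 'Mary Somerville'), ('PERSON', 'Charles Babbage'), ('DATE', '1833')]
```

The `tagged` list is a hand-made example, not actual tagger output.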
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <!--Empty Space for separating topics-->
Step2: Bravo! The method <code>torch.tensor()</code> works perfectly. Now, let us try other functions we studied in the <i>Previous Lab</i>.
Step3: Because it is a 2D 3X3 tensor, the outputs are correct.
Step4: The result shows the tensor has successfully been converted to a numpy array and then converted back to a tensor.
Step5: <!--Empty Space for separating topics-->
Step6: Double-click <b>here</b> for the solution.
Step7: As we can see, both methods return the true value (the same value as the picture above). Therefore, both of the methods work.
Step8: We get the result as <code>tensor([11, 12])</code> successfully.
Step9: See that the results and dimensions in 2 and 3 are the same. Both of them contain the 3rd row of <code>tensor_example</code>, but not the last two values in the 3rd column.
Step10: Fortunately, the code <code>tensor_obj[begin_row_number
Step11: Double-click <b>here</b> for the solution.
Step12: As shown in the picture above, the result is <code>[[3, 1], [1, 3]]</code>
Step13: <!--Empty Space for separating topics-->
Step14: This is a simple calculation. The result from the code matches the result shown in the picture.
Step15: <!--Empty Space for separating topics-->
|
<ASSISTANT_TASK:>
Python Code:
# These are the libraries will be used for this lab.
import numpy as np
import matplotlib.pyplot as plt
import torch
import pandas as pd
# Convert 2D List to 2D Tensor
twoD_list = [[11, 12, 13], [21, 22, 23], [31, 32, 33]]
twoD_tensor = torch.tensor(twoD_list)
print("The New 2D Tensor: ", twoD_tensor)
# Try tensor_obj.ndimension(), tensor_obj.shape, tensor_obj.size()
print("The dimension of twoD_tensor: ", twoD_tensor.ndimension())
print("The shape of twoD_tensor: ", twoD_tensor.shape)
print("The shape of twoD_tensor: ", twoD_tensor.size())
# Convert tensor to numpy array; Convert numpy array to tensor
twoD_numpy = twoD_tensor.numpy()
print("Tensor -> Numpy Array:")
print("The numpy array after converting: ", twoD_numpy)
print("Type after converting: ", twoD_numpy.dtype)
print("================================================")
new_twoD_tensor = torch.from_numpy(twoD_numpy)
print("Numpy Array -> Tensor:")
print("The tensor after converting:", new_twoD_tensor)
print("Type after converting: ", new_twoD_tensor.dtype)
# Try to convert the Panda Dataframe to tensor
df = pd.DataFrame({'a':[11,21,31],'b':[12,22,312]})
print("Pandas Dataframe to numpy: ", df.values)
print("Type BEFORE converting: ", df.values.dtype)
print("================================================")
new_tensor = torch.from_numpy(df.values)
print("Tensor AFTER converting: ", new_tensor)
print("Type AFTER converting: ", new_tensor.dtype)
# Practice: try to convert Pandas Series to tensor
df = pd.DataFrame({'A':[11, 33, 22],'B':[3, 3, 2]})
# Use tensor_obj[row, column] and tensor_obj[row][column] to access certain position
tensor_example = torch.tensor([[11, 12, 13], [21, 22, 23], [31, 32, 33]])
print("What is the value on 2nd-row 3rd-column? ", tensor_example[1, 2])
print("What is the value on 2nd-row 3rd-column? ", tensor_example[1][2])
# Use tensor_obj[begin_row_number: end_row_number, begin_column_number: end_column number]
# and tensor_obj[row][begin_column_number: end_column number] to do the slicing
tensor_example = torch.tensor([[11, 12, 13], [21, 22, 23], [31, 32, 33]])
print("What is the value on 1st-row first two columns? ", tensor_example[0, 0:2])
print("What is the value on 1st-row first two columns? ", tensor_example[0][0:2])
# Give an idea on tensor_obj[number: number][number]
tensor_example = torch.tensor([[11, 12, 13], [21, 22, 23], [31, 32, 33]])
sliced_tensor_example = tensor_example[1:3]
print("1. Slicing step on tensor_example: ")
print("Result after tensor_example[1:3]: ", sliced_tensor_example)
print("Dimension after tensor_example[1:3]: ", sliced_tensor_example.ndimension())
print("================================================")
print("2. Pick an index on sliced_tensor_example: ")
print("Result after sliced_tensor_example[1]: ", sliced_tensor_example[1])
print("Dimension after sliced_tensor_example[1]: ", sliced_tensor_example[1].ndimension())
print("================================================")
print("3. Combine these step together:")
print("Result: ", tensor_example[1:3][1])
print("Dimension: ", tensor_example[1:3][1].ndimension())
# Use tensor_obj[begin_row_number: end_row_number, begin_column_number: end_column number]
tensor_example = torch.tensor([[11, 12, 13], [21, 22, 23], [31, 32, 33]])
print("What is the value on 3rd-column last two rows? ", tensor_example[1:3, 2])
# Practice: Use slice and index to change the values on the matrix tensor_ques.
tensor_ques = torch.tensor([[11, 12, 13], [21, 22, 23], [31, 32, 33]])
# Calculate [[1, 0], [0, 1]] + [[2, 1], [1, 2]]
X = torch.tensor([[1, 0],[0, 1]])
Y = torch.tensor([[2, 1],[1, 2]])
X_plus_Y = X + Y
print("The result of X + Y: ", X_plus_Y)
# Calculate 2 * [[2, 1], [1, 2]]
Y = torch.tensor([[2, 1], [1, 2]])
two_Y = 2 * Y
print("The result of 2Y: ", two_Y)
# Calculate [[1, 0], [0, 1]] * [[2, 1], [1, 2]]
X = torch.tensor([[1, 0], [0, 1]])
Y = torch.tensor([[2, 1], [1, 2]])
X_times_Y = X * Y
print("The result of X * Y: ", X_times_Y)
# Calculate [[0, 1, 1], [1, 0, 1]] * [[1, 1], [1, 1], [-1, 1]]
A = torch.tensor([[0, 1, 1], [1, 0, 1]])
B = torch.tensor([[1, 1], [1, 1], [-1, 1]])
A_times_B = torch.mm(A,B)
print("The result of A * B: ", A_times_B)
# Practice: Calculate the product of two tensors (X and Y) with different sizes
# Type your code here
<END_TASK>
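The slicing pitfall highlighted in step 3 above (`tensor_example[1:3][1]` returns a whole row, not a column) mirrors plain Python list semantics, so it can be illustrated without torch:

```python
X = [[11, 12, 13], [21, 22, 23], [31, 32, 33]]

# Chained indexing: X[1:3] first yields rows 2-3, then [1] picks the SECOND
# of those rows -- the full 3rd row, not a column.
chained = X[1:3][1]            # [31, 32, 33]

# What tensor-style X[1:3, 2] means: element 2 of each of rows 2-3.
column_slice = [row[2] for row in X[1:3]]   # [23, 33]
```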
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.1. Initialise General Settings
Step2: Common variables
Step3: <br/><br/>
Step4: <br/><br/>
Step5: 2.3. Load Features Names
Step6: 2.4. Load the fitted model
Step7: Load the model
Step8: <font style="font-weight
Step9: <br/><br/>
Step10: <br/><br/>
Step11: Replace NaN appears in the Charlson-Index feature
Step12: Combine (join by PatientID)
Step13: <font style="font-weight
Step14: <br/><br/>
Step15: <font style="font-weight
Step16: <br/><br/>
Step17: 3.3. Fit
Step18: Fit Performance
Step19: 3.4. Predict
Step20: 3.5. Cross-Validate
Step21: 3.6. Save
Step22: <br/><br/>
Step23: <br/><br/>
Step24: 5.2. Population Statistics
Step25: 5.2.1. Most Prevalent Diagnoses Groups
Step26: 5.2.2. Major Comorbidity Groups
Step27: 5.2.3. Charlson Comorbidity Groups
Step28: 5.2.4. Most Prevalent Operatons
Step29: 5.2.4. Most Prevalent Main Speciality
Step30: 5.2.5. Other Variables
Step31: <br/><br/>
Step32: 5.3.1. ROC
Step33: 5.3.2. Precision Recall
Step34: 5.3.3. Learning Curve
Step35: 5.3.4. Validation Curve
|
<ASSISTANT_TASK:>
Python Code:
# reload modules
# Reload all modules (except those excluded by %aimport) every time before executing the Python code typed.
%load_ext autoreload
%autoreload 2
# import libraries
import logging
import os
import sys
import gc
import pandas as pd
import numpy as np
import random
import statistics
from datetime import datetime
from collections import OrderedDict
from sklearn import preprocessing
from scipy.stats import stats
from IPython.display import display, HTML
from pprint import pprint
from pivottablejs import pivot_ui
from IPython.display import clear_output
import imblearn.over_sampling as oversampling
import matplotlib.pyplot as plt
# import local classes
from Configs.CONSTANTS import CONSTANTS
from Configs.Logger import Logger
from Features.Variables import Variables
from ReadersWriters.ReadersWriters import ReadersWriters
from Stats.PreProcess import PreProcess
from Stats.FeatureSelection import FeatureSelection
from Stats.TrainingMethod import TrainingMethod
from Stats.Plots import Plots
from Stats.Stats import Stats
# Check the interpreter
print("\nMake sure the correct Python interpreter is used!")
print(sys.version)
print("\nMake sure sys.path of the Python interpreter is correct!")
print(os.getcwd())
# init paths & directories
config_path = os.path.abspath("ConfigInputs/CONFIGURATIONS.ini")
io_path = os.path.abspath("../../tmp/TCARER/Basic_prototype")
schema = "parr_sample_prototype"
app_name = "T-CARER"
print("Output path:", io_path)
# init logs
if not os.path.exists(io_path):
os.makedirs(io_path, exist_ok=True)
logger = Logger(path=io_path, app_name=app_name, ext="log")
logger = logging.getLogger(app_name)
# init constants
CONSTANTS.set(io_path, app_name)
# initialise other classes
readers_writers = ReadersWriters()
plots = Plots()
# other Constant variables
submodel_name = "hesIp"
submodel_input_name = "tcarer_model_features_ip"
# set print settings
pd.set_option('display.width', 1600, 'display.max_colwidth', 800)
# settings
feature_table = 'tcarer_features'
featureExtra_table = 'tcarer_featuresExtra'
result = readers_writers.load_mysql_procedure("tcarer_set_featuresExtra", [feature_table, featureExtra_table], schema)
# select the target variable
target_feature = "label365" # "label365", "label30"
method_name = "rfc" # "rfc", "gbrt", "randLogit", "wdnn"
rank_models = ["rfc"] # ["rfc", "gbrt", "randLogit"]
file_name = "Step_07_Features"
features = readers_writers.load_serialised_compressed(path=CONSTANTS.io_path, title=file_name)
# print
print("File size: ", os.stat(os.path.join(CONSTANTS.io_path, file_name + ".bz2")).st_size)
print("Number of columns: ", len(features["train_indep"].columns))
print("features: {train: ", len(features["train_indep"]), ", test: ", len(features["test_indep"]), "}")
file_name = "Step_07_Top_Features_rfc_adhoc"
features_names_selected = readers_writers.load_csv(path=CONSTANTS.io_path, title=file_name, dataframing=False)[0]
features_names_selected = [f.replace("\n", "") for f in features_names_selected]
display(pd.DataFrame(features_names_selected))
training_method = TrainingMethod(method_name)
# file name
file_name = "Step_09_Model_" + method_name + "_" + target_feature
training_method.load(path=CONSTANTS.io_path, title=file_name)
class TrainingMethodTensorflow:
    def __init__(self, summaries, features_names, num_features, cut_off, train_size, test_size):
        self.model_predict = {"train": {'score': [], 'model_labels': []},
                              "test": {'score': [], 'model_labels': []}}
        self.__stats = Stats()
        # labels and fitted weights are needed by features_importance_rank() further below
        self.model_labels = features_names
        self.__weights = summaries["fit"]["dnn_weights_"]
        # other entries available in the serialised summaries:
        # summaries["fit"]["get_variable_names()"], summaries["fit"]["params"],
        # summaries["fit"]["dnn_bias_"],
        # summaries["train"]["results"], summaries["test"]["results"]
        self.model_predict["train"]['pred'] = np.asarray(
            [1 if i[1] >= cut_off else 0 for i in summaries["train"]["predict_proba"]][0:train_size])
        self.model_predict["test"]['pred'] = np.asarray(
            [1 if i[1] >= cut_off else 0 for i in summaries["test"]["predict_proba"]][0:test_size])
        self.model_predict["train"]['score'] = np.asarray(
            [i[1] for i in summaries["train"]["predict_proba"]][0:train_size])
        self.model_predict["test"]['score'] = np.asarray(
            [i[1] for i in summaries["test"]["predict_proba"]][0:test_size])
        self.model_predict["train"]['score_0'] = np.asarray(
            [i[0] for i in summaries["train"]["predict_proba"]][0:train_size])
        self.model_predict["test"]['score_0'] = np.asarray(
            [i[0] for i in summaries["test"]["predict_proba"]][0:test_size])

    def train_summaries(self):
        return {"feature_importances_": self.__weights}

    def predict_summaries(self, feature_target, sample_name):
        return self.__stats.predict_summaries(self.model_predict[sample_name], feature_target)
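The per-row list comprehensions above can be written as vectorised NumPy operations; a minimal sketch with toy probabilities (the `(n, 2)` shape of the stored `predict_proba` output is assumed):

```python
import numpy as np

# toy class probabilities, shape (n, 2): column 1 is the positive class
proba = np.array([[0.8, 0.2], [0.3, 0.7], [0.45, 0.55]])
cut_off = 0.5

# equivalent to [1 if i[1] >= cut_off else 0 for i in proba]
score = proba[:, 1]
pred = (score >= cut_off).astype(int)

print(pred.tolist())   # [0, 1, 1]
```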
file_name = "model_tensorflow_summaries_" + target_feature
summaries = readers_writers.load_serialised_compressed(path=CONSTANTS.io_path, title=file_name)
num_features = 300
cut_off = 0.5
training_method = TrainingMethodTensorflow(summaries, features_names_selected, num_features, cut_off,
len(features["train_indep"].index), len(features["test_indep"].index))
# train
o_summaries = training_method.predict_summaries(features["train_target"][target_feature], "train")
for k in o_summaries.keys():
print(k, o_summaries[k])
print("\n")
# test
o_summaries = training_method.predict_summaries(features["test_target"][target_feature], "test")
for k in o_summaries.keys():
print(k, o_summaries[k])
table = 'tcarer_featuresExtra'
features_extra_dtypes = {'patientID': 'U32', 'trigger_charlsonFoster': 'i4', 'trigger_los': 'i4', 'trigger_age': 'i4', 'prior_admiOther': 'i4', 'prior_admiAcute': 'i4',
'prior_spells': 'i4', 'prior_asthma': 'i4', 'prior_copd': 'i4', 'prior_depression': 'i4', 'prior_diabetes': 'i4', 'prior_hypertension': 'i4', 'prior_cancer': 'i4', 'prior_chd': 'i4', 'prior_chf': 'i4',
'diagCci_01_myocardial_freq': 'i4', 'diagCci_02_chf_freq': 'i4', 'diagCci_03_pvd_freq': 'i4', 'diagCci_04_cerebrovascular_freq': 'i4', 'diagCci_05_dementia_freq': 'i4', 'diagCci_06_cpd_freq': 'i4', 'diagCci_07_rheumatic_freq': 'i4', 'diagCci_08_ulcer_freq': 'i4', 'diagCci_09_liverMild_freq': 'i4', 'diagCci_10_diabetesNotChronic_freq': 'i4', 'diagCci_11_diabetesChronic_freq': 'i4', 'diagCci_12_hemiplegia_freq': 'i4', 'diagCci_13_renal_freq': 'i4', 'diagCci_14_malignancy_freq': 'i4', 'diagCci_15_liverSevere_freq': 'i4', 'diagCci_16_tumorSec_freq': 'i4', 'diagCci_17_aids_freq': 'i4', 'diagCci_18_depression_freq': 'i4', 'diagCci_19_cardiac_freq': 'i4', 'diagCci_20_valvular_freq': 'i4', 'diagCci_21_pulmonary_freq': 'i4', 'diagCci_22_vascular_freq': 'i4', 'diagCci_23_hypertensionNotComplicated_freq': 'i4', 'diagCci_24_hypertensionComplicated_freq': 'i4', 'diagCci_25_paralysis_freq': 'i4', 'diagCci_26_neuroOther_freq': 'i4', 'diagCci_27_pulmonaryChronic_freq': 'i4', 'diagCci_28_diabetesNotComplicated_freq': 'i4', 'diagCci_29_diabetesComplicated_freq': 'i4', 'diagCci_30_hypothyroidism_freq': 'i4', 'diagCci_31_renal_freq': 'i4', 'diagCci_32_liver_freq': 'i4', 'diagCci_33_ulcerNotBleeding_freq': 'i4', 'diagCci_34_psychoses_freq': 'i4', 'diagCci_35_lymphoma_freq': 'i4', 'diagCci_36_cancerSec_freq': 'i4', 'diagCci_37_tumorNotSec_freq': 'i4', 'diagCci_38_rheumatoid_freq': 'i4', 'diagCci_39_coagulopathy_freq': 'i4', 'diagCci_40_obesity_freq': 'i4', 'diagCci_41_weightLoss_freq': 'i4', 'diagCci_42_fluidDisorder_freq': 'i4', 'diagCci_43_bloodLoss_freq': 'i4', 'diagCci_44_anemia_freq': 'i4', 'diagCci_45_alcohol_freq': 'i4', 'diagCci_46_drug_freq': 'i4'}
features_extra_name = features_extra_dtypes.keys()
# Read features from the MySQL
features_extra = dict()
features_extra['train'] = readers_writers.load_mysql_table(schema, table, dataframing=True)
# astype returns a new DataFrame, so the result must be assigned back
features_extra['train'] = features_extra['train'].astype(dtype=features_extra_dtypes)
# copy, so the test frame does not alias the train frame
features_extra['test'] = features_extra['train'].copy()
print("Number of columns: ", len(features_extra['train'].columns), "; Total records: ", len(features_extra['train'].index))
features_extra['train'].loc[:, "trigger_charlsonFoster"] = np.nan_to_num(features_extra['train']["trigger_charlsonFoster"])
features_extra['test'].loc[:, "trigger_charlsonFoster"] = np.nan_to_num(features_extra['test']["trigger_charlsonFoster"])
features_extra['train'] = features_extra['train'].merge(
pd.concat([features['train_id'], features['train_target'],
pd.DataFrame({'score': training_method.model_predict["train"]['score']}), features['train_indep']], axis=1),
how="inner", on="patientID")
features_extra['test'] = features_extra['test'].merge(
pd.concat([features['test_id'], features['test_target'],
pd.DataFrame({'score': training_method.model_predict["test"]['score']}), features['test_indep']], axis=1),
how="inner", on="patientID")
features = None
gc.collect()
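A toy illustration of the inner merge on `patientID` used above (hypothetical patient IDs and scores): only patients present in both frames survive.

```python
import pandas as pd

extra = pd.DataFrame({"patientID": ["a", "b", "c"], "trigger_age": [70, 65, 80]})
scored = pd.DataFrame({"patientID": ["b", "c", "d"], "score": [0.4, 0.9, 0.2]})

# how="inner" keeps only the intersection of the two ID sets
merged = extra.merge(scored, how="inner", on="patientID")
print(merged["patientID"].tolist())  # ['b', 'c']
```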
# option A: random-forest baseline (superseded by the logistic-regression settings below)
charlson_method_name = "rfc"
kwargs = {"n_estimators": 20, "criterion": 'gini', "max_depth": None, "min_samples_split": 100,
"min_samples_leaf": 50, "min_weight_fraction_leaf": 0.0, "max_features": 'auto',
"max_leaf_nodes": None, "bootstrap": True, "oob_score": False, "n_jobs": -1, "random_state": None,
"verbose": 0, "warm_start": False, "class_weight": "balanced_subsample"}
# option B: logistic-regression baseline (the settings actually used)
charlson_method_name = "lr"
kwargs = {"penalty": 'l2', "dual": False, "tol": 0.0001, "C": 1, "fit_intercept": True, "intercept_scaling": 1,
"class_weight": None, "random_state": None, "solver": 'liblinear', "max_iter": 100, "multi_class": 'ovr',
"verbose": 0, "warm_start": False, "n_jobs": -1}
# set features
charlson_features_names = ['trigger_charlsonFoster']
# select the target variable
charlson_target_feature = "label30" # "label30", "label365"
# file name
file_name = "report_Model_Charlson_" + charlson_method_name + "_" + charlson_target_feature
# initialise
charlson_training_method = TrainingMethod(charlson_method_name)
o_summaries = dict()
# Fit
model = charlson_training_method.train(features_extra["train"][charlson_features_names], features_extra["train"][charlson_target_feature], **kwargs)
charlson_training_method.save_model(path=CONSTANTS.io_path, title=file_name)
# load model
# charlson_training_method.load(path=CONSTANTS.io_path, title=file_name)
# short summary
o_summaries = charlson_training_method.train_summaries()
o_summaries = dict()
model = charlson_training_method.predict(features_extra["train"][charlson_features_names], "train")
# short summary
o_summaries = charlson_training_method.predict_summaries(pd.Series(features_extra["train"][charlson_target_feature]), "train")
print("ROC AUC:", o_summaries['roc_auc_score_1'], "\n", o_summaries['classification_report'])
for k in o_summaries.keys():
print(k, o_summaries[k])
o_summaries = dict()
model = charlson_training_method.predict(features_extra["test"][charlson_features_names], "test")
# short summary
o_summaries = charlson_training_method.predict_summaries(pd.Series(features_extra["test"][charlson_target_feature]), "test")
print("ROC AUC:", o_summaries['roc_auc_score_1'], "\n", o_summaries['classification_report'])
for k in o_summaries.keys():
print(k, o_summaries[k])
o_summaries = dict()
score = charlson_training_method.cross_validate(features_extra["test"][charlson_features_names], features_extra["test"][charlson_target_feature],
                                                scoring="neg_mean_squared_error", cv=10)
# short summary
o_summaries = charlson_training_method.cross_validate_summaries()
print("Scores: ", o_summaries)
charlson_training_method.save_model(path=CONSTANTS.io_path, title=file_name)
def features_importance_rank(fitting_method, ranking_file_name=None, rank_models=("rfc", "gbrt", "randLogit")):
    # Fitting weights
    o_summaries = pd.DataFrame({"Name": fitting_method.model_labels,
                                "Fitting Weight": fitting_method.train_summaries()["feature_importances_"]},
                               index=fitting_method.model_labels)
o_summaries = o_summaries.sort_values("Fitting Weight", ascending=False)
o_summaries = o_summaries.reset_index(drop=True)
# Ranking scores
if ranking_file_name is not None:
for rank_model in rank_models:
o_summaries_ranks = readers_writers.load_serialised_compressed(
path=CONSTANTS.io_path, title=ranking_file_name + rank_model)
for trial in range(len(o_summaries_ranks)):
o_summaries_rank = pd.DataFrame(o_summaries_ranks[trial])
o_summaries_rank.columns = ["Name", "Importance - " + rank_model + " - Trial_" + str(trial),
"Order - " + rank_model + " - Trial_" + str(trial)]
o_summaries = o_summaries.merge(o_summaries_rank, how="outer", on="Name")
return o_summaries
file_name = "Step_07_Model_Train_model_rank_summaries_"
o_summaries = features_importance_rank(fitting_method=training_method, ranking_file_name=file_name, rank_models=rank_models)
file_name = "report_weights_ranks"
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name, data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
display(o_summaries.head())
measures = ["accuracy_score", "precision_score", "recall_score",
"roc_auc_score_1", "f1_score", "fbeta_score", "average_precision_score",
"log_loss", "zero_one_loss", "hamming_loss", "jaccard_similarity_score", "matthews_corrcoef"]
# train
o_summaries = training_method.predict_summaries(features_extra["train"][target_feature], "train")
o_summaries = np.array([(m, o_summaries[m]) for m in measures])
report_performance = pd.DataFrame({"Measure": o_summaries[:, 0],
"Sample Train": o_summaries[:, 1],
"Sample Test": [None] * len(measures)})
# test
o_summaries = training_method.predict_summaries(features_extra["test"][target_feature], "test")
o_summaries = np.array([(m, o_summaries[m]) for m in measures])
report_performance["Sample Test"] = o_summaries[:, 1]
# print
file_name = "report_performance_" + method_name + "_" + target_feature
display(report_performance)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name, data=report_performance, append=False)
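One subtlety in the pattern above: building an `np.array` from mixed `(name, value)` pairs coerces the numeric measure values to strings, so the report columns hold text. A minimal illustration with toy values:

```python
import numpy as np

# mixed str/float pairs are promoted to a common dtype: unicode strings
pairs = np.array([("accuracy_score", 0.81), ("recall_score", 0.62)])
print(pairs.dtype.kind)  # 'U'
print(pairs[:, 1])       # ['0.81' '0.62']
```

If numeric columns are needed downstream, building the DataFrame directly from a list of tuples (or casting with `pd.to_numeric`) avoids the coercion.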
def population_statistics(df, diagnoses, cutpoints=[0.50, 0.60, 0.70, 0.80, 0.90]):
o_summaries = pd.DataFrame(columns=['Name'], index=diagnoses)
o_summaries['Name'] = diagnoses
for diagnose in diagnoses:
o_summaries.loc[diagnose, 'Total'] = len(df.index)
if diagnose not in df:
continue
o_summaries.loc[diagnose, 'Total - diagnose'] = len(df.loc[(df[diagnose] > 0)].index)
o_summaries.loc[diagnose, 'Total - diagnose - label_1'] = len(df.loc[(df[diagnose] > 0) & (df[target_feature] > 0)].index)
o_summaries.loc[diagnose, 'Emergency Readmission Rate - cnt 1'] = len(df.loc[(df[diagnose] > 0) & (df['admimeth_0t30d_prevalence_1_cnt'] > 0) & (df[target_feature] > 0)].index)
o_summaries.loc[diagnose, 'Emergency Readmission Rate - cnt 2'] = len(df.loc[(df[diagnose] > 0) & (df['admimeth_0t30d_prevalence_2_cnt'] > 0) & (df[target_feature] > 0)].index)
o_summaries.loc[diagnose, 'Emergency Readmission Rate - cnt 3'] = len(df.loc[(df[diagnose] > 0) & (df['admimeth_0t30d_prevalence_3_cnt'] > 0) & (df[target_feature] > 0)].index)
o_summaries.loc[diagnose, 'Prior Spells'] = len(df.loc[(df[diagnose] > 0) & (df['prior_spells'] > 0) & (df[target_feature] > 0)].index)
o_summaries.loc[diagnose, 'Male - perc'] = len(df.loc[(df[diagnose] > 0) & (df['gender_1'] > 0) & (df[target_feature] > 0)].index)
age = df.loc[(df[diagnose] > 0) & (df[target_feature] > 0)]['trigger_age'].describe(percentiles=[.25, .5, .75])
o_summaries.loc[diagnose, 'Age - IQR_min'] = age['min']
o_summaries.loc[diagnose, 'Age - IQR_25'] = age['25%']
o_summaries.loc[diagnose, 'Age - IQR_50'] = age['50%']
o_summaries.loc[diagnose, 'Age - IQR_75'] = age['75%']
o_summaries.loc[diagnose, 'Age - IQR_max'] = age['max']
los = df.loc[(df[diagnose] > 0) & (df[target_feature] > 0)]['trigger_los'].describe(percentiles=[.25, .5, .75])
o_summaries.loc[diagnose, 'LoS - IQR_min'] = los['min']
o_summaries.loc[diagnose, 'LoS - IQR_25'] = los['25%']
o_summaries.loc[diagnose, 'LoS - IQR_50'] = los['50%']
o_summaries.loc[diagnose, 'LoS - IQR_75'] = los['75%']
o_summaries.loc[diagnose, 'LoS - IQR_max'] = los['max']
for cutpoint in cutpoints:
o_summaries.loc[diagnose, 'score - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & (df['score'] > cutpoint)].index)
o_summaries.loc[diagnose, 'TP - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & (df[target_feature] > 0) & (df['score'] > cutpoint)].index)
o_summaries.loc[diagnose, 'FP - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & (df[target_feature] == 0) & (df['score'] > cutpoint)].index)
o_summaries.loc[diagnose, 'FN - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & (df[target_feature] > 0) & (df['score'] <= cutpoint)].index)
o_summaries.loc[diagnose, 'TN - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & (df[target_feature] == 0) & (df['score'] <= cutpoint)].index)
o_summaries.loc[diagnose, 'Charlson - 0'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 0)].index)
o_summaries.loc[diagnose, 'Charlson - 0 - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 0) & (df[target_feature] > 0)].index)
o_summaries.loc[diagnose, 'Charlson - 1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 1)].index)
o_summaries.loc[diagnose, 'Charlson - 1 - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 1) & (df[target_feature] > 0)].index)
o_summaries.loc[diagnose, 'Charlson - 2'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 2)].index)
o_summaries.loc[diagnose, 'Charlson - 2 - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 2) & (df[target_feature] > 0)].index)
o_summaries.loc[diagnose, 'Charlson - 3'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 3)].index)
o_summaries.loc[diagnose, 'Charlson - 3 - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 3) & (df[target_feature] > 0)].index)
o_summaries.loc[diagnose, 'Charlson - 4+'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] >= 4)].index)
o_summaries.loc[diagnose, 'Charlson - 4+ - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] >= 4) & (df[target_feature] > 0)].index)
for cutpoint in cutpoints:
o_summaries.loc[diagnose, 'Charlson - 0 - label_1 - TP - ' + str(cutpoint)] = \
len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 0) & (df[target_feature] > 0) & (df['score'] > cutpoint)].index)
return o_summaries
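The per-cut-point confusion counts inside the function can be checked on a toy cohort (hypothetical values):

```python
import pandas as pd

# hypothetical mini-cohort: a diagnosis flag, the outcome label and the model score
df = pd.DataFrame({"diag":     [1, 1, 1, 0],
                   "label365": [1, 0, 1, 1],
                   "score":    [0.9, 0.8, 0.3, 0.95]})
cutpoint = 0.5

# same boolean-mask counting as in population_statistics()
tp = len(df.loc[(df["diag"] > 0) & (df["label365"] > 0) & (df["score"] > cutpoint)].index)
fp = len(df.loc[(df["diag"] > 0) & (df["label365"] == 0) & (df["score"] > cutpoint)].index)
fn = len(df.loc[(df["diag"] > 0) & (df["label365"] > 0) & (df["score"] <= cutpoint)].index)
print(tp, fp, fn)  # 1 1 1
```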
periods = ['0t30d', '30t90d', '90t180d', '180t365d', '365t730d']
diagnoses = [name
             for p in periods
             for name in ['diagCCS_%s_others_cnt' % p]
             + ['diagCCS_%s_prevalence_%d_cnt' % (p, i) for i in range(1, 31)]]
file_name = "report_population_prevalent_diagnoses_" + method_name + "_" + target_feature + "_"
o_summaries = population_statistics(features_extra['train'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
o_summaries = population_statistics(features_extra['test'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
diagnoses = ['prior_admiOther', 'prior_admiAcute', 'prior_spells', 'prior_asthma', 'prior_copd', 'prior_depression', 'prior_diabetes', 'prior_hypertension', 'prior_cancer', 'prior_chd', 'prior_chf']
file_name = "report_population_comorbidity_diagnoses_" + method_name + "_" + target_feature + "_"
o_summaries = population_statistics(features_extra['train'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
o_summaries = population_statistics(features_extra['test'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
diagnoses = ['diagCci_01_myocardial_freq', 'diagCci_02_chf_freq', 'diagCci_03_pvd_freq', 'diagCci_04_cerebrovascular_freq', 'diagCci_05_dementia_freq', 'diagCci_06_cpd_freq', 'diagCci_07_rheumatic_freq', 'diagCci_08_ulcer_freq', 'diagCci_09_liverMild_freq', 'diagCci_10_diabetesNotChronic_freq', 'diagCci_11_diabetesChronic_freq', 'diagCci_12_hemiplegia_freq', 'diagCci_13_renal_freq', 'diagCci_14_malignancy_freq', 'diagCci_15_liverSevere_freq', 'diagCci_16_tumorSec_freq', 'diagCci_17_aids_freq', 'diagCci_18_depression_freq', 'diagCci_19_cardiac_freq', 'diagCci_20_valvular_freq', 'diagCci_21_pulmonary_freq', 'diagCci_22_vascular_freq', 'diagCci_23_hypertensionNotComplicated_freq', 'diagCci_24_hypertensionComplicated_freq', 'diagCci_25_paralysis_freq', 'diagCci_26_neuroOther_freq', 'diagCci_27_pulmonaryChronic_freq', 'diagCci_28_diabetesNotComplicated_freq', 'diagCci_29_diabetesComplicated_freq', 'diagCci_30_hypothyroidism_freq', 'diagCci_31_renal_freq', 'diagCci_32_liver_freq', 'diagCci_33_ulcerNotBleeding_freq', 'diagCci_34_psychoses_freq', 'diagCci_35_lymphoma_freq', 'diagCci_36_cancerSec_freq', 'diagCci_37_tumorNotSec_freq', 'diagCci_38_rheumatoid_freq', 'diagCci_39_coagulopathy_freq', 'diagCci_40_obesity_freq', 'diagCci_41_weightLoss_freq', 'diagCci_42_fluidDisorder_freq', 'diagCci_43_bloodLoss_freq', 'diagCci_44_anemia_freq', 'diagCci_45_alcohol_freq', 'diagCci_46_drug_freq']
file_name = "report_population_charlson_diagnoses_" + method_name + "_" + target_feature + "_"
o_summaries = population_statistics(features_extra['train'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
o_summaries = population_statistics(features_extra['test'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
periods = ['0t30d', '30t90d', '90t180d', '180t365d', '365t730d']
diagnoses = [name
             for p in periods
             for name in ['operOPCSL1_%s_others_cnt' % p]
             + ['operOPCSL1_%s_prevalence_%d_cnt' % (p, i) for i in range(1, 31)]]
file_name = "report_population_operations_" + method_name + "_" + target_feature + "_"
o_summaries = population_statistics(features_extra['train'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
o_summaries = population_statistics(features_extra['test'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
periods = ['0t30d', '30t90d', '90t180d', '180t365d', '365t730d']
diagnoses = [name
             for p in periods
             for name in ['mainspef_%s_others_cnt' % p]
             + ['mainspef_%s_prevalence_%d_cnt' % (p, i) for i in range(1, 11)]]
file_name = "report_population_operations_" + method_name + "_" + target_feature + "_"
o_summaries = population_statistics(features_extra['train'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
o_summaries = population_statistics(features_extra['test'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
diagnoses = ['gapDays_0t30d_avg', 'gapDays_30t90d_avg', 'gapDays_90t180d_avg', 'gapDays_180t365d_avg', 'gapDays_365t730d_avg',
'epidur_0t30d_avg', 'epidur_30t90d_avg', 'epidur_90t180d_avg', 'epidur_180t365d_avg', 'epidur_365t730d_avg',
'preopdur_0t30d_avg', 'preopdur_30t90d_avg', 'preopdur_90t180d_avg', 'preopdur_180t365d_avg', 'preopdur_365t730d_avg',
'posopdur_0t30d_avg', 'posopdur_30t90d_avg', 'posopdur_90t180d_avg', 'posopdur_180t365d_avg', 'posopdur_365t730d_avg']
file_name = "report_population_other_variables_" + method_name + "_" + target_feature + "_"
o_summaries = population_statistics(features_extra['train'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
o_summaries = population_statistics(features_extra['test'], diagnoses)
readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns)
file_name = "report_population_" + method_name + "_" + target_feature + "_"
fig, summaries = plots.roc(training_method.model_predict["test"], features_extra["test"][target_feature],
title="ROC Curve", lw=2)
display(fig)
# save
plt.savefig(os.path.join(CONSTANTS.io_path, file_name + "_roc" + ".pdf"),
dpi=300, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, format="pdf",
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=None)
fig, summaries = plots.precision_recall(training_method.model_predict["test"],
features_extra["test"][target_feature],
title="Precision-Recall Curve", lw=2)
display(fig)
# save
plt.savefig(os.path.join(CONSTANTS.io_path, file_name + "_precision_recall" + ".pdf"),
dpi=300, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, format="pdf",
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=None)
fig, summaries = plots.learning_curve(training_method.model_train,
features_extra["test"][features_names_selected],
features_extra["test"][target_feature],
title="Learning Curve", ylim=None, cv=None, n_jobs=-1, train_sizes=np.linspace(.1, 1.0, 5))
display(fig)
# save
plt.savefig(os.path.join(CONSTANTS.io_path, file_name + "_learning_curve" + ".pdf"),
dpi=300, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, format="pdf",
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=None)
# method metadata
if method_name == "lr":
param_name = "clf__C"
param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
elif method_name == "rfc":
param_name = "max_features"
param_range = range(1, 4, 1) # range(1, 20, 1)
elif method_name == "nn":
param_name = "alpha"
param_range = np.arange(1e4, 1e6, 9e4)  # range() rejects floats; np.arange handles float steps
fig, summaries = plots.validation_curve(training_method.model_train,
features_extra["test"][features_names_selected],
features_extra["test"][target_feature],
param_name, param_range,
title="Learning Curve", ylim=None, cv=None, lw=2, n_jobs=-1)
display(fig)
# save
plt.savefig(os.path.join(CONSTANTS.io_path, file_name + "_validation_curve" + ".pdf"),
dpi=300, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, format="pdf",
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=None)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Order statistics
Step2: Q2. Return the maximum value of x along the second axis. Reduce the second axis to the dimension with size one.
Step3: Q3. Calcuate the difference between the maximum and the minimum of x along the second axis.
Step4: Q4. Compute the 75th percentile of x along the second axis.
Step5: Averages and variances
Step6: Q6. Compute the weighted average of x.
Step7: Q7. Compute the mean, standard deviation, and variance of x along the second axis.
Step8: Correlating
Step9: Q9. In the above covariance matrix, what does the -1 mean?
Step10: Q11. Compute cross-correlation of x and y.
Step11: Histograms
Step12: Q13. Compute the 2d histogram of x and y.
Step13: Q14. Count number of occurrences of 0 through 7 in x.
Step14: Q15. Return the indices of the bins to which each value in x belongs.
|
<ASSISTANT_TASK:>
Python Code:
__author__ = "kyubyong. kbpark.linguist@gmail.com"
import numpy as np
np.__version__
x = np.arange(4).reshape((2, 2))
print("x=\n", x)
x = np.arange(4).reshape((2, 2))
print("x=\n", x)
x = np.arange(10).reshape((2, 5))
print("x=\n", x)
x = np.arange(1, 11).reshape((2, 5))
print("x=\n", x)
x = np.arange(1, 10).reshape((3, 3))
print("x=\n", x)
x = np.arange(5)
weights = np.arange(1, 6)
x = np.arange(5)
print("x=\n",x)
x = np.array([0, 1, 2])
y = np.array([2, 1, 0])
x = np.array([0, 1, 3])
y = np.array([2, 4, 5])
x = np.array([0, 1, 3])
y = np.array([2, 4, 5])
x = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1])
bins = np.array([0, 1, 2, 3])
print("ans=\n", ...)
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(x, bins=bins)
plt.show()
xedges = [0, 1, 2, 3]
yedges = [0, 1, 2, 3, 4]
x = np.array([0, 0.1, 0.2, 1., 1.1, 2., 2.1])
y = np.array([0, 0.1, 0.2, 1., 1.1, 2., 3.3])
...
plt.scatter(x, y)
plt.grid()
x = np.array([0, 1, 1, 3, 2, 1, 7])
x = np.array([0.2, 6.4, 3.0, 1.6])
bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Trace of ensemble visited by a replica
Step2: Replica flow
Step3: Trips
Step4: Transition matrix
Step5: If you would like to set a different order, that can be done by providing a list of the ensembles in whatever order you choose
Step6: Mixing matrix
Step7: Making a pretty picture
Step8: Blue is a minus interface, red is a normal interface. Multiple state outer interfaces (not in this example) would be green.
|
<ASSISTANT_TASK:>
Python Code:
sset0 = storage.samplesets[0]
numeric_labels = { s.ensemble : s.replica for s in sset0}
string_labels = { s.ensemble : str(s.replica) for s in sset0 }
numeric_to_string = { numeric_labels[e] : string_labels[e] for e in numeric_labels.keys()}
%%time
trace_1 = paths.trace_ensembles_for_replica(0, storage.steps)
plt.plot([numeric_labels[e] for e in trace_1])
repx_net = paths.ReplicaNetwork(scheme, storage.steps)
print repx_net
flow = repx_net.flow(bottom=retis.minus_ensemble, top=retis.ensembles[-1])
print flow
flow_num = {numeric_labels[k] : flow[k] for k in flow.keys()}
print flow_num
sorted_vals = []
for k in sorted(flow_num.keys()):
sorted_vals.append(flow_num[k])
plt.plot(sorted(flow_num.keys()), sorted_vals)
repx_net.trips(bottom=retis.minus_ensemble, top=retis.ensembles[-1])
repx_net.transition_matrix()
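As a plain-NumPy sketch of what a transition-matrix estimate involves (the trace below is made up and this is not the OpenPathSampling API): count the observed moves between ensemble labels in a trace and row-normalize the counts.

```python
import numpy as np

# Hypothetical trace of ensemble labels visited by a replica.
trace = [0, 0, 1, 2, 1, 0, 1, 1, 2, 2]
n = max(trace) + 1

# Count each observed (from, to) move along the trace.
counts = np.zeros((n, n))
for a, b in zip(trace[:-1], trace[1:]):
    counts[a, b] += 1

# Row-normalize so each row is an empirical transition distribution.
row_sums = counts.sum(axis=1, keepdims=True)
transition_matrix = counts / np.where(row_sums == 0, 1, row_sums)
print(transition_matrix)
```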
import numpy as np
perm = np.random.permutation(len(mstis.all_ensembles))
print perm
order = [mstis.all_ensembles[p] for p in perm]
repx_net.transition_matrix(index_order=order)
repx_net.mixing_matrix()
repxG = paths.ReplicaNetworkGraph(repx_net)
# draw('graphviz') gives better results, but requires pygraphviz
repxG.draw('spring')
transitions = repx_net.transitions_from_traces(storage.steps)
for (k1, k2) in transitions.keys():
print numeric_labels[k1], numeric_labels[k2], transitions[(k1, k2)]
for (k1, k2) in repx_net.analysis['n_accepted'].keys():
print numeric_labels[k1], numeric_labels[k2], repx_net.analysis['n_accepted'][(k1, k2)]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: spaCy tokenizes words, then treats each token as a Token object. Each token has an integer and string representation. Each token also has things like
Step2: Given a test sentence (in this case
Step3: Find similar words to 'quickly' via cosine similarity
Step4: We can focus on one meaning of quickly and find similar words if we average over related words
Step5: Let's look at other parts of speech from our original sentence
Step6: How closely does one test noun match each noun found in our sentence? That is, if we say, "barrier", is it closer to "wall," "Frank", or "robot"? How about "car" or "agent"?
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import unicode_literals # If Python 2
import spacy.en
from spacy.tokens import Token
from spacy.parts_of_speech import ADV
nlp = spacy.en.English()
# Find log probability of Nth most frequent word
probs = [lex.prob for lex in nlp.vocab]
probs.sort()
words = [w for w in nlp.vocab if w.has_repvec]
tokens = nlp(u'"I ran to the wall quickly," Frank explained to the robot.')
ran = tokens[2]
quickly = tokens[6]
run = nlp(ran.lemma_)[0]
# the integer and string representations of "ran" and its head
print (ran.orth, ran.orth_, ran.head.lemma, ran.head.lemma_)
print (quickly.orth, quickly.orth_, quickly.lemma, quickly.lemma_,)
print (quickly.head.orth_, quickly.head.lemma_)
print (ran.prob, run.prob, quickly.prob)
print (ran.cluster, run.cluster, quickly.cluster)
is_adverb = lambda tok: tok.pos == ADV and tok.prob < probs[-1000]
str_ = u'"I ran to the wall quickly," Frank explained to the robot.'
tokens = nlp(str_)
print u''.join(tok.string.upper() if is_adverb(tok) else tok.string for tok in tokens)
quickly = tokens[6]
from numpy import dot
from numpy.linalg import norm
cosine = lambda v1, v2: dot(v1, v2) / (norm(v1) * norm(v2))
words.sort(key=lambda w: cosine(w.repvec, quickly.repvec))
words.reverse()
print('1-20:')
print('\n'.join(w.orth_ for w in words[0:20]))
print('\n50-60:')
print('\n'.join(w.orth_ for w in words[50:60]))
print('\n100-110:')
print('\n'.join(w.orth_ for w in words[100:110]))
print('\n1000-1010:')
print('\n'.join(w.orth_ for w in words[1000:1010]))
print('\n50000-50010:')
print('\n'.join(w.orth_ for w in words[50000:50010]))
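The cosine measure used above can be sanity-checked on made-up vectors; this small NumPy sketch is independent of spaCy's embeddings.

```python
import numpy as np

# Cosine similarity: dot product of the vectors divided by the
# product of their norms. 1.0 for parallel vectors, 0.0 for orthogonal.
def cosine(v1, v2):
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([2.0, 0.0, 0.0])

print(cosine(a, b))  # orthogonal vectors -> 0.0
print(cosine(a, c))  # parallel vectors   -> 1.0
```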
say_adverbs = ['quickly', 'swiftly', 'speedily', 'rapidly']
say_vector = sum(nlp.vocab[adverb].repvec for adverb in say_adverbs) / len(say_adverbs)
words.sort(key=lambda w: cosine(w.repvec, say_vector))
words.reverse()
print('1-20:')
print('\n'.join(w.orth_ for w in words[0:20]))
print('\n50-60:')
print('\n'.join(w.orth_ for w in words[50:60]))
print('\n1000-1010:')
print('\n'.join(w.orth_ for w in words[1000:1010]))
from spacy.parts_of_speech import NOUN
is_noun = lambda tok: tok.pos == NOUN and tok.prob < probs[-1000]
print u''.join(tok.string.upper() if is_noun(tok) else tok.string for tok in tokens)
nouns = [tok for tok in tokens if is_noun(tok)]
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
barrier = nlp('barrier')[0]
car = nlp('car')[0]
agent = nlp('android')[0]
test_nouns = nouns + [barrier] + [car] + [agent]
n = len(test_nouns)
barrier_relations = np.zeros(n)
car_relations = np.zeros(n)
agent_relations = np.zeros(n)
for i, noun in enumerate(test_nouns):
barrier_relations[i] = cosine(barrier.repvec, noun.repvec)
car_relations[i] = cosine(car.repvec, noun.repvec)
agent_relations[i] = cosine(agent.repvec, noun.repvec)
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n)
bar_width = 0.2
opacity = 0.4
rects1 = plt.bar(index, barrier_relations, bar_width,
alpha=opacity,
color='b',
label=barrier.orth_)
rects2 = plt.bar(index + bar_width, car_relations, bar_width,
alpha=opacity,
color='r',
label=car.orth_)
rects3 = plt.bar(index + 2 * bar_width, agent_relations, bar_width,
alpha=opacity,
color='g',
label=agent.orth_)
labels = [tok.orth_ for tok in test_nouns]
plt.xlabel('Test Word')
plt.ylabel('Similarity')
plt.title('Similarity of words')
plt.xticks(index + bar_width, labels)
plt.legend()
from IPython.core.display import HTML
# Borrowed style from Probabilistic Programming and Bayesian Methods for Hackers
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import data from Google Clod Storage
Step2: Prepare data for ARIMA
Step3: Let's create a column for weekly returns. Take the log to of the returns to normalize large fluctuations.
Step4: Test for stationarity of the udiff series
Step5: With a p-value < 0.05, we can reject the null hypotehsis. This data set is stationary.
Step6: The table below summarizes the patterns of the ACF and PACF.
Step7: Our model doesn't do a good job predicting variance in the original data (peaks and valleys).
Step8: Let's make a forecast 2 weeks ahead
|
<ASSISTANT_TASK:>
Python Code:
!pip install --user statsmodels
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime
%config InlineBackend.figure_format = 'retina'
df = pd.read_csv('gs://cloud-training/ai4f/AAPL10Y.csv')
df['date'] = pd.to_datetime(df['date'])
df.sort_values('date', inplace=True)
df.set_index('date', inplace=True)
print(df.shape)
df.head()
df_week = # TODO: Use the df DataFrame to resample the 'close' column to a weekly granularity. Use the mean as the aggregator.
df_week = df_week[['close']]
df_week.head()
df_week['weekly_ret'] = np.log(df_week['close']).diff()
df_week.head()
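A minimal illustration (with a made-up price series) of why log returns are convenient: they telescope, so per-period log returns sum to the log return over the whole window.

```python
import numpy as np

# Made-up weekly closing prices.
prices = np.array([100.0, 110.0, 99.0, 104.0])

# Per-period log returns: log(p_t) - log(p_{t-1}) = log(p_t / p_{t-1}).
log_ret = np.diff(np.log(prices))

# The per-period log returns sum to the whole-window log return.
total = np.log(prices[-1] / prices[0])
print(log_ret, total, log_ret.sum())
```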
# drop null rows
df_week.dropna(inplace=True)
df_week.weekly_ret.plot(kind='line', figsize=(12, 6));
udiff = df_week.drop(['close'], axis=1)
udiff.head()
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
rolmean = udiff.rolling(20).mean()
rolstd = udiff.rolling(20).std()
plt.figure(figsize=(12, 6))
orig = plt.plot(udiff, color='blue', label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std Deviation')
plt.title('Rolling Mean & Standard Deviation')
plt.legend(loc='best')
plt.show(block=False)
# Perform Dickey-Fuller test
dftest = sm.tsa.adfuller(udiff.weekly_ret, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used'])
for key, value in dftest[4].items():
dfoutput['Critical Value ({0})'.format(key)] = value
dfoutput
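A NumPy-only sketch of the stationarity idea being tested here (synthetic data, no statsmodels): white noise keeps a stable spread over time, while its cumulative sum, a random walk, does not, which is exactly what rolling statistics reveal.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)   # stationary: constant mean and variance
walk = np.cumsum(noise)         # non-stationary: spread grows with time

# The two halves of the noise look alike; the walk's spread keeps growing.
print(noise[:1000].std(), noise[1000:].std())
print(walk[:1000].std(), walk[1000:].std())
```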
from statsmodels.graphics.tsaplots import plot_acf
# the autocorrelation chart provides just the correlation at increasing lags
fig, ax = plt.subplots(figsize=(12,5))
plot_acf(udiff.values, lags=10, ax=ax)
plt.show()
from statsmodels.graphics.tsaplots import plot_pacf
fig, ax = plt.subplots(figsize=(12,5))
plot_pacf(udiff.values, lags=10, ax=ax)
plt.show()
from statsmodels.tsa.arima.model import ARIMA
# Notice that you have to use udiff - the differenced data rather than the original data.
ar1 = # TODO: Fit an ARIMA model to the differenced data
ar1.summary()
# TODO: Plot the ARMA fitted values on the same plot as the differenced time series
forecast = # TODO: Use the ARMA model to create a forecast two weeks into the future
plt.figure(figsize=(12, 8))
plt.plot(udiff.values, color='blue')
preds = ar1.fittedvalues
plt.plot(preds, color='red')
plt.plot(pd.DataFrame(np.array([preds[-1],forecast[0]]).T,index=range(len(udiff.values)+1, len(udiff.values)+3)), color='green')
plt.plot(pd.DataFrame(forecast, index=range(len(udiff.values)+1, len(udiff.values)+1+len(forecast))), color='green')  # len(forecast) avoids relying on an undefined 'steps' variable
plt.title('Display the predictions with the ARIMA model')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and process review dataset
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: The SFrame products now contains one column for each of the 193 important_words.
Step4: Split data into training and validation sets
Step5: Convert SFrame to NumPy array
Step6: Note that we convert both the training and validation sets into NumPy arrays.
Step7: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
Step8: Derivative of log likelihood with respect to a single coefficient
Step9: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
Step10: Quiz Question
Step11: Quiz Question
Step12: Quiz Question
Step13: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
Step14: Compare convergence behavior of stochastic gradient ascent
Step15: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
Step16: Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
Step17: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
Step18: Smoothing the stochastic gradient ascent curve
Step19: Checkpoint
Step20: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
Step21: Quiz Question
Step22: Plotting the log likelihood as a function of passes for each step size
Step23: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import graphlab
products = graphlab.SFrame('amazon_baby_subset.gl/')
import json
with open('important_words.json', 'r') as f:
important_words = json.load(f)
important_words = [str(s) for s in important_words]
# Remove punctuation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products
train_data, validation_data = products.random_split(.9, seed=1)
print 'Training set : %d data points' % len(train_data)
print 'Validation set: %d data points' % len(validation_data)
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1. / (1.+np.exp(-score))
return predictions
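A quick sanity check of the logistic link function used by `predict_probability`: sigma(0) = 0.5, sigma(s) + sigma(-s) = 1, and every output lies strictly between 0 and 1.

```python
import numpy as np

# The logistic (sigmoid) link: maps any real-valued score to (0, 1).
def sigmoid(score):
    return 1.0 / (1.0 + np.exp(-score))

scores = np.array([-2.0, 0.0, 2.0])
probs = sigmoid(scores)
print(probs)
```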
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = ...
return derivative
def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)
return lp
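The overflow mask above matters because `np.exp(-score)` overflows for very negative scores; in that regime `log(1 + exp(-score))` is essentially `-score`, so the masked replacement is a valid fix. A standalone demonstration:

```python
import numpy as np

scores = np.array([-1000.0, 0.0, 1000.0])

# Naive evaluation overflows for scores = -1000 (exp(1000) -> inf).
with np.errstate(over='ignore'):
    logexp = np.log(1.0 + np.exp(-scores))

# Replace the overflowed entries with their exact limit, -score.
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
print(logexp)
```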
j = 1 # Feature number
i = 10 # Data point number
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)
indicator = (sentiment_train[i:i+1]==+1)
errors = indicator - predictions
gradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])
print "Gradient single data point: %s" % gradient_single_data_point
print " --> Should print 0.0"
j = 1 # Feature number
i = 10 # Data point start
B = 10 # Mini-batch size
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)
indicator = (sentiment_train[i:i+B]==+1)
errors = indicator - predictions
gradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])
print "Gradient mini-batch data points: %s" % gradient_mini_batch
print " --> Should print 1.0"
from math import sqrt
def logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):
log_likelihood_all = []
# make sure it's a numpy array
coefficients = np.array(initial_coefficients)
# set seed=1 to produce consistent results
np.random.seed(seed=1)
# Shuffle the data before starting
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0 # index of current batch
# Do a linear scan over data
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]
### YOUR CODE HERE
predictions = ...
# Compute indicator value for (y_i = +1)
# Make sure to slice the i-th entry with [i:i+batch_size]
### YOUR CODE HERE
indicator = ...
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# Compute the derivative for coefficients[j] and save it to derivative.
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]
### YOUR CODE HERE
derivative = ...
# compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)
### YOUR CODE HERE
coefficients[j] += ...
# Checking whether log likelihood is increasing
# Print the log likelihood over the *current batch*
lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],
coefficients)
log_likelihood_all.append(lp)
if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \
or itr % 10000 == 0 or itr == max_iter-1:
data_size = len(feature_matrix)
print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, \
int(np.ceil(np.log10(data_size))), i, \
int(np.ceil(np.log10(data_size))), i+batch_size, lp)
# if we made a complete pass over data, shuffle and restart
i += batch_size
if i+batch_size > len(feature_matrix):
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return coefficients, log_likelihood_all
sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])
sample_sentiment = np.array([+1, -1])
coefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),
step_size=1., batch_size=2, max_iter=2)
print '-------------------------------------------------------------------------------------'
print 'Coefficients learned :', coefficients
print 'Average log likelihood per-iteration :', log_likelihood
if np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\
and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):
# pass if elements match within 1e-3
print '-------------------------------------------------------------------------------------'
print 'Test passed!'
else:
print '-------------------------------------------------------------------------------------'
print 'Test failed'
coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=1, max_iter=10)
# YOUR CODE HERE
coefficients_batch, log_likelihood_batch = ...
step_size = 1e-1
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
import matplotlib.pyplot as plt
%matplotlib inline
def make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):
plt.rcParams.update({'figure.figsize': (9,5)})
log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \
np.ones((smoothing_window,))/smoothing_window, mode='valid')
plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,
log_likelihood_all_ma, linewidth=4.0, label=label)
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
plt.xlabel('# of passes over data')
plt.ylabel('Average log likelihood per data point')
plt.legend(loc='lower right', prop={'size':14})
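The smoothing inside `make_plot` is a plain moving average implemented with `np.convolve`; here is the same idea on a short made-up sequence.

```python
import numpy as np

# Moving average: convolve with a window of equal weights that sum to 1.
# mode='valid' keeps only positions where the window fully overlaps.
values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
window = 3
ma = np.convolve(values, np.ones(window) / window, mode='valid')
print(ma)  # [2. 3. 4.]
```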
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
label='stochastic gradient, step_size=1e-1')
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic gradient, step_size=1e-1')
step_size = 1e-1
batch_size = 100
num_passes = 200
num_iterations = ...
## YOUR CODE HERE
coefficients_sgd, log_likelihood_sgd = ...
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic, step_size=1e-1')
make_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),
smoothing_window=1, label='batch, step_size=5e-1')
batch_size = 100
num_passes = 10
num_iterations = ...
coefficients_sgd = {}
log_likelihood_sgd = {}
for step_size in np.logspace(-4, 2, num=7):
coefficients_sgd[step_size], log_likelihood_sgd[step_size] = ...
for step_size in np.logspace(-4, 2, num=7):
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
for step_size in np.logspace(-4, 2, num=7)[0:6]:
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Numeric Vs String
Step2: As shown above, only "*" (repetition) is possible between a string and an integer value
Step3: Complex Vs Numeric
Step4: Thus we can see that the following operations are not possible with the complex data type: floor division (//), mod (%), and comparisons (<, >, >=, <=).
Step5: Real Number
Step6: Complex Number
Step7: Boolean Number
Step8: Relational Operators
Step9: | Syntax | Math | Operation Name |
Step10: Numeric Vs String
Step11: Numeric Vs Complex
Step12: Complex Vs Complex
Step13: Complex Vs String
Step14: String Vs String
Step15: Logical/Boolean Operators
Step16: Besides boolean operators, there are the functions all(), which returns true when all of the items in the sequence passed as parameters are true, and any(), which returns true if any item is true.
Step17: 1011
Step19: Order of Operations
|
<ASSISTANT_TASK:>
Python Code:
a = 10
b = 22
print("a =", a, ", b =", b)
print("~~~~~~~~~~~~~~~~~")
print("a + b:\t", a + b)
print("a - b:\t", a - b)
print("a * b:\t", a * b)
print("a / b:\t", a / b)
print("a//b:\t", a//b)
print("a % b:\t", a % b)
print("-a:\t", -a)
print("a < b:\t", a < b)
print("a > b:\t", a > b)
print("a <= b:\t", a <= b)
print("a >= b:\t", a >= b)
print("abs(a):\t", abs(a))
import math
print("sqrt(a):", math.sqrt(a))
a = "10"
b = 22
print("a =", a, ", b =", b)
print("~~~~~~~~~~~~~~~~~")
# print("a + b:\t", a + b)
# print("a - b:\t", a - b)
print("a * b:\t", a * b)
# print("a / b:\t", a / b)
# print("a//b:\t", a//b)
# print("a % b:\t", a % b)
# print("-a:\t", -a)
# print("a < b:\t", a < b)
# print("a > b:\t", a > b)
# print("a <= b:\t", a <= b)
# print("a >= b:\t", a >= b)
# print("abs(a):\t", abs(a))
# import math
# print("sqrt(a):", math.sqrt(a))
a = "10"
b = 22 + 4j
print("a =", a, ", b =", b)
print("~~~~~~~~~~~~~~~~~")
try:
print("a * b:\t", a * b)
except Exception as e:
print(e)
a = 10 + 4j
b = 10
print("a =", a, ", b =", b)
print("~~~~~~~~~~~~~~~~~")
print("a + b:\t", a + b)
print("a - b:\t", a - b)
print("a * b:\t", a * b)
print("a / b:\t", a / b)
# print("a//b:\t", a//b)
# print("a % b:\t", a % b)
print("-a:\t", -a)
# print("a < b:\t", a < b)
# print("a > b:\t", a > b)
# print("a <= b:\t", a <= b)
# print("a >= b:\t", a >= b)
print("abs(a):\t", abs(a))
import math
# print("sqrt(a):", math.sqrt(a))
a = 10
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
c /= a
print("c =", c)
c **= a
print("c =", c)
c //= a
print("c =", c)
c %= a
print("c =", c)
a = 10.20
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
c /= a
print("c =", c)
c **= a
print("c =", c)
c //= a
print("c =", c)
c %= a
print("c =", c)
a = 10 + 20j
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
c /= a
print("c =", c)
c **= a
print("c =", c)
# c //= a
# print("c =", c)
# c %= a
# print("c =", c)
a = True
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
c /= a
print("c =", c)
c **= a
print("c =", c)
c //= a
print("c =", c)
c %= a
print("c =", c)
a = False
c = 0
c += a
print("c =", c)
c -= a/2
print("c =", c)
c *= a
print("c =", c)
# c /= a
# print("c =", c)
c **= a
print("c =", c)
# c //= a
# print("c =", c)
# c %= a
# print("c =", c)
# Python supports the following relational operators:
a = 10
b = 21.22
print(a < b)
print(a > b)
print(a <= b)
print(a >= b)
print(a == b)
print(a != b)
a = 10
b = "Mayank Shrivastava"
# print(a < b)
# print(a > b)
# print(a <= b)
# print(a >= b)
print(a == b)
print(a != b)
a = 10
b = 10 + 21j
# print(a < b)
# print(a > b)
# print(a <= b)
# print(a >= b)
print(a == b)
print(a != b)
a = 10 + 20j
b = 10 + 21j
# print(a < b)
# print(a > b)
# print(a <= b)
# print(a >= b)
print(a == b)
print(a != b)
a = 10 + 20j
b = "Rishi Rai"
# print(a < b)
# print(a > b)
# print(a <= b)
# print(a >= b)
print(a == b)
print(a != b)
a = "Manish Nandle"
b = "Saurabh Dubey"
print(a < b)
print(a > b)
print(a <= b)
print(a >= b)
print(a == b)
print(a != b)
print (0 and 3) # Shows 0
print (2 and 3 )# Shows 3
print (0 or 3) # Shows 3
print (2 or 3) # Shows 2
print (not 0) # Shows True
print (not 2) # Shows False
print (2 in (2, 3)) # Shows True
print (2 is 3) # Shows False
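The values printed above follow from short-circuit evaluation: `and` returns its first falsy operand (or the last operand if all are truthy), and `or` returns its first truthy operand (or the last one). This is why these operators are often used to supply default values; a small illustrative sketch:

```python
# `or` falls through to the right-hand side only when the left side is falsy,
# which makes it a compact way to supply defaults.
name = "" or "anonymous"          # '' is falsy -> 'anonymous'
config = {"debug": True} or {}    # non-empty dict is truthy -> kept as-is

# `and` yields the first falsy operand, short-circuiting the rest,
# so the division below is never evaluated.
guarded = [] and 1 / 0            # returns [] without raising

print(name)     # anonymous
print(config)   # {'debug': True}
print(guarded)  # []
```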
a = "Sunil Kumar Bhele"
x = 10 #-> 1010
y = 11 #-> 1011
print("x << 2 = ", x<<2)
print("x =", x)
print("x >> 2 = ", x>>2)
print("x &y = ", x&y)
print("x | y = ", x|y)
print("x^y = ", x^y)
print("x =", x)
print("~x = ", ~x)
print("~y = ", ~y)
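The values of `~x` above come from Python's definition of bitwise NOT on arbitrary-precision integers: `~x == -x - 1`, the two's-complement identity. A quick check, shown only as an illustration:

```python
# Bitwise NOT in Python satisfies ~x == -x - 1 for every int,
# because ints behave like infinite two's-complement bit strings.
for x in (0, 1, 10, 11, 255, -5):
    assert ~x == -x - 1

print(~10, ~11)  # -11 -12
```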
print (round(3.14159265, 2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The image object
Step2: Indexing into an Image object works as it does for other array types in Python.
Step3: Images can be displayed using the Show method. normal sets the range, 0-255 in this case. By default, the image is linearly stretched to the minimum and maximum values.
Step4: Operators are generally applied on a per-pixel basis.
Step5: Images can be indexed by binary images, resulting in a 1D image containing only the selected pixels. Note that the order depends on the internal linear pixel storage order, which is not consistent with any specific (row-major or column-major) order due to operations such as Rotation90 and Mirror.
Step6: Binary image indexing can also be used for assignment.
Step7: An Image object uses the NumPy buffer interface. This means that you can use an Image object everywhere where you can use a NumPy array, and you can use a NumPy array anywhere where you would use an Image object.
Step8: Then we add the NumPy array to our image
Step9: Then we call a NumPy function with an Image as input, and a PyDIP function with a NumPy array as input
Step10: Note that the Image object can contain images of any number of dimensions, including 0 (a single pixel). The Show method only displays 1D or 2D images, for images with more dimensions, it will display a projection. Read this method's help to learn how to control the projection mode, etc. Alternatively, use dip.viewer.Show for an interactive image display that shows images with any number of dimensions.
Step11: Loading images from file
Step12: PyDIP knows a lot of color spaces. The Show method automatically converts to RGB for display.
Step13: Filtering
Step14: Some filters are specific for gray-value images (these are called "scalar images" everywhere in the documentation). For example, all morphological filters currently require a scalar image.
Step15: Measurement
Step16: Next, we threshold and label the image, then measure some basic features. Because 'Solidity' depends on the 'ConvexArea' measurement, we get that one too in the output.
Step17: The dip.Measurement object m can be indexed in three levels
Step18: Leaving out one of the indices returns the full row or column
Step19: These objects can be indexed further as above, or be converted to a NumPy array
Step20: We can paint the objects with one of the measurements, which can be useful for display
|
<ASSISTANT_TASK:>
Python Code:
import diplib as dip
a = dip.Image((10,20), 1)
a.Fill(3)
b = a[0:4, 4:-1]
b.Fill(55)
a[:3, :10] = 100
a[5:7, 10:15] = 200
a.Show('normal')
m = a >= 100
m.Show()
a[m].Show('normal')
a[m] = 176
a.Show('normal')
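Boolean-mask indexing and assignment as used above behave the same way in plain NumPy; a NumPy-only sketch of the idea (the array shape and values here are chosen just for illustration, no PyDIP needed):

```python
import numpy as np

img = np.full((10, 20), 3.0)     # analogous to a.Fill(3)
img[:3, :10] = 100
img[5:7, 10:15] = 200

mask = img >= 100                # boolean "image", like m = a >= 100
print(mask.sum())                # 40 pixels selected
img[mask] = 176                  # masked assignment, like a[m] = 176
print(np.unique(img))            # [  3. 176.]
```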
import numpy as np
b = np.random.rand(a.Size(1), a.Size(0))
dip.Show(b)
a += 30 * b
a.Show()
print(np.mean(a))
print(dip.Mean(b)[0][0])
import matplotlib.pyplot as pp
a = dip.ImageRead('../cermet.ics')
a.Show()
b = dip.Image(pp.imread('../erika.tif'))
b.Show()
a = dip.ImageRead('../DIP.tif')
print(a)
print(a[50:100, :]) # spatial indexing is the same as for other 2D images!
print(a(0)) # this is the red channel
print(a[128, 45]) # this returns a Python list with all the values for the pixel
print(a[128, 45][0]) # this returns the red value of one pixel
print(a(0)[128, 45]) # this also, but as a Python list
print(a.ColorSpace())
b = dip.ColorSpaceManager.Convert(a, 'Lab')
print(b.ColorSpace())
b.Show()
a(2).Show()
b(2).Show()
dip.Gauss(a, 5).Show()
b = dip.BilateralFilter(a, spatialSigmas=5, tonalSigma=30)
b.Show()
b = dip.ColorSpaceManager.Convert(b, 'gray')
dip.Canny(b, upper=0.99).Show()
a = dip.ImageReadICS('../cermet')
a.SetPixelSize(1, "um")
print(a.PixelSize())
a.Show()
b = a < 120
b = dip.EdgeObjectsRemove(b)
b = dip.Label(b, minSize=30)
m = dip.MeasurementTool.Measure(b, a, ['Size', 'Solidity', 'Statistics'])
print(m)
print(m['Statistics'][30])
print(m['Statistics'][30][2])
print(m[30]['Statistics'][2])
print(m[30])
print(m['Solidity'])
import numpy as np
print(np.array(m[30]))
print(np.array(m['Solidity']).transpose())
c = dip.ObjectToMeasurement(b, m['Solidity'])
c.Show(colormap='viridis')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Vision ML on video
Step3: Text
|
<ASSISTANT_TASK:>
Python Code:
!gsutil cp gs://ml-design-patterns/audio_train/00353774.wav cello.wav
!gsutil cp gs://ml-design-patterns/audio_train/001ca53d.wav sax.wav
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile
import numpy as np
fig, ax = plt.subplots(2, 2, figsize=(15, 10))
for idx, instr in enumerate(['sax', 'cello']):
sample_rate, samples = wavfile.read(instr + '.wav')
ax[idx][0].plot(samples)
_, _, spectro = signal.spectrogram(samples, sample_rate)
img = np.log(spectro)
ax[idx][1].imshow(img, cmap='gray', aspect='auto');
ax[idx][1].set_title(instr)
print(img.shape)
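Each column of the spectrogram above is the magnitude spectrum of one short windowed segment, i.e. a short-time FFT. The underlying idea can be checked with a plain FFT on a synthetic tone (a hypothetical 440 Hz signal, not the downloaded instrument files):

```python
import numpy as np

sample_rate = 8000
t = np.arange(sample_rate) / sample_rate      # exactly one second of samples
tone = np.sin(2 * np.pi * 440 * t)            # a pure 440 Hz sine

spectrum = np.abs(np.fft.rfft(tone))          # magnitude spectrum
freqs = np.fft.rfftfreq(tone.size, 1 / sample_rate)

print(freqs[spectrum.argmax()])               # ~440.0, the dominant frequency
```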
!curl -O https://www.engr.colostate.edu/me/facil/dynamics/files/flame.avi
## Frame-by-frame
import cv2
import numpy as np
import matplotlib.pyplot as plt
cap = cv2.VideoCapture('flame.avi')
num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(num_frames)
fig, ax = plt.subplots(1, 4, figsize=(20, 10))
for i in range(num_frames):
ret, frame = cap.read()
if ret:
img = np.asarray(frame)
if i%30 == 0:
ax[i//30].imshow(img)
## Rolling average of 30 frames at a time
def rolling_average(cap, N):
img = None;
n = 0
for i in range(N):
ret, frame = cap.read()
if ret:
frame = np.asarray(frame)
if n > 0:
img = frame + img
else:
img = frame
n += 1
if n > 0:
return img / n
return img
cap = cv2.VideoCapture('flame.avi')
fig, ax = plt.subplots(1, 4, figsize=(20, 10))
for i in range(4):
img = rolling_average(cap, 25)
ax[i].imshow(img)
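The same non-overlapping window average can be expressed vectorially once the frames are stacked into one array; a NumPy sketch with hypothetical random frames standing in for the decoded video:

```python
import numpy as np

frames = np.random.rand(100, 8, 8, 3)   # 100 hypothetical 8x8 RGB frames
N = 25

# Trim to a multiple of N, then average each block of N consecutive frames.
usable = (len(frames) // N) * N
averaged = frames[:usable].reshape(-1, N, *frames.shape[1:]).mean(axis=1)

print(averaged.shape)                   # (4, 8, 8, 3)
```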
# read into a 4D shape
import tensorflow as tf
def read_video(filename):
cap = cv2.VideoCapture(filename)
num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frames = []
for i in range(num_frames):
ret, frame = cap.read()
if ret:
frames.append(np.asarray(frame))
return tf.convert_to_tensor(frames)
img4d = read_video('flame.avi')
print(img4d.shape)
import tensorflow_hub as hub
paragraph = """
Siddhartha gave his clothes to a poor Brahmin on the road and
only retained his loincloth and earth-colored unstitched cloak.
He only ate once a day and never cooked food. He fasted fourteen
days. He fasted twenty-eight days. The flesh disappeared from
his legs and cheeks. Strange dreams were reflected in his enlarged
eyes. The nails grew long on his thin fingers and a dry, bristly
beard appeared on his chin. His glance became icy when he
encountered women; his lips curled with contempt when he passed
through a town of well-dressed people. He saw businessmen trading,
princes going to the hunt, mourners weeping over their dead,
prostitutes offering themselves, doctors attending the sick,
priests deciding the day for sowing, lovers making love, mothers
soothing their children - and all were not worth a passing glance,
everything lied, stank of lies; they were all illusions of sense,
happiness and beauty. All were doomed to decay. The world tasted
bitter. Life was pain.
"""
print(paragraph.split('.'))
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
embeddings = embed(paragraph.split('.'))
import matplotlib.pyplot as plt
plt.figure(figsize=(5,10))
plt.imshow(embeddings.numpy(), aspect=25.0, cmap='gray');
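Embeddings like these are typically compared with cosine similarity; a small NumPy sketch with hypothetical low-dimensional vectors standing in for the 512-dimensional Universal Sentence Encoder output:

```python
import numpy as np

def cosine_similarity(u, v):
    # cosine of the angle between two vectors: 1.0 means identical direction
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a = np.array([0.1, 0.3, -0.2, 0.5])
b = np.array([0.1, 0.3, -0.2, 0.5])   # identical "sentence"
c = np.array([-0.5, 0.2, 0.1, -0.3])  # unrelated "sentence"

print(cosine_similarity(a, b))  # 1.0
print(cosine_similarity(a, c))  # negative: dissimilar
```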
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.2.1 Start up
Step2: 1.2.2 Creating the file
Step3: 1.2.3 Reading the file
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display
from datetime import datetime
from matplotlib import pyplot as plt
from scipy import misc
import h5py
import json
import numpy as np
import os
import pandas as pd
import sys
# local
sys.path.insert(0, os.path.dirname(os.getcwd()))
from pywim.utils.dsp.synthetic_data.sensor_data import gen_truck_raw_data
# generates a synthetic data
sample_rate = 2000
sensors_distance = [1, 2]
data = gen_truck_raw_data(
sample_rate=sample_rate, speed=20, vehicle_layout='O--O------O-',
sensors_distance=sensors_distance, p_signal_noise=100.0
)
data.plot()
plt.show()
data.head()
date_time = datetime.now()
site_id = '001'
lane_id = '01'
collection_type = 'day' # stored per day
f_id = 'wim_{}_{}_{}_{}'.format(
collection_type, site_id, lane_id,
date_time.strftime('%Y%m%d')
)
f = h5py.File('/tmp/{}.h5'.format(f_id), 'w')
print(f_id)
dset_id = 'run_{}_{}_{}'.format(
    site_id, lane_id, date_time.strftime('%Y%m%d_%H%M%S')  # %m is month (%M would be minutes)
)
print(dset_id)
dset = f.create_dataset(
dset_id, shape=(data.shape[0],),
dtype=np.dtype([
(k, float) for k in ['index'] + list(data.keys())
])
)
dset['index'] = data.index
for k in data.keys():
dset[k] = data[k]
# check if all values are the same
df = pd.DataFrame(dset[tuple(data.keys())], index=dset['index'])
np.all(df == data)
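The round trip above works because the dataset uses a compound (structured) dtype with one named field per column; the same mechanism can be seen in NumPy alone, without h5py (field names here are just illustrative):

```python
import numpy as np

dtype = np.dtype([('index', float), ('a0', float), ('a1', float)])
rec = np.zeros(3, dtype=dtype)
rec['index'] = [0.0, 0.5, 1.0]
rec['a0'] = [1.0, 2.0, 3.0]
rec['a1'] = [4.0, 5.0, 6.0]

print(rec.dtype.names)   # ('index', 'a0', 'a1')
print(rec['a0'])         # [1. 2. 3.] -- field access selects one column
```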
dset.attrs['sample_rate'] = sample_rate
dset.attrs['date_time'] = date_time.strftime('%Y-%m-%d %H:%M:%S')  # %m is month (%M would be minutes)
dset.attrs['site_id'] = site_id
dset.attrs['lane_id'] = lane_id
dset.attrs['temperature'] = 28.5
dset.attrs['license_plate'] = 'AAA9999' # license plate number
dset.attrs['calibration_constant'] = [0.98, 0.99, 0.75]
dset.attrs['sensors_distance'] = sensors_distance
dset.attrs['sensor_type'] = 'mixed'
dset.attrs['sensors_layout'] = '|||'
dset.attrs['channel_configuration'] = json.dumps({
'a0': 'polymer', 'a1': 'ceramic', 'a2': 'polymer'
})
# flush its data to disk and close
f.flush()
f.close()
print('/tmp/{}.h5'.format(f_id))
f = h5py.File('/tmp/{}.h5'.format(f_id), 'r')
for dset_id in f.keys():
dset = f[dset_id]
paddle = len(max(dset.attrs, key=lambda v: len(v)))
print('')
print('='*80)
print(dset_id)
print('='*80)
for k in dset.attrs:
print('{}:'.format(k).ljust(paddle, ' '), dset.attrs[k], sep='\t')
pd.DataFrame(dset[dset.dtype.names[1:]], index=dset['index']).plot()
plt.show()
# f.__delitem__(dset_id)
f.flush()
f.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Support Vector Machines
Step2: You can see in the data description above that the range of values for each of the columns is quite a bit different. For instance, the mean sepal length is almost twice as big as the mean sepal width.
Step3: We can now transform the data by applying the scaler.
Step4: Since we scaled the data, the column names are now a bit deceiving. These are no longer unaltered centimeters, but normalized lengths. Let's rename the columns to get "(cm)" out of the names.
Step5: We could use all of the features to train our model, but in this case we are going to pick two features so that we can make some nice visualizations later on in the colab.
Step6: Now we can create and train a classifier. There are multiple ways to create an SVM model in scikit-learn. We are going to use the linear support vector classifier.
Step7: We can now use our model to make predictions. We'll make predictions on the data we just trained on in order to get an F1 score.
Step8: We can visualize the decision boundaries using the pyplot contourf function.
Step9: Exercises
Step10: Exercise 2
Step11: Exercise 3
Step12: Exercise 4
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pandas as pd
from sklearn.datasets import load_iris
iris_bunch = load_iris()
iris_df = pd.DataFrame(iris_bunch.data, columns=iris_bunch.feature_names)
iris_df['species'] = iris_bunch.target
iris_df.describe()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(iris_df[iris_bunch.feature_names])
scaler.mean_
iris_df[iris_bunch.feature_names] = scaler.transform(
iris_df[iris_bunch.feature_names])
iris_df.describe()
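`StandardScaler` simply subtracts each column's mean and divides by its standard deviation (population std, `ddof=0`); the equivalent transform in plain NumPy, on a few made-up sepal measurements:

```python
import numpy as np

X = np.array([[5.1, 3.5],
              [4.9, 3.0],
              [6.2, 3.4],
              [5.5, 2.8]])

X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)  # same formula as the scaler

print(X_scaled.mean(axis=0))  # ~[0. 0.]
print(X_scaled.std(axis=0))   # [1. 1.]
```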
iris_df = iris_df.rename(index=str, columns={
'sepal length (cm)': 'sepal_length',
'sepal width (cm)': 'sepal_width',
'petal length (cm)': 'petal_length',
'petal width (cm)': 'petal_width'})
iris_df.head()
features = ['petal_length', 'petal_width']
target = 'species'
from sklearn.svm import LinearSVC
classifier = LinearSVC()
classifier.fit(iris_df[features], iris_df[target])
from sklearn.metrics import f1_score
predictions = classifier.predict(iris_df[features])
f1_score(iris_df[target], predictions, average='micro')
import matplotlib.pyplot as plt
import numpy as np
# Find the smallest value in the feature data. We are looking across both
# features since we scaled them. Make the min value a little smaller than
# reality in order to better see all of the points on the chart.
min_val = min(iris_df[features].min()) - 0.25
# Find the largest value in the feature data. Make the max value a little bigger
# than reality in order to better see all of the points on the chart.
max_val = max(iris_df[features].max()) + 0.25
# Create a range of numbers from min to max with some small step. This will be
# used to make multiple predictions that will create the decision boundary
# outline.
rng = np.arange(min_val, max_val, .02)
# Create a grid of points.
xx, yy = np.meshgrid(rng, rng)
# Make predictions on every point in the grid.
predictions = classifier.predict(np.c_[xx.ravel(), yy.ravel()])
# Reshape the predictions for plotting.
zz = predictions.reshape(xx.shape)
# Plot the predictions on the grid.
plt.contourf(xx, yy, zz)
# Plot each class of iris with a different marker.
# Class 0 with circles
# Class 1 with triangles
# Class 2 with squares
for species_and_marker in ((0, 'o'), (1, '^'), (2, 's')):
plt.scatter(
iris_df[iris_df[target] == species_and_marker[0]][features[0]],
iris_df[iris_df[target] == species_and_marker[0]][features[1]],
marker=species_and_marker[1])
plt.show()
# Your code goes here
# Your code goes here
# Your code goes here
# Your code goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The goal of the stupid_generator function is to list the integers smaller than end. However, it does not return the list directly but a generator over that list. Compare it with the following function.
Step2: To retrieve the objects from stupid_generator, you have to convert it explicitly into a list, or else iterate over the objects with a loop.
Step3: Note
Step5: Exercise
Step6: Your function must pass the following tests
Step7: In the previous cases, the generator stops by itself after a while. However, it is also possible to write infinite generators. In that case, the responsibility for stopping falls to the caller.
Step10: Exercise
Step11: Binary words
Step12: The following functions generate the binary words of size 0, 1, and 2.
Step14: Taking inspiration from the previous functions (but without using them), or starting from the function seen in the course, implement recursively the following function, which generates the set of binary words of a given size.
Step16: Following the same model, implement the following function. (a bit harder)
Step18: And to finish
Step19: Run the following line and copy the list of numbers obtained into Google.
|
<ASSISTANT_TASK:>
Python Code:
def stupid_generator(end):
i = 0
while i < end:
yield i
i+=1
stupid_generator(3)
def stupid_list(end):
i = 0
result = []
while i < end:
result.append(i)
i+=1
return result
stupid_list(3)
it = stupid_generator(3)
it.next()
list(stupid_generator(3))
for v in stupid_generator(3):
print v
def test_generator():
    print "This instruction is executed when the first object is requested"
    yield 1
    print "This instruction is executed when the second object is requested"
    yield 2
    print "This instruction is executed when the third object is requested"
    yield 3
it = test_generator()
it.next()
it.next()
it.next()
it.next()
def first_fibonacci_generator(n):
    """Return a generator for the first ``n`` Fibonacci numbers"""
    # write code here
import types
assert(type(first_fibonacci_generator(3)) == types.GeneratorType)
assert(list(first_fibonacci_generator(0)) == [])
assert(list(first_fibonacci_generator(1)) == [0])
assert(list(first_fibonacci_generator(2)) == [0,1])
assert(list(first_fibonacci_generator(8)) == [0,1,1,2,3,5,8,13])
def powers2():
v = 1
while True:
yield v
v*=2
for v in powers2():
print v
if v > 1000000:
break
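Instead of an explicit `break`, the standard library can consume an infinite generator up to a condition; a sketch using `itertools.takewhile` (the `powers2` definition is repeated so the snippet runs on its own):

```python
from itertools import takewhile

def powers2():
    v = 1
    while True:
        yield v
        v *= 2

# takewhile pulls values from the infinite generator until the predicate fails
small = list(takewhile(lambda v: v <= 1000000, powers2()))
print(len(small))    # 20 powers of two are <= 1000000
print(small[-1])     # 524288 == 2**19
```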
def fibonacci_generator():
    """Return an infinite generator for Fibonacci numbers"""
    # write code here
it = fibonacci_generator()
it.next()
def fibonacci_finder(n):
    """Return the first Fibonacci number greater than or equal to n"""
    # write code here
assert(fibonacci_finder(10) == 13)
assert(fibonacci_finder(100) == 144)
assert(fibonacci_finder(1000) == 1597)
assert(fibonacci_finder(1000000) == 1346269)
binaires1 = ["0", "1"]
binaires2 = ["00", "01", "10", "11"]
def binary_word_generator0():
yield ""
def binary_word_generator1():
yield "0"
yield "1"
def binary_word_generator2():
for b in binary_word_generator1():
yield b + "0"
yield b + "1"
list(binary_word_generator2())
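For checking the recursive exercise below, the same words can be enumerated with the standard library (not to be used as the solution itself): `itertools.product` yields the tuples in lexicographic order:

```python
from itertools import product

def binary_words_reference(n):
    # reference enumeration, in the same lexicographic order as the exercise
    return [''.join(bits) for bits in product('01', repeat=n)]

print(binary_words_reference(2))  # ['00', '01', '10', '11']
```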
def binary_word_generator(n):
    """Return a generator on binary words of size n in lexicographic order

    Input:
        - n, the length of the words
    """
    # write code here
list(binary_word_generator(3))
# tests
import types
assert(type(binary_word_generator(0)) == types.GeneratorType)
assert(list(binary_word_generator(0)) == [''])
assert(list(binary_word_generator(1)) == ['0', '1'])
assert(list(binary_word_generator(2)) == ['00', '01', '10', '11'])
assert(list(binary_word_generator(3)) == ['000', '001', '010', '011', '100', '101', '110', '111'])
assert(len(list(binary_word_generator(4))) == 16)
assert(len(list(binary_word_generator(7))) == 128)
def binary_kword_generator(n, k):
    """Return a generator on binary words of size n such that each word
    contains exactly k occurrences of 1

    Input:
        - n, the size of the words
        - k, the number of 1s
    """
    # write code here
list(binary_kword_generator(4,2))
# tests
import types
assert(type(binary_kword_generator(0,0)) == types.GeneratorType)
assert(list(binary_kword_generator(0,0)) == [''])
assert(list(binary_kword_generator(0,1)) == [])
assert(list(binary_kword_generator(1,1)) == ['1'])
assert(list(binary_kword_generator(4,4)) == ['1111'])
assert(list(binary_kword_generator(4,0)) == ['0000'])
assert(set(binary_kword_generator(4,2)) == set(['0011', '0101', '1001', '0110', '1010', '1100']))
assert(len(list(binary_kword_generator(7,3))) == 35)
def dyck_prefix_generator(n, k):
    """Return a generator on binary words of size n such that each word
    contains exactly k occurrences of 1, and in any prefix the number of 1s
    is greater than or equal to the number of 0s

    Input:
        - n, the size of the words
        - k, the number of 1s
    """
    # write code here
list(dyck_prefix_generator(4,2))
assert(len(list(dyck_prefix_generator(0,0))) == 1)
assert(len(list(dyck_prefix_generator(0,1))) == 0)
assert(len(list(dyck_prefix_generator(1,0))) == 0)
assert(list(dyck_prefix_generator(1,1)) == ['1'])
assert(set(dyck_prefix_generator(3,2)) == set(["110","101"] ))
assert(len(set(dyck_prefix_generator(10,5))) == 42)
[len(set(dyck_prefix_generator(2*n, n))) for n in xrange(8)]
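The search suggested above leads to the Catalan numbers: the number of such words of length 2n with n ones is C_n = binom(2n, n) / (n + 1). A quick check of the sequence (requires Python 3.8+ for `math.comb`):

```python
from math import comb

# C_n = binom(2n, n) // (n + 1); integer division is exact here
catalan = [comb(2 * n, n) // (n + 1) for n in range(8)]
print(catalan)  # [1, 1, 2, 5, 14, 42, 132, 429]
```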
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that the accepted trials with recrossing do not account for how long the trial remained active. It also doesn't tell us whether the trial represented a new recrossing event, or was correlated with the previous recrossing event.
Step2: Now let's see how many recrossing events there are in each accepted trial. If there's one recrossing, then the trajectory must go $\alpha\to\beta\to\alpha\to\beta$ to be accepted. Two recrossings would mean $\alpha\to\beta\to\alpha\to\beta\to\alpha\to\beta$.
Step3: Comparing the fixed and flexible simulations
Step4: Identifying different mechanisms using custom ensembles
Step5: Next, we'll create ensembles for the "increasing" and "decreasing" transitions. These transitions mark a crossing of either the nml_plus or the nml_minus. These aren't necessarily $\alpha\to\beta$ transitions. However, any $\alpha\to\beta$ transition must contain at least one subtrajectory which satsifies one of these ensembles.
Step6: Finally, we'll write a little function that characterizes a set of trajectories according to these ensembles. It returns a dictionary mapping the ensemble (increasing or decreasing) to a list of trajectories that have a subtrajectory that satisfies it (at least one entry in ensemble.split(trajectory)). That dictionary also contains keys for 'multiple' matched ensembles and None if no ensemble was matched. Trajectories for either of these keys would need to be investigated further.
Step7: With that function defined, let's use it!
Step8: Comparing to the flexible length simulation
Step9: So the fixed-length sampling is somehow capturing both kinds of transitions (probably because they are not really that rare). Let's see what the path length distribution from only the decreasing transitions looks like.
Step10: Still a little off, although this might be due to bad sampling. Let's see how many of the decorrelated trajectories have this kind of transition.
Step11: So this is based off of 11 decorrelated trajectory transitions. That's not a lot of statistics.
Step12: Let's also check whether we go back and forth between the increasing transition and the decreasing transition, or whether there's just a single change from one type to the other.
Step13: So there are a lot of switches early in the simulation, and then it gets stuck in one state for much longer.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
%matplotlib inline
import openpathsampling as paths
import numpy as np
import matplotlib.pyplot as plt
from tqdm.auto import tqdm
import os
import openpathsampling.visualize as ops_vis
from IPython.display import SVG
%%time
flexible = paths.Storage("ad_tps.nc")
%%time
fixed = paths.Storage("ad_fixed_tps.nc")
flex_network = flexible.networks['tps_network']
# create the ensemble that identifies recrossings
alpha = fixed.volumes['C_7eq']
beta = fixed.volumes['alpha_R']
recrossing_ensemble = paths.SequentialEnsemble([
paths.LengthEnsemble(1) & paths.AllInXEnsemble(beta),
paths.OptionalEnsemble(paths.AllOutXEnsemble(alpha | beta)),
paths.LengthEnsemble(1) & paths.AllInXEnsemble(alpha)
])
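Conceptually, `recrossing_ensemble.split` scans a trajectory for subtrajectories that start with one frame in `beta`, optionally stay outside both states, and end with one frame in `alpha`. A plain-Python analogue on a list of per-frame state labels — a simplified sketch, not the OpenPathSampling API:

```python
def find_recrossings(labels):
    # labels: per-frame state, 'A' (alpha), 'B' (beta), '-' (neither)
    hits = []
    for i, state in enumerate(labels):
        if state != 'B':
            continue
        j = i + 1
        while j < len(labels) and labels[j] == '-':
            j += 1                      # optional stretch outside both states
        if j < len(labels) and labels[j] == 'A':
            hits.append((i, j))         # one beta -> alpha recrossing
    return hits

print(find_recrossings(list('AA--B---A--B')))  # [(4, 8)]
```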
%%time
# now we check each step to see if its trial has a recrossing
steps_with_recrossing = []
for step in tqdm(fixed.steps):
# trials is a list of samples: with shooting, only one in the list
recrossings = [] # default for initial empty move (no trials in step[0].change)
for trial in step.change.trials:
recrossings = recrossing_ensemble.split(trial.trajectory)
# recrossing contains a list with the recrossing trajectories
# (len(recrossing) == 0 if no recrossings)
if len(recrossings) > 0:
steps_with_recrossing += [step] # save for later analysis
accepted_recrossings = [step for step in steps_with_recrossing if step.change.accepted is True]
print("Trials with recrossings:", len(steps_with_recrossing))
print("Accepted trials with recrossings:", len(accepted_recrossings))
psi = fixed.cvs['psi']
trajectory = accepted_recrossings[0].active[0].trajectory
in_alpha_indices = [trajectory.index(s) for s in trajectory if alpha(s)]
in_alpha_psi = [psi(trajectory)[i] for i in in_alpha_indices]
in_beta_indices = [trajectory.index(s) for s in trajectory if beta(s)]
in_beta_psi = [psi(trajectory)[i] for i in in_beta_indices]
plt.plot(psi(trajectory), 'k-')
plt.plot(in_alpha_indices, in_alpha_psi, 'ro') # alpha in red
plt.plot(in_beta_indices, in_beta_psi, 'bo') # beta in blue
recrossings_per = []
for step in accepted_recrossings:
for test in step.change.trials:
recrossings_per.append(len(recrossing_ensemble.split(test.trajectory)))
print(recrossings_per)
# these numbers come from accepted trial steps, not all steps
print(sum(recrossings_per))
print(len(recrossings_per))
print(len([x for x in recrossings_per if x==2]))
%%time
# transition path length distribution
flex_ens = flex_network.sampling_ensembles[0]
fixed_transition_segments = sum([flex_ens.split(step.active[0].trajectory)
for step in tqdm(fixed.steps)],[])
fixed_transition_length = [len(traj) for traj in fixed_transition_segments]
flexible_transition_length = [len(s.active[0].trajectory) for s in flexible.steps]
print(len(fixed_transition_length))
bins = np.linspace(0, 400, 80);
plt.hist(flexible_transition_length, bins, alpha=0.5, density=True, label="flexible");
plt.hist(fixed_transition_length, bins, alpha=0.5, density=True, label="fixed");
plt.legend(loc='upper right');
# first, we fully subdivide the Ramachandran space
phi = fixed.cvs['phi']
deg = 180.0/np.pi
nml_plus = paths.PeriodicCVDefinedVolume(psi, -160/deg, -100/deg, -np.pi, np.pi)
nml_minus = paths.PeriodicCVDefinedVolume(psi, 0/deg, 100/deg, -np.pi, np.pi)
nml_alpha = (paths.PeriodicCVDefinedVolume(phi, 0/deg, 180/deg, -np.pi, np.pi) &
paths.PeriodicCVDefinedVolume(psi, 100/deg, 200/deg, -np.pi, np.pi))
nml_beta = (paths.PeriodicCVDefinedVolume(phi, 0/deg, 180/deg, -np.pi, np.pi) &
paths.PeriodicCVDefinedVolume(psi, -100/deg, 0/deg, -np.pi, np.pi))
#TODO: plot to display where these volumes are
increasing = paths.SequentialEnsemble([
paths.AllInXEnsemble(alpha | nml_alpha),
paths.AllInXEnsemble(nml_plus),
paths.AllInXEnsemble(beta | nml_beta)
])
decreasing = paths.SequentialEnsemble([
paths.AllInXEnsemble(alpha | nml_alpha),
paths.AllInXEnsemble(nml_minus),
paths.AllInXEnsemble(beta | nml_beta)
])
def categorize_transitions(ensembles, trajectories):
results = {ens : [] for ens in ensembles + ['multiple', None]}
for traj in trajectories:
matched_ens = None
for ens in ensembles:
if len(ens.split(traj)) > 0:
if matched_ens is not None:
matched_ens = 'multiple'
else:
matched_ens = ens
results[matched_ens].append(traj)
return results
categorized = categorize_transitions(ensembles=[increasing, decreasing],
trajectories=fixed_transition_segments)
print("increasing:", len(categorized[increasing]))
print("decreasing:", len(categorized[decreasing]))
print(" multiple:", len(categorized['multiple']))
print(" None:", len(categorized[None]))
flex_trajs = [step.active[0].trajectory for step in flexible.steps]
flex_categorized = categorize_transitions(ensembles=[increasing, decreasing],
trajectories=flex_trajs[::10])
print("increasing:", len(flex_categorized[increasing]))
print("decreasing:", len(flex_categorized[decreasing]))
print(" multiple:", len(flex_categorized['multiple']))
print(" None:", len(flex_categorized[None]))
plt.hist([len(traj) for traj in flex_categorized[decreasing]], bins, alpha=0.5, density=True);
plt.hist([len(traj) for traj in categorized[decreasing]], bins, alpha=0.5, density=True);
full_fixed_tree = ops_vis.PathTree(
fixed.steps,
ops_vis.ReplicaEvolution(replica=0)
)
full_history = full_fixed_tree.generator
# start with the decorrelated tragectories
fixed_decorrelated = full_history.decorrelated_trajectories
# find the A->B transitions from the decorrelated trajectories
decorrelated_transitions = sum([flex_ens.split(traj) for traj in fixed_decorrelated], [])
# find the A->B transition from these which are decreasing
decorrelated_decreasing = sum([decreasing.split(traj) for traj in decorrelated_transitions], [])
print(len(decorrelated_decreasing))
plt.hist([len(traj) for traj in categorized[increasing]], bins, density=True, alpha=0.5, color='g');
def find_switches(ensembles, trajectories):
switches = []
last_category = None
traj_num = 0
for traj in trajectories:
category = None
for ens in ensembles:
if len(ens.split(traj)) > 0:
if category is not None:
category = 'multiple'
else:
category = ens
if last_category != category:
switches.append((category, traj_num))
traj_num += 1
last_category = category
return switches
switches = find_switches([increasing, decreasing], fixed_transition_segments)
print([switch[1] for switch in switches], len(fixed_transition_segments))
every_50th_trajectory = [step.active[0].trajectory for step in fixed.steps[::50]]
# make a set to remove duplicates, if trajs aren't decorrelated
every_50th_traj_snapshots = list(set(sum(every_50th_trajectory, [])))
# sadly, it looks like that trick with set doesn't do any good here
temperatures = [snap.instantaneous_temperature for snap in tqdm(every_50th_traj_snapshots)]
plt.plot([T / T.unit for T in temperatures])
mean_T = np.mean(temperatures)
plt.plot([mean_T / mean_T.unit]*len(temperatures), 'r')
print("Mean temperature:", np.mean(temperatures).format("%.2f"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def change_base(x: int, base: int):
    # digits are written with str(), so this supports 2 <= base <= 10
    if x == 0:
        return "0"
    ret = ""
    while x > 0:
        ret = str(x % base) + ret
        x //= base
    return ret
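A round-trip check with the built-in parser `int(s, base)` confirms the conversion (digits are produced with `str`, so this holds for bases 2 through 10); the function is repeated here so the snippet runs on its own:

```python
def change_base(x: int, base: int) -> str:
    # repeated from above for self-containment
    if x == 0:
        return "0"
    ret = ""
    while x > 0:
        ret = str(x % base) + ret
        x //= base
    return ret

# int(s, base) inverts the conversion for every tested value
for base in (2, 3, 8):
    for x in (0, 1, 7, 8, 42, 255):
        assert int(change_base(x, base), base) == x

print(change_base(42, 2))  # 101010
```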
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The darks_interpolated data-cube consists of the interpolated darks that have been subtracted from the raw image cube for this observation. They are originally named background_dark but I find that confusing with detector_dark.
Step2: Defining dark1 and dark2. (These could be the 2nd and 3rd of a set of 3, with 2 taken before the light images, or just 1 and 1.)
Step3: some dark stats
Step4: dark histograms
Step5: As one can see, the first 2 darks are very similar and the last dark taken 47 minutes later has a quite different histogram. Let's see how it looks if we just push up the histogram by the mean value difference of the 2 darks
Step6: The remaining difference in the shape of the histogram makes me believe that a pure additive fix can never make one dark subtractable from the other.
Step7: What do interpolated darks do
Step8: How do the darks ratio with each other?
Step9: Dark 0 / Dark 2 (last one)
Step10: Showing again how it is important to subtract the right dark
Step11: Multiplicative comparison of darks
Step12: Animation a la Nick's analysis
|
<ASSISTANT_TASK:>
Python Code:
from iuvs import io
%autocall 1
files = !ls ~/data/iuvs/level1b/*.gz
files
l1b = io.L1BReader(files[1])
l1b.darks_interpolated.shape
dark0 = l1b.detector_dark[0]
dark1 = l1b.detector_dark[1]
dark2 = l1b.detector_dark[2]
io.image_stats(dark0)
io.image_stats(dark1)
io.image_stats(dark2)
def compare_darks(dark1, dark2):
fig, ax = subplots(nrows=2)
ax[0].imshow(dark1, vmin=0, vmax=1000,cmap='gray')
ax[1].imshow(dark2, vmin=0, vmax=1000,cmap='gray')
l1b.detector_raw.shape
l1b.detector_dark.shape
rcParams['figure.figsize']
compare_darks(dark1,dark2)
# _, axes = subplots(2)
for i, dark in enumerate([dark0, dark1, dark2]):
hist(dark.ravel(), 100, range=(0,5000), log=True, label='dark'+str(i),alpha=0.5)
legend()
delta_mean = abs(dark1.mean() - dark2.mean())
delta_mean
def myhist(data, **kwargs):
hist(data.ravel(), 100, range=(0,5000), log=True, alpha=0.5, **kwargs)
fig, axes = subplots(nrows=2)
axes = axes.ravel()
for i, dark in enumerate([dark1, dark2]):
axes[0].hist(dark.ravel(), 100, range=(0, 5000), log=True, label='dark'+str(i+1),
alpha=0.5)
axes[0].legend()
axes[0].set_title('Original dark1 and dark2 histograms')
for txt,dark in zip(['dark1+delta_mean','dark2'],[dark1+delta_mean,dark2]):
axes[1].hist(dark.ravel(), 100, range=(0, 5000), log=True, label=txt, alpha=0.5)
axes[1].legend()
axes[1].set_title('Shifted dark1 histogram by the difference of their means')
spatial = 30
fig, axes = subplots(ncols=2)
axes = axes.ravel()
def do_plot(ax):
ax.plot(dark2[spatial], '--', label='dark2')
ax.plot(dark1[spatial], '--', label='dark1')
for delta in range(180, 230, 10):
ax.plot(dark1[spatial]+delta, label=delta)
ax.legend(loc='best',ncol=2)
ax.set_ylim(0,1000)
do_plot(axes[0])
axes[0].set_xlim(0,25)
axes[0].set_title('Left 25 spectral bins')
do_plot(axes[1])
length = dark2.shape[1]
axes[1].set_xlim(length-25,length)
axes[1].set_title('Right 25 spectral bins')
# missing code for interpolation check.
raw0 = l1b.detector_raw[0]
spatial = l1b.detector_raw.shape[1]//2
plot(raw0[spatial] - dark1[spatial], 'g',
label='first light minus 2nd dark')
plot(raw0[spatial] - dark2[spatial], 'b',
label='first light minus last dark')
title("Show the importance of taking the right dark")
legend()
imshow(l1b.detector_raw[-1])
title("Last light image in cube")
fig, axes = subplots(nrows=3)
raw = l1b.detector_raw
axes[0].imshow(raw[0])
axes[1].imshow(raw[1])
axes[2].imshow(raw[2])
fig.suptitle("First 3 lights in cube")
spatial = 20
data = dark0[spatial]/dark1[spatial]
plot(data, label=spatial)
legend(loc='best')
title("one row, first dark / second dark. Mean:{:.2f}".format(data.mean()))
grid()
for spatial in range(20,60,10):
plot(dark0[spatial]/dark2[spatial], label=spatial)
legend(loc='best')
raw45 = raw[45]
fig, axes = subplots(nrows=2)
im = axes[0].imshow(dark1, vmax=600, vmin=0)
colorbar(im, ax=axes[0])
im = axes[1].imshow(dark2, vmax=600, vmin=0)
colorbar(im, ax=axes[1])
fig.suptitle("Comparing 2nd dark (before) and last dark (after set)")
spatial=30
raw45 = l1b.detector_raw[-1]
plot(raw45[spatial]-dark2[spatial], label='last light - dark2')
plot(raw45[spatial]-dark1[spatial], label='last light - dark1')
legend(loc='best')
title("Important to subtract the right dark")
spatial=30
plot(raw0[spatial] - dark2[spatial], label='raw0 - last dark')
plot(raw0[spatial] - dark1[spatial], label='raw0 - 2nd dark')
plot(raw0[spatial] - dark0[spatial], label='raw0 - 1st dark')
legend(loc='best')
myhist(dark2, label='dark2')
for a in linspace(1.4,1.6, 3):
myhist(dark1*a, label=str(a))
legend()
dettemp = l1b.DarkEngineering.T['DET_TEMP']
casetemp = l1b.DarkEngineering.T['CASE_TEMP']
print(dettemp[0]/dettemp[1])
print(casetemp[1]/dettemp[0])
for a in [1.5,1.52, 1.54]:
plot(a*dark1.mean(axis=0), label=str(a))
plot(dark2.mean(axis=0), label='dark2')
legend(loc='best')
fig, ax = plt.subplots()
# lines = []
# for i in range(0, 11):
# frac = 0.8 + i*0.03
# diff = rawa - frac*dark
# lines.append(plt.plot(diff[:, j] + i*1000))
diff = rawa - 0.8*dark[...,0]
line, = ax.plot(diff[:, 0])
# ax.set_ylim(-1000, 11000)
def animate(j):
# for i in range(0, 11):
# frac = 0.8 + i*0.03
# diff = rawa - frac*dark
line.set_ydata(diff[:, j] + 0*1000)
def init():
line.set_ydata(np.ma.array(np.arange(341), mask=True))
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(1, 61),
init_func=init, interval=25,
blit=True)
plt.show()
for j in range(61):
plt.clf()
for i in range(0,11):
frac = 0.8 + i*0.03
diff = rawa - frac*dark[...,0]
plt.plot(diff[:, j] + i*1000)
plt.ylim(-1000, 11000)
plt.waitforbuttonpress(0.1)
fig, axes = plt.subplots(nrows=3)
for ax,img in zip(axes, l1b.detector_dark):
im = ax.imshow(img,vmax=2000,cmap='hot')
ax.set_title("Mean: {:.1f}".format(img.mean()))
plt.colorbar(im, ax=ax)
ratio1 = dark0/dark1
ratio2 = dark0/dark2
ratio3 = dark1/dark2
fig, axes = plt.subplots(nrows=3)
for ax,img in zip(axes, [ratio1, ratio2, ratio3]):
im = ax.imshow(img, vmax=1.5, cmap='hot')
plt.colorbar(im, ax=ax)
ax.set_title("Mean: {:.2f}".format(img.mean()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Compute statistic
Step4: Plot
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.stats import permutation_cluster_test
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
channel = 'MEG 1332' # include only this channel in analysis
include = [channel]
picks = mne.pick_types(raw.info, meg=False, eog=True, include=include,
exclude='bads')
event_id = 1
reject = dict(grad=4000e-13, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition1 = epochs1.get_data() # as 3D matrix
event_id = 2
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition2 = epochs2.get_data() # as 3D matrix
condition1 = condition1[:, 0, :] # take only one channel to get a 2D array
condition2 = condition2[:, 0, :] # take only one channel to get a 2D array
threshold = 6.0
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_test([condition1, condition2], n_permutations=1000,
threshold=threshold, tail=1, n_jobs=1,
out_type='mask')
times = epochs1.times
plt.close('all')
plt.subplot(211)
plt.title('Channel : ' + channel)
plt.plot(times, condition1.mean(axis=0) - condition2.mean(axis=0),
label="ERF Contrast (Event 1 - Event 2)")
plt.ylabel("MEG (T / m)")
plt.legend()
plt.subplot(212)
for i_c, c in enumerate(clusters):
c = c[0]
if cluster_p_values[i_c] <= 0.05:
h = plt.axvspan(times[c.start], times[c.stop - 1],
color='r', alpha=0.3)
else:
plt.axvspan(times[c.start], times[c.stop - 1], color=(0.3, 0.3, 0.3),
alpha=0.3)
hf = plt.plot(times, T_obs, 'g')
plt.legend((h, ), ('cluster p-value < 0.05', ))
plt.xlabel("time (ms)")
plt.ylabel("f-values")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Context handlers
Step2: File i/o packages in stdlib
Step3: Implementation details like the "rt" vs "r" mode vary a bit between packages; check the documentation when unsure.
Step4: CSV
Step5: Note how the truth value and number needed a bit of tinkering. This is one of the downsides of the CSV format, there is no agreed upon way to mark what is a string and what is a number and what is a boolean value.
Step6: Pickle
Step7: The beautiful thing about pickle is that it will serialize complex objects and deserialize them the same way.
|
<ASSISTANT_TASK:>
Python Code:
filename = "../data/example_file.txt"
fp = open(filename, "w")
for string in ["Hello", "Hey", "moi"]:
fp.write(string + "\n")
fp.close()
fp = open(filename, "r")
for line in fp:
print(line.strip()) # the \n is contained in the line, calling strip removes whitespace at the end and beginning
fp.close()
with open(filename, "r") as file:
for line in file:
print(line.strip())
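A quick sketch of why the context-manager form is preferred: the file is closed even when the body raises. (The temp file here is only for illustration, so the sketch is self-contained.)

```python
import os
import tempfile

# Throwaway file just for this demonstration.
tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False)
tmp.write("Hello\n")
tmp.close()

try:
    with open(tmp.name, "r") as f:
        raise ValueError("simulated error inside the with block")
except ValueError:
    pass

# The context manager closed the file despite the exception.
print(f.closed)
os.remove(tmp.name)
```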
import gzip
# the library gzip offers an API like open(), see https://docs.python.org/3/library/gzip.html
zipped_file_name = "../data/zipped_file.gz"
with gzip.open(zipped_file_name, "wt") as zipped_file:
for line in ["This", "is", "an", "example", "."]:
zipped_file.write((line + "\n"))
## Go ahead, try to read the lines read from zipped_file_name and print them.
## It goes just like in the examples above, except with gzip.open instead of open
## as this is text, you'll need to open the file in mode "rt" and not just r
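One possible solution to the exercise above (a sketch; it rewrites the gzip file in a temp location so the cell is self-contained):

```python
import gzip
import os
import tempfile

# Mirror of the writing cell above, but in a temp path.
zipped_path = os.path.join(tempfile.gettempdir(), "zipped_file.gz")
with gzip.open(zipped_path, "wt") as zipped_file:
    for line in ["This", "is", "an", "example", "."]:
        zipped_file.write(line + "\n")

# The exercise: open in text mode ("rt") and print the lines.
lines_read = []
with gzip.open(zipped_path, "rt") as zipped_file:
    for line in zipped_file:
        lines_read.append(line.strip())
        print(line.strip())
```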
# we will use this object throughout the examples to illustrate different file format handling
pythons = [
    {"name": "Graham Chapman", "birthyear": 1941, "dead": True},
{"name": "Eric Idle", "birthyear": 1943, "dead": False},
{"name": "Terry Gilliam", "birthyear": 1940, "dead": False},
{"name": "Terry Jones", "birthyear": 1942, "dead": False},
{"name": "John Cleese", "birthyear": 1939, "dead": False},
{"name": "Michael Palin", "birthyear": 1939, "dead": False},
]
import csv
filename = "../data/example.csv"
with open(filename, "w") as file_:
writer = csv.DictWriter(file_, fieldnames=["name", "birthyear", "dead"])
writer.writeheader()
for performer in pythons:
writer.writerow(performer)
def print_performer_dict(performer):
import datetime
this_year = datetime.datetime.now().year
if performer["dead"].lower() == "true":
print("%s is dead" % performer["name"])
else:
print("%s turns %d this year" % (performer["name"],
this_year - int(performer["birthyear"])))
with open(filename, "r") as file_:
reader = csv.DictReader(file_)
for performer in reader:
print_performer_dict(performer)
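Since CSV stores everything as strings, the tinkering above can be centralized in per-column converters (a sketch; the `converters` mapping is our own convention, not part of the csv module):

```python
# Hypothetical per-column converters to recover Python types from CSV strings.
converters = {
    "name": str,
    "birthyear": int,
    "dead": lambda s: s.lower() == "true",
}

def typed_row(row):
    return {key: converters[key](value) for key, value in row.items()}

example = typed_row({"name": "Eric Idle", "birthyear": "1943", "dead": "False"})
print(example)
```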
import json
# we have two strategies, store the entire object as JSON or store each row as a separate JSON object,
# both exist in the wild world so both will be shown
# fortunately our dicts only contain very simple datums so there will be no issue
ex_1_file = "../data/example_json_1.json"
ex_2_file = "../data/example_json_2.json"
with open(ex_1_file, "w") as file_:
json.dump(pythons, file_)
with open(ex_2_file, "w") as file_:
for performer in pythons:
json.dump(performer, file_)
file_.write("\n")
#reading back
def print_performer_dict_2(performer):
import datetime
this_year = datetime.datetime.now().year
if performer["dead"]:
print("%s is dead" % performer["name"])
else:
print("%s turns %d this year" % (performer["name"],
this_year -performer["birthyear"]))
with open(ex_1_file, "r") as file_:
data = json.load(file_)
for performer in data:
print_performer_dict_2(performer)
print("####")
with open(ex_2_file, "r") as file_:
for line in file_:
performer = json.loads(line)
print_performer_dict_2(performer)
import pickle
pickled_pythons = pickle.dumps(pythons) #pickle also has dump and dumps like json
#we could write pickled_pythons to a file here if we wanted to, but that's not really the point of the exercise
unpickled_pythons = pickle.loads(pickled_pythons)
print(str(pythons) == str(unpickled_pythons))
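Pickle can also go straight to a file with `dump`/`load`, analogous to the json example — note the binary mode (a minimal sketch using a temp path):

```python
import os
import pickle
import tempfile

pythons_small = [{"name": "Eric Idle", "birthyear": 1943, "dead": False}]

pickle_path = os.path.join(tempfile.gettempdir(), "pythons.pkl")
with open(pickle_path, "wb") as file_:   # binary mode for pickle
    pickle.dump(pythons_small, file_)

with open(pickle_path, "rb") as file_:
    restored = pickle.load(file_)

print(restored == pythons_small)
```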
import io
my_output = io.StringIO()
writer = csv.DictWriter(my_output, fieldnames=["name", "birthyear", "dead"])
writer.writeheader()
writer.writerows(pythons)
file_contents = my_output.getvalue()
print("file contents would have been:\n")
print(file_contents)
print("---")
#let's construct another StringIO and use csv to read from a string and not a file
my_input = io.StringIO(file_contents)
reader = csv.DictReader(my_input)
for line in reader:
print_performer_dict(line)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Mapping bulk and SIP data
Step3: Pre-fraction dataset
|
<ASSISTANT_TASK:>
Python Code:
%load_ext rpy2.ipython
%%R
physeqDir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq_bulk_core = 'bulk-core'
physeq_SIP_core = 'SIP-core_unk'
%%R
library(dplyr)
library(tidyr)
library(ggplot2)
library(phyloseq)
%%R
F = file.path(physeqDir, physeq_SIP_core)
physeq.SIP = readRDS(F)
physeq.SIP.m = physeq.SIP %>% sample_data
physeq.SIP
%%R
F = file.path(physeqDir, physeq_bulk_core)
physeq.bulk = readRDS(F)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk
%%R
# parsing out to just 12C-Con gradients
physeq.bulk.f = prune_samples((physeq.bulk.m$Exp_type == 'microcosm_bulk') |
(physeq.bulk.m$Exp_type == 'SIP' &
physeq.bulk.m$Substrate == '12C-Con'),
                               physeq.bulk) %>%
filter_taxa(function(x) sum(x) > 0, TRUE)
physeq.bulk.f.m = physeq.bulk.f %>% sample_data %>% as.matrix %>% as.data.frame
physeq.bulk.f
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Step3: To shut the window showing the simulation, use env.close().
Step4: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Step5: Experience replay
Step6: Exploration - Exploitation
Step7: Populate the experience memory
Step8: Training
Step9: Visualizing training
Step10: Testing
|
<ASSISTANT_TASK:>
Python Code:
import gym
import tensorflow as tf
import numpy as np
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
env.reset()
rewards = []
for _ in range(1000):
    env.render()
    state, reward, done, info = env.step(env.action_space.sample()) # take a random action
    rewards.append(reward)
if done:
rewards = []
env.reset()
env.close()
print(rewards[-20:])
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
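The buffer relies on `deque(maxlen=...)` silently dropping the oldest experiences; a quick check of that behavior (a sketch, using `random.sample` in place of `np.random.choice`):

```python
import random
from collections import deque

buffer = deque(maxlen=3)
for experience in range(5):
    buffer.append(experience)

# Only the 3 most recent experiences survive.
print(list(buffer))

# Sampling without replacement, like Memory.sample does.
sample = random.sample(list(buffer), 2)
print(sample)
```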
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
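A quick check of `running_mean` on a toy sequence — with N=2 it averages adjacent pairs (the helper is repeated here so the sketch is self-contained):

```python
import numpy as np

def running_mean(x, N):
    # same helper as above
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / N

smoothed = running_mean([1, 2, 3, 4, 5], 2)
print(smoothed)
```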
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot the mean-squared error against λ.
Step2: For λ = 0 and λ = 1 only, plot the learning curve of mean-squared error against episode number.
Step3: plot from #2
|
<ASSISTANT_TASK:>
Python Code:
class Sarsa_Agent:
def __init__(self, environment, n0, mlambda):
self.n0 = float(n0)
self.env = environment
self.mlambda = mlambda
# N(s) is the number of times that state s has been visited
# N(s,a) is the number of times that action a has been selected from state s.
self.N = np.zeros((self.env.dealer_values_count,
self.env.player_values_count,
self.env.actions_count))
self.Q = np.zeros((self.env.dealer_values_count,
self.env.player_values_count,
self.env.actions_count))
self.E = np.zeros((self.env.dealer_values_count, self.env.player_values_count, self.env.actions_count))
# Initialise the value function to zero.
self.V = np.zeros((self.env.dealer_values_count, self.env.player_values_count))
self.count_wins = 0
self.iterations = 0
# get optimal action, with epsilon exploration (epsilon dependent on number of visits to the state)
# ε-greedy exploration strategy with εt = N0/(N0 + N(st)),
def train_get_action(self, state):
dealer_idx = state.dealer-1
player_idx = state.player-1
try:
n_visits = sum(self.N[dealer_idx, player_idx, :])
except:
n_visits = 0
# epsilon = N0/(N0 + N(st)
curr_epsilon = self.n0 / (self.n0 + n_visits)
# epsilon greedy policy
if random.random() < curr_epsilon:
r_action = Actions.hit if random.random()<0.5 else Actions.stick
# if (dealer_idx == 0 and player_idx == 0):
# print ("epsilon:%s, random:%s " % (curr_epsilon, r_action))
return r_action
else:
action = Actions.to_action(np.argmax(self.Q[dealer_idx, player_idx, :]))
# if (dealer_idx == 0 and player_idx == 0):
# print ("epsilon:%s Qvals:%s Q:%s" % (curr_epsilon, self.Q[dealer_idx, player_idx, :], action))
return action
def get_action(self, state):
action = Actions.to_action(np.argmax(self.Q[state.dealer_idx(), state.player_idx(), :]))
return action
def validate(self, iterations):
wins = 0;
# Loop episodes
for episode in xrange(iterations):
s = self.env.get_start_state()
while not s.term:
# execute action
a = self.get_action(s)
s, r = self.env.step(s, a)
wins = wins+1 if r==1 else wins
win_percentage = float(wins)/iterations*100
return win_percentage
def train(self, iterations):
# Loop episodes
for episode in xrange(iterations):
self.E = np.zeros((self.env.dealer_values_count, self.env.player_values_count, self.env.actions_count))
# get initial state for current episode
s = self.env.get_start_state()
a = self.train_get_action(s)
a_next = a
# Execute until game ends
while not s.term:
# update visits
self.N[s.dealer_idx(), s.player_idx(), Actions.as_int(a)] += 1
# execute action
s_next, r = self.env.step(s, a)
q = self.Q[s.dealer_idx(), s.player_idx(), Actions.as_int(a)]
if not s_next.term:
# choose next action with epsilon greedy policy
a_next = self.train_get_action(s_next)
next_q = self.Q[s_next.dealer_idx(), s_next.player_idx(), Actions.as_int(a_next)]
delta = r + next_q - q
else:
delta = r - q
# alpha = 1.0 / (self.N[s.dealer_idx(), s.player_idx(), Actions.as_int(a)])
# update = alpha * delta
# self.Q[s.dealer_idx(), s.player_idx(), Actions.as_int(a)] += update
self.E[s.dealer_idx(), s.player_idx(), Actions.as_int(a)] += 1
alpha = 1.0 / (self.N[s.dealer_idx(), s.player_idx(), Actions.as_int(a)])
update = alpha * delta * self.E
self.Q += update
self.E *= self.mlambda
# reassign s and a
s = s_next
a = a_next
#if episode%10000==0: print "Episode: %d, Reward: %d" %(episode, my_state.rew)
self.count_wins = self.count_wins+1 if r==1 else self.count_wins
self.iterations += iterations
# print float(self.count_wins)/self.iterations*100
# Derive value function
for d in xrange(self.env.dealer_values_count):
for p in xrange(self.env.player_values_count):
self.V[d,p] = max(self.Q[d, p, :])
def plot_frame(self, ax):
def get_stat_val(x, y):
return self.V[x, y]
X = np.arange(0, self.env.dealer_values_count, 1)
Y = np.arange(0, self.env.player_values_count, 1)
X, Y = np.meshgrid(X, Y)
Z = get_stat_val(X, Y)
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False)
return surf
N0 = 100
agent = Sarsa_Agent(Environment(), N0, 0.9)
for i in xrange (10):
agent.train(50000)
mc_agent = MC_Agent(Environment(), 100)
mc_agent.train(1000000)
N0 = 100
lambdas = [0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]
agent_list = []
sme_list = []
n_elements = mc_agent.Q.shape[0]*mc_agent.Q.shape[1]*2
for l in lambdas:
agent = Sarsa_Agent(Environment(), N0, l)
    agent_list.append(agent)
agent.train(1000)
sme = np.sum(np.square(agent.Q-mc_agent.Q))/float(n_elements)
sme_list.append(sme)
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
%matplotlib inline
fig = plt.figure("N100")
surf = plt.plot(lambdas[1:10], sme_list[1:10])
plt.show()
N0 = 100
l = 0.0
learning_rate = []
learning_rate_i = []
n_elements = len(mc_agent.Q)
agent = Sarsa_Agent(Environment(), N0, l)
for i in xrange(1000):
learning_rate_i.append(i)
agent.train(1)
sme = np.sum(np.square(agent.Q-mc_agent.Q))/float(1000)
learning_rate.append(sme)
fig = plt.figure("0.0")
surf = plt.plot(learning_rate_i, learning_rate)
plt.show()
N0 = 100
l = 1.0
learning_rate = []
learning_rate_i = []
agent = Sarsa_Agent(Environment(), N0, l)
for i in xrange(1000):
learning_rate_i.append(i)
agent.train(1)
sme = np.sum(np.square(agent.Q-mc_agent.Q))/float(1000)
learning_rate.append(sme)
fig = plt.figure("1.0")
surf = plt.plot(learning_rate_i, learning_rate)
plt.show()
def animate(frame):
i = agent.iterations
step_size = i
step_size = max(1, step_size)
step_size = min(step_size, 2 ** 16)
agent.train(step_size)
ax.clear()
surf = agent.plot_frame(ax)
    plt.title('Sarsa score:%s frame:%s step_size:%s ' % (float(agent.count_wins)/agent.iterations*100, frame, step_size) )
# plt.draw()
fig.canvas.draw()
print "done ", frame, step_size, i
return surf
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
%matplotlib inline
N0 = 100
mlambda = 0.2
agent = Sarsa_Agent(Environment(), N0, mlambda)
fig = plt.figure("N100")
ax = fig.add_subplot(111, projection='3d')
# ani = animation.FuncAnimation(fig, animate, 32, repeat=False)
ani = animation.FuncAnimation(fig, animate, 500, repeat=False)
# note: requires gif writer; swap with plt.show()
ani.save('Sarsa_Agent.gif', writer='imagemagick', fps=3)
# plt.show()
from IPython.display import Image
Image(url="Sarsa_Agent.gif")
# for i in xrange (10):
# agent.train(50000)
agent.validate(50000)
# 100, .75, 520 itterations = 52.8
# 1000, .75, 520 itterations =
stick_v = np.zeros((agent.env.dealer_values_count, agent.env.player_values_count))
hit_v = np.zeros((agent.env.dealer_values_count, agent.env.player_values_count))
actions = np.zeros((agent.env.dealer_values_count, agent.env.player_values_count))
values = np.zeros((agent.env.dealer_values_count, agent.env.player_values_count))
for d in xrange(agent.env.dealer_values_count):
for p in xrange(agent.env.player_values_count):
action = Actions.to_action(np.argmax(agent.Q[d, p, :]))
value = agent.V[d, p]
values[d,p] = value
if (action == Actions.stick):
stick_v[d,p] = value
hit_v[d,p] = 0
actions[d,p] = -1
else:
hit_v[d,p] = value
stick_v[d,p] = 0
actions[d,p] = 1
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
%matplotlib inline
# fig = plt.figure("N100")
# ax = fig.add_subplot()
# bx = fig.add_subplot()
fig, ax = plt.subplots()
fig2, bx = plt.subplots()
fig3, cx = plt.subplots()
fig4, dx = plt.subplots()
heatmap3 = ax.pcolor(actions, cmap=plt.cm.hot, alpha=0.8)
heatmap = bx.pcolor(stick_v, cmap=plt.cm.afmhot, alpha=0.8)
heatmap2 = cx.pcolor(hit_v, cmap=plt.cm.afmhot, alpha=0.8)
heatmap4 = dx.pcolor(values, cmap=plt.cm.afmhot, alpha=0.8)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll use k-means to rediscover these clusters in unsupervised learning
|
<ASSISTANT_TASK:>
Python Code:
from numpy import random, array
#Create fake income/age clusters for N people in k clusters
def createClusteredData(N, k):
random.seed(10)
pointsPerCluster = float(N)/k
X = []
for i in range (k):
incomeCentroid = random.uniform(20000.0, 200000.0)
ageCentroid = random.uniform(20.0, 70.0)
for j in range(int(pointsPerCluster)):
X.append([random.normal(incomeCentroid, 10000.0), random.normal(ageCentroid, 2.0)])
X = array(X)
return X
%matplotlib inline
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from numpy import random, float
data = createClusteredData(100, 5)
model = KMeans(n_clusters=5)
# Note I'm scaling the data to normalize it! Important for good results.
model = model.fit(scale(data))
# We can look at the clusters each data point was assigned to
print(model.labels_)
# And we'll visualize it:
plt.figure(figsize=(8, 6))
plt.scatter(data[:,0], data[:,1], c=model.labels_.astype(float))
plt.show()
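To judge whether the chosen k was reasonable, a common heuristic is to compare the model's inertia (within-cluster sum of squares) across several k values — the "elbow" method (a sketch on small synthetic blobs; `inertia_` is sklearn's attribute for this):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import scale

# Three well-separated synthetic blobs.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 2)) for c in (0, 5, 10)])

inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(scale(X)).inertia_
            for k in (1, 2, 3, 4)}
print(inertias)  # inertia drops sharply until k reaches the true cluster count
```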
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Rotations
Step5: PCA
Step8: Fast Fourier Transformation
Step9: Save python object with pickle
Step10: Progress Bar
Step11: Check separations by histogram and scatter plot
Step13: Plot Cumulative Lift
Step15: GBM skitlearn
Step17: Xgboost
Step18: LightGBM
Step21: Control plots
Step25: Tuning parameters of a model
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import precision_recall_curve
df = pd.read_csv("iris.csv")
def rotMat3D(a,r):
    """Return the matrix that rotates vector a onto vector r. NumPy arrays are required."""
a = a/np.linalg.norm(a)
r = r/np.linalg.norm(r)
I = np.eye(3)
v = np.cross(a,r)
c = np.inner(a,r)
v_x = np.array([[0,-v[2],v[1]],[v[2],0,-v[0]],[-v[1],v[0],0]])
return I + v_x + np.matmul(v_x,v_x)/(1+c)
# example usage
z_old = np.array([0, 0, 1])
z = np.array([1, 1, 1])
R = rotMat3D(z, z_old)
print(z, R.dot(z))
print(z_old, R.dot(z_old))
print(np.linalg.norm(z), np.linalg.norm(R.dot(z)))
def createR2D(vector):
    """Return the matrix that rotates the vector onto [0, 1]; requires a numpy array."""
m = np.linalg.norm(vector)
c, s = vector[1]/m , vector[0]/m
R2 = np.array([c, -s, s, c]).reshape(2,2)
return R2
# example usage
y_old = np.array([3,4])
R2 = createR2D(y_old)
print(y_old, R2.dot(y_old))
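As a sanity check, a rotation matrix should preserve the norm and satisfy R·Rᵀ = I (a sketch repeating the 2D helper so it is self-contained):

```python
import numpy as np

def createR2D(vector):
    # same helper as above: matrix that rotates `vector` onto [0, 1]
    m = np.linalg.norm(vector)
    c, s = vector[1] / m, vector[0] / m
    return np.array([c, -s, s, c]).reshape(2, 2)

v = np.array([3, 4])
R2 = createR2D(v)
rotated = R2.dot(v)

print(np.allclose(np.linalg.norm(rotated), np.linalg.norm(v)))  # norm preserved
print(np.allclose(R2.dot(R2.T), np.eye(2)))                     # orthogonal
print(np.allclose(rotated, [0, 5]))                             # lands on the [0, 1] axis
```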
from sklearn import decomposition
def pca_decomposition(df):
    """Perform sklearn PCA. The returned components are already ordered by the explained variance."""
pca = decomposition.PCA()
pca.fit(df)
return pca
def pca_stats(pca):
print("variance explained:\n", pca.explained_variance_ratio_)
print("pca components:\n", pca.components_)
def plot_classcolor(df, x='y', y='x', hue=None):
sns.lmplot(x, y, data=df, hue=hue, fit_reg=False)
    plt.title("({} vs {})".format(y, x))
plt.show()
def add_pca_to_df(df, allvars, pca):
df[["pca_" + str(i) for i, j in enumerate(pca.components_)
]] = pd.DataFrame(pca.fit_transform(df[allvars]))
pca = pca_decomposition( df[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']] )
pca_stats(pca)
add_pca_to_df(df, ['sepal_length', 'sepal_width', 'petal_length', 'petal_width'], pca)
plot_classcolor(df, 'pca_0', 'pca_1', 'species_id')
from scipy.fftpack import fft, rfft, irfft, fftfreq
def rfourier_transformation(df, var, pass_high=-1, pass_low=-1, verbose=True, plot=True):
    """Return the signal after the low and high filters are applied.
    Use verbose and plot to print stats and plot the signal before and after filtering."""
    low = pass_high
    high = pass_low
    if (high < low) and (high > 0):
        print("Cannot be pass_low < pass_high!!")
        return -1
    time = pd.Series(df.index.values[1:10] -
                     df.index.values[:10 - 1])  # using the first 10 data
    dt = time.describe()['50%']
    if (verbose):
        print(
            """sampling time: {0} s
sampling frequency: {1} hz
max freq in rfft: {2} hz""".format(dt, 1 / dt, 1 / (dt * 2)))
    signal = df[var]
    freq = fftfreq(signal.size, d=dt)
    f_signal = rfft(signal)
    m = {}
    if (low > 0):
        f_signal_lowcut = f_signal.copy()
        f_signal_lowcut[(freq < low)] = 0
        cutted_signal_low = irfft(f_signal_lowcut)
        m['low'] = 1
    if (high > 0):
        f_signal_highcut = f_signal.copy()
        f_signal_highcut[(freq > high)] = 0
        cutted_signal_high = irfft(f_signal_highcut)
        m['high'] = 1
    if (high > 0) & (low > 0):
        f_signal_bwcut = f_signal.copy()
        f_signal_bwcut[(freq < low) | (freq > high)] = 0
        cutted_signal_bw = irfft(f_signal_bwcut)
        m['bw'] = 1
        m['low'] = 2
        m['high'] = 3
    n = len(freq)
    if (plot):
        f, axarr = plt.subplots(len(m) + 1, 1, sharex=True, figsize=(18, 15))
        f.canvas.set_window_title(var)
        # time plot
        axarr[0].plot(signal)
        axarr[0].set_title('Signal')
        if 'bw' in m:
            axarr[m['bw']].plot(df.index, cutted_signal_bw)
            axarr[m['bw']].set_title('Signal after low-high cut')
        if 'low' in m:
            axarr[m['low']].plot(df.index, cutted_signal_low)
            axarr[m['low']].set_title('Signal after high filter (low frequencies rejected)')
        if 'high' in m:
            axarr[m['high']].plot(df.index, cutted_signal_high)
            axarr[m['high']].set_title('Signal after low filter (high frequencies rejected)')
        plt.show()
        # spectrum
        f = plt.figure(figsize=(18, 8))
        plt.plot(freq[0:n // 2], f_signal[:n // 2])
        f.suptitle('Frequency spectrum')
        if 'low' in m:
            plt.axvline(x=low, ymin=0., ymax=1, linewidth=2, color='red')
        if 'high' in m:
            plt.axvline(x=high, ymin=0., ymax=1, linewidth=2, color='red')
        plt.show()
    if 'bw' in m:
        return cutted_signal_bw
    elif 'low' in m:
        return cutted_signal_low
    elif 'high' in m:
        return cutted_signal_high
    else:
        return signal
acc = pd.read_csv('accelerations.csv')
signal = rfourier_transformation(acc, 'x', pass_high=0.1, pass_low=0.5, verbose=True, plot=True)
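# As a library-free, hedged alternative to the frequency-domain filter above,
# a centered moving average also acts as a crude low-pass filter. The signal
# and window size below are illustrative assumptions, not from the original data.
import math

def moving_average(signal, window):
    # Each point becomes the mean of a centered window (truncated at the edges).
    n = len(signal)
    half = window // 2
    out = []
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A slow 0.5 hz oscillation plus fast 40 hz jitter, sampled at 100 hz;
# averaging over 25 samples keeps the slow component and damps the jitter.
tt = [i * 0.01 for i in range(1000)]
raw = [math.sin(2 * math.pi * 0.5 * x) + 0.3 * math.sin(2 * math.pi * 40 * x) for x in tt]
smooth = moving_average(raw, 25)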
# save in pickle with gzip compression
import pickle
import gzip
def save(obj, filename, protocol=0):
    file = gzip.GzipFile(filename, 'wb')
    file.write(pickle.dumps(obj, protocol))
    file.close()
def load(filename):
    file = gzip.GzipFile(filename, 'rb')
    buffer = b""  # bytes, not str: GzipFile.read returns bytes in Python 3
    while True:
        data = file.read()
        if data == b"":
            break
        buffer += data
    obj = pickle.loads(buffer)
    file.close()
    return obj
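# A hedged modern variant of the helpers above: gzip.open context managers
# avoid the manual read loop entirely. The object and file path are illustrative.
import gzip
import os
import pickle
import tempfile

def save_gz(obj, filename, protocol=pickle.HIGHEST_PROTOCOL):
    # Serialize obj straight through a gzip stream.
    with gzip.open(filename, 'wb') as fh:
        pickle.dump(obj, fh, protocol)

def load_gz(filename):
    # Read the gzip stream back and deserialize.
    with gzip.open(filename, 'rb') as fh:
        return pickle.load(fh)

# Round-trip a small object through a temporary file.
obj = {'weights': [0.1, 0.2, 0.7], 'label': 'model-v1'}
path = os.path.join(tempfile.mkdtemp(), 'obj.pkl.gz')
save_gz(obj, path)
restored = load_gz(path)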
# Simple bar, the one to be used in a general python code
import tqdm
import time
for i in tqdm.tqdm(range(0, 1000)):
    pass
# Bar to be used in a jupyter notebook
for i in tqdm.tqdm_notebook(range(0, 1000)):
    pass
# custom update bar
tot = 4000
bar = tqdm.tqdm_notebook(desc='Status ', total=tot, mininterval=0.5, miniters=5, unit='cm', unit_scale=True)
# with the file options you can show the progress bar into a file
# mininterval: time in seconds to see an update on the progressbar. tqdm always gets updated in the background, but it will diplay only every mininterval.
# miniters: Tweak this and `mininterval` to get very efficient loops, if 0 will only use mininterval
# unit_scale: use international scale for the units (k, M, m, etc...)
# bar_format: specify the bar format, default is '{l_bar}{bar}{r_bar}'. It can impact the performance if you ask for complicate bar format
# unit_divisor: [default: 1000], ignored unless `unit_scale` is True
# ncols: The width of the entire output message. If specified, dynamically resizes the progressbar to stay within this bound.
for l in range(0, tot):
    if ((l - 1) % 10) == 0:
        bar.update(10)
    if l % 1000 == 0:
        bar.write('to print something without duplicate the progress bar (if you are using tqdm.tqdm instead of tqdm.tqdm_notebook)')
        print('or use the simple print if you are using tqdm.tqdm_notebook')
    time.sleep(0.001)
# Some text from web page to don't slow your code and keep the progressbar with the right printing frequency
# mininterval is more intuitive to configure than miniters.
# clever adjustment system dynamic_miniters will automatically adjust miniters to the amount of iterations that fit into time mininterval.
# Essentially, tqdm will check if it’s time to print without actually checking time. This behavior can be still be bypassed by manually setting miniters.
# However, consider a case with a combination of fast and slow iterations. After a few fast iterations, dynamic_miniters will set miniters to a large number.
# When interation rate subsequently slows, miniters will remain large and thus reduce display update frequency. To address this:
# maxinterval defines the maximum time between display refreshes. A concurrent monitoring thread checks for overdue updates and forces one where necessary.
# you can use tqdm as bash command too (e.g. for compression/decompression of a file, cut, sed, awk operations etc...)
!seq 9999999 | tqdm --unit_scale | wc -l
# use trange instead of range, it's faster with progressbar
for i in tqdm.trange(100):
    pass
# use tqdm.tnrange instead of trange in jupyter notebook
for i in tqdm.tnrange(100):
    pass
# change the prefix and postfix of the bar during executions
from random import random, randint
t = tqdm.trange(100)
for i in t:
    # Description will be displayed on the left
    t.set_description('GEN %i' % i)
    # Postfix will be displayed on the right, and will format automatically based on argument's datatype
    t.set_postfix(loss=random(), gen=randint(1, 999), str='h', lst=[1, 2])
    time.sleep(0.1)
# nested progress bar
for i in tqdm.tnrange(3, desc='first progressbar'):
    for j in tqdm.tnrange(20, desc='\tsecond progressbar', leave=True):
        time.sleep(0.05)
# with this extension you can use tqdm_notebook().pandas(...) instead of tqdm.pandas(...)
from tqdm import tqdm_notebook
!jupyter nbextension enable --py --sys-prefix widgetsnbextension
# pandas apply & groupby operations with progressbar (tqdm state that it will not noticeably slow pandas down)
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0, int(1e8), (100, 3)))
# Create and register a new `tqdm` instance with `pandas`
# (can use tqdm_gui, optional kwargs, etc.)
print('set tqdm_notebook for pandas, show the bar')
tqdm_notebook().pandas()
# Now you can use `progress_apply` instead of `apply`
print('example usage of progressbar in a groupby pandas statement')
df_g = df.groupby(0).progress_apply(lambda x: time.sleep(0.01))
print('example usage of progressbar in an apply pandas statement')
df_a = df.progress_apply(lambda x: time.sleep(0.01))
def plot_classcolor(df, x='y', y='x', hue=None):
    sns.lmplot(x=x, y=y, data=df, hue=hue, fit_reg=False)
    plt.title("({} vs {})".format(y, x))
    plt.show()
plot_classcolor(df, 'sepal_length', 'sepal_width', hue='species')
def plot_histo_per_class(df, var, target):
    t_list = df[target].unique()
    for t in t_list:
        sns.distplot(
            df[df[target] == t][var], kde=False, norm_hist=True, label=str(t))
    plt.legend()
    plt.show()
plot_histo_per_class(df, 'sepal_length', "species_id")
def plotLift(df, features, target, ascending=False, multiclass_level=None):
    """Plot the Lift function for all the features.
    Ascending can be a list of the same length as features or a single boolean value.
    For the multiclass case you can give the value of a class and the lift is calculated
    considering the selected class vs all the others."""
    if multiclass_level is not None:
        # Binarize with a mask first, so the case multiclass_level == 0
        # does not overwrite rows that were just set to 1.
        df = df[features + [target]].copy()
        mask = df[target] == multiclass_level
        df.loc[mask, target] = 1
        df.loc[~mask, target] = 0
    npoints = 100
    n = len(df)
    st = n / npoints
    df_shuffled = df.sample(frac=1)
    flat = np.array([[(i * st) / n, df_shuffled[0:int(i * st)][target].sum()]
                     for i in range(1, npoints + 1)])
    flat = flat.transpose()
    to_leg = []
    if not isinstance(features, list):
        features = [features]
    if not isinstance(ascending, list):
        ascending = [ascending for i in features]
    for f, asc in zip(features, ascending):
        a = df[[f, target]].sort_values(f, ascending=asc)
        b = np.array([[(i * st) / n, a[0:int(i * st)][target].sum()]
                      for i in range(1, npoints + 1)])
        b = b.transpose()
        to_leg.append(plt.plot(b[0], b[1], label=f)[0])
    to_leg.append(plt.plot(flat[0], flat[1], label="no_gain")[0])
    plt.legend(handles=to_leg, loc=4)
    plt.xlabel('fraction of data', fontsize=18)
    plt.ylabel(target + ' (cumulative sum)', fontsize=16)
    plt.show()
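# The core of the lift computation above, stripped of pandas and plotting:
# rank by a score, accumulate the target, and read off the cumulative gain.
# The toy scores and targets below are illustrative assumptions.
def cumulative_gain(scores, targets, fractions):
    # Sort targets by descending score, then report the cumulative target
    # sum at each requested fraction of the data.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranked = [targets[i] for i in order]
    n = len(ranked)
    return [sum(ranked[:int(f * n)]) for f in fractions]

# A perfectly separating score collects all positives in the first half.
toy_scores = [0.9, 0.8, 0.7, 0.2, 0.1, 0.05]
toy_targets = [1, 1, 1, 0, 0, 0]
gains = cumulative_gain(toy_scores, toy_targets, [0.5, 1.0])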
# Lift for regression
titanic = sns.load_dataset("titanic")
plotLift(titanic, ['sibsp', 'survived', 'class'], 'fare', ascending=[False,False, True])
# Lift plot example for multiclass
plotLift(
df, ['sepal_length', 'sepal_width', 'petal_length'],
'species_id',
ascending=[False, True, False],
multiclass_level=3)
def plot_var_imp_skitlearn(features, clf_fit):
    """Plot the variable importances for a fitted sklearn model."""
    my_ff = np.array(features)
    importances = clf_fit.feature_importances_
    indices = np.argsort(importances)
    pos = np.arange(len(my_ff[indices])) + .5
    plt.figure(figsize=(20, 0.75 * len(my_ff[indices])))
    plt.barh(pos, importances[indices], align='center')
    plt.yticks(pos, my_ff[indices], size=25)
    plt.xlabel('rank')
    plt.title('Feature importances', size=25)
    plt.grid(True)
    plt.show()
    importance_dict = dict(zip(my_ff[indices], importances[indices]))
    return importance_dict
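# The ranking behind the bar chart above, without plotting: sort the
# {feature: importance} mapping that plot_var_imp_skitlearn returns.
# The importance values below are made up for illustration.
def rank_importances(importance_dict, ntop=None):
    # Highest importance first; optionally keep only the top ntop entries.
    ranked = sorted(importance_dict.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:ntop] if ntop else ranked

toy_imp = {'petal_length': 0.45, 'sepal_width': 0.05,
           'petal_width': 0.40, 'sepal_length': 0.10}
top2 = rank_importances(toy_imp, ntop=2)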
import xgboost
import operator  # needed by operator.itemgetter below
#### Correct version (used at work)
def plot_var_imp_xgboost(model, mode='gain', ntop=-1):
    """Plot the variable importances for an xgboost model, where mode = ['weight', 'gain', 'cover'].
    'weight' - the number of times a feature is used to split the data across all trees.
    'gain' - the average gain of the feature when it is used in trees.
    'cover' - the average coverage of the feature when it is used in trees."""
    importance = model.get_score(importance_type=mode)
    importance = sorted(
        importance.items(), key=operator.itemgetter(1), reverse=True)
    if ntop == -1:
        ntop = len(importance)
    importance = importance[0:ntop]
    my_ff = np.array([i[0] for i in importance])
    imp = np.array([i[1] for i in importance])
    indices = np.argsort(imp)
    pos = np.arange(len(my_ff[indices])) + .5
    plt.figure(figsize=(20, 0.75 * len(my_ff[indices])))
    plt.barh(pos, imp[indices], align='center')
    plt.yticks(pos, my_ff[indices], size=25)
    plt.xlabel('rank')
    plt.title('Feature importances (' + mode + ')', size=25)
    plt.grid(True)
    plt.show()
    return
import lightgbm as lgb
### Corrected version (used at work)
def plot_ROC_PrecisionRecall(y_test, y_pred):
    """Plot the ROC curve and the Precision-Recall curve.
    Numpy arrays are required."""
    fpr_clf, tpr_clf, _ = roc_curve(y_test, y_pred)
    precision, recall, thresholds = precision_recall_curve(y_test, y_pred)
    f1 = np.array([2 * p * r / (p + r) for p, r in zip(precision, recall)])
    f1[np.isnan(f1)] = 0
    t_best_f1 = thresholds[np.argmax(f1)]
    roc_auc = auc(fpr_clf, tpr_clf)
    plt.figure(figsize=(25, 25))
    # plot_ROC
    plt.subplot(221)
    plt.plot(
        fpr_clf,
        tpr_clf,
        color='r',
        lw=2,
        label='ROC curve (area = %0.2f)' % roc_auc)
    plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='-')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver operating characteristic')
    plt.legend(loc="lower right")
    # plot_PrecisionRecall
    plt.subplot(222)
    plt.plot(
        recall, precision, color='r', lw=2, label='Precision-Recall curve')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('Recall')
    plt.ylabel('Precision')
    plt.title('Precision-Recall curve')
    plt.legend(loc="lower right")
    plt.show()
    return {"roc_auc": roc_auc, "t_best_f1": t_best_f1}
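# A self-contained sanity check for the ROC area: AUC equals the probability
# that a randomly chosen positive outranks a randomly chosen negative (ties
# count half), so it can be computed directly without sklearn. The toy
# labels and scores below are illustrative.
def rank_auc(y_true, y_score):
    # Pairwise rank statistic; O(n_pos * n_neg), fine for small checks.
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))

toy_y = [1, 1, 0, 0]
perfect = rank_auc(toy_y, [0.9, 0.8, 0.3, 0.1])    # separable -> 1.0
coin_flip = rank_auc(toy_y, [0.5, 0.5, 0.5, 0.5])  # all ties -> 0.5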
def plot_ROC_PR_test_train(y_train, y_test, y_test_pred, y_train_pred):
    """Plot ROC and Precision-Recall curves for test and train.
    Return the auc for test and train."""
    roc_auc_test = plot_ROC_PrecisionRecall(y_test, y_test_pred)
    roc_auc_train = plot_ROC_PrecisionRecall(y_train, y_train_pred)
    return roc_auc_test, roc_auc_train
### Bayesian Optimization
# https://github.com/fmfn/BayesianOptimization
from bayes_opt import BayesianOptimization
def xgb_evaluate_gen(xg_train, xg_test, watchlist, num_rounds):
    """Create the function to be optimized (example for xgboost)."""
    params = {'eta': 0.1, 'objective': 'binary:logistic', 'silent': 1, 'eval_metric': 'auc'}
    def xgb_evaluate(min_child_weight, colsample_bytree, max_depth, subsample, gamma, alpha):
        """Return the value to be maximized by the Bayesian Optimization,
        where the inputs are the parameters to be optimized and the output is the
        evaluation metric on the test set."""
        params['min_child_weight'] = int(round(min_child_weight))
        params['colsample_bytree'] = max(min(colsample_bytree, 1), 0)
        params['max_depth'] = int(round(max_depth))
        params['subsample'] = max(min(subsample, 1), 0)
        params['gamma'] = max(gamma, 0)
        params['alpha'] = max(alpha, 0)
        # cv_result = xgb.cv(params, xg_train, num_boost_round=num_rounds, nfold=5,
        #                    seed=random_state, callbacks=[xgb.callback.early_stop(25)])
        model_temp = xgb.train(params, dtrain=xg_train, num_boost_round=num_rounds,
                               evals=watchlist, early_stopping_rounds=15, verbose_eval=False)
        # return -cv_result['test-merror-mean'].values[-1]
        return float(str(model_temp.eval(xg_test)).split(":")[1][0:-1])
    return xgb_evaluate
def go_with_BayesianOptimization(xg_train, xg_test, watchlist, num_rounds=1,
                                 num_iter=10, init_points=10, acq='ucb'):
    """Run the Bayesian Optimization for xgboost. acq = 'ucb', 'ei', 'poi'."""
    xgb_func = xgb_evaluate_gen(xg_train, xg_test, watchlist, num_rounds)
    xgbBO = BayesianOptimization(xgb_func, {'min_child_weight': (1, 50),
                                            'colsample_bytree': (0.5, 1),
                                            'max_depth': (5, 15),
                                            'subsample': (0.5, 1),
                                            'gamma': (0, 2),
                                            'alpha': (0, 2),
                                            })
    xgbBO.maximize(init_points=init_points, n_iter=num_iter, acq=acq)  # poi, ei, ucb
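# A library-free stand-in for BayesianOptimization.maximize: uniform random
# search over the same kind of bounded parameter dict. Random search has no
# surrogate model, so it is only a baseline, not a replacement. The toy
# objective (maximum at gamma=1, alpha=0) is an illustrative assumption.
import random

def random_search(objective, bounds, n_iter=200, seed=0):
    # Sample each parameter uniformly within its bounds and keep the best.
    rng = random.Random(seed)
    best_params, best_val = None, float('-inf')
    for _ in range(n_iter):
        candidate = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        val = objective(**candidate)
        if val > best_val:
            best_params, best_val = candidate, val
    return best_params, best_val

def toy_objective(gamma, alpha):
    return -(gamma - 1.0) ** 2 - alpha ** 2

best, best_val = random_search(toy_objective, {'gamma': (0, 2), 'alpha': (0, 2)})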
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Load daily SST data
Step2: 3. Detect Marine Heatwave
Step3: 3.2 Check properties of MHWs
Step4: Maximum intensities (in $^∘$C) of the first ten events
Step5: Properties of the event with the largest maximum intensity
Step6: 4. Visualize
Step7: Yep, It's certainly picked out the largest event in the series (dark red shading). This event also seems to have been preceded and succeeded by a number of shorter, weaker events (light red shading).
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from datetime import date
from matplotlib import pyplot as plt
# Load marineHeatWaves definition module
import marineHeatWaves as mhw
sst = np.loadtxt('data/sst_WA.csv', delimiter=',')
# Generate time vector using datetime format (January 1 of year 1 is day 1)
t = np.arange(date(1982,1,1).toordinal(),date(2017,12,31).toordinal()+1)
dates = [date.fromordinal(tt.astype(int)) for tt in t]
mhws, clim = mhw.detect(t, sst)
mhws['n_events']
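# mhw.detect implements the full Hobday et al. definition (exceedance of a
# seasonally varying 90th-percentile climatology for at least five days).
# The simplified sketch below shows only the core run-detection step; the
# fixed threshold and synthetic series are illustrative assumptions.
def detect_events(values, threshold, min_duration=5):
    # Return inclusive (start, end) index pairs for runs where values stays
    # above threshold for at least min_duration consecutive points.
    events, start = [], None
    for i, v in enumerate(values):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            if i - start >= min_duration:
                events.append((start, i - 1))
            start = None
    if start is not None and len(values) - start >= min_duration:
        events.append((start, len(values) - 1))
    return events

# 20 days of synthetic SST: a 6-day warm spell and a 3-day blip; only the
# first run is long enough to count as an event.
toy_sst = [20] * 4 + [23] * 6 + [20] * 3 + [23] * 3 + [20] * 4
toy_events = detect_events(toy_sst, threshold=22, min_duration=5)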
mhws['intensity_max'][0:10]
ev = np.argmax(mhws['intensity_max']) # Find largest event
print('Maximum intensity:', mhws['intensity_max'][ev], 'deg. C')
print('Average intensity:', mhws['intensity_mean'][ev], 'deg. C')
print('Cumulative intensity:', mhws['intensity_cumulative'][ev], 'deg. C-days')
print('Duration:', mhws['duration'][ev], 'days')
print('Start date:', mhws['date_start'][ev].strftime("%d %B %Y"))
print('End date:', mhws['date_end'][ev].strftime("%d %B %Y"))
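# The per-event properties printed above can be derived from an exceedance
# run. A hedged, stdlib-only sketch: intensity here is measured relative to
# a fixed baseline, whereas the real module uses the seasonal climatology;
# the synthetic series and indices are illustrative assumptions.
def event_properties(values, baseline, start, end):
    # start/end are inclusive indices of the run.
    segment = values[start:end + 1]
    return {
        'duration': end - start + 1,
        'intensity_max': max(segment) - baseline,
        'intensity_cumulative': sum(v - baseline for v in segment),
    }

toy_series = [20, 20, 23, 24, 23, 20]
toy_props = event_properties(toy_series, baseline=22, start=2, end=4)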
plt.figure(figsize=(14,10))
plt.subplot(2,1,1)
# Plot SST, seasonal cycle, and threshold
plt.plot(dates, sst, 'k-')
plt.plot(dates, clim['thresh'], 'g-')
plt.plot(dates, clim['seas'], 'b-')
plt.title('SST (black), seasonal climatology (blue), \
threshold (green), detected MHW events (shading)')
plt.xlim(t[0], t[-1])
plt.ylim(sst.min()-0.5, sst.max()+0.5)
plt.ylabel(r'SST [$^\circ$C]')
plt.subplot(2,1,2)
# Find indices for all ten MHWs before and after event of interest and shade accordingly
for ev0 in np.arange(ev-10, ev+11, 1):
    t1 = np.where(t==mhws['time_start'][ev0])[0][0]
    t2 = np.where(t==mhws['time_end'][ev0])[0][0]
    plt.fill_between(dates[t1:t2+1], sst[t1:t2+1], clim['thresh'][t1:t2+1], \
                     color=(1,0.6,0.5))
# Find indices for MHW of interest (2011 WA event) and shade accordingly
t1 = np.where(t==mhws['time_start'][ev])[0][0]
t2 = np.where(t==mhws['time_end'][ev])[0][0]
plt.fill_between(dates[t1:t2+1], sst[t1:t2+1], clim['thresh'][t1:t2+1], \
color='r')
# Plot SST, seasonal cycle, threshold, shade MHWs with main event in red
plt.plot(dates, sst, 'k-', linewidth=2)
plt.plot(dates, clim['thresh'], 'g-', linewidth=2)
plt.plot(dates, clim['seas'], 'b-', linewidth=2)
plt.title('SST (black), seasonal climatology (blue), \
threshold (green), detected MHW events (shading)')
plt.xlim(mhws['time_start'][ev]-150, mhws['time_end'][ev]+150)
plt.ylim(clim['seas'].min() - 1, clim['seas'].max() + mhws['intensity_max'][ev] + 0.5)
plt.ylabel(r'SST [$^\circ$C]')
plt.figure(figsize=(15,7))
# Duration
plt.subplot(2,2,1)
evMax = np.argmax(mhws['duration'])
plt.bar(range(mhws['n_events']), mhws['duration'], width=0.6, \
color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['duration'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['duration'][ev], width=0.6, edgecolor=(1,0.,0.), \
color='none')
plt.xlim(0, mhws['n_events'])
plt.ylabel('[days]')
plt.title('Duration')
# Maximum intensity
plt.subplot(2,2,2)
evMax = np.argmax(mhws['intensity_max'])
plt.bar(range(mhws['n_events']), mhws['intensity_max'], width=0.6, \
color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['intensity_max'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['intensity_max'][ev], width=0.6, edgecolor=(1,0.,0.), \
color='none')
plt.xlim(0, mhws['n_events'])
plt.ylabel(r'[$^\circ$C]')
plt.title('Maximum Intensity')
# Mean intensity
plt.subplot(2,2,4)
evMax = np.argmax(mhws['intensity_mean'])
plt.bar(range(mhws['n_events']), mhws['intensity_mean'], width=0.6, \
color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['intensity_mean'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['intensity_mean'][ev], width=0.6, edgecolor=(1,0.,0.), \
color='none')
plt.xlim(0, mhws['n_events'])
plt.title('Mean Intensity')
plt.ylabel(r'[$^\circ$C]')
plt.xlabel('MHW event number')
# Cumulative intensity
plt.subplot(2,2,3)
evMax = np.argmax(mhws['intensity_cumulative'])
plt.bar(range(mhws['n_events']), mhws['intensity_cumulative'], width=0.6, \
color=(0.7,0.7,0.7))
plt.bar(evMax, mhws['intensity_cumulative'][evMax], width=0.6, color=(1,0.5,0.5))
plt.bar(ev, mhws['intensity_cumulative'][ev], width=0.6, edgecolor=(1,0.,0.), \
color='none')
plt.xlim(0, mhws['n_events'])
plt.title(r'Cumulative Intensity')
plt.ylabel(r'[$^\circ$C$\times$days]')
plt.xlabel('MHW event number')
<END_TASK>
|