12,000
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from openfermion.ops import QuadOperator
from openfermion.utils import commutator, normal_ordered
H = QuadOperator('q0 q0', 0.5) + QuadOperator('p0 p0', 0.5) - QuadOperator('q0', 2)
(1j/2)*normal_ordered(commutator(H, QuadOperator('q0')), hbar=2)
(1j/2)*normal_ordered(commutator(H, QuadOperator('p0')), hbar=2)
import strawberryfields as sf
from strawberryfields.ops import *
from sfopenboson.ops import GaussianPropagation
eng, q = sf.Engine(1)
with eng:
Xgate(1) | q[0]
Zgate(0.5) | q[0]
GaussianPropagation(H, 1.43) | q
state = eng.run('gaussian')
state.means()
eng.print_applied()
eng, q = sf.Engine(1, hbar=2)
import numpy as np
t_vals = np.arange(0, 6, 0.02)
results = np.zeros([2, len(t_vals)])
for step, t in enumerate(t_vals):
eng.reset()
with eng:
Xgate(1) | q[0]
Zgate(0.5) | q[0]
GaussianPropagation(H, t) | q
state = eng.run('gaussian')
results[:, step] = state.means()
from matplotlib import pyplot as plt
plt.style.use('ggplot')
fig,ax = plt.subplots(figsize=(8,5))
ax.set_xlabel('q'); ax.set_ylabel('p')
plt.xlim((0.0,4.0))
ax.plot(*results);
from openfermion.hamiltonians import bose_hubbard
bose_hubbard(x_dimension=1, y_dimension=2, tunneling=1, interaction=2,
chemical_potential=0., dipole=3., periodic=False)
H = bose_hubbard(1, 2, 1, 1.5)
import strawberryfields as sf
from strawberryfields.ops import *
from sfopenboson.ops import BoseHubbardPropagation
eng, q = sf.Engine(2)
with eng:
Fock(2) | q[1]
BoseHubbardPropagation(H, 1.086, 20) | q
state = eng.run('fock', cutoff_dim=3)
state.fock_prob([2,0])
state.fock_prob([1,1])
state.fock_prob([0,2])
from openfermion.ops import BosonOperator
J = 1
H = BosonOperator('0 1^', -J) + BosonOperator('0^ 1', -J)
H += BosonOperator('0 2^', -J) + BosonOperator('0^ 2', -J)
H += BosonOperator('1 2^', -J) + BosonOperator('1^ 2', -J)
U = 1.5
H += BosonOperator('0^ 0 0^ 0', 0.5*U) - BosonOperator('0^ 0', 0.5*U)
H += BosonOperator('1^ 1 1^ 1', 0.5*U) - BosonOperator('1^ 1', 0.5*U)
H += BosonOperator('2^ 2 2^ 2', 0.5*U) - BosonOperator('2^ 2', 0.5*U)
eng, q = sf.Engine(3)
with eng:
Fock(2) | q[0]
BoseHubbardPropagation(H, 1.086, 100) | q
state = eng.run('fock', cutoff_dim=3)
for i in ([2,0,0], [1,1,0], [1,0,1], [0,2,0], [0,1,1], [0,0,2]):
print(state.fock_prob(i))
from scipy.linalg import expm
Jr2 = J*np.sqrt(2)
H = np.array([[U , Jr2, Jr2, 0 , 0 , 0 ],
[Jr2, 0 , J , Jr2, J , 0 ],
[Jr2, J , 0 , 0 , J , Jr2],
[0 , Jr2, 0 , U , Jr2, 0 ],
[0 , J , J , Jr2, 0 , Jr2],
[0 , 0 , Jr2, 0 , Jr2, U ]])
np.abs(expm(-1j*H*1.086)[0])**2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Forced quantum harmonic oscillator
Step2: In the Heisenberg picture, the time-evolution of the $\hat{q}$ and $\hat{p}$ operators is given by
Step3: Assuming the oscillator has initial conditions $\hat{q}(0)$ and $\hat{p}(0)$, it's easy to solve this coupled set of linear differential equations analytically, giving the parameterized solution
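For reference, a sketch of the equations involved, derived by hand from the Hamiltonian defined above (they are consistent with the commutators evaluated in the code, $\tfrac{i}{\hbar}[\hat{H},\cdot]$ with $\hbar=2$):

```latex
\frac{d\hat{q}}{dt} = \frac{i}{\hbar}[\hat{H},\hat{q}] = \hat{p},
\qquad
\frac{d\hat{p}}{dt} = \frac{i}{\hbar}[\hat{H},\hat{p}] = 2 - \hat{q},
\\[1ex]
\hat{q}(t) = \left(\hat{q}(0)-2\right)\cos t + \hat{p}(0)\sin t + 2,
\qquad
\hat{p}(t) = \hat{p}(0)\cos t - \left(\hat{q}(0)-2\right)\sin t.
```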
Step4: GaussianPropagation accepts the following arguments
Step5: Now, we can run this simulation using the Gaussian backend of Strawberry Fields, and output the location of the oscillator in phase space at time $t = 1.43$
Step6: We compare this to the analytic solution,
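A sketch of what that comparison might look like, assuming the initial means $q(0)=1$, $p(0)=0.5$ set by Xgate(1) and Zgate(0.5), and the hand-derived closed-form solution of the forced oscillator:

```python
import math

# Hand-derived (assumed) analytic solution of the forced oscillator:
#   q(t) = (q0 - 2) cos t + p0 sin t + 2
#   p(t) = p0 cos t - (q0 - 2) sin t
def analytic_means(q0, p0, t):
    q = (q0 - 2.0) * math.cos(t) + p0 * math.sin(t) + 2.0
    p = p0 * math.cos(t) - (q0 - 2.0) * math.sin(t)
    return q, p

q, p = analytic_means(1.0, 0.5, 1.43)
print(q, p)
```

Note that this trajectory is a circle of radius $\sqrt{(q(0)-2)^2+p(0)^2}$ centred at $(2, 0)$ in phase space, which is what the plot generated later should show.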
Step7: Plotting the phase-space time evolution
Step8: Here, we are looping over the same circuit as above for values of $t$ within the domain $0 ≤ t ≤ 6$, and storing the resulting expectation values $(\langle\hat{q}(t)\rangle, \langle\hat{p}(t)\rangle)$ in the array results. We can plot this array in phase space
Step9: 2. Bose-Hubbard time propagation
Step10: For more information regarding this function, please see the OpenFermion documentation.
Step11: To simulate the time-propagation of the Hamiltonian in Strawberry Fields, we also need to import the BoseHubbardPropagation class from the SFOpenBoson plugin
Step12: BoseHubbardPropagation accepts the following arguments
Step13: Now we can run this simulation using the Fock backend of Strawberry Fields, and output the Fock state probabilities at time $t = 1.086$
Step14: We can see that this matches the results obtained in the Strawberry Fields documentation.
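In the same spirit as the $6\times6$ matrix check performed later for the three-mode case, a sketch of an independent check for this two-mode system: in the basis $\{∣2,0⟩, ∣1,1⟩, ∣0,2⟩\}$, the Hamiltonian with $J=1$, $U=1.5$ has the matrix below. The matrix elements here are worked out by hand, so treat them as an assumption.

```python
import numpy as np
from scipy.linalg import expm

J, U = 1.0, 1.5
r2 = np.sqrt(2)
# Assumed basis ordering: |2,0>, |1,1>, |0,2>
H2 = np.array([[U,      -J * r2, 0      ],
               [-J * r2, 0,      -J * r2],
               [0,      -J * r2, U      ]])
# Fock(2) on mode 1 prepares |0,2>, the last basis vector,
# so its time evolution is the last column of expm(-iHt)
probs = np.abs(expm(-1j * H2 * 1.086)[:, 2]) ** 2
print(probs)
```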
Step15: Let's define this Hamiltonian using OpenFermion. First, we construct the tunneling terms between each pair of adjacent modes
Step16: Next, let's add an on-site interaction term, with strength $U = 1.5$
Step17: NOTE
Step18: Running the circuit, and checking some output probabilities
Step19: To verify this result, we can construct the $6\times6$ Hamiltonian matrix $H_{ij} = \langleφ_i\lvert\hat{H}\rvertφ_j\rangle$, where $φ_i$ is a member of the set of allowed Fock states ${∣2,0,0⟩,∣1,1,0⟩,∣1,0,1⟩,∣0,2,0⟩,∣0,1,1⟩,∣0,0,2⟩}$. Performing these inner products, we find that
|
12,001
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
a = pd.Series([20, 50, 190, 11, 76])
a
dados = [20, 50, 190, 11, 76]
rotulos = ['a', 'b', 'c', 'd', 'e']
b = pd.Series(dados, index=rotulos)
b
print(a[2])
print(b[2])
print(b['c'])
a.apply(lambda x: 2*x)
matriz = np.array([[1, 2, 3], [4, 5, 6]])
nomes_linhas = ['L1', 'L2']
nomes_cols = ['C1', 'C2', 'C3']
df = pd.DataFrame(matriz, index=nomes_linhas, columns=nomes_cols)
df
print(df.to_latex()) # latex
print(df.to_csv(index=False)) #csv
print(df.to_json()) # JSON
print(df.to_html()) #HTML
df['C3']
df['C4'] = [1, 0]
df
df['C4'] = [4, 7]
df
df.transpose()
df.sort_values(by='C4', ascending=False)
df = pd.read_csv('titanic.csv')
df.head() # 5 primeiras linhas
df.tail() # últimas 5 linhas
df.columns # colunas do dataframe
df.describe() # calcula estatísticas para cada coluna numérica do DataFrame
df[df.Sex == "female"]
df['Sex'].value_counts()
df['Survived'].apply(lambda s: "Yes" if s == 1 else "No").value_counts()
df.groupby('Sex')['Survived'].value_counts()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Series
Step2: The data in a Series can include an index, allowing optimized access to the data
Step3: Moreover, the index can be used to give semantics to the data in a Series, also allowing each value to be accessed through its assigned index
Step4: Series also provide a transformation method, as presented in the previous class. This method is called apply
Step5: DataFrame
Step6: For export purposes, a DataFrame can be represented in several formats
Step7: While in a Series we use brackets ([]) to access the element at a given index, in a DataFrame this operator refers to a Series (a column), allowing us to access it, overwrite it, or add a new one
Step8: A DataFrame can also be transposed, i.e., its column labels become the indices and its indices become the new columns
Step9: We can also sort the rows of a DataFrame by one of its columns
Step10: Importing a real dataset
Step11: The pandas bracket operator ([]) can also be used for filtering: given a condition (or predicate), it returns only the rows of the DataFrame that satisfy that predicate.
Step12: A DataFrame also allows counts to be performed over the values present in its Series, letting us analyze the occurrence of categorical data
Step13: Last but never least, we can group the rows of a DataFrame by one of its columns and operate on the resulting groups
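As a minimal self-contained sketch of this grouping, with hypothetical data mirroring the Titanic columns used above:

```python
import pandas as pd

# Hypothetical miniature of the Titanic data used above
df = pd.DataFrame({
    'Sex':      ['female', 'male', 'female', 'male'],
    'Survived': [1, 0, 1, 1],
})
# Count survival outcomes within each group
counts = df.groupby('Sex')['Survived'].value_counts()
print(counts)
```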
|
12,002
|
<ASSISTANT_TASK:>
Python Code:
!sudo pip3 install -q tensorflow-hub==0.4.0
!sudo pip3 install -q --upgrade tensorflow==1.15.0
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import re
import seaborn as sns
import scipy
import math
import tensorflow as tf
print(tf.__version__)
# Task 1
import tensorflow as tf
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/nnlm-en-dim50/1"
embed = hub.Module(module_url)
embeddings = embed(["cat"])
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
sess.run(tf.tables_initializer())
print(sess.run(embeddings))
word_1 = "cat"
word_2 = "dog"
word_3 = "potato"
# Task 2b
# Reduce logging output.
tf.logging.set_verbosity(tf.logging.ERROR)
messages = [word_1, word_2, word_3]
def create_embeddings(messages, embed):
my_embeddings = None
with tf.Session() as session:
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
message_embeddings = session.run(embed(messages))
my_embeddings = np.array(message_embeddings)
for i, message_embedding in enumerate(np.array(my_embeddings).tolist()):
print("Message: {}".format(messages[i]))
print("Embedding size: {}".format(len(message_embedding)))
message_embedding_snippet = ", ".join(
(str(x) for x in message_embedding[:3]))
print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
return my_embeddings
my_embeddings = create_embeddings(messages, embed)
def plot_similarity(labels, embeddings):
corr = np.inner(embeddings, embeddings)
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=90)
g.set_title("Semantic Textual Similarity")
plot_similarity([word_1, word_2, word_3], my_embeddings)
messages = ["cat", "The cat sat on the mat", "dog", "The cat sat on the dog"]
my_embeddings = create_embeddings(messages, embed)
plot_similarity(messages, my_embeddings)
def load_sts_dataset(filename):
# Loads a subset of the STS dataset into a DataFrame. In particular both
# sentences and their human rated similarity score.
sent_pairs = []
with tf.gfile.GFile(filename, "r") as f:
for line in f:
ts = line.strip().split("\t")
# (sent_1, sent_2, similarity_score)
sent_pairs.append((ts[5], ts[6], float(ts[4])))
return pd.DataFrame(sent_pairs, columns=["sent_1", "sent_2", "sim"])
def download_and_load_sts_data():
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = load_sts_dataset(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"))
sts_test = load_sts_dataset(
os.path.join(
os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"))
return sts_dev, sts_test
sts_dev, sts_test = download_and_load_sts_data()
sts_dev.head()
sts_input1 = tf.placeholder(tf.string, shape=(None))
sts_input2 = tf.placeholder(tf.string, shape=(None))
# For evaluation we use exactly normalized rather than
# approximately normalized.
sts_encode1 = tf.nn.l2_normalize(embed(sts_input1), axis=1)
sts_encode2 = tf.nn.l2_normalize(embed(sts_input2), axis=1)
cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1)
clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0)
sim_scores = 1.0 - tf.acos(clip_cosine_similarities)
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
text_a = sts_data['sent_1'].tolist()
text_b = sts_data['sent_2'].tolist()
dev_scores = sts_data['sim'].tolist()
def run_sts_benchmark(session):
"""Returns the similarity scores."""
emba, embb, scores = session.run(
[sts_encode1, sts_encode2, sim_scores],
feed_dict={
sts_input1: text_a,
sts_input2: text_b
})
return scores
with tf.Session() as session:
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
scores = run_sts_benchmark(session)
pearson_correlation = scipy.stats.pearsonr(scores, dev_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
pearson_correlation[0], pearson_correlation[1]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make sure to restart your notebook's kernel before proceeding further.
Step2: <h2>TensorFlow Hub Concepts</h2>
Step3: When I completed this exercise, I got a vector that looked like [[ 0.11233182 -0.3176392 -0.0166118...]]
Step4: Now, we'll use the same process of using our Hub module to generate embeddings but instead of printing the embeddings, capture them in a variable called 'my_embeddings'.
Step5: Now, we'll use Seaborn's heatmap function to see how the vectors compare to each other. I've written the shell of a function that you'll need to complete that will generate a heatmap. The one piece that's missing is how we'll compare each pair of vectors. Note that because we are computing a score for every pair of vectors, we should have len(my_embeddings)^2 scores. There are many valid ways of comparing vectors. Generally, similarity scores are symmetric. The simplest is to take their dot product. For extra credit, implement a more complicated vector comparison function.
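A minimal sketch of the dot-product comparison in pure NumPy, independent of the Hub embeddings; the vectors here are made up for illustration:

```python
import numpy as np

# Toy unit vectors standing in for word embeddings
v = np.array([[1.0, 0.0],
              [0.8, 0.6],
              [0.0, 1.0]])
# Pairwise dot products: an n x n symmetric similarity matrix
corr = np.inner(v, v)
print(corr)
```

Because the rows are unit-normalized, the diagonal is 1 and each off-diagonal entry is the cosine similarity of a pair of vectors.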
Step6: What you should observe is that, trivially, all words are identical to themselves, and, more interestingly, that the two more similar words have more similar embeddings than the third word.
Step7: Which is cat more similar to, "The cat sat on the mat" or "dog"? Is this desirable?
Step8: Take a look at the data. The sim score is higher when the sentences are more similar and lower when they are not.
Step9: <h3>Build the Evaluation Graph</h3>
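A NumPy sketch of the score the evaluation graph computes (1 minus the angle between L2-normalized embeddings); the input vectors are made up for illustration:

```python
import numpy as np

def angular_sim(a, b):
    # L2-normalize each row, take the cosine, clip for numerical
    # safety, then score as 1 - angle (in radians), mirroring the
    # tf.nn.l2_normalize / tf.acos pipeline in the graph above
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    cos = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
    return 1.0 - np.arccos(cos)

x = np.array([[1.0, 0.0], [1.0, 0.0]])
y = np.array([[1.0, 0.0], [0.0, 1.0]])
print(angular_sim(x, y))  # identical pair scores 1.0; orthogonal pair lower
```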
Step11: <h3>Evaluate Sentence Embeddings</h3>
|
12,003
|
<ASSISTANT_TASK:>
Python Code:
print ("Hello my name is Levi.")
table={}
print (table)
table={}
table['a'] = 'n'
print (table)
newletter = table['a']
print(newletter)
table['b'] = 'f'
table['c'] = 'r'
print(table)
table.keys()
table.values()
print(table['c'])
print(table)
print(table['d'])
'd' in table.keys( )
table['d'] = 'n'
'd' in table.keys()
table['d'] = 'l'
print(table['d'])
myList=['t','h','e','q']
print(myList)
myList[2]
myList[0]
myList[4]
myString = "the quick brown fox jumped over the lazy dogs"
myString
myString[2]
myString[3]
table={}
table['a']='t'
print(table)
table['b']='h'
print(table)
sentence = "the quick brown fox jumped over the lazy dogs"
sentence[10]
everyletter = "abcdefghijklmnopqrstuvwxyz"
table={}
table['a']='t'
table['b']='h'
table['c']='e'
table['d']='q'
table['e']='u'
table['f']='i'
table['g']='c'
#And on and on...
table
table={}
table[ everyletter[0]] = sentence[0]
table[ everyletter[1]] = sentence[1]
table[ everyletter[2]] = sentence[2]
table[ everyletter[3]] = sentence[3]
table[ everyletter[4]] = sentence[4]
#And on and on...
table
table[ everyletter[1]] = sentence[0]
print(table)
table={}
for num in range(26):
table[ everyletter[num]] = sentence[num]
table
for name in ["Bart", "Lisa", "Maggie"]:
print ("Hi, " + name)
for villain in ["Kylo Ren", "The Joker", "Magneto", "Megamind"]:
print ("Oh no! It's " + villain)
print ("I'll stop you!")
print ("You'll never get away with this.")
for villain in ["Kylo Ren", "The Joker", "Magneto", "Megamind"]:
print ("Oh no! It's " + villain)
print ("I'll stop you!")
print ("You'll never get away with this.")
for num in range(26):
table[ everyletter[num]] = sentence[num]
table
for i in range(5):
print(i)
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "thequickbrownfoxjumpedoverthelazydogs"
for num in range(26):
table[ everyletter[num]] = sentence[num]
table
from IPython.display import YouTubeVideo
YouTubeVideo("rl6-zJharrs")
for letter in "this is a super secret message":
print (letter)
for letter in "this is a super secret message":
print (table[letter])
for letter in "this is a super secret message":
print ("old: " + letter)
print ("new: " + table[letter])
table[' '] = ' '
for letter in "this is a super secret message":
print (table[letter])
table[' '] = ' '
coded_message = ""
for letter in "this is a super secret message":
coded_message = coded_message + table[letter]
print (coded_message)
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "thequickbrownfoxjumpedoverthelazydogs"
for num in range(26):
table[ everyletter[num]] = sentence[num]
table[' '] = ' '
coded_message = ""
for letter in "this is a super secret message":
coded_message = coded_message + table[letter]
print (coded_message)
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "thequickbrownfoxjumpedoverthelazydogs"
for num in range(26):
table[ everyletter[num]] = sentence[num]
print (table)
table[' '] = ' '
coded_message = ""
for letter in "my teacher is a handsome genius":
coded_message = coded_message + table[letter]
print (coded_message)
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
name = input("What is your name? ")
print ("Well, hello, " + name)
uncoded_message = input("What message should I encode?")
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "thequickbrownfoxjumpedoverthelazydogs"
for num in range(26):
table[ everyletter[num]] = sentence[num]
table[' '] = ' '
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
name = "Luke"
if name is "Luke":
print ("May the force be with you, " + name)
name = "Anakin"
if name is "Luke":
print ("May the force be with you, " + name)
name = input("What is your name? ")
if name == "Luke":
print ("May the force be with you, " + name)
for name in ["Luke", "Leia", "Anakin"]:
if name is "Luke":
print ("May the force be with you, " + name)
for name in ["Luke", "Leia", "Anakin"]:
if name is "Luke" or name is "Leia":
print ("May the force be with you, " + name)
for name in ["Luke", "Leia", "Anakin"]:
if not name is "Anakin":
print ("May the force be with you, " + name)
sentence = "the quick brown fox jumped over the lazy dogs"
passphrase = ""
for letter in sentence:
if not letter in passphrase:
passphrase = passphrase + letter
print (passphrase)
sentence = "the quick brown fox jumped over the lazy dogs"
passphrase = ""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "the quick brown fox jumped over the lazy dogs"
#Remove the duplicate letters
passphrase=""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
#Build the table
for num in range(26):
table[ everyletter[num]] = sentence[num]
table[' '] = ' '
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "the quick brown fox jumped over the lazy dogs"
#Remove duplicate letters
passphrase=""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
#Build the table
for num in range(26):
table[ everyletter[num]] = passphrase[num]
table[' '] = ' '
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "the quick brown fox jumped over the lazy dogs"
#Remove duplicate letters
passphrase=""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
#Build the table
for num in range(26):
table[ passphrase[num]] = everyletter[num]
table[' '] = ' '
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "the quick brown fox jumped over the lazy dogs"
#Remove duplicates
passphrase=""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
#Build a table for decoding
for num in range(26):
table[ passphrase[num]] = everyletter[num]
table[' '] = ' '
#**** This is the new part. If we're encoding, we rebuild the table
# But we switch everyletter and passphrase
task = input("Are you encoding or decoding?")
if task == "encoding":
#Build a table for encoding instead
print ("Remaking table for encoding...")
for num in range(26):
table[ everyletter[num]] = passphrase[num]
print (table)
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can also change the code in the boxes before you run them. Try replacing "Levi" with your name in the box above. Now, while the cursor is still in the box, push Ctrl+Enter to see Python echo it back. From here on out, make a habit of executing each box in this notebook with Ctrl-Enter.
Step2: Don't go on until you click into the box above and push Ctrl+Enter. You should see {} appear below the box. One more thing about Jupyter notebooks. Each time you execute a box, Python remembers what you've done. Here, we created a dictionary called "table." When we refer to "table" in the next box, Python will remember that "table" is an empty dictionary. The code in the box below adds one entry to your dictionary with the letter 'a' in the left column and 'n' in the right column. Try it out by pushing Ctrl-Enter in the box below.
Step3: Don't be afraid to change the code in the boxes. You'll learn a lot by playing around, so don't be shy. Make some changes to the code in the boxes, then push Ctrl-Enter to see what you changed. You might get an error from Python. That's OK. See if you can figure out how to fix it. As you play around, remember that the notebook is remembering your changes. If you want to clear the notebook's memory and start again, look for the menu option above called "Kernel." Select "Restart" from that sub menu.
Step4: The first line, newletter = table['a'], tells Python to read the entry we gave it for 'a' and call it by a new name, newletter. In this case, table['a'] is 'n'. After that, everytime we refer to newletter Python will remember that we told it newletter is 'n'. So, when we ask Python to print newletter, it remembers and prints 'n'. newletter is what programmers call a 'variable'--a name that refers, in your program, to some other value.
Step5: The colon (
Step6: Python didn't like that and sent us back a KeyError. To avoid these errors, we can first check to see whether that table entry exists.
Step7: We just asked Python, "Is d one of the keys in our table?" Python said, "No."
Step8: A key can only appear in the left column one time. If I try to assign one letter on the left column to two different letters on the right, the dictionary will store the last value.
Step9: The old table entry for 'd' is lost and replaced by the new value. Note that more than one key can correspond to a single value. When we assigned 'd' to be changed to 'n', both 'a' and 'd' were being translated to 'n'. That's perfectly OK.
Step10: The list works a lot like a dictionary with numbers in the left column. We can ask for specific letters in the list by referring to their position in the list. Try this
Step11: Wait, but isn't 'e' the third letter in the list? It's important to remember that computers start counting from zero.
Step12: In a Python list, position 0 is the very start of the list. Now, there are 4 letters in the list. What do you think will happen if we ask for the letter in position 4?
Step13: What do you think this error means? (Hint
Step14: Our string acts just like a list
Step15: The third letter in our string is 'e'. We ask Python for letter number 2 since Python starts counting from zero.
Step16: The output above just looks like two quote marks, but there's actually a space between them. The 4th "letter" in our string is actually a space.
Step17: Next, I'll add a row for changing all of the b's.
Step18: In fact, what I'd like to do is to take an easy to remember phrase and use it to determine the right hand side of my table. I'd like to put 'abcdefghijklmnopqrstuvwxyz' down the left hand column and 'the quick brown fox jumped over the lazy dogs' down the right hand side. So 'a' changes to 't'. 'b' changes to 'h'. 'c' changes to 'e' and so on.
Step19: Just like before, we can treat it like a list of letters.
Step20: Now, let's create another string like this.
Step21: We're going to use these two strings to create a table like this
Step22: But what if someone discovered our passphrase ("the quick brown fox...")? We'd have to change each of the individual letters. Instead, we can tell Python to read the letters from the strings, everyletter and sentence, that we created above. Take a look at this. We're using some fancier Python tricks that I'll explain in a minute.
Step23: This requires some explanation. The first line, I think you'll remember, creates an empty table or dictionary. Then, when we write
Step24: What changed? Why? This is another good chance to play around by writing your own code.
Step25: Don't worry if this isn't clear yet. We'll take a good look at this. The first line creates an empty dictionary. The next line is the loop
Step26: Here, we gave Python a list of three names. For each name on the list, Python executes the commands that follow, replacing "name" with one of the names in the list. This is the first time we've seen + in Python, but it's an easy one. "Hi, " + name, just means that we add each name in the list to the end of "Hi, " and send that to print.
Step27: The spaces before the print statements tell us whether they're part of the loop to be executed once for each villain, or just a lone command to be executed only once.
Step28: Python didn't like that. The line that says print("I'll stop you!") was not indented. So, Python thought you were done with that loop. When it saw that print("You'll never get away with this.") was indented, it didn't know what loop it was part of. So, it sent an error message. Errors are part of programming. They can be intimidating, but it can help you to sort out the problem if you read the error message. Here, Python told us "unexpected indent" and referred to print("You'll never get away with this."). It clearly didn't expect that line to be indented.
Step29: The "range" function just generates a list of numbers beginning with zero. The code below creates a list from 0 to 4 (5 total numbers), then prints each one.
Step30: So, to make our table, we make a list of numbers from 0 to 25, then for each number, Python repeats the command
Step31: BREAK TIME!
Step32: Using our table to encode
Step33: That's not very secret yet. This time, instead of printing our each letter of our message, let's look in the table, swap the letter, then print it out. Do the same as before with this code. Can you predict what Python will do?
Step34: Oops! It seemed to be doing fine, but we crashed on the 5th letter. Python called it a "KeyError". What do you think that means?
Step35: It's the space! Python is complaining because we asked for the table entry with a space, ' ', in the left column. But there is no such entry. Remember when we called the left hand column the "keys"? A KeyError means we asked for a key that isn't in the table. It even told us the incorrect key was ' '--the space! There are several ways to solve this. What would you do? Take a minute and try it.
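One alternative worth knowing, sketched here with a tiny made-up table (this is not the fix the notebook uses next): Python dictionaries have a get method that returns a default when a key is missing, so unmapped characters can simply pass through unchanged.

```python
# Tiny made-up substitution table for illustration only
table = {'t': 'x', 'h': 'y', 'i': 'z', 's': 'w'}
coded = ""
for letter in "this is":
    # get(key, default): missing keys (like the space) fall back
    # to the letter itself instead of raising a KeyError
    coded = coded + table.get(letter, letter)
print(coded)  # prints "xyzw zw"
```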
Step36: Now, there is an entry for ' '. It just gets replaced by ' ' again. Now our original program works.
Step37: We did it! This is just what we'd expect if we encoded this message by hand.
Step38: Let's look closer at this one, too. At the beginning,
Step39: This is pretty slick. All we have to do is replace "this is a super secret message" with our own and Python does the rest of the work. Now, it's time to play around. Change the script above to encrypt a message of your own. One word of warning. We're not ready to use capital letters or punctuation yet. Use just lower case letters.
Step40: Getting Input From the User
Step41: Uh-oh! Can you find and fix the problem above?
Step42: Important note for instructors
Step43: Now, let's add this to our main program. One quick tip. For now, when our program asks for input, you might want to use all lower case letters--no capitals and no punctuation. You're welcome to try anything you want, but don't be frustrated by the error messages. We'll discuss the reason for those a little later.
Step44: In the above code, we made a few other changes. Can you see them? We added some lines beginning with #. When we begin a line with #, we're telling Python that the line is just for other humans to read and Python ignores it. So, we can put whatever we want there and it won't affect our program. It's a good idea to put comments in your program so someone else knows what you're doing.
Step45: That looks easy enough.
Step46: Aha. No message that time. Notice that the indenting rules are the same for "if" statements as for "for" statements. As long as you indent, Python will treat each new statement as part of the "if" clause and only execute it if the "if" condition is true. What if I'm not Luke?
Step47: You should get a message only if you tell Python your name is Luke. There's one new idea here. I used "==" instead of "is" here. In Python, these two mean almost the same thing. Usually you can get away with using either, but it's a good idea to always use == when comparing two strings. This is particularly true when one string is returned by input.
Step48: We combined the for loop that we already know with the if statement we're just learning about. We made a list and for every name in the list we first checked if that name is "Luke", then sent a message only to Luke.
Step49: The other way is to explicitly leave Anakin out. Python also understands "not", but in a funny way. Watch.
Step50: Python first tests
Step51: That looks almost right. It would be perfect if not for that lousy space. To get rid of that, we can use "and" to test two different things. With "or" we only needed one of the two tests to be true ("either Luke OR Leia") but with "and", both statements must be true. Try this.
Step52: Perfect. Let's incorporate this into our code. In the code below, I've left a tricky "bug". A "bug" is what programmers call a mistake in the code that causes the program not to work right.
Step53: Ouch! Errors again. Unfortunately, lots of computer programming is finding problems ("bugs") in your code. Try to solve the problem above. It might help to add some "print" statements.
Step54: (Did you forget to switch "everyletter" and "passphrase"?)
|
12,004
|
<ASSISTANT_TASK:>
Python Code:
import sys, csv
from itertools import izip
def convert_XY_toVW(inputX_file_path, inputY_file_path, output_file_path):
with open(inputX_file_path, 'rb') as inputX_f, \
open(inputY_file_path, 'rb') as inputY_f, \
open(output_file_path, 'wb') as output_f:
readerX = csv.reader(inputX_f)
readerY = csv.reader(inputY_f)
# for each line of trainY, trainX
for row, (X_line, Y_line) in enumerate(izip(readerX, readerY)):
# write the Y label and the Namespace
# NOTE Y label goes from 1...10
# =============================
output_line = str(int(Y_line[0])+1) + " |image "
# for each non-zero comma-separated value in the csv line
for i, item in enumerate(X_line):
if float(item) != 0.0:
# write pixel_no:value
output_line += "pxl" + str(i) + ":" + str(item) + " "
output_f.write(output_line+"\n")
inputX_file_path = '../data/trainX.csv'
inputY_file_path = '../data/trainY.csv'
output_file_path = '../vw/data/mnist_train.vw'
convert_XY_toVW(inputX_file_path, inputY_file_path, output_file_path)
inputX_file_path = '../data/testX.csv'
inputY_file_path = '../data/testY.csv'
output_file_path = '../vw/data/mnist_test.vw'
convert_XY_toVW(inputX_file_path, inputY_file_path, output_file_path)
inputX_file_path = '../data/trainX_pca.csv'
inputY_file_path = '../data/trainY.csv'
output_file_path = '../vw/data/mnist_train_pca.vw'
convert_XY_toVW(inputX_file_path, inputY_file_path, output_file_path)
inputX_file_path = '../data/testX_pca.csv'
inputY_file_path = '../data/testY.csv'
output_file_path = '../vw/data/mnist_test_pca.vw'
convert_XY_toVW(inputX_file_path, inputY_file_path, output_file_path)
inputX_file_path = '../kaggle/data/kaggle_trainX.csv'
inputY_file_path = '../kaggle/data/kaggle_trainY.csv'
output_file_path = '../vw/data/kaggle_train.vw'
convert_XY_toVW(inputX_file_path, inputY_file_path, output_file_path)
inputX_file_path = '../kaggle/data/kaggle_trainX_pca.csv'
inputY_file_path = '../kaggle/data/kaggle_trainY.csv'
output_file_path = '../vw/data/kaggle_train_pca.vw'
convert_XY_toVW(inputX_file_path, inputY_file_path, output_file_path)
inputX_file_path = '../kaggle/data/kaggle_testX_deskewed.csv'
output_file_path = '../vw/data/kaggle_test.vw'
with open(inputX_file_path, 'rb') as inputX_f, \
open(output_file_path, 'wb') as output_f:
readerX = csv.reader(inputX_f)
# for each line of testX
for row, X_line in enumerate(readerX):
# write the Y label and the Namespace
# NOTE Y label for an unlabeled test set is 1
# ===========================================
output_line = str(1) + " |image "
# for each non-zero comma-separated value in the csv line
for i, item in enumerate(X_line):
if float(item) != 0.0:
# write pixel_number:value
output_line += "pxl" + str(i) + ":" + str(item) + " "
output_f.write(output_line+"\n")
inputX_file_path = '../kaggle/data/kaggle_testX_pca.csv'
output_file_path = '../vw/data/kaggle_test_pca.vw'
with open(inputX_file_path, 'rb') as inputX_f, \
open(output_file_path, 'wb') as output_f:
readerX = csv.reader(inputX_f)
# for each line of testX
for row, X_line in enumerate(readerX):
# write the Y label and the Namespace
# NOTE Y label for an unlabeled test set is 1
# ===========================================
output_line = str(1) + " |image "
# for each non-zero comma-separated value in the csv line
for i, item in enumerate(X_line):
if float(item) != 0.0:
# write pixel_number:value
output_line += "pxl" + str(i) + ":" + str(item) + " "
output_f.write(output_line+"\n")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function to convert to VW format
Step2: Convert the training data to VW format
Step3: Convert the test data to VW format
Step4: Convert the training PCA data to VW format
Step5: Convert the test PCA data to VW format
Step6: Convert the Kaggle training data to VW format
Step7: Convert Kaggle PCA training data to VW format
Step8: Convert the Kaggle test data (X only) to VW format
Step9: Convert the Kaggle PCA test data (X only) to VW format
|
12,005
|
<ASSISTANT_TASK:>
Python Code:
import os
# if your Planet API Key is not set as an environment variable, you can paste it below
API_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE')
# alternatively, you can just set your API key directly as a string variable:
# API_KEY = "YOUR_PLANET_API_KEY_HERE"
# construct auth tuple for use in the requests library
BASIC_AUTH = (API_KEY, '')
BASE_URL = "https://api.planet.com/analytics/"
import requests
feed_list_url = f'{BASE_URL}feeds?limit=1'
resp = requests.get(feed_list_url, auth=BASIC_AUTH)
if resp.status_code == 200:
print('Yay, you can access the Analytics API')
else:
print('Something is wrong:', resp.content)
limit = 1000
feed_list_url = f'{BASE_URL}feeds?limit={limit}'
print(f'Feeds endpoint: {feed_list_url}')
resp = requests.get(feed_list_url, auth=BASIC_AUTH)
feeds = resp.json()['data']
feed_count = len(feeds)
print(f'Available feeds: {feed_count}')
if feed_count >= limit:
print('More feeds are probably available through pagination links')
print(resp.json()['links'])
from pprint import pprint
pprint(feeds[0])
import pandas as pd
# bump this up in case there are many available feeds to display
pd.options.display.max_rows = 1000
# make a dataframe from the feeds json data
df = pd.DataFrame(feeds)
# instead of including the entire source and target dicts, make columns for the types
df['targetType'] = df['target'].map(lambda t: t['type'])
df['sourceType'] = df['source'].map(lambda t: t['type'])
df[['id', 'title', 'description', 'sourceType', 'targetType', 'created', 'updated']]
limit = 1000
subscriptions_url = f'{BASE_URL}subscriptions?limit={limit}'
print(f'Subscriptions endpoint: {subscriptions_url}')
resp = requests.get(subscriptions_url, auth=BASIC_AUTH)
subs = resp.json()['data']
sub_count = len(subs)
print(f'Available subscriptions: {sub_count}')
if sub_count >= limit:
print('More subscriptions are probably available through pagination links')
print(resp.json()['links'])
pprint(subs[0])
df = pd.DataFrame(subs)
df[['id', 'title', 'description', 'feedID', 'startTime', 'endTime', 'created', 'updated']]
feed_id = feeds[0]['id']
feed_title = feeds[0]['title']
print(feed_title)
print('id:', feed_id)
filtered_subscriptions_url = f'{BASE_URL}subscriptions?feedID={feed_id}'
print('url:', filtered_subscriptions_url)
resp = requests.get(filtered_subscriptions_url, auth=BASIC_AUTH)
filtered_subs = resp.json()['data']
filtered_sub_count = len(filtered_subs)
print(f'You have access to {filtered_sub_count} subscriptions for feed {feed_id} ({feed_title})')
# get the latest subscription's geometry
subscription = subs[0]
geom = subscription['geometry']
pprint(geom)
# make a map, and draw the subscription geometry
from ipyleaflet import Map, GeoJSON
lon, lat = geom['coordinates'][0][0]
m = Map(center=(lat, lon), zoom=8)
geo_json = GeoJSON(data=subscription['geometry'], style = {'color': 'blue', 'opacity':1, 'weight':1.9, 'dashArray':'9', 'fillOpacity':0.1})
m.add_layer(geo_json)
m
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set the base url for the Planet Analytic Feeds product
Step2: Test API Connection
Step3: Summarizing Feeds
Step4: Inspecting feed metadata
Step5: Some of the fields include
Step6: Summarizing Subscriptions
Step7: What's in a subscription?
Step8: Subscriptions also have id, title, description, created, and updated fields.
Step9: Filtering subscriptions by feed
Step10: Inspecting a subscription's geometry
|
12,006
|
<ASSISTANT_TASK:>
Python Code:
from pyspark import SparkContext
sc = SparkContext(master = 'local')
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
iris = spark.read.csv('data/iris.csv', header=True, inferSchema=True)
iris.show(5)
iris.dtypes
iris.describe().show()
from pyspark.ml.linalg import Vectors
from pyspark.sql import Row
iris2 = iris.rdd.map(lambda x: Row(features=Vectors.dense(x[:-1]), species=x[-1])).toDF()
iris2.show(5)
from pyspark.ml.feature import StringIndexer
from pyspark.ml import Pipeline
stringindexer = StringIndexer(inputCol='species', outputCol='label')
stages = [stringindexer]
pipeline = Pipeline(stages=stages)
iris_df = pipeline.fit(iris2).transform(iris2)
iris_df.show(5)
iris_df.describe().show(5)
iris_df.dtypes
train, test = iris_df.randomSplit([0.8, 0.2], seed=1234)
from pyspark.ml.classification import NaiveBayes
naivebayes = NaiveBayes(featuresCol="features", labelCol="label")
from pyspark.ml.tuning import ParamGridBuilder
param_grid = ParamGridBuilder().\
addGrid(naivebayes.smoothing, [0, 1, 2, 4, 8]).\
build()
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator()
from pyspark.ml.tuning import CrossValidator
crossvalidator = CrossValidator(estimator=naivebayes, estimatorParamMaps=param_grid, evaluator=evaluator)
crossvalidation_mode = crossvalidator.fit(train)
pred_train = crossvalidation_mode.transform(train)
pred_train.show(5)
pred_test = crossvalidation_mode.transform(test)
pred_test.show(5)
print("The parameter smoothing has best value:",
crossvalidation_mode.bestModel._java_obj.getSmoothing())
print('training data (f1):', evaluator.setMetricName('f1').evaluate(pred_train), "\n",
'training data (weightedPrecision): ', evaluator.setMetricName('weightedPrecision').evaluate(pred_train),"\n",
'training data (weightedRecall): ', evaluator.setMetricName('weightedRecall').evaluate(pred_train),"\n",
'training data (accuracy): ', evaluator.setMetricName('accuracy').evaluate(pred_train))
print('test data (f1):', evaluator.setMetricName('f1').evaluate(pred_test), "\n",
'test data (weightedPrecision): ', evaluator.setMetricName('weightedPrecision').evaluate(pred_test),"\n",
'test data (weightedRecall): ', evaluator.setMetricName('weightedRecall').evaluate(pred_test),"\n",
'test data (accuracy): ', evaluator.setMetricName('accuracy').evaluate(pred_test))
train_conf_mat = pred_train.select('label', 'prediction')
train_conf_mat.rdd.zipWithIndex().countByKey()
test_conf_mat = pred_test.select('label', 'prediction')
test_conf_mat.rdd.zipWithIndex().countByKey()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: load iris data
Step2: Merge features to create a features column
Step3: Index label column with StringIndexer
Step4: Build pipeline
Step5: Transform data
Step6: Check the data one more time
Step7: Naive Bayes classification
Step8: Build cross-validation model
Step9: Parameter grid
Step10: Evaluator
Step11: Build cross-validation model
Step12: Fit cross-validation model
Step13: Prediction on training and test sets
Step14: Best model from cross validation
Step15: Prediction accurary
Step16: Prediction accuracy on test data
Step17: Confusion matrix
Step18: Confusion matrix on test data
|
12,007
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import scipy.stats as st
from sci_analysis import analyze
%matplotlib inline
import numpy as np
import scipy.stats as st
from sci_analysis import analyze
np.random.seed(987654321)
data = st.norm.rvs(size=1000)
analyze(xdata=data)
pets = ['dog', 'cat', 'rat', 'cat', 'rabbit', 'dog', 'hamster', 'cat', 'rabbit', 'dog', 'dog']
analyze(pets)
from inspect import signature
print(analyze.__name__, signature(analyze))
print(analyze.__doc__)
example1 = [0.2, 0.25, 0.27, np.nan, 0.32, 0.38, 0.39, np.nan, 0.42, 0.43, 0.47, 0.51, 0.52, 0.56, 0.6]
example2 = [0.23, 0.27, 0.29, np.nan, 0.33, 0.35, 0.39, 0.42, np.nan, 0.46, 0.48, 0.49, np.nan, 0.5, 0.58]
analyze(example1, example2)
np.random.seed(987654321)
group_a = st.norm.rvs(size=50)
group_b = st.norm.rvs(size=25)
group_c = st.norm.rvs(size=30)
group_d = st.norm.rvs(size=40)
analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d})
np.random.seed(987654321)
group_a = st.norm.rvs(0.0, 1, size=50)
group_b = st.norm.rvs(0.0, 3, size=25)
group_c = st.norm.rvs(0.1, 1, size=30)
group_d = st.norm.rvs(0.0, 1, size=40)
analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d})
np.random.seed(987654321)
group_a = st.norm.rvs(0.0, 1, size=50)
group_b = st.norm.rvs(0.0, 3, size=25)
group_c = st.weibull_max.rvs(1.2, size=30)
group_d = st.norm.rvs(0.0, 1, size=40)
analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d})
import pandas as pd
np.random.seed(987654321)
df = pd.DataFrame(
{
'ID' : np.random.randint(10000, 50000, size=60).astype(str),
'One' : st.norm.rvs(0.0, 1, size=60),
'Two' : st.norm.rvs(0.0, 3, size=60),
'Three' : st.weibull_max.rvs(1.2, size=60),
'Four' : st.norm.rvs(0.0, 1, size=60),
'Month' : ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] * 5,
'Condition' : ['Group A', 'Group B', 'Group C', 'Group D'] * 15
}
)
df
analyze(
df['One'],
name='Column One',
title='Distribution from pandas'
)
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
title='Bivariate Analysis between Column One and Column Three'
)
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
contours=True,
fit=False,
title='Bivariate Analysis between Column One and Column Three'
)
analyze(
df['One'],
df['Three'],
labels=df['ID'],
highlight=df[df['Three'] < -2.0]['ID'],
fit=False,
xname='Column One',
yname='Column Three',
title='Bivariate Analysis between Column One and Column Three'
)
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
groups=df['Condition'],
title='Bivariate Analysis between Column One and Column Three'
)
analyze(
df['One'],
df['Three'],
xname='Column One',
yname='Column Three',
groups=df['Condition'],
boxplot_borders=False,
highlight=['Group B'],
title='Bivariate Analysis between Column One and Column Three'
)
analyze(
df['Two'],
groups=df['Condition'],
categories='Condition',
name='Column Two',
title='Oneway from pandas'
)
analyze(
[df['One'], df['Two'], df['Three'], df['Four']],
groups=['One', 'Two', 'Three', 'Four'],
categories='Columns',
title='Unstacked Oneway'
)
analyze(
{'One': df['One'], 'Two': df['Two'], 'Three': df['Three'], 'Four': df['Four']},
categories='Columns',
title='Unstacked Oneway Using a Dictionary'
)
def set_quarter(data):
month = data['Month']
if month.all() in ('Jan', 'Feb', 'Mar'):
quarter = 'Q1'
elif month.all() in ('Apr', 'May', 'Jun'):
quarter = 'Q2'
elif month.all() in ('Jul', 'Aug', 'Sep'):
quarter = 'Q3'
elif month.all() in ('Oct', 'Nov', 'Dec'):
quarter = 'Q4'
else:
quarter = 'Unknown'
data.loc[:, 'Quarter'] = quarter
return data
quarters = ('Q1', 'Q2', 'Q3', 'Q4')
df2 = df.groupby(df['Month']).apply(set_quarter)
data = {quarter: data['Two'] for quarter, data in df2.groupby(df2['Quarter'])}
analyze(
[data[quarter] for quarter in quarters],
groups=quarters,
categories='Quarters',
name='Column Two',
title='Oneway of Annual Quarters'
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This will tell python to import the sci-analysis function analyze().
Step2: Now, sci-analysis should be ready to use. Try the following code
Step3: A histogram, box plot, summary stats, and test for normality of the data should appear above.
Step4: Let's examine the analyze() function in more detail. Here's the signature for the analyze() function
Step5: analyze() will detect the desired type of data analysis to perform based on whether the ydata argument is supplied, and whether the xdata argument is a two-dimensional array-like object.
Step6: If xdata is a sequence or dictionary of vectors, a location test and summary statistics for each vector will be performed. If each vector is normally distributed and they all have equal variance, a one-way ANOVA is performed. If the data is not normally distributed or the vectors do not have equal variance, a non-parametric Kruskal-Wallis test will be performed instead of a one-way ANOVA.
Step7: In the example above, sci-analysis is telling us the four groups are normally distributed (by use of the Bartlett Test, Oneway ANOVA and the near straight line fit on the quantile plot), the groups have equal variance and the groups have matching means. The only significant difference between the four groups is the sample size we specified. Let's try another example, but this time change the variance of group B
Step8: In the example above, group B has a standard deviation of 2.75 compared to the other groups that are approximately 1. The quantile plot on the right also shows group B has a much steeper slope compared to the other groups, implying a larger variance. Also, the Kruskal-Wallis test was used instead of the Oneway ANOVA because the pre-requisite of equal variance was not met.
Step9: The above example models group C as a Weibull distribution, while the other groups are normally distributed. You can see the difference in the distributions by the one-sided tail on the group C boxplot, and the curved shape of group C on the quantile plot. Group C also has significantly the lowest mean as indicated by the Tukey-Kramer circles and the Kruskal-Wallis test.
Step10: This creates a table (pandas DataFrame object) with 6 columns and an index which is the row id. The following command can be used to analyze the distribution of the column titled One
Step11: Anywhere you use a python list or numpy Array in sci-analysis, you can use a column or row of a pandas DataFrame (known in pandas terms as a Series). This is because a pandas Series has much of the same behavior as a numpy Array, causing sci-analysis to handle a pandas Series as if it were a numpy Array.
Step12: Since there isn't a correlation between columns One and Three, it might be useful to see where most of the data is concentrated. This can be done by adding the argument contours=True and turning off the best fit line with fit=False. For example
Step13: With a few point below -2.0, it might be useful to know which data point they are. This can be done by passing the ID column to the labels argument and then selecting which labels to highlight with the highlight argument
Step14: To check whether an individual Condition correlates between Column One and Column Three, the same analysis can be done, but this time by passing the Condition column to the groups argument. For example
Step15: The borders of the graph have boxplots for all the data points on the x-axis and y-axis, regardless of which group they belong to. The borders can be removed by adding the argument boxplot_borders=False.
Step16: Performing a location test on data in a pandas DataFrame requires some explanation. A location test can be performed with stacked or unstacked data. One method will be easier than the other depending on how the data to be analyzed is stored. In the example DataFrame used so far, to perform a location test between the groups in the Condition column, the stacked method will be easier to use.
Step17: From the graph, there are four groups
Step18: To perform a location test using the unstacked method, the columns to be analyzed are passed in a list or tuple, and the groups argument needs to be a list or tuple of the group names. One thing to note is that the groups argument was used to explicitly define the group names. This will only work if the group names and order are known in advance. If they are unknown, a dictionary comprehension can be used instead of a list comprehension to get the group names along with the data
Step19: The output will be identical to the previous example. The analysis also shows that the variances are not equal, and the means are not matched. Also, because the data in column Three is not normally distributed, the Levene Test is used to test for equal variance instead of the Bartlett Test, and the Kruskal-Wallis Test is used instead of the Oneway ANOVA.
Step20: This function will take a GroupBy object called data, where data's DataFrame object was grouped by month, and set the variable quarter based off the month. Then, a new column called Quarter is added to data where the value of each row is equal to quarter. Finally, the resulting DataFrame object is returned.
|
12,008
|
<ASSISTANT_TASK:>
Python Code:
paragraph_text = "The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished landing the first humans on the Moon from 1969 to 1972. First conceived during Dwight D. Eisenhower's administration as a three-man spacecraft to follow the one-man Project Mercury which put the first Americans in space, Apollo was later dedicated to President John F. Kennedy's national goal of landing a man on the Moon and returning him safely to the Earth by the end of the 1960s, which he proposed in a May 25, 1961, address to Congress. Project Mercury was followed by the two-man Project Gemini. The first manned flight of Apollo was in 1968. Apollo ran from 1961 to 1972, and was supported by the two-man Gemini program which ran concurrently with it from 1962 to 1966. Gemini missions developed some of the space travel techniques that were necessary for the success of the Apollo missions. Apollo used Saturn family rockets as launch vehicles. Apollo/Saturn vehicles were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three manned missions in 1973-74, and the Apollo-Soyuz Test Project, a joint Earth orbit mission with the Soviet Union in 1975."
question_text = "What project put the first Americans into space?"
#question_text = "What year did the first manned Apollo flight occur?"
#question_text = "What President is credited with the original notion of putting Americans in space?"
#question_text = "Who did the U.S. collaborate with on an Earth orbit mission in 1975?"
import data_processing as dp
import tokenization
#Large
#tokenizer = tokenization.FullTokenizer(vocab_file="./data/uncased_L-24_H-1024_A-16/vocab.txt", do_lower_case=True)
#Base
tokenizer = tokenization.FullTokenizer(vocab_file="./data/uncased_L-12_H-768_A-12/vocab.txt", do_lower_case=True)
# The maximum number of tokens for the question. Questions longer than this will be truncated to this length.
max_query_length = 64
# When splitting up a long document into chunks, how much stride to take between chunks.
doc_stride = 128
# The maximum total input sequence length after WordPiece tokenization.
# Sequences longer than this will be truncated, and sequences shorter
max_seq_length = 384
# Extract tokens from the paragraph
doc_tokens = dp.convert_doc_tokens(paragraph_text)
# Extract features from the paragraph and question
features = dp.convert_examples_to_features(doc_tokens, question_text, tokenizer, max_seq_length, doc_stride, max_query_length)
import tensorrt as trt
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
import ctypes
nvinfer = ctypes.CDLL("libnvinfer_plugin.so", mode = ctypes.RTLD_GLOBAL)
cm = ctypes.CDLL("./build/libcommon.so", mode = ctypes.RTLD_GLOBAL)
pg = ctypes.CDLL("./build/libbert_plugins.so", mode = ctypes.RTLD_GLOBAL)
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
import time
# For this example we are going to use batch size 1
max_batch_size = 1
# Load the BERT Engine
# with open("./bert_python.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
with open("./bert_python_base.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
engine = runtime.deserialize_cuda_engine(f.read())
print("List engine binding:")
for binding in engine:
print(" - {}: {}, Shape {}, {}".format(
"Input" if engine.binding_is_input(binding) else "Output",
binding,
engine.get_binding_shape(binding),
engine.get_binding_dtype(binding)))
# Determine dimensions and create page-locked memory buffers (i.e. won't be swapped to disk) to hold host inputs/outputs.
shape_input_0 = (max_batch_size,) + tuple(engine.get_binding_shape(0))
shape_input_1 = (max_batch_size,) + tuple(engine.get_binding_shape(1))
shape_input_2 = (max_batch_size,) + tuple(engine.get_binding_shape(2))
h_input_0 = cuda.pagelocked_empty(shape_input_0, dtype=np.int32)
h_input_1 = cuda.pagelocked_empty(shape_input_1, dtype=np.int32)
h_input_2 = cuda.pagelocked_empty(shape_input_2, dtype=np.int32)
shape_output = (max_batch_size,) + tuple(engine.get_binding_shape(3))
h_output = cuda.pagelocked_empty(shape_output, dtype=np.float32)
# Allocate device memory for inputs and outputs.
d_input_0 = cuda.mem_alloc(h_input_0.nbytes)
d_input_1 = cuda.mem_alloc(h_input_1.nbytes)
d_input_2 = cuda.mem_alloc(h_input_2.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
# Create a stream in which to copy inputs/outputs and run inference.
stream = cuda.Stream()
print("\nRunning Inference...")
with engine.create_execution_context() as context:
eval_start_time = time.time()
# Transfer input data to the GPU.
cuda.memcpy_htod_async(d_input_0, features["input_ids"], stream)
cuda.memcpy_htod_async(d_input_1, features["segment_ids"], stream)
cuda.memcpy_htod_async(d_input_2, features["input_mask"], stream)
# Run inference.
context.execute_async(bindings=[int(d_input_0), int(d_input_1), int(d_input_2), int(d_output)], stream_handle=stream.handle)
# Transfer predictions back from the GPU.
cuda.memcpy_dtoh_async(h_output, d_output, stream)
# Synchronize the stream
stream.synchronize()
# Return the host output.
eval_time_elapsed = time.time() - eval_start_time
start_logits = h_output[0,0,0]
end_logits = h_output[0,1,0]
# The total number of n-best predictions to generate in the nbest_predictions.json output file
n_best_size = 20
# The maximum length of an answer that can be generated. This is needed
# because the start and end predictions are not conditioned on one another
max_answer_length = 30
(prediction, nbest_json, scores_diff_json) = \
dp.get_predictions(doc_tokens, features, \
start_logits, end_logits, n_best_size, max_answer_length)
print("-----------------------------")
print("Running Inference in {:.3f} Sentences/Sec".format(1.0/eval_time_elapsed))
print("-----------------------------")
print("Answer: '{}'".format(prediction))
print("with prob: {:.3f}%".format(nbest_json[0]['probability']*100.0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question
Step2: In this example we ask our BERT model questions related to the following paragraph
Step3: TensorRT Inference
Step4: Data Post-Processing
|
12,009
|
<ASSISTANT_TASK:>
Python Code:
import wikinetworking as wn
import networkx as nx
from pyquery import PyQuery
%matplotlib inline
print "OK"
url = "https://en.wikipedia.org/wiki/List_of_Marvel_Cinematic_Universe_film_actors"
print url
links = wn.filter_links(PyQuery(url=url))
print links
selector="th"
links = wn.filter_links(PyQuery(url=url), selector=selector)
print links
another_url = ""
another_selector = ""
more_links = wn.filter_links(PyQuery(url=another_url), selector=another_selector)
print more_links
url_pattern = "https://en.wikipedia.org/wiki/List_of_Marvel_Comics_characters:_"
sections = [letter for letter in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ']
sections.append('0-9')
many_urls = [url_pattern + section for section in sections]
print many_urls
selector = ".hatnote"
more_links = wn.retrieve_multipage(many_urls, selector=selector, verbose=True)
relevant_links = wn.intersection(links, more_links)
print relevant_links
wn.write_list(relevant_links, "relevant_links.txt")
relevant_links = wn.read_list("relevant_links.txt")
print relevant_links
starting_url="/wiki/Iron_Man"
raw_crawl_data = wn.crawl(starting_url, accept=relevant_links)
import json
print json.dumps(raw_crawl_data, sort_keys=True, indent=4)
graph_data = wn.undirected_graph(raw_crawl_data)
import json
print json.dumps(graph_data, sort_keys=True, indent=4)
wn.save_dict(graph_data, "undirected_graph.json")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You just ran some Python code that imports packages. Packages are pre-written Python code. The wikinetworking package contains code for crawling, text mining and graphing Wiki articles. You can access these functions in the wn object.
Step2: Now we can download the article and get a list of links from it.
Step3: Many of these links may not be relevant to our topic. We can filter for links that exist inside certain types of HTML elements. You can find the type of element by inspecting a relevant link on your Wikipedia page in your browser. We can use a special type of filter called a CSS selector to get only links that are inside of specific types of elements.
Step4: Cross referencing lists of links
Step5: What if you need links from a lists of lists? You can automatically crawl a list of URLs, as well. First we need to generate a list of URLs.
Step6: And then we can crawl this list of URLs.
Step7: Now that we have a second set of links, we can look for the intersection of the two lists. That should give us only the URLs we want.
Step8: Let's save these links into a file so we don't have to download the data again.
Step9: Let's also make sure we can load the data after we've saved it.
Step10: Crawling The Links
Step11: We can "flatten" the data and save it for convenience.
|
12,010
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 4
sample_id = 0
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
x = np.array(x)
return x/np.max(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
def one_hot_encode(xs):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
output_size =10
# Reinventing the wheel
n_xs = len(xs)
one_hot = np.zeros((n_xs, output_size), dtype=np.float32)
one_hot[range(n_xs),(xs)] = 1
return one_hot
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a bach of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
return tf.placeholder(tf.float32,(None, n_classes), name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
return tf.placeholder(tf.float32, name='keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# Convolution
input_depth = int(x_tensor.shape[-1])
weight_size = (conv_ksize[0], conv_ksize[1], input_depth, conv_num_outputs)
conv_strides = (1, conv_strides[0], conv_strides[1], 1)
weights = tf.Variable(tf.truncated_normal(weight_size, dtype=tf.float32, stddev=0.1))
bias = tf.Variable(tf.zeros((conv_num_outputs,), dtype=tf.float32))
conv2d_tensor = tf.nn.conv2d(x_tensor, weights, conv_strides, 'SAME') + bias
relu_tensor = tf.nn.relu(conv2d_tensor)
# Maxpooling
pool_ksize = (1, pool_ksize[0], pool_ksize[1], 1)
pool_strides = (1, pool_strides[0], pool_strides[1], 1)
max_tensor = tf.nn.max_pool(relu_tensor, pool_ksize, pool_strides, 'SAME')
return max_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# Skipping the batch dimension
static_dimensions = x_tensor.shape[1:]
flatten_image_size = 1
for dimension in static_dimensions:
flatten_image_size *= int(dimension)
# Only using basic TensorFlow
return tf.reshape(x_tensor, [-1, flatten_image_size])
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
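The size arithmetic in flatten can be mirrored in plain NumPy — a rough sketch of what the reshape does, not the TensorFlow op itself:

```python
import numpy as np

def flatten_sketch(batch):
    # Multiply out every non-batch dimension, then reshape to (batch, -1)
    flat_size = int(np.prod(batch.shape[1:]))
    return batch.reshape(-1, flat_size)

x = np.zeros((4, 8, 8, 3))   # e.g. a batch of 4 conv feature maps
flat = flatten_sketch(x)     # flattened image size is 8 * 8 * 3 = 192
```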
def fully_conn(x_tensor, n_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: n_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is n_outputs.
"""
n_inputs = int(x_tensor.shape[1])
weight_shape = (n_inputs, n_outputs)
weights = tf.Variable(tf.truncated_normal(weight_shape, stddev=0.1, dtype=tf.float32))
bias = tf.Variable(tf.zeros(n_outputs), dtype=tf.float32)
logits = tf.add(tf.matmul(x_tensor, weights), bias)
activation = tf.nn.relu(logits)
return activation
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
def output(x_tensor, n_outputs):
"""
Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: n_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is n_outputs.
"""
n_inputs = int(x_tensor.shape[1])
weight_shape = (n_inputs, n_outputs)
weights = tf.Variable(tf.truncated_normal(weight_shape, stddev=0.1, dtype=tf.float32))
bias = tf.Variable(tf.zeros(n_outputs), dtype=tf.float32)
output = tf.add(tf.matmul(x_tensor, weights), bias)
return output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
"""
# Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
conv_stride = (1,1)
pool_size = (2,2)
pool_stride = pool_size
model = x
model = conv2d_maxpool(model, 64, (2,2), conv_stride, pool_size, pool_stride)
model = conv2d_maxpool(model, 64, (2,2), conv_stride, pool_size, pool_stride)
model = conv2d_maxpool(model, 64, (2,2), conv_stride, pool_size, pool_stride)
# Apply a Flatten Layer
model = flatten(model)
# Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
#model = fully_conn(model, 40)
#model = fully_conn(model, 20)
#model = fully_conn(model, 10)
# Apply dropout
#model = tf.nn.dropout(model, keep_prob)
# Apply an Output Layer
n_outputs = 10
model = output(model, n_outputs)
return model
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
session.run(optimizer, feed_dict={keep_prob:keep_probability, x:feature_batch, y:label_batch})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost_function, accuracy_function):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost_function: TensorFlow cost function
: accuracy_function: TensorFlow accuracy function
"""
training_cost = session.run(cost_function, feed_dict={x:feature_batch, y:label_batch, keep_prob:1.0})
training_accuracy = session.run(accuracy_function, feed_dict={x:feature_batch, y:label_batch, keep_prob:1.0})
validation_cost = session.run(cost_function, feed_dict={x:valid_features, y:valid_labels, keep_prob:1.0})
validation_accuracy = session.run(accuracy_function, feed_dict={x:valid_features, y:valid_labels, keep_prob:1.0})
stats = 'Cost: {:6.3f} / {:6.3f}, \t Accuracy: {:6.5f} / {:6.5f} \t (Training/Validation)'
print(stats.format(training_cost, validation_cost, training_accuracy, validation_accuracy))
epochs = 50
batch_size = 256
keep_probability = 1.0
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
12,011
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
t=np.linspace(0,4*np.pi,250)
def plot_sine1(a, b):
plt.figure(figsize=(6+a,6))
plt.plot(t, np.sin(t*a+b))
plt.xticks([0,np.pi,2*np.pi,3*np.pi,4*np.pi], [0,r'$\pi$',r'$2\pi$',r'$3\pi$',r'$4\pi$'])
plt.tight_layout()
plt.ylabel('sin(ax+b)')
plt.xlabel('x')
plt.title('Sin(ax+b) vs x')
plt.ylim(-1.25,1.25)
plt.yticks([-1.0,0,1.0], [-1,0,1])
plot_sine1(5, 3.4)
interact(plot_sine1, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1))
assert True # leave this for grading the plot_sine1 exercise
def plot_sine2(a, b, style):
plt.figure(figsize=(6+a,6))
plt.plot(t, np.sin(t*a+b), style)
plt.xticks([0,np.pi,2*np.pi,3*np.pi,4*np.pi], [0,r'$\pi$',r'$2\pi$',r'$3\pi$',r'$4\pi$'])
plt.tight_layout()
plt.ylabel('sin(ax+b)')
plt.xlabel('x')
plt.title('Sin(ax+b) vs x')
plt.ylim(-1.25,1.25)
plt.yticks([-1.0,0,1.0], [-1,0,1])
plot_sine2(4.0, -1.0, 'r--')
interact(plot_sine2, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1), style=('b','ko','r^'))
assert True # leave this for grading the plot_sine2 exercise
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting with parameters
Step2: Then use interact to create a user interface for exploring your function
Step3: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument
Step4: Use interact to create a UI for plot_sine2.
|
12,012
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from mlxtend.data import iris_data
from mlxtend.preprocessing import shuffle_arrays_unison
X, y = iris_data()
X, y = shuffle_arrays_unison([X, y], random_seed=123)
X_train, X_test = X[:100], X[100:150]
y_train, y_test = y[:100], y[100:150]
np.bincount(y)
np.bincount(y_train)
np.bincount(y_test)
df = pd.DataFrame(X)
df['class'] = y
df_train = pd.DataFrame(X_train)
df_train['class'] = y_train
df_test = pd.DataFrame(X_test)
df_test['class'] = y_test
def stackhist(x, y, **kws):
grouped = x.groupby(y)
data = [d for _, d in grouped]
labels = [l for l, _ in grouped]
plt.hist(data, histtype="barstacked",
label=labels, alpha=0.8, normed=True,
bins=np.arange(4.0, 8.1, 0.25))
plt.ylim([0, 0.6])
plt.xlim([4, 8])
plt.xlabel('Sepal Width [cm]')
plt.ylabel('Frequency')
with plt.style.context('fivethirtyeight'):
stackhist(df[0], df['class'])
#plt.legend(['Setosa', 'Virginica', 'Versicolor'], fontsize=12)
plt.title('All')
plt.tight_layout()
plt.savefig('./all.svg')
plt.show()
stackhist(df_train[0], df_train['class'])
plt.title('Train')
plt.tight_layout()
#plt.savefig('./train.svg')
plt.show()
stackhist(df_test[0], df_test['class'])
plt.title('Test')
plt.tight_layout()
#plt.savefig('./test.svg')
plt.show()
fig = plt.figure()
figlegend = plt.figure(figsize=(3,4))
ax = fig.add_subplot(111)
lines = ax.plot(range(10), np.random.randn(10), range(10), np.random.randn(10), np.random.randn(10), range(10),)
figlegend.legend(lines, ['Setosa', 'Virginica', 'Versicolor'], 'center')
fig.show()
figlegend.show()
figlegend.savefig('legend.svg')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Iris Feature by Class Distribution in Random Subsampling
|
12,013
|
<ASSISTANT_TASK:>
Python Code:
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import urllib2
import scipy.stats as stats
np.set_printoptions(precision=3, suppress=True)
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
# chopping data based on thresholds on x and y coordinates
x_bounds = (409, 3529)
y_bounds = (1564, 3124)
def check_in_bounds(row, x_bounds, y_bounds):
if row[0] < x_bounds[0] or row[0] > x_bounds[1]:
return False
if row[1] < y_bounds[0] or row[1] > y_bounds[1]:
return False
if row[3] == 0:
return False
return True
indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv,
x_bounds, y_bounds))
data_thresholded = csv[indices_in_bound]
n = data_thresholded.shape[0]
def synapses_over_unmasked(row):
s = (row[4]/row[3])*(64**3)
return [row[0], row[1], row[2], s]
syn_unmasked = np.apply_along_axis(synapses_over_unmasked, 1, data_thresholded)
syn_normalized = syn_unmasked
print 'end setup'
import sklearn.mixture as mixture
n_clusters = 4
gmm = mixture.GMM(n_components=n_clusters, n_iter=1000, covariance_type='diag')
labels = gmm.fit_predict(syn_unmasked)
clusters = []
for l in range(n_clusters):
a = np.where(labels == l)
clusters.append(syn_unmasked[a,:])
print len(clusters)
print clusters[0].shape
# Regression (x,y,z,syn/unmasked) on cleaned data ##################################
# Load regressions
from sklearn.linear_model import LinearRegression
from sklearn.svm import LinearSVR
from sklearn.neighbors import KNeighborsRegressor as KNN
from sklearn.ensemble import RandomForestRegressor as RF
from sklearn.preprocessing import PolynomialFeatures as PF
from sklearn.pipeline import Pipeline
from sklearn import cross_validation
names = ['Linear Regression','SVR','KNN Regression','Random Forest Regression','Polynomial Regression']
regressions = [LinearRegression(),
LinearSVR(C=1.0),
KNN(n_neighbors=10, algorithm='auto'),
RF(max_depth=5, max_features=1),
Pipeline([('poly', PF(degree=2)),('linear', LinearRegression(fit_intercept=False))])]
k_fold = 10
counter = 0
for cluster in clusters:
s = cluster.shape
cluster = cluster.reshape((s[1], s[2]))
counter += 1
print
print 'Regression on cluster: ' + str(counter)
X = cluster[:, (0,1,2)] # x,y,z
Y = cluster[:,-1] # syn/unmasked from spike
for idx2, reg in enumerate(regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)
print("R^2 of %s: %0.2f (+/- %0.2f)" % (names[idx2], scores.mean(), scores.std() * 2))
print "Regression done"
counter = 0
for cluster in clusters:
s = cluster.shape
cluster = cluster.reshape((s[1], s[2]))
counter += 1
print
print 'Regression on cluster: ' + str(counter)
X = cluster[:, (0,1,2)] # x,y,z
Y = cluster[:,-1] # syn/unmasked from spike
for power in range(2,10):
a = Pipeline([('poly', PF(degree=power)),('linear', LinearRegression(fit_intercept=False))])
scores = cross_validation.cross_val_score(a, X, Y, scoring='r2', cv=k_fold)
print("R^2 of Polynomial Regression of power of %i: %0.2f (+/- %0.2f)" % (power, scores.mean(), scores.std() * 2))
print "Regression on section done"
counter = 0
for cluster in clusters:
s = cluster.shape
cluster = cluster.reshape((s[1], s[2]))
counter += 1
print
print 'Regression on cluster: ' + str(counter)
X = cluster[:, (0,1,2)] # x,y,z
Y = cluster[:,-1] # syn/unmasked from spike
for depth in range(3,12):
a = RF(max_depth=5, max_features=1)
scores = cross_validation.cross_val_score(a, X, Y, scoring='r2', cv=k_fold)
print("R^2 of Random Forrest Regression of Depth %i: %0.2f (+/- %0.2f)" % (depth, scores.mean(), scores.std() * 2))
a = RF(max_features=1)
scores = cross_validation.cross_val_score(a, X, Y, scoring='r2', cv=k_fold)
print("R^2 of Random Forrest Regression of no set depth: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
print "Regression on section done"
counter = 0
for cluster in clusters:
s = cluster.shape
cluster = cluster.reshape((s[1], s[2]))
counter += 1
print
print 'Regression on cluster: ' + str(counter)
X = cluster[:, (0,1,2)] # x,y,z
Y = cluster[:,-1] # syn/unmasked from spike
for neighbor in range(5,30):
a = KNN(n_neighbors=neighbor, algorithm='auto')
scores = cross_validation.cross_val_score(a, X, Y, scoring='r2', cv=k_fold)
print("R^2 of KNN w/ %i neighbors: %0.2f (+/- %0.2f)" % (neighbor, scores.mean(), scores.std() * 2))
print "Regression on section done"
counter = 0
fig, axs = plt.subplots(1,4, figsize=(20,5))
for cluster in clusters:
s = cluster.shape
cluster = cluster.reshape((s[1], s[2]))
counter += 1
print
print 'Working on cluster: ' + str(counter)
X = cluster[:, (0,1,2)] # x,y,z
Y = cluster[:,-1] # syn/unmasked from spike
figure = plt.figure()
axs[counter-1].hist(cluster[:,-1],100)
axs[counter-1].set_title('Histogram of Density w/in Cluster#: '+ str(counter))
axs[counter-1].set_xlabel('Density')
axs[counter-1].set_ylabel('Frequency')
axs[counter-1].set_ylim([0,500])
print "Done with cluster"
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Label Data by It's Associated Cluster
Step2: 1) Run general regressions on data associated with actual cluster
Step3: 2) Change polynomial degree for regression on all data, by cluster
Step4: 3) Change Random Forest Regression Parameters
Step5: 4) Changing known nearest neighbor parameters
Step6: 5) Density distrubtion in each cluster
|
12,014
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
import phoebe
b = phoebe.default_binary()
b.add_dataset('mesh')
print b.get_parameter('columns').get_choices()
b.add_dataset('lc')
print b.get_parameter('columns').get_choices()
b['columns'] = ['*@lc01', 'teffs']
b.get_parameter('columns').get_value()
b.get_parameter('columns').expand_value()
print b.get_parameter('include_times').get_value()
print b.get_parameter('include_times').get_choices()
b['include_times'] = ['lc01', 't0@system']
print b.get_parameter('include_times').get_value()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this tutorial we will review the changes in the PHOEBE mesh structures. We will first explain the changes and then demonstrate them in code. As usual, let us import phoebe and create a default binary bundle
Step2: PHOEBE 2.0 had a mesh dataset along with pbmesh and protomesh options you could send to b.run_compute(). These options were quite convenient, but had a few inherit problems
Step3: The 'include_times' parameter
|
12,015
|
<ASSISTANT_TASK:>
Python Code:
from model.db.tb_raw import TbRaw
query = TbRaw.select().where(TbRaw.id > 0)
print "select query:", query.sql()[0] % tuple(query.sql()[1])
print "query result:"
for item in query.dicts().execute():
print item
# UNION查询
query = TbRaw.select().where(TbRaw.id >= 2) | TbRaw.select().where(TbRaw.id < 2)
print query.sql()[0] % tuple(query.sql()[1])
for item in query.dicts().execute():
print item
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 分析每一次调用
|
12,016
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('./reviews.txt', 'r') as f:
reviews = f.read()
with open('./labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:10]
from collections import Counter
counts = Counter(words)
print(counts.most_common(10))
vocab = sorted(counts, key=counts.get, reverse=True)
#print(vocab)
# Create your dictionary that maps vocab words to integers here
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
#print(vocab_to_int)
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
print(reviews_ints[:1])
# Convert labels to 1s and 0s for 'positive' and 'negative'
print(labels[:50])
labels_wip = labels.split('\n')
print(labels_wip[:50])
labels_wip2 = np.array([1 if each == 'positive' else 0 for each in labels_wip])
print(labels_wip2[:10])
labels = labels_wip2
print(labels[:10])
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out that review with 0 length
print(len(reviews_ints))
reviews_ints_wip = [each for each in reviews_ints if len(each) > 0]
print(len(reviews_ints_wip))
reviews_ints = reviews_ints_wip
seq_len = 200
features = np.zeros((len(reviews), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
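The left-padding/truncation loop above is worth seeing in isolation — short reviews are padded with zeros on the left, long ones keep only their first seq_len tokens. A minimal sketch of the same logic:

```python
import numpy as np

def pad_truncate(reviews_ints, seq_len=200):
    # Zeros everywhere, then write each review into the rightmost slots;
    # slicing to seq_len handles reviews that are too long.
    feats = np.zeros((len(reviews_ints), seq_len), dtype=int)
    for i, row in enumerate(reviews_ints):
        feats[i, -len(row):] = np.array(row)[:seq_len]
    return feats

short = pad_truncate([[1, 2, 3]], seq_len=5)              # left-padded
long_ = pad_truncate([[1, 2, 3, 4, 5, 6, 7]], seq_len=5)  # truncated
```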
split_frac = 0.8
test_frac = 0.5
split_idx = int(len(features)*split_frac)
print(len(features))
print(split_idx)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*test_frac)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab_to_int) + 1
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
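Note that get_batches silently drops the trailing examples that do not fill a whole batch. A pure-Python sketch of the same generator makes that behaviour easy to verify:

```python
def get_batches_sketch(x, y, batch_size=100):
    # Trim both sequences to a whole number of batches, then yield aligned slices
    n_batches = len(x) // batch_size
    x, y = x[:n_batches * batch_size], y[:n_batches * batch_size]
    for ii in range(0, len(x), batch_size):
        yield x[ii:ii + batch_size], y[ii:ii + batch_size]

batches = list(get_batches_sketch(list(range(10)), list(range(10)), batch_size=4))
# two full batches of 4; the trailing 2 elements are dropped
```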
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
Step9: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step11: Embedding
Step12: LSTM cell
Step13: RNN forward pass
Step14: Output
Step15: Validation accuracy
Step16: Batching
Step17: Training
Step18: Testing
|
12,017
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import numpy as np
# Connect to the database backend and initalize a Snorkel session
from lib.init import *
from snorkel.annotations import load_marginals
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
from snorkel.annotations import load_gold_labels
train_cands = session.query(Spouse).filter(Spouse.split == 0).order_by(Spouse.id).all()
dev_cands = session.query(Spouse).filter(Spouse.split == 1).order_by(Spouse.id).all()
test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all()
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1, load_as_array=True, zero_one=True)
L_gold_test = load_gold_labels(session, annotator_name='gold', split=2, zero_one=True)
train_marginals = load_marginals(session, split=0)
from snorkel.learning.disc_models.rnn import reRNN
train_kwargs = {
'lr': 0.001,
'dim': 100,
'n_epochs': 10,
'dropout': 0.25,
'print_freq': 1,
'batch_size': 128,
'max_sentence_length': 100
}
lstm = reRNN(seed=1701, n_threads=1)
lstm.train(train_cands, train_marginals, X_dev=dev_cands, Y_dev=L_gold_dev, **train_kwargs)
p, r, f1 = lstm.score(test_cands, L_gold_test)
print("Prec: {0:.3f}, Recall: {1:.3f}, F1 Score: {2:.3f}".format(p, r, f1))
tp, fp, tn, fn = lstm.error_analysis(session, test_cands, L_gold_test)
lstm.save("spouse.lstm")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I. Loading Candidates and Gold Labels
Step2: II. Training a Long Short-term Memory (LSTM) Neural Network
Step3: Now, we get the precision, recall, and F1 score from the discriminative model
Step4: We can also get the candidates returned in sets (true positives, false positives, true negatives, false negatives) as well as a more detailed score report
Step5: Finally, let's save our model for later use.
|
12,018
|
<ASSISTANT_TASK:>
Python Code:
from random import randint
class RandomPlayer():
"""Player that chooses a move randomly."""
def move(self, game, legal_moves, time_left):
if not legal_moves: return (-1,-1)
return legal_moves[randint(0,len(legal_moves)-1)]
class HumanPlayer():
"""
Player that chooses a move according to
user's input.
"""
def move(self, game, legal_moves, time_left):
print('\t'.join(['[%d] %s'%(i,str(move)) for i,move in enumerate(legal_moves)] ))
valid_choice = False
while not valid_choice:
try:
index = int(raw_input('Select move index:'))
valid_choice = 0 <= index < len(legal_moves)
if not valid_choice:
print('Illegal move! Try again.')
except ValueError:
print('Invalid index! Try again.')
return legal_moves[index]
class OpenMoveEvalFn():
def score(self, game, maximizing_player):
if maximizing_player:
eval_func = len(game.get_legal_moves())
else:
eval_func = len(game.get_opponent_moves())
return eval_func
class CustomEvalFn():
def score(self, game, maximizing_player):
#maximize your own moves and minimize opponents moves
if maximizing_player:
my_moves = len(game.get_legal_moves())
opponents_moves = len(game.get_opponent_moves())
else:
opponents_moves = len(game.get_legal_moves())
my_moves = len(game.get_opponent_moves())
#maximize this
eval_func = my_moves - opponents_moves
return eval_func
class CustomPlayer():
def __init__(self, search_depth=15, eval_fn=CustomEvalFn(), threshold = 50):
self.eval_fn = eval_fn
self.search_depth = search_depth
self.thresh = threshold
def move(self, game, legal_moves, time_left):
if game.move_count<2:
return legal_moves[randint(0,len(legal_moves)-1)]
best_move = self.alphabeta_id(game, time_left, self.search_depth)
# you will eventually replace minimax with alpha-beta
return best_move
def utility(self, game, maximizing_player):
if game.is_winner(self):
return 50000
if game.is_opponent_winner(self):
return -50000
return self.eval_fn.score(game, maximizing_player)
def minimax(self, game, time_left, depth=float("inf"), maximizing_player=True):
#terminal states
#if maximizing_player and realize opponent has won
#if minimizing player and realize max has won
if (maximizing_player and game.is_opponent_winner(self)) or (not maximizing_player and game.is_winner(self)):
return ((-1,-1), self.utility(game, maximizing_player))
#if realize that time_left isn't much, we have an arbitrary threshold of 50 ms here/max depth is reached
if depth==0 or time_left()<self.thresh:
return ((-1,-1), self.utility(game, maximizing_player))
#get actions
actions = game.get_legal_moves()
best_move = (-1,-1)
#if maximizing player, get minimax value for all actions and choose the move which has maxMIN value
if maximizing_player:
best_val = float("-inf")
for a in actions:
_, score_of_action = self.minimax(game.forecast_move(a), time_left, depth-1, False);
best_move, best_val = (a, score_of_action) if score_of_action>=best_val else (best_move, best_val)
if time_left()<self.thresh:
return (best_move, best_val)
#if minimizing player find minimax value for all actions and choose one which has minMAX value
else:
best_val = float("inf")
for a in actions:
_, score_of_action = self.minimax(game.forecast_move(a), time_left, depth-1, True);
best_move, best_val = (a, score_of_action) if score_of_action<=best_val else (best_move, best_val)
if time_left()<self.thresh:
return (best_move, best_val)
return (best_move, best_val)
def alphabeta(self, game, time_left, depth=float("inf"), alpha=float("-inf"), beta=float("inf"), maximizing_player=True):
#terminal states
#if maximizing_player and realize opponent has won
#if minimizing player and realize max has won
if (maximizing_player and game.is_opponent_winner(self)) or (not maximizing_player and game.is_winner(self)):
return ((-1,-1), self.utility(game, maximizing_player))
        #if realize that time_left isn't much, we have an arbitrary threshold of self.thresh (default 50 ms)/max depth is reached
if depth==0 or time_left()<self.thresh:
return ((-1,-1), self.utility(game, maximizing_player))
#get actions
actions = game.get_legal_moves()
best_move = (-1,-1)
#if maximizing player, get alphabeta value for all actions and choose the move
#which has max value plus prune those which do not adher to alphabeta limits
if maximizing_player:
best_val = float("-inf")
for a in actions:
_, score_of_action = self.alphabeta(game.forecast_move(a), time_left, depth-1, alpha, beta, False);
if score_of_action>best_val:
best_move = a
best_val = score_of_action
if best_val>beta:
return a, best_val
alpha = max(alpha, best_val)
if alpha>=beta:
return a, alpha
if time_left()<self.thresh:
return (best_move, best_val)
else:
best_val = float("inf")
for a in actions:
_, score_of_action = self.alphabeta(game.forecast_move(a), time_left, depth-1, alpha, beta, True);
if score_of_action<best_val:
best_move = a
best_val = score_of_action
if best_val<alpha:
return a, best_val
beta = min(beta, best_val)
if beta<=alpha:
return a, beta
if time_left()<self.thresh:
return (best_move, best_val)
return (best_move, best_val)
def alphabeta_id(self, game, time_left, max_depth):
        for depth in range(1, max_depth + 1):
best_move, _ = self.alphabeta(game, time_left, depth)
if time_left()<self.thresh:
break
return (best_move)
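The cutoff logic in `alphabeta` above can be exercised in isolation on a hand-built game tree. The toy `alphabeta` and `tree` literal below are illustrative only (not part of the isolation `Board` API): leaves are static scores, inner nodes are lists of children.

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will never allow this branch
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break  # alpha cutoff
    return value

tree = [[3, 5], [2, 9]]  # classic textbook tree: minimax value is 3
print(alphabeta(tree, 2, float('-inf'), float('inf'), True))  # → 3
```

On this tree the second subtree is cut off after its first leaf (2), because the minimizer can already force a value below alpha.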
"""Example test you can run
to make sure your AI does better
than random."""
from isolation import Board
if __name__ == "__main__":
r = RandomPlayer()
h = CustomPlayer()
game = Board(h,r)
game.play_isolation(500)
"""Example test you can run
to make sure your basic evaluation
function works."""
from isolation import Board
if __name__ == "__main__":
sample_board = Board(RandomPlayer(),RandomPlayer())
# setting up the board as though we've been playing
sample_board.move_count = 3
sample_board.__active_player__ = 0 # player 1 = 0, player 2 = 1
# 1st board = 16 moves
sample_board.__board_state__ = [
[0,2,0,0,0],
[0,0,0,0,0],
[0,0,1,0,0],
[0,0,0,0,0],
[0,0,0,0,0]]
sample_board.__last_player_move__ = [(2,2),(0,1)]
# player 1 should have 16 moves available,
# so board gets a score of 16
h = OpenMoveEvalFn()
    print('This board has a score of %s.'%(h.score(sample_board, True)))  # score() also takes the maximizing_player flag
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is the IPython notebook you should use as a template for your agent. Your task for this assignment is to implement a winning AI for the game of Isolation, as specified in the assignment PDF you have been issued.
Step3: The following are functions that might be useful to you in developing your agent
Step4: This is the first part of the assignment you are expected to implement. It is the evaluation function we've been using in class. The score of a specified game state is just the number of moves open to the active player.
Step5: The following is a
Step6: Implement a Player below that chooses a move using
Step9: The following are some basic tests you can use to sanity-check your code. You will also be provided with a test server to which you will be able to submit your agents later this week. Good luck!
|
12,019
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
%%html
<style>
.box-note {
padding: 10px;
margin: 20px;
background: #efefef;
font-size: 12px;
color: #333333;
}
</style>
mapserver_url = '172.17.0.2'
url = (
'http://172.17.0.2/?map=/maps/meteorological/ndvi.map&' +
'SERVICE=WMS&' +
'VERSION=1.3.0&'+
'REQUEST=GetMap&' +
'LAYERS=3304557&' +
'CRS=CRS:84&' +
'BBOX=-43.79653853090349,-23.0828926973403,-43.09903940038369,-22.746020285810395&' +
'WIDTH=800&' +
'HEIGHT=400&' +
'FORMAT=image/png'
)
Image(url=url, embed=True, format='png')
url = (
'http://172.17.0.2/?map=/maps/alert/dengue.map&' +
'SERVICE=WMS&' +
'VERSION=1.3.0&'+
'REQUEST=GetMap&' +
'LAYERS=3304557&' +
'CRS=CRS:84&' +
'BBOX=-43.79653853090349,-23.0828926973403,-43.09903940038369,-22.746020285810395&' +
'WIDTH=800&' +
'HEIGHT=400&' +
'FORMAT=image/png'
)
Image(url=url, embed=True, format='png')
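The hand-concatenated query strings above can also be built with `urllib.parse.urlencode`, which handles the escaping. The `build_getmap_url` helper below is an illustrative sketch (not part of the Info Dengue API); the parameter names follow WMS 1.3.0.

```python
from urllib.parse import urlencode

def build_getmap_url(host, mapfile, layer, bbox, width=800, height=400):
    # Assemble the WMS 1.3.0 GetMap parameters as a dict, then let
    # urlencode take care of percent-escaping (e.g. '/' in the map path).
    params = {
        'map': mapfile,
        'SERVICE': 'WMS',
        'VERSION': '1.3.0',
        'REQUEST': 'GetMap',
        'LAYERS': layer,
        'CRS': 'CRS:84',
        'BBOX': ','.join(str(c) for c in bbox),
        'WIDTH': width,
        'HEIGHT': height,
        'FORMAT': 'image/png',
    }
    return 'http://{}/?{}'.format(host, urlencode(params))

url = build_getmap_url('172.17.0.2', '/maps/alert/dengue.map', '3304557',
                       (-43.796, -23.083, -43.099, -22.746))
```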
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: The full address for accessing the map server requires the following parameters
Step3: As mentioned earlier, the Info Dengue map server can be queried for alert data on the incidence of Dengue, Chikungunya, and Zika.
|
12,020
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
if 'google.colab' in sys.modules:
!pip install --upgrade pip
!pip install -U tfx
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# This is the directory containing the TFX Chicago Taxi Pipeline example.
_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')
# This is the path where your model will be pushed for serving.
_serving_model_dir = os.path.join(
tempfile.mkdtemp(), 'serving_model/taxi_simple')
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
!head {_data_filepath}
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
context.run(example_gen, enable_cache=True)
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen, enable_cache=True)
context.show(statistics_gen.outputs['statistics'])
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen, enable_cache=True)
context.show(schema_gen.outputs['schema'])
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator, enable_cache=True)
context.show(example_validator.outputs['anomalies'])
_taxi_constants_module_file = 'taxi_constants.py'
%%writefile {_taxi_constants_module_file}
NUMERICAL_FEATURES = ['trip_miles', 'fare', 'trip_seconds']
BUCKET_FEATURES = [
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude'
]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
CATEGORICAL_NUMERICAL_FEATURES = [
'trip_start_hour', 'trip_start_day', 'trip_start_month',
'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
'dropoff_community_area'
]
CATEGORICAL_STRING_FEATURES = [
'payment_type',
'company',
]
# Number of vocabulary terms used for encoding categorical features.
VOCAB_SIZE = 1000
# Count of out-of-vocab buckets in which unrecognized categorical are hashed.
OOV_SIZE = 10
# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'
def t_name(key):
  """Rename the feature keys so that they don't clash with the raw keys when
  running the Evaluator component.

  Args:
    key: The original feature key
  Returns:
    key with '_xf' appended
  """
return key + '_xf'
_taxi_transform_module_file = 'taxi_transform.py'
%%writefile {_taxi_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when we may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if we're doing development
import importlib
importlib.reload(taxi_constants)
_NUMERICAL_FEATURES = taxi_constants.NUMERICAL_FEATURES
_BUCKET_FEATURES = taxi_constants.BUCKET_FEATURES
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_CATEGORICAL_NUMERICAL_FEATURES = taxi_constants.CATEGORICAL_NUMERICAL_FEATURES
_CATEGORICAL_STRING_FEATURES = taxi_constants.CATEGORICAL_STRING_FEATURES
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FARE_KEY = taxi_constants.FARE_KEY
_LABEL_KEY = taxi_constants.LABEL_KEY
def _make_one_hot(x, key):
  """Make a one-hot tensor to encode categorical features.

  Args:
    x: A dense tensor
    key: A string key for the feature in the input
  Returns:
    A dense one-hot tensor as a float list
  """
integerized = tft.compute_and_apply_vocabulary(x,
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE,
vocab_filename=key, name=key)
depth = (
tft.experimental.get_vocabulary_size_by_name(key) + _OOV_SIZE)
one_hot_encoded = tf.one_hot(
integerized,
depth=tf.cast(depth, tf.int32),
on_value=1.0,
off_value=0.0)
return tf.reshape(one_hot_encoded, [-1, depth])
def _fill_in_missing(x):
  """Replace missing values in a SparseTensor.

  Fills in missing values of `x` with '' or 0, and converts to a dense tensor.

  Args:
    x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
      in the second dimension.
  Returns:
    A rank 1 tensor where missing values of `x` have been filled in.
  """
if not isinstance(x, tf.sparse.SparseTensor):
return x
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
def preprocessing_fn(inputs):
  """tf.transform's callback function for preprocessing inputs.

  Args:
    inputs: map from feature keys to raw not-yet-transformed features.
  Returns:
    Map from string feature key to transformed feature operations.
  """
outputs = {}
for key in _NUMERICAL_FEATURES:
# If sparse make it dense, setting nan's to 0 or '', and apply zscore.
outputs[taxi_constants.t_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]), name=key)
for key in _BUCKET_FEATURES:
outputs[taxi_constants.t_name(key)] = tf.cast(tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT, name=key),
dtype=tf.float32)
for key in _CATEGORICAL_STRING_FEATURES:
outputs[taxi_constants.t_name(key)] = _make_one_hot(_fill_in_missing(inputs[key]), key)
for key in _CATEGORICAL_NUMERICAL_FEATURES:
outputs[taxi_constants.t_name(key)] = _make_one_hot(tf.strings.strip(
tf.strings.as_string(_fill_in_missing(inputs[key]))), key)
# Was this passenger a big tipper?
taxi_fare = _fill_in_missing(inputs[_FARE_KEY])
tips = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_LABEL_KEY] = tf.where(
tf.math.is_nan(taxi_fare),
tf.cast(tf.zeros_like(taxi_fare), tf.int64),
# Test if the tip was > 20% of the fare.
tf.cast(
tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))
return outputs
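As a plain-Python sketch of what `_make_one_hot` produces: a vocabulary lookup whose unknown values fall into out-of-vocabulary buckets, followed by one-hot encoding to depth `vocab_size + num_oov_buckets`. The hashing of unknowns below is illustrative only, not TFT's actual bucketing scheme.

```python
def one_hot_with_oov(values, vocab, num_oov_buckets=2):
    # Depth of the one-hot vector: known vocabulary terms plus OOV buckets.
    depth = len(vocab) + num_oov_buckets
    index = {v: i for i, v in enumerate(vocab)}
    encoded = []
    for v in values:
        # Unknown values hash into one of the OOV buckets, analogous to
        # tft.compute_and_apply_vocabulary(num_oov_buckets=...).
        i = index.get(v, len(vocab) + hash(v) % num_oov_buckets)
        row = [0.0] * depth
        row[i] = 1.0
        encoded.append(row)
    return encoded

rows = one_hot_with_oov(['Cash', 'Credit Card', 'Prcard'],
                        vocab=['Cash', 'Credit Card'])
```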
transform = tfx.components.Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform, enable_cache=True)
transform.outputs
train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
_taxi_trainer_module_file = 'taxi_trainer.py'
%%writefile {_taxi_trainer_module_file}
from typing import Dict, List, Text
import os
import glob
from absl import logging
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_transform import TFTransformOutput
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when we may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if we're doing development
import importlib
importlib.reload(taxi_constants)
_LABEL_KEY = taxi_constants.LABEL_KEY
_BATCH_SIZE = 40
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
  """Generates features and label for tuning/training.

  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    tf_transform_output: A TFTransformOutput.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch

  Returns:
    A dataset that contains (features, indices) tuple where features is a
    dictionary of Tensors, and indices is a single Tensor of label indices.
  """
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
tf_transform_output.transformed_metadata.schema)
def _get_tf_examples_serving_signature(model, tf_transform_output):
  """Returns a serving signature that accepts `tensorflow.Example`."""
# We need to track the layers in the model in order to save it.
# TODO(b/162357359): Revise once the bug is resolved.
model.tft_layer_inference = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def serve_tf_examples_fn(serialized_tf_example):
    """Returns the output to be used in the serving signature."""
raw_feature_spec = tf_transform_output.raw_feature_spec()
# Remove label feature since these will not be present at serving time.
raw_feature_spec.pop(_LABEL_KEY)
raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
transformed_features = model.tft_layer_inference(raw_features)
logging.info('serve_transformed_features = %s', transformed_features)
outputs = model(transformed_features)
# TODO(b/154085620): Convert the predicted labels from the model using a
# reverse-lookup (opposite of transform.py).
return {'outputs': outputs}
return serve_tf_examples_fn
def _get_transform_features_signature(model, tf_transform_output):
  """Returns a serving signature that applies tf.Transform to features."""
# We need to track the layers in the model in order to save it.
# TODO(b/162357359): Revise once the bug is resolved.
model.tft_layer_eval = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def transform_features_fn(serialized_tf_example):
    """Returns the transformed_features to be fed as input to evaluator."""
raw_feature_spec = tf_transform_output.raw_feature_spec()
raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
transformed_features = model.tft_layer_eval(raw_features)
logging.info('eval_transformed_features = %s', transformed_features)
return transformed_features
return transform_features_fn
def export_serving_model(tf_transform_output, model, output_dir):
  """Exports a keras model for serving.

  Args:
    tf_transform_output: Wrapper around output of tf.Transform.
    model: A keras model to export for serving.
    output_dir: A directory where the model will be exported to.
  """
  # The layer has to be saved to the model for keras tracking purposes.
model.tft_layer = tf_transform_output.transform_features_layer()
signatures = {
'serving_default':
_get_tf_examples_serving_signature(model, tf_transform_output),
'transform_features':
_get_transform_features_signature(model, tf_transform_output),
}
model.save(output_dir, save_format='tf', signatures=signatures)
def _build_keras_model(tf_transform_output: TFTransformOutput
) -> tf.keras.Model:
  """Creates a DNN Keras model for classifying taxi data.

  Args:
    tf_transform_output: [TFTransformOutput], the outputs from Transform
  Returns:
    A keras Model.
  """
feature_spec = tf_transform_output.transformed_feature_spec().copy()
feature_spec.pop(_LABEL_KEY)
inputs = {}
for key, spec in feature_spec.items():
if isinstance(spec, tf.io.VarLenFeature):
inputs[key] = tf.keras.layers.Input(
shape=[None], name=key, dtype=spec.dtype, sparse=True)
elif isinstance(spec, tf.io.FixedLenFeature):
# TODO(b/208879020): Move into schema such that spec.shape is [1] and not
# [] for scalars.
inputs[key] = tf.keras.layers.Input(
shape=spec.shape or [1], name=key, dtype=spec.dtype)
else:
raise ValueError('Spec type is not supported: ', key, spec)
output = tf.keras.layers.Concatenate()(tf.nest.flatten(inputs))
output = tf.keras.layers.Dense(100, activation='relu')(output)
output = tf.keras.layers.Dense(70, activation='relu')(output)
output = tf.keras.layers.Dense(50, activation='relu')(output)
output = tf.keras.layers.Dense(20, activation='relu')(output)
output = tf.keras.layers.Dense(1)(output)
return tf.keras.Model(inputs=inputs, outputs=output)
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor,
tf_transform_output, _BATCH_SIZE)
eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor,
tf_transform_output, _BATCH_SIZE)
model = _build_keras_model(tf_transform_output)
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=fn_args.model_run_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
# Export the model.
export_serving_model(tf_transform_output, model, fn_args.serving_model_dir)
trainer = tfx.components.Trainer(
module_file=os.path.abspath(_taxi_trainer_module_file),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=tfx.proto.TrainArgs(num_steps=10000),
eval_args=tfx.proto.EvalArgs(num_steps=5000))
context.run(trainer, enable_cache=True)
model_artifact_dir = trainer.outputs['model'].get()[0].uri
pp.pprint(os.listdir(model_artifact_dir))
model_dir = os.path.join(model_artifact_dir, 'Format-Serving')
pp.pprint(os.listdir(model_dir))
model_run_artifact_dir = trainer.outputs['model_run'].get()[0].uri
%load_ext tensorboard
%tensorboard --logdir {model_run_artifact_dir}
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when we may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if we're doing development
import importlib
importlib.reload(taxi_constants)
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(
signature_name='serving_default',
label_key=taxi_constants.LABEL_KEY,
preprocessing_function_names=['transform_features'],
)
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
# Change threshold will be ignored if there is no
# baseline model resolved from MLMD (first run).
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10})))
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced along feature column trip_start_hour.
tfma.SlicingSpec(
feature_keys=['trip_start_hour'])
])
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
# The model resolver is only required if performing model validation in addition
# to evaluation. In this case we validate against the latest blessed model. If
# no model has been blessed before (as in this case) the evaluator will make our
# candidate the first blessed model.
model_resolver = tfx.dsl.Resolver(
strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
model_blessing=tfx.dsl.Channel(
type=tfx.types.standard_artifacts.ModelBlessing)).with_id(
'latest_blessed_model_resolver')
context.run(model_resolver, enable_cache=True)
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
baseline_model=model_resolver.outputs['model'],
eval_config=eval_config)
context.run(evaluator, enable_cache=True)
evaluator.outputs
context.show(evaluator.outputs['evaluation'])
import tensorflow_model_analysis as tfma
# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result, slicing_column='trip_start_hour')
blessing_uri = evaluator.outputs['blessing'].get()[0].uri
!ls -l {blessing_uri}
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
context.run(pusher, enable_cache=True)
pusher.outputs
push_uri = pusher.outputs['pushed_model'].get()[0].uri
model = tf.saved_model.load(push_uri)
for item in model.signatures.items():
pp.pprint(item)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TFX Keras Component Tutorial
Step2: Install TFX
Step3: Did you restart the runtime?
Step4: Let's check the library versions.
Step5: Set up pipeline paths
Step6: Download example data
Step7: Take a quick look at the CSV file.
Step8: Disclaimer
Step9: Run TFX components interactively
Step10: Let's examine the output artifacts of ExampleGen. This component produces two artifacts, training examples and evaluation examples
Step11: We can also take a look at the first three training examples
Step12: Now that ExampleGen has finished ingesting the data, the next step is data analysis.
Step13: After StatisticsGen finishes running, we can visualize the outputted statistics. Try playing with the different plots!
Step14: SchemaGen
Step15: After SchemaGen finishes running, we can visualize the generated schema as a table.
Step16: Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.
Step17: After ExampleValidator finishes running, we can visualize the anomalies as a table.
Step19: In the anomalies table, we can see that there are no anomalies. This is what we'd expect, since this is the first dataset that we've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.
Step23: Next, we write a preprocessing_fn that takes in raw data as input, and returns transformed features that our model can train on
Step24: Now, we pass in this feature engineering code to the Transform component and run it to transform your data.
Step25: Let's examine the output artifacts of Transform. This component produces two types of outputs
Step26: Take a peek at the transform_graph artifact. It points to a directory containing three subdirectories.
Step27: The transformed_metadata subdirectory contains the schema of the preprocessed data. The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data.
Step36: After the Transform component has transformed your data into features, the next step is to train a model.
Step37: Now, we pass in this model code to the Trainer component and run it to train the model.
Step38: Analyze Training with TensorBoard
Step39: Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.
Step40: Evaluator
Step41: Next, we give this configuration to Evaluator and run it.
Step42: Now let's examine the output artifacts of Evaluator.
Step43: Using the evaluation output we can show the default visualization of global metrics on the entire evaluation set.
Step44: To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.
Step45: This visualization shows the same metrics, but computed at every feature value of trip_start_hour instead of on the entire evaluation set.
Step46: Now we can also verify the success by loading the validation result record
Step47: Pusher
Step48: Let's examine the output artifacts of Pusher.
Step49: In particular, the Pusher will export your model in the SavedModel format, which looks like this
|
12,021
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
x=np.linspace(0,50,100)
ts1=pd.Series(3.1*np.sin(x/1.5)+3.5)
ts2=pd.Series(2.2*np.sin(x/3.5+2.4)+3.2)
ts3=pd.Series(0.04*x+3.0)
#ts1.plot()
#ts2.plot()
#ts3.plot()
#plt.ylim(-2,10)
#plt.legend(['ts1','ts2','ts3'])
#plt.show()
from math import sqrt  # needed by euclid_dist and the DTW functions below

def euclid_dist(t1,t2):
    return sqrt(sum((t1-t2)**2))
#print euclid_dist(ts1,ts2)
#print euclid_dist(ts1,ts3)
def DTWDistance(s1, s2):
DTW={}
for i in range(len(s1)):
DTW[(i, -1)] = float('inf')
for i in range(len(s2)):
DTW[(-1, i)] = float('inf')
DTW[(-1, -1)] = 0
for i in range(len(s1)):
for j in range(len(s2)):
dist= (s1[i]-s2[j])**2
DTW[(i, j)] = dist + min(DTW[(i-1, j)],DTW[(i, j-1)], DTW[(i-1, j-1)])
return sqrt(DTW[len(s1)-1, len(s2)-1])
#print DTWDistance(ts1,ts2)
#print DTWDistance(ts1,ts3)
def DTWDistance(s1, s2,w):
DTW={}
w = max(w, abs(len(s1)-len(s2)))
for i in range(-1,len(s1)):
for j in range(-1,len(s2)):
DTW[(i, j)] = float('inf')
DTW[(-1, -1)] = 0
for i in range(len(s1)):
for j in range(max(0, i-w), min(len(s2), i+w)):
dist= (s1[i]-s2[j])**2
DTW[(i, j)] = dist + min(DTW[(i-1, j)],DTW[(i, j-1)], DTW[(i-1, j-1)])
return sqrt(DTW[len(s1)-1, len(s2)-1])
#print DTWDistance(ts1,ts2,10)
#print DTWDistance(ts1,ts3,10)
def LB_Keogh(s1,s2,r):
LB_sum=0
for ind,i in enumerate(s1):
#print(ind -r, ind+r)
lower_bound=min(s2[(ind-r if ind-r>=0 else 0):(ind+r)])
upper_bound=max(s2[(ind-r if ind-r>=0 else 0):(ind+r)])
if i>upper_bound:
LB_sum=LB_sum+(i-upper_bound)**2
elif i<lower_bound:
LB_sum=LB_sum+(i-lower_bound)**2
return sqrt(LB_sum)
#print LB_Keogh(ts1,ts2,20)
#print LB_Keogh(ts1,ts3,20)
#from sklearn.metrics import classification_report
from math import sqrt
def knn(train,test,w):
preds=[]
for ind,i in enumerate(test):
min_dist=float('inf')
closest_seq=[]
#print ind
for j in train:
if LB_Keogh(i[:-1],j[:-1],5)<min_dist:
dist=DTWDistance(i[:-1],j[:-1],w)
if dist<min_dist:
min_dist=dist
closest_seq=j
preds.append(closest_seq[-1])
return classification_report(test[:,-1],preds)
train = np.genfromtxt('datasets/train.csv', delimiter='\t')
test = np.genfromtxt('datasets/test.csv', delimiter='\t')
#print (knn(train,test,4))
import random
def k_means_clust(data,num_clust,num_iter,w=5):
centroids=random.sample(data,num_clust)
counter=0
for n in range(num_iter):
counter+=1
print (counter)
assignments={}
#assign data points to clusters
for ind,i in enumerate(data):
min_dist=float('inf')
closest_clust=None
for c_ind,j in enumerate(centroids):
if LB_Keogh(i,j,200)<min_dist:
cur_dist=DTWDistance(i,j,w)
if cur_dist<min_dist:
min_dist=cur_dist
closest_clust=c_ind
if closest_clust in assignments:
assignments[closest_clust].append(ind)
else:
                assignments[closest_clust]=[ind]
#recalculate centroids of clusters
for key in assignments:
clust_sum=0
for k in assignments[key]:
clust_sum= clust_sum+data[k]
print("DEBUG")
for m in clust_sum:
#print(m)
t = m/float(len(assignments[key]))
centroids[key] = m/float(len(assignments[key])) #centroids[key]=[m/float(len(assignments[key])) for m in clust_sum]
return centroids
train = np.genfromtxt('datasets/train.csv', delimiter='\t')
test = np.genfromtxt('datasets/test.csv', delimiter='\t')
data1=np.vstack((train[:,:-1],test[:,:-1]))
#print(type(train))
#print(np.fromfile("ndarray.csv"))
#print("origi dataset")
df = pd.DataFrame.from_csv("ndarray.csv")
#data = np.ndarray(df)
#numpyMatrix = df.as_matrix()
data1=np.vstack((train[:,:-1],test[:,:-1]))
print(data1[0])
print(type(data1[0]))
data = np.fromfile("prices.csv")
data = np.vstack(data)
print(data[0])
print(type(data[0]))
d = df.values.tolist()
data = np.vstack(d)
for i in range(26):
if np.isnan(d[i][-1]):
d[i][-1] = 0.1
#data = np.ndarray(d)
type(data1[0])
input = data1
y=np.array([np.array(di)[:100] for di in d])
ts1 = y[0]
ts2 = y[1]
#print LB_Keogh(ts1,ts3,2)
print(data1[1])
y = np.delete(y, 25, 0)
# len of ts in example is 60 - range=5
# len of ts in datset is 1416 - 100
#for i in range(26):
# ts = y[i][:60]
# y[i] = ts
(y[1][-1])
#print(y[24])
len(y[1])
import matplotlib.pylab as plt
centroids=k_means_clust(data1,4,10,4) #data,num_clust,num_iter,w=5
print("centroids" ,centroids)
for i in centroids:
plt.plot(i)
plt.show()
import numpy as np;
import seaborn as sns;
import pandas as pd
from scipy import stats
import scipy.cluster.hierarchy as hac
import matplotlib.pyplot as plt
num_samples = 61
group_size = 10
df = pd.DataFrame.from_csv("ndarray.csv")
d = df.values.tolist()
data = np.vstack(d)
for i in range(26):
d[i][-1] = 0
#data = np.ndarray(d)
#type(data1[0])
input = data1
y=np.array([np.array(di)[:60] for di in d])
for i in range(26):
timeseries = y[i]
timeSeries = (timeseries-timeseries.min())/(timeseries.max()-timeseries.min())
y[i] = timeSeries
timeSeries = pd.DataFrame()
#timeSeries = (timeseries-timeseries.min())/(timeseries.max()-timeseries.min())
ax = None
for arr in y:
#for arr in data1:
#arr = arr + np.random.rand(group_size, num_samples) + (np.random.randn(group_size, 1)*3)
df = pd.DataFrame(arr)
#print(df)
timeSeries = timeSeries.append(df)
# We use seaborn to plot what we have
#ax = sns.tsplot(ax=ax, data=df.values, ci=[68, 95])
#ax = sns.tsplot(ax=ax, data=df.values, err_style="unit_traces")
# Just one line :)
Z = hac.linkage(timeSeries, 'ward')
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendrogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
# Just one line :)
Z = hac.linkage(timeSeries, 'complete')
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendrogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
# Just one line :)
Z = hac.linkage(timeSeries, 'average')
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendrogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
# Just one line :)
Z = hac.linkage(timeSeries, 'centroid')
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendrogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
# Just one line :)
Z = hac.linkage(timeSeries, 'single', metric=lambda u, v: DTWDistance(u, v, 4))
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendrogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
print("method is single")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the above example, it is clear that $ts1$ and $ts2$ are most similar (they are both $sin$ functions under different transformations). $ts3$ is clearly the most different. Let's compute the Euclidean distance $d(ts1,ts2)$ and $d(ts1,ts3)$ to see if the Euclidean distance measure agrees with what our intuition tells us. Let's first create a function that computes the Euclidean distance between two time series.
Step2: Let's now find the Euclidean distance between $ts1$ and $ts2$
Step3: and the Euclidean distance between $ts1$ and $ts3$
Step4: This is not good because according to the Euclidean distance measure, $ts1$ is more similar to $ts3$ than to $ts2$ which contradicts our intuition. This is the problem with using the Euclidean distance measure. It often produced pessimistic similarity measures when it encounters distortion in the time axis. The way to deal with this is to use dynamic time warping.
Step5: Now let's compute the Euclidean distance between $ts1$ and $ts2$ using dynamic time warping.
Step6: and now the dynamic time warping distance between $ts1$ and $ts3$
Step7: As you can see, our results have changed from when we only used the Euclidean distance measure. Now, in agreement with our intuition, $ts2$ is shown to be more similar to $ts1$ than $ts3$ is.
Step8: Let's test this faster version.
Step9: Another way to speed things up is to use the LB Keogh lower bound of dynamic time warping. It is defined as $$LBKeogh(Q,C)=\sum_{i=1}^n (c_i-U_i)^2I(c_i > U_i)+(c_i-L_i)^2I(c_i < L_i)$$
Step10: Let's now test on $ts1$ and $ts2$
Step11: and now $ts1$ and $ts3$.
Step12: The LB Keogh lower bound method is linear whereas dynamic time warping is quadratic in complexity, which makes it very advantageous for searching over large sets of time series.
Step13: Now let's test it on some data. We will use a window size of 4. Although the code is sped up with the use of the LB Keogh bound and the dynamic time warping locality contraint, it may still take a few minutes to run.
Step14: The same idea can also be applied to k-means clustering. In this algorithm, the number of clusters is set apriori and similar time series are clustered together.
Step17: Let's test it on the entire data set (i.e. the training set and the test set stacked together).
|
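As an aside, the DTW recurrence used in the record above can be sketched as a standalone Python 3 function; the name `dtw_distance` and the tiny test series below are illustrative, not part of the dataset:

```python
from math import sqrt, inf

def dtw_distance(s1, s2):
    """Classic O(len(s1)*len(s2)) dynamic time warping distance."""
    n, m = len(s1), len(s2)
    # cost[i][j] holds the cumulative cost of aligning s1[:i] with s2[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (s1[i - 1] - s2[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return sqrt(cost[n][m])

a = [1.0, 2.0, 3.0, 4.0]
b = [1.0, 2.0, 2.0, 3.0, 4.0]
print(dtw_distance(a, a))  # identical series warp at zero cost -> 0.0
print(dtw_distance(a, b))
```

A quick sanity check for any DTW implementation is that identical (or merely time-stretched) series warp onto each other at zero cost.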
12,022
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.__version__
x = [1,2]
y = [[4, 1], [2, 2]]
print np.dot(x, y)
print np.dot(y, x)
print np.matmul(x, y)
print np.inner(x, y)
print np.inner(y, x)
x = [[1, 0], [0, 1]]
y = [[4, 1], [2, 2], [1, 1]]
print np.dot(y, x)
print np.matmul(y, x)
x = np.array([[1, 4], [5, 6]])
y = np.array([[4, 1], [2, 2]])
print np.vdot(x, y)
print np.vdot(y, x)
print np.dot(x.flatten(), y.flatten())
print np.inner(x.flatten(), y.flatten())
print (x*y).sum()
x = np.array(['a', 'b'], dtype=object)
y = np.array([1, 2])
print np.inner(x, y)
print np.inner(y, x)
print np.outer(x, y)
print np.outer(y, x)
x = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], dtype=np.int32)
L = np.linalg.cholesky(x)
print L
assert np.array_equal(np.dot(L, L.T.conjugate()), x)
x = np.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=np.float32)
q, r = np.linalg.qr(x)
print "q=\n", q, "\nr=\n", r
assert np.allclose(np.dot(q, r), x)
x = np.array([[1, 0, 0, 0, 2], [0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0]], dtype=np.float32)
U, s, V = np.linalg.svd(x, full_matrices=False)
print "U=\n", U, "\ns=\n", s, "\nV=\n", V
assert np.allclose(np.dot(U, np.dot(np.diag(s), V)), x)
x = np.diag((1, 2, 3))
eigenvals = np.linalg.eig(x)[0]
eigenvals_ = np.linalg.eigvals(x)
assert np.array_equal(eigenvals, eigenvals_)
print "eigenvalues are\n", eigenvals
eigenvecs = np.linalg.eig(x)[1]
print "eigenvectors are\n", eigenvecs
print np.array_equal(np.dot(x, eigenvecs), eigenvals * eigenvecs)
x = np.arange(1, 10).reshape((3, 3))
print np.linalg.norm(x, 'fro')
print np.linalg.cond(x, 'fro')
x = np.arange(1, 5).reshape((2, 2))
out1 = np.linalg.det(x)
out2 = x[0, 0] * x[1, 1] - x[0, 1] * x[1, 0]
assert np.allclose(out1, out2)
print out1
x = np.eye(4)
out1 = np.linalg.matrix_rank(x)
out2 = np.linalg.svd(x)[1].size
assert out1 == out2
print out1
x = np.arange(1, 5).reshape((2, 2))
sign, logdet = np.linalg.slogdet(x)
det = np.linalg.det(x)
assert sign == np.sign(det)
assert logdet == np.log(np.abs(det))
print sign, logdet
x = np.eye(4)
out1 = np.trace(x)
out2 = x.diagonal().sum()
assert out1 == out2
print out1
x = np.array([[1., 2.], [3., 4.]])
out1 = np.linalg.inv(x)
assert np.allclose(np.dot(x, out1), np.eye(2))
print out1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matrix and vector products
Step2: Q2. Predict the results of the following code.
Step3: Q3. Predict the results of the following code.
Step4: Q4. Predict the results of the following code.
Step5: Decompositions
Step6: Q6. Compute the qr factorization of x and verify it.
Step7: Q7. Factor x by Singular Value Decomposition and verify it.
Step8: Matrix eigenvalues
Step9: Q9. Predict the results of the following code.
Step10: Norms and other numbers
Step11: Q11. Calculate the determinant of x.
Step12: Q12. Calculate the rank of x.
Step13: Q13. Compute the sign and natural logarithm of the determinant of x.
Step14: Q14. Return the sum along the diagonal of x.
Step15: Solving equations and inverting matrices
|
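One caveat worth noting for the eigenvalue cells in the record above: `np.array_equal` compares floats exactly, so verifying the eigen equation is more robust with `np.allclose`. A minimal sketch, assuming NumPy is installed:

```python
import numpy as np

x = np.diag((1.0, 2.0, 3.0))
eigenvals, eigenvecs = np.linalg.eig(x)

# Each column of eigenvecs is an eigenvector; broadcasting scales
# column j by eigenvals[j], so this checks A v_j == lambda_j v_j.
assert np.allclose(x @ eigenvecs, eigenvecs * eigenvals)
print(np.sort(eigenvals.real))
```

With exact float comparison the same check can spuriously fail once the matrix is not this well behaved.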
12,023
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from time import time
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import Perceptron
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neighbors import NearestCentroid
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils.extmath import density
from sklearn import metrics
from sklearn import cross_validation
from sklearn.pipeline import Pipeline
from sklearn.cross_validation import train_test_split
import pandas as pd
import connect_aws_db as cadb
%matplotlib inline
engine = cadb.connect_aws_db(write_unicode=True)
categories = ['dogs', 'general']
cmd = "SELECT review_rating, review_text FROM bf_reviews"
bfdf = pd.read_sql_query(cmd, engine)
print(len(bfdf))
bfdf.head(5)
len(bfdf[bfdf['review_text'].str.len() > 300])
lbfdf = bfdf[bfdf['review_text'].str.len() > 300].copy()
cmd = "SELECT review_rating, review_text FROM yelp_reviews"
yelpdf = pd.read_sql_query(cmd, engine)
print(len(yelpdf))
yelpdf.head(5)
len(yelpdf[yelpdf['review_text'].str.len() > 300])
lydf = yelpdf[yelpdf['review_text'].str.len() > 300].copy()
# OLD WAY OF SPLITTING TRAINING AND TEST DATA:
##################################################
# train_data = np.hstack((lbfdf['review_text'].values[:len_revs],
# lydf['review_text'].values[:len_revs]))
# len(train_data)
# labels = ['dog'] * len_revs
# labels.extend(['general'] * len_revs)
# len(labels)
# y_train = labels
# test_data = np.hstack((lbfdf['review_text'].values[1000:1500],
# lydf['review_text'].values[1000:1500]))
# len(test_data)
# labels = ['dog'] * 500
# labels.extend(['general'] * 500)
# y_test = labels
# len(y_test)
len_revs = len(lbfdf)
data_revs = np.hstack((lbfdf['review_text'].values[:len_revs],
lydf['review_text'].values[:len_revs]))
len(data_revs)
labels = ['dog'] * len_revs
labels.extend(['general'] * len_revs)
len(labels)
train_data, test_data, y_train, y_test = train_test_split(
data_revs, labels, test_size=0.33, random_state=18)
t0 = time()
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
stop_words='english')
X_train = vectorizer.fit_transform(train_data)
duration = time() - t0
print('vectorized in {:.2f} seconds.'.format(duration))
t0 = time()
X_test = vectorizer.transform(test_data)
duration = time() - t0
print('transformed test data in {:.2f} seconds.'.format(duration))
feature_names = np.asarray(vectorizer.get_feature_names())
clf = RidgeClassifier(tol=1e-2, solver="lsqr")
print(clf)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(len(y_test))
print(len(pred))
score = metrics.f1_score(y_test, pred, labels=None, pos_label=None, average='weighted')
acc = metrics.accuracy_score(y_test, pred, normalize=True)
len(y_test) == len(pred)
#help(metrics.f1_score)
def benchmark(clf, pos_label=None):
print('_' * 80)
print("Training: ")
print(clf)
t0 = time()
clf.fit(X_train, y_train)
train_time = time() - t0
print("train time: %0.3fs" % train_time)
t0 = time()
pred = clf.predict(X_test)
test_time = time() - t0
print("test time: %0.3fs" % test_time)
score = metrics.f1_score(y_test, pred, pos_label=pos_label)
print("f1-score: %0.3f" % score)
acc = metrics.accuracy_score(y_test, pred, normalize=True)
print('accuracy: {:.2f}'.format(acc))
if hasattr(clf, 'coef_'):
print("dimensionality: %d" % clf.coef_.shape[1])
print("density: %f" % density(clf.coef_))
# if opts.print_top10 and feature_names is not None:
# print("top 10 keywords per class:")
# for i, category in enumerate(categories):
# top10 = np.argsort(clf.coef_[i])[-10:]
# print(trim("%s: %s"
# % (category, " ".join(feature_names[top10]))))
print()
# if opts.print_report:
# print("classification report:")
# print(metrics.classification_report(y_test, pred,
# target_names=categories))
# if opts.print_cm:
# print("confusion matrix:")
# print(metrics.confusion_matrix(y_test, pred))
print()
clf_descr = str(clf).split('(')[0]
return clf_descr, score, acc, train_time, test_time
results = []
for clf, name in (
(RidgeClassifier(tol=1e-2, solver="lsqr"), "Ridge Classifier"),
(Perceptron(n_iter=50), "Perceptron"),
(PassiveAggressiveClassifier(n_iter=50), "Passive-Aggressive"),
(KNeighborsClassifier(n_neighbors=10), "kNN"),
(RandomForestClassifier(n_estimators=10), 'RandomForest')):
print('=' * 80)
print(name)
results.append(benchmark(clf))
results
for penalty in ["l2", "l1"]:
print('=' * 80)
print("%s penalty" % penalty.upper())
# Train Liblinear model
results.append(benchmark(LinearSVC(loss='l2', penalty=penalty,
dual=False, tol=1e-3)))
# Train SGD model
results.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50,
penalty=penalty)))
results
# Train SGD with Elastic Net penalty
print('=' * 80)
print("Elastic-Net penalty")
results.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50,
penalty="elasticnet")))
# Train NearestCentroid without threshold
print('=' * 80)
print("NearestCentroid (aka Rocchio classifier)")
results.append(benchmark(NearestCentroid()))
# Train sparse Naive Bayes classifiers
print('=' * 80)
print("Naive Bayes")
results.append(benchmark(MultinomialNB(alpha=.01)))
results.append(benchmark(BernoulliNB(alpha=.01)))
results
class L1LinearSVC(LinearSVC):
def fit(self, X, y):
# The smaller C, the stronger the regularization.
# The more regularization, the more sparsity.
self.transformer_ = LinearSVC(penalty="l1",
dual=False, tol=1e-3)
X = self.transformer_.fit_transform(X, y)
return LinearSVC.fit(self, X, y)
def predict(self, X):
X = self.transformer_.transform(X)
return LinearSVC.predict(self, X)
print('=' * 80)
print("LinearSVC with L1-based feature selection")
results.append(benchmark(L1LinearSVC()))
indices = np.arange(len(results))
results = [[x[i] for x in results] for i in range(5)]
clf_names, score, acc, training_time, test_time = results
training_time = np.array(training_time) / np.max(training_time)
test_time = np.array(test_time) / np.max(test_time)
clf_names
font = {'family' : 'normal',
'weight' : 'bold',
'size' : 16}
plt.rc('font', **font)
plt.rcParams['figure.figsize'] = 12.94, 8
#plt.figure(figsize=(12, 8))
plt.title("Score")
plt.barh(indices, score, .2, label="score", color='#982023')
plt.barh(indices + .3, training_time, .2, label="training time", color='#46959E')
plt.barh(indices + .6, test_time, .2, label="test time", color='#C7B077')
plt.yticks(())
plt.legend(loc='best')
plt.subplots_adjust(left=.25)
plt.subplots_adjust(top=.95)
plt.subplots_adjust(bottom=.05)
plt.ylim(0, 17)
print(indices)
for i, c in zip(indices, clf_names):
plt.text(-0.025, i, c, horizontalalignment='right')
clf_names
clf_names[0] = 'Ridge'
clf_names[2] = 'PassAggress'
clf_names[3] = 'KNN'
clf_names[4] = 'RandomForest'
clf_names[5] = 'LinearSVC L2'
clf_names[6] = 'SGDC SVM L2'
clf_names[7] = 'LinearSVC L1'
clf_names[8] = 'SGDC L1'
clf_names[9] = 'SGDC ElNet'
clf_names[13] = 'LinearSVC L1FS'
fig, ax = plt.subplots(1, 1)
training_timen = np.array(training_time) / np.max(training_time)
test_timen = np.array(test_time) / np.max(test_time)
ax.plot(indices, score, '-o', label="f-score", color='#982023')
ax.plot(indices, acc, '-o', label="accuracy", color='#BA4C37')
ax.plot(indices, training_timen, '-o', label="training time", color='#46959E')
ax.plot(indices, test_timen, '-o', label="test time", color='#C7B077')
#labels = [item.get_text() for item in ax.get_xticklabels()]
print(len(labels))
print(len(clf_names))
labels = clf_names
ax.xaxis.set_ticks(np.arange(np.min(indices), np.max(indices)+1, 1))
ax.set_xticklabels(clf_names, rotation='70', horizontalalignment='right')
ax.set_xlim([-1, 14])
ax.set_ylim([0, 1])
ax.legend(loc='best')
plt.subplots_adjust(left=0.05, bottom=0.3, top=.98)
#plt.savefig('classifierScoresNorm.png', dpi=144)
fig, ax = plt.subplots(1, 1)
clf_names, score, accs, training_time, test_time = results
ax.plot(indices, score, '-o', label="score", color='#982023')
ax.plot(indices, accs, '-o', label="accuracy", color='#BA4C37')
ax.plot(indices, training_time, '-o', label="training time (s)", color='#46959E')
ax.plot(indices, test_time, '-o', label="test time (s)", color='#C7B077')
#labels = [item.get_text() for item in ax.get_xticklabels()]
print(len(labels))
print(len(clf_names))
labels = clf_names
ax.xaxis.set_ticks(np.arange(np.min(indices), np.max(indices)+1, 1))
ax.set_xticklabels(clf_names, rotation='70', horizontalalignment='right')
ax.set_xlim([-1, 14])
ax.set_ylim([0, 1])
ax.legend(loc='best')
plt.subplots_adjust(left=0.05, bottom=0.3, top=.98)
#plt.savefig('classifierScores.png', dpi=144)
for name, scr, acc in zip(clf_names, score, accs):
print('{}: {:.3f}, {:.3f}'.format(name, scr, acc))
#clf = RidgeClassifier(tol=1e-2, solver="lsqr")
clf = LinearSVC(loss='l2', penalty=penalty, dual=False, tol=1e-3)
len(lbfdf['review_text'].values)
# make a set containing all the data:
fulldat = np.hstack((lbfdf['review_text'].values,
lydf['review_text'].values[:1508]))
labels = ['dog'] * 1508
labels.extend(['general'] * 1508)
fulltarget = labels
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
stop_words='english')
X_train = vectorizer.fit_transform(train_data)
X_test = vectorizer.transform(test_data)
clf = Pipeline([
('vectfidf', TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')),
('feature_selection', LinearSVC(penalty="l1", dual=False, tol=1e-3)),
('classification', LinearSVC())])
scores = cross_validation.cross_val_score(clf, fulldat, fulltarget, cv=10)
print(scores)
print(np.median(scores))
clf = Pipeline([
('vectfidf', TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')),
('feature_selection', LinearSVC(penalty="l1", dual=False, tol=1e-3)),
])
scores = cross_validation.cross_val_score(clf, fulldat, fulltarget, cv=10)
print(scores)
print(np.median(scores))
pars = clf.get_params()
pars.values()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restore BF Reviews and Ratings
Step2: Restore Yelp Reviews and Ratings
Step3: Create a training and test data sets
Step4: Try RidgeClassifier
Step5: Now benchmark multiple classifiers
Step6: Plot Results
Step7: Cross-validation
Step8: Examine Features
|
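For reference, the binary F1 score that `metrics.f1_score` reports in the record above can be computed by hand. This pure-Python sketch (the function name and the tiny label lists are illustrative) shows the precision/recall arithmetic:

```python
def f1_score(y_true, y_pred, pos_label):
    """Binary F1 = harmonic mean of precision and recall for pos_label."""
    tp = sum(t == pos_label and p == pos_label for t, p in zip(y_true, y_pred))
    fp = sum(t != pos_label and p == pos_label for t, p in zip(y_true, y_pred))
    fn = sum(t == pos_label and p != pos_label for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ['dog', 'dog', 'general', 'general']
y_pred = ['dog', 'general', 'general', 'general']
print(f1_score(y_true, y_pred, 'dog'))  # precision 1.0, recall 0.5 -> 2/3
```

This makes it clear why F1 penalizes a classifier that buys precision by rarely predicting the positive class.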
12,024
|
<ASSISTANT_TASK:>
Python Code:
import cartopy.crs as ccrs
import cartopy.feature as feat
import matplotlib.pyplot as plt
import numpy as np
from metpy.calc import get_wind_components
from metpy.cbook import get_test_data
from metpy.plots import simple_layout, StationPlot, StationPlotLayout
from metpy.units import units
f = get_test_data('station_data.txt')
all_data = np.loadtxt(f, skiprows=1, delimiter=',',
usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),
dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'),
('slp', 'f'), ('air_temperature', 'f'),
('cloud_fraction', 'f'), ('dew_point_temperature', 'f'),
('weather', '16S'),
('wind_dir', 'f'), ('wind_speed', 'f')]))
# Get the full list of stations in the data
all_stids = [s.decode('ascii') for s in all_data['stid']]
# Pull out these specific stations
whitelist = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF',
'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL',
'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV',
'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW',
'SAT', 'BUY', '0CO', 'ZPC', 'VIH']
# Loop over all the whitelisted sites, grab the first data, and concatenate them
data_arr = np.concatenate([all_data[all_stids.index(site)].reshape(1,) for site in whitelist])
# First, look at the names of variables that the layout is expecting:
simple_layout.names()
# This is our container for the data
data = dict()
# Copy out to stage everything together. In an ideal world, this would happen on
# the data reading side of things, but we're not there yet.
data['longitude'] = data_arr['lon']
data['latitude'] = data_arr['lat']
data['air_temperature'] = data_arr['air_temperature'] * units.degC
data['dew_point_temperature'] = data_arr['dew_point_temperature'] * units.degC
data['air_pressure_at_sea_level'] = data_arr['slp'] * units('mbar')
# Get the wind components, converting from m/s to knots as will be appropriate
# for the station plot
u, v = get_wind_components(data_arr['wind_speed'] * units('m/s'),
data_arr['wind_dir'] * units.degree)
data['eastward_wind'], data['northward_wind'] = u, v
# Convert the fraction value into a code of 0-8, which can be used to pull out
# the appropriate symbol
data['cloud_coverage'] = (8 * data_arr['cloud_fraction']).astype(int)
# Map weather strings to WMO codes, which we can use to convert to symbols
# Only use the first symbol if there are multiple
wx_text = [s.decode('ascii') for s in data_arr['weather']]
wx_codes = {'': 0, 'HZ': 5, 'BR': 10, '-DZ': 51, 'DZ': 53, '+DZ': 55,
'-RA': 61, 'RA': 63, '+RA': 65, '-SN': 71, 'SN': 73, '+SN': 75}
data['present_weather'] = [wx_codes[s.split()[0] if ' ' in s else s] for s in wx_text]
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
state_boundaries = feat.NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lines',
scale='110m', facecolor='none')
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(feat.LAND, zorder=-1)
ax.add_feature(feat.OCEAN, zorder=-1)
ax.add_feature(feat.LAKES, zorder=-1)
ax.coastlines(resolution='110m', zorder=2, color='black')
ax.add_feature(state_boundaries, edgecolor='black')
ax.add_feature(feat.BORDERS, linewidth=2, edgecolor='black')
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
simple_layout.plot(stationplot, data)
plt.show()
# Just winds, temps, and dewpoint, with colors. Dewpoint and temp will be plotted
# out to Farenheit tenths. Extra data will be ignored
custom_layout = StationPlotLayout()
custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots')
custom_layout.add_value('NW', 'air_temperature', fmt='.1f', units='degF', color='darkred')
custom_layout.add_value('SW', 'dew_point_temperature', fmt='.1f', units='degF',
color='darkgreen')
# Also, we'll add a field that we don't have in our dataset. This will be ignored
custom_layout.add_value('E', 'precipitation', fmt='0.2f', units='inch', color='blue')
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(feat.LAND, zorder=-1)
ax.add_feature(feat.OCEAN, zorder=-1)
ax.add_feature(feat.LAKES, zorder=-1)
ax.coastlines(resolution='110m', zorder=2, color='black')
ax.add_feature(state_boundaries, edgecolor='black')
ax.add_feature(feat.BORDERS, linewidth=2, edgecolor='black')
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
custom_layout.plot(stationplot, data)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The setup
Step2: This sample data has way too many stations to plot all of them. Instead, we just select
Step3: Next grab the simple variables out of the data we have (attaching correct units), and
Step4: Notice that the names (the keys) in the dictionary are the same as those that the
Step5: All the data wrangling is finished, just need to set up plotting and go
Step6: The payoff
Step7: or instead, a custom layout can be used
|
12,025
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
from modsim import State
init = State(y=381, v=0)
g = 9.8
t_end = 10
dt = 0.1
from modsim import System
system = System(init=init, g=g, t_end=t_end, dt=dt)
def slope_func(t, state, system):
y, v = state
dydt = v
dvdt = -system.g
return dydt, dvdt
dydt, dvdt = slope_func(0, system.init, system)
print(dydt)
print(dvdt)
from modsim import run_solve_ivp
results, details = run_solve_ivp(system, slope_func)
details
results.head()
from modsim import decorate
results.y.plot()
decorate(xlabel='Time (s)',
ylabel='Position (m)')
t_end = results.index[-1]
results.y[t_end]
from modsim import crossings
t_crossings = crossings(results.y, 0)
t_crossings
def event_func(t, state, system):
y, v = state
return y
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details
t_end = results.index[-1]
t_end
y, v = results.iloc[-1]
print(y)
print(v)
# Solution
r_0 = 150e9 # 150 million km in m
v_0 = 0
init = State(r=r_0,
v=v_0)
# Solution
radius_earth = 6.37e6 # meters
radius_sun = 696e6 # meters
r_final = radius_sun + radius_earth
r_final
r_0 / r_final
t_end = 1e7 # seconds
system = System(init=init,
G=6.674e-11, # N m^2 / kg^2
m1=1.989e30, # kg
m2=5.972e24, # kg
r_final=radius_sun + radius_earth,
t_end=t_end)
# Solution
def universal_gravitation(state, system):
    """Computes gravitational force.

    state: State object with distance r
    system: System object with m1, m2, and G
    """
    r, v = state
    G, m1, m2 = system.G, system.m1, system.m2
    force = G * m1 * m2 / r**2
    return force
# Solution
universal_gravitation(init, system)
# Solution
def slope_func(t, state, system):
    """Compute derivatives of the state.

    t: time
    state: position, velocity
    system: System object containing `m2`

    returns: derivatives of y and v
    """
    y, v = state
    m2 = system.m2
    force = universal_gravitation(state, system)
    dydt = v
    dvdt = -force / m2
    return dydt, dvdt
# Solution
slope_func(0, system.init, system)
# Solution
def event_func(t, state, system):
r, v = state
return r - system.r_final
# Solution
event_func(0, init, system)
# Solution
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details
# Solution
t_event = results.index[-1]
t_event
# Solution
from modsim import units
seconds = t_event * units.second
days = seconds.to(units.day)
# Solution
results.index /= 60 * 60 * 24
# Solution
results.r /= 1e9
# Solution
results.r.plot(label='r')
decorate(xlabel='Time (day)',
ylabel='Distance from sun (million km)')
%psource crossings
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So far the differential equations we've worked with have been first
Step2: where y is height above the sidewalk and v is velocity.
Step3: In addition, we'll specify the duration of the simulation and the step
Step4: With these parameters, the number of time steps is 100, which is good
Step5: Now we need a slope function, and here's where things get tricky. As we have seen, run_solve_ivp can solve systems of first order ODEs, but Newton's law is a second order ODE. However, if we recognize that
Step6: The first parameter, state, contains the position and velocity of the
Step7: The result is 0 m/s for velocity and 9.8 m/s$^2$ for acceleration. Now we call run_solve_ivp like this
Step8: results is a TimeFrame with two columns
Step9: Since acceleration is constant, velocity increases linearly and position decreases quadratically; as a result, the height curve is a parabola.
Step10: One way to solve this problem is to use the results to
Step11: The result is an array with a single value, 8.818 s. Now, we could run
Step12: The return value is the height of the penny, y, which passes through
Step13: Then we can get the flight time and final velocity like this
Step16: If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.
Step17: Under the hood
|
12,026
|
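The two key moves in the falling-penny row above — rewriting a second-order ODE as a first-order system, and stopping the solver with an event function — can be sketched with plain SciPy, without the `modsim` helpers. This is an illustrative sketch; the variable names and tolerances are my own, not from the book.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.8           # gravitational acceleration, m/s^2
y0, v0 = 381, 0   # initial height (m) and velocity (m/s)

def slope(t, state):
    # Second-order ODE y'' = -g rewritten as two first-order equations.
    y, v = state
    return [v, -g]

def hit_ground(t, state):
    # The event fires when the height component crosses zero.
    return state[0]

hit_ground.terminal = True   # stop integration at the event
hit_ground.direction = -1    # only trigger while y is decreasing

sol = solve_ivp(slope, t_span=(0, 30), y0=[y0, v0],
                events=hit_ground, dense_output=True)
t_end = sol.t_events[0][0]
y_end, v_end = sol.sol(t_end)
print(f"flight time: {t_end:.3f} s, impact velocity: {v_end:.1f} m/s")
```

With a 381 m drop this recovers the roughly 8.82 s flight time and an impact speed above 300 km/h, matching the figures quoted in the step description.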
<ASSISTANT_TASK:>
Python Code:
# Imports the functionality that we need to display YouTube videos in a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("8_wSb927nH0",width=640,height=360) # Complex 'if' statements
# put your code here.
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("MzZCeHB0CbE",width=640,height=360) # Complex loops
# put your code here.
my_list = [1,3,17,23,9,-4,2,2,11,4,-7]
# put your code here
# put your code here!
from IPython.display import HTML
HTML("""
<iframe
src="https://goo.gl/forms/l7LqskZxIADofpZy2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
""")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 1
Step2: Question 2
Step3: Question 3
Step4: Question 4
Step6: Assignment wrapup
|
12,027
|
<ASSISTANT_TASK:>
Python Code:
%%sh
ctapipe-info
%%sh
ctapipe-info --version
ctapipe-info --dependencies
ctapipe-info --tools
%matplotlib inline
import warnings
warnings.filterwarnings("ignore") # get rid of some annoyances in latest matplotlib
import matplotlib.pyplot as plt
import numpy as np
from ctapipe.io import CameraGeometry
from ctapipe.visualization import CameraDisplay
geom = CameraGeometry.from_name("HESS",1)
geom.pix_x
geom.pix_area
plt.scatter(geom.pix_x, geom.pix_y)
image = np.random.uniform(size=len(geom.pix_id))
disp = CameraDisplay(geom, image=image)
# better fake data:
%matplotlib inline
from ctapipe.reco import generate_2d_shower_model, make_mock_shower_image
showermodel = generate_2d_shower_model(centroid=(0.2, 0.05),length=0.05, width=0.1,psi='40d')
print(showermodel)
print(showermodel.pdf)
im, sig, bg = make_mock_shower_image(geom, showermodel.pdf,intensity=10,nsb_level_pe=100)
disp = CameraDisplay(geom, image=im)
# switch back to window (no more plots in the notebook)
%matplotlib
disp = CameraDisplay(geom)
disp.image = im
disp.enable_pixel_picker()
# change the image
image[20:50] = 1.0
image[400:450] = 0.5
disp.image = image
# change the normalization or limits:
disp.norm = 'log' # or lin, or any matplotlib.colors.Normalization
disp.norm = 'lin'
disp.add_colorbar()
# change the colorbar and limits
disp.cmap = 'jet' # try others (see below)
%matplotlib inline
from ctapipe.reco.cleaning import tailcuts_clean, dilate
from ctapipe.reco import hillas_parameters
CameraDisplay(geom, image=im)
# cleanmask = [code here]
# im_cleaned = im.copy()
# im_cleaned = [code here]
# Display results!
# plt.figure()
# CameraDisplay(geom, image=cleanmask, title='mask')
# plt.figure()
# CameraDisplay(geom, image=im_cleaned, title='cleaned')
# params =
#disp = CameraDisplay(geom, image=im_cleaned)
#disp.overlay_moments(params)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So far there are not many tools... the API is more interesting to start with.
Step2: First let's load up a blank image. Remember that images in Cherenkov cameras are generalized
Step3: nice, but we can do better via the visualization module
Step4: Interactive data analysis
Step5: look for the popup window! It may be behind your browser...
Step6: gnuplot, Set1, gnuplot2_r, Blues, Set2, gist_ncar_r, gist_ncar, BrBG, Paired, BuPu, RdGy, gist_earth_r, Spectral, gist_stern_r, YlGn_r, autumn, Greys, gist_gray_r, winter, flag, Paired_r, ocean, PuBuGn, rainbow_r, hsv_r, hot, YlGnBu_r, Blues_r, spring, Oranges, bone_r, prism_r, BuPu_r, pink, hot_r, coolwarm, gist_earth, OrRd, Purples_r, gist_yarg_r, YlOrRd_r, Accent, gist_rainbow, PRGn_r, cool_r, PRGn, flag_r, PuOr, winter_r, summer_r, brg, gray_r, afmhot_r, jet_r, cubehelix, PuRd, YlOrBr, Spectral_r, gist_stern, BrBG_r, PuRd_r, YlGn, RdYlBu_r, RdYlGn_r, autumn_r, YlGnBu, afmhot, RdPu, binary, bone, RdYlGn, nipy_spectral_r, spring_r, Set3, coolwarm_r, terrain_r, spectral, gist_yarg, RdYlBu, RdGy_r, PiYG_r, gist_heat, Pastel1, PuOr_r, PuBu, jet, gist_rainbow_r, rainbow, pink_r, Purples, nipy_spectral, Accent_r, Wistia_r, Pastel2_r, copper, bwr, seismic, Greens_r, summer, cool, YlOrRd, CMRmap_r, Dark2_r, Wistia, seismic_r, RdBu, gist_heat_r, Pastel1_r, binary_r, GnBu_r, PiYG, spectral_r, GnBu, RdPu_r, YlOrBr_r, PuBu_r, PuBuGn_r, Dark2, prism, BuGn, RdBu_r, gist_gray, ocean_r, BuGn_r, gnuplot2, Set3_r, copper_r, OrRd_r, Greens, Set2_r, gnuplot_r, Oranges_r, brg_r, cubehelix_r, Reds_r, Set1_r, gray, Reds, bwr_r, hsv, terrain, Pastel2, Greys_r, CMRmap
Step7: clean the image
Step8: now parameterize the image using hillas_parameters
|
12,028
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys
import os
import platform
import numpy as np
import matplotlib.pyplot as plt
import flopy
import flopy.utils as fputl
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mfnwt'
if platform.system() == 'Windows':
exe_name = 'MODFLOW-NWT.exe'
mfexe = exe_name
modelpth = os.path.join('data')
modelname = 'watertable'
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
def analyticalWaterTableSolution(h1, h2, z, R, K, L, x):
h = np.zeros((x.shape[0]), np.float)
#dx = x[1] - x[0]
#x -= dx
b1 = h1 - z
b2 = h2 - z
h = np.sqrt(b1**2 - (x/L)*(b1**2 - b2**2) + (R * x / K) * (L - x)) + z
return h
#--model dimensions
nlay, nrow, ncol = 1, 1, 100
#--cell spacing
delr = 50.
delc = 1.
#--domain length
L = 5000.
#--boundary heads
h1 = 20.
h2 = 11.
#--ibound
ibound = np.ones((nlay, nrow, ncol), dtype=np.int)
#--starting heads
strt = np.zeros((nlay, nrow, ncol), dtype=np.float)
strt[0, 0, 0] = h1
strt[0, 0, -1] = h2
#--top of the aquifer
top = 25.
#--bottom of the aquifer
botm = 0.
#--hydraulic conductivity
hk = 50.
#--location of cell centroids
x = np.arange(0.0, L, delr) + (delr / 2.)
#--location of cell edges
xa = np.arange(0, L+delr, delr)
#--recharge rate
rchrate = 0.001
#--calculate the head at the cell centroids using the analytical solution function
hac = analyticalWaterTableSolution(h1, h2, botm, rchrate, hk, L, x)
#--calculate the head at the cell edges using the analytical solution function
ha = analyticalWaterTableSolution(h1, h2, botm, rchrate, hk, L, xa)
#--ghbs
#--ghb conductance
b1, b2 = 0.5*(h1+hac[0]), 0.5*(h2+hac[-1])
c1, c2 = hk*b1*delc/(0.5*delr), hk*b2*delc/(0.5*delr)
#--dtype
ghb_dtype = flopy.modflow.ModflowGhb.get_default_dtype()
print ghb_dtype
#--build ghb recarray
stress_period_data = np.zeros((2), dtype=ghb_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print 'stress_period_data: ', stress_period_data
print 'type is: ', type(stress_period_data)
#--fill ghb recarray
stress_period_data[0] = (0, 0, 0, h1, c1)
stress_period_data[1] = (0, 0, ncol-1, h2, c2)
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth, version='mfnwt')
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol,
delr=delr, delc=delc,
top=top, botm=botm,
perlen=1, nstp=1, steady=True)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=strt)
lpf = flopy.modflow.ModflowUpw(mf, hk=hk, laytyp=1)
ghb = flopy.modflow.ModflowGhb(mf, stress_period_data=stress_period_data)
rch = flopy.modflow.ModflowRch(mf, rech=rchrate, nrchop=1)
oc = flopy.modflow.ModflowOc(mf)
nwt = flopy.modflow.ModflowNwt(mf, linmeth=2, iprnwt=1, options='COMPLEX')
mf.write_input()
#--remove existing heads results, if necessary
try:
    os.remove(os.path.join(modelpth, '{0}.hds'.format(modelname)))
except OSError:
    pass
#--run existing model
mf.run_model()
#--Create the headfile object
headfile = os.path.join(modelpth, '{0}.hds'.format(modelname))
headobj = fputl.HeadFile(headfile, precision='single')
times = headobj.get_times()
head = headobj.get_data(totim=times[-1])
fig = plt.figure(figsize=(16,6))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=0.25, hspace=0.25)
ax = fig.add_subplot(1, 3, 1)
ax.plot(xa, ha, linewidth=8, color='0.5', label='analytical solution')
ax.plot(x, head[0, 0, :], color='red', label='MODFLOW-NWT')
leg = ax.legend(loc='lower left')
leg.draw_frame(False)
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Head, in m')
ax = fig.add_subplot(1, 3, 2)
ax.plot(x, head[0, 0, :] - hac, linewidth=1, color='blue')
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Error, in m')
ax = fig.add_subplot(1, 3, 3)
ax.plot(x, 100.*(head[0, 0, :] - hac)/hac, linewidth=1, color='blue')
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Percent Error')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function to calculate the analytical solution at specified points in a aquifer
Step2: Model data required to create the model files and calculate the analytical solution
Step3: Create a flopy object to create and run the MODFLOW-NWT datasets for this problem
Step4: Read the simulated MODFLOW-NWT model results
Step5: Plot the MODFLOW-NWT results and compare to the analytical solution
|
12,029
|
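As a quick sanity check on the analytical water-table formula used in the row above: at the domain boundaries x = 0 and x = L it should reproduce the fixed heads h1 and h2 exactly. A self-contained sketch, with the function and test values restated here rather than imported from the row:

```python
import numpy as np

def analytical_water_table(h1, h2, z, R, K, L, x):
    # Dupuit-type solution for a 1-D unconfined aquifer: recharge R,
    # hydraulic conductivity K, base elevation z, fixed heads h1 at x=0
    # and h2 at x=L.
    b1, b2 = h1 - z, h2 - z
    return np.sqrt(b1**2 - (x / L) * (b1**2 - b2**2) + (R * x / K) * (L - x)) + z

# Same parameter values as the row: h1=20, h2=11, z=0, R=0.001, K=50, L=5000.
h = analytical_water_table(20., 11., 0., 0.001, 50., 5000., np.array([0., 5000.]))
print(h)   # boundary heads are recovered: [20. 11.]
```

At x = 0 the recharge and gradient terms vanish and the expression reduces to b1 + z = h1; at x = L it reduces to b2 + z = h2.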
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
pwd
cd output
ls
ranking_frame = pd.read_csv('rankings_20170425.csv')
ranking_frame.columns
ranking_frame.rrover.head(20)
ranking_frame.rrover.tail(20)
ranking_frame['bins'] = pd.qcut(ranking_frame.probability, 10, labels=False)
grouped = ranking_frame.groupby('bins')
def get_ratio(series):
ratio = series.value_counts()[1] / series.size
return ratio
grouped['rrover'].apply(get_ratio).plot(kind='bar')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This file contains the ranked predictions of the test set.
Step2: The probabilities are in descending order. Observe the greater number of True values at the top of the rankings versus the bottom.
Step3: Let's plot the True/False ratios for each probability decile. These ratios should roughly reflect the trend in the calibration plot.
|
12,030
|
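The decile/ratio pattern from the rankings row above can be reproduced on synthetic, perfectly calibrated data. One design note: taking `.mean()` of a boolean outcome column gives the same True/False ratio as `value_counts()[1] / size` but does not raise when a bin happens to contain no True values. The data, names, and seed below are mine, not from the row.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic calibrated probabilities: each outcome is True with probability p.
p = rng.uniform(size=10_000)
df = pd.DataFrame({'probability': p,
                   'outcome': rng.uniform(size=p.size) < p})

df['bins'] = pd.qcut(df.probability, 10, labels=False)   # decile bins 0..9
ratios = df.groupby('bins')['outcome'].mean()            # True fraction per decile
print(ratios.round(2))
```

For well-calibrated probabilities the per-decile ratios climb from roughly 0.05 in the lowest bin to roughly 0.95 in the highest, which is the trend the bar plot in the row visualizes.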
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.ticker as mtick
import os
sns.set(style="whitegrid", color_codes=True)
import string
#this code is loading the book(in this case boy.txt) and getting the title
with open('boy.txt', encoding='utf-8') as book:
for i, line in enumerate(book):
#this is getting the title from the text
if line.startswith('Title:'):
title = line[6:]
print("The title is", title)
with open('boy.txt', encoding='utf-8') as book:
for i, line in enumerate(book):
if line.startswith('Title:'):
title = line[6:]
print("The title is", title)
if line.startswith('*** START OF THIS PROJECT'):
break
letter_frequencies = {}
for letter in string.ascii_uppercase:
letter_frequencies[letter] = 0
for i, line in enumerate(book):
#This starts reading^
'''this code is putting everything to uppercase so the
'reader program' can get every single letter'''
for char in line:
if char in string.ascii_letters:
uppercase = char.upper()
letter_frequencies[uppercase] += 1
print(letter_frequencies)
if line.startswith('*** END OF THIS PROJECT'):
break
total = sum(letter_frequencies.values())
letter_percentages = {}
for letter, count in letter_frequencies.items():
letter_percentages[letter] = count/total * 100
data = {
'letters': list(letter_percentages.keys()),
'percentages': list(letter_percentages.values()),
}
plt.figure()
plot = sns.barplot(x='letters',y='percentages', data=data, palette='rainbow_r')
plot.set_title("{0}\n total {1} characters".format(title, total))
plot.set_xlabel('Letters')
plot.set_ylabel('Percentages')
fmt = '%.0f%%'
yticks = mtick.FormatStrFormatter(fmt)
plot.yaxis.set_major_formatter(yticks)
# print('The total number of letters is', total)
def word_analysis(filename):
title = ''
with open(filename, encoding='utf-8') as book:
#print(book.read()[:1000])
for i, line in enumerate(book):
if line.startswith('Title:'):
title = line[6:]
if line.startswith('*** START OF THIS PROJECT'):
break
#set up letter counter v
letter_frequencies = {}
for letter in string.ascii_uppercase:
letter_frequencies[letter] = 0
for i, line in enumerate(book):
#This starts reading^
for char in line:
if char in string.ascii_letters:
uppercase = char.upper()
letter_frequencies[uppercase] += 1
if line.startswith('*** END OF THIS PROJECT'):
break
# print (letter_frequencies)
total = sum(letter_frequencies.values())
letter_percentages = {}
for letter, count in letter_frequencies.items():
letter_percentages[letter] = count/total * 100
data = {
'letters': list(letter_percentages.keys()),
'percentages': list(letter_percentages.values()),
}
plt.figure()
plot = sns.barplot(x='letters',y='percentages', data=data, palette='rainbow_r')
plot.set_title("{0}\n total {1} characters".format(title, total))
plot.set_xlabel('Letters')
plot.set_ylabel('Percentages')
fmt = '%.0f%%'
yticks = mtick.FormatStrFormatter(fmt)
plot.yaxis.set_major_formatter(yticks)
# print('The total number of letters is', total)
# print(list(os.walk('.')))
for root, dirs, files in os.walk('.'):
for f in files:
if f.endswith('.txt'):
word_analysis(f)
# Stop walking sub directories
break
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: get books
Step2: Reading the book
Step3: assigning the end of the book
Step4: adding up and getting percentages
Step5: Plotting the data and analysing
Step6: The code in full
|
12,031
|
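The per-character counting loop in the letter-frequency script above can be condensed with `collections.Counter`. A minimal sketch on an in-memory sample string — the string here is a stand-in for the downloaded book text, not taken from the row:

```python
import string
from collections import Counter

# Hypothetical sample text standing in for the book contents.
text = "The quick brown fox jumps over the lazy dog."

# Count only alphabetic characters, case-folded to uppercase.
letter_frequencies = Counter(c.upper() for c in text
                             if c in string.ascii_letters)
total = sum(letter_frequencies.values())
letter_percentages = {letter: 100 * count / total
                      for letter, count in letter_frequencies.items()}
print(letter_frequencies.most_common(3))
```

`Counter.most_common(n)` gives the ranking directly, which is handy when labeling the bar plot built in the row.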
<ASSISTANT_TASK:>
Python Code:
def squareMax(num1, num2):
maxNum = max(num1, num2)
return maxNum*maxNum
def noInput():
answer = "abcd"
return answer
output = noInput()
print output
print answer
print squareMax(5,2)
print squareMax(3,3)
def squareMax(num1, num2):
maxNum = max(num1, num2)
return maxNum*maxNum
a = ["blue","red"]
b = ["green","purple","blue","yellow"]
aSize = len(a)
bSize = len(b)
print squareMax(aSize,bSize)
import random
#Removes a random letter from the given word
def removeLetter(word):
wSize = len(word)
posToRemove = random.randint(0,wSize-1) #choose which character to remove
return word[:posToRemove] + word[(posToRemove+1):] #adds the string before that character to the string after
print removeLetter("abc")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions don't always require input. In the following example, the function outputs a string which we can store to a variable. Try running the following code
Step2: Why are we able to print the "output" variable, but not the "answer" variable? When a variable is defined within a function, it is not stored outside the function. Only the values passed through return() are accessible outside the function.
Step3: 5) How do you use a function within a larger python project?
Step4: 6) Test your understanding.
|
12,032
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from __future__ import print_function
import sys, os
from ptha_paths import data_dir, events_dir
fixed_grid_file = os.path.join(data_dir, 'MapsTopo', 'fixedgrid_xyB_small.npy')
d=load(fixed_grid_file)
x=d[:,0]
y=d[:,1]
B=d[:,2]
print("x,y,B are 1-dimensional arrays of length %s" % len(x))
# to reshape them as 2-dimensional arrays:
topo = reshape(B, (250,250), order='F')
X = reshape(x, (250,250), order='F')
Y = reshape(y, (250,250), order='F')
print("X,Y,topo are 2-dimensional arrays with shape ", X.shape)
fname = 'xyB.txt'
f = open(fname,'w')
for npts in range(len(B)):
f.write('%24.10f %24.10f %24.10f\n' % (x[npts],y[npts],B[npts]))
f.close()
print("Created file ", os.path.abspath(fname))
fname = 'xyB.txt'
data_array = vstack((x, y, B)).T
savetxt(fname, data_array, fmt='%24.10f')
print("Created file ", os.path.abspath(fname))
event = 'AASZb'
event_dir = os.path.join(events_dir, event)
zeta_file = os.path.join(event_dir, 'h_eta_small.npy')
zeta = load(zeta_file)
fname = '%s.txt' % event
data_array = vstack((x, y, zeta)).T
savetxt(fname, data_array, fmt='%24.10f')
print("Created file ", os.path.abspath(fname))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in topography data
Step2: Write out to text file
Step3: Another way to do the same thing
Step4: Read in zeta for a single event
Step5: Write out to text file
|
12,033
|
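The `vstack(...).T` + `savetxt` idiom in the row above writes one `(x, y, value)` triple per line. A small round-trip sketch with made-up stand-in data (the values below are hypothetical, not the real topography arrays):

```python
import numpy as np

# Hypothetical small stand-ins for the x, y, B columns.
x = np.array([235.80, 235.81, 235.82])
y = np.array([41.74, 41.74, 41.74])
B = np.array([-3.2, 1.5, 12.8])

fname = "xyB_demo.txt"
data = np.vstack((x, y, B)).T          # shape (npts, 3), one row per point
np.savetxt(fname, data, fmt="%24.10f")

# Read it back and confirm the round trip preserved the values.
loaded = np.loadtxt(fname)
print(np.allclose(loaded, data))       # prints True
```

This is why `savetxt` replaces the manual `f.write('%24.10f ...')` loop shown first in the row: the format string is applied column-wise to every row automatically.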
<ASSISTANT_TASK:>
Python Code::
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(x_train , y_train)
pred = model.predict(x_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,034
|
<ASSISTANT_TASK:>
Python Code:
import random
toss = random.random() # returns a random value between 0 and 1
if toss > 0.5:
print 'I won'
toss = random.random()
if toss > 0.5:
print 'I won'
else:
print 'You won'
fruits = ['apple', 'orange', 'banana', 'water melon']
fruit_index = random.randint(0, 3) # Get a random number between 0 and length of the fruit list
fruit = fruits[fruit_index] # use fruit_index as an index to randomly select a fruit
if fruit == 'apple':
print 'red'
elif fruit == 'orange':
print 'orange'
elif fruit == 'banana':
print 'yellow'
else:
print 'green'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: if else statement
Step2: if - else if Statment
|
12,035
|
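The `randint`-as-index trick in the fruit example above is exactly what `random.choice` does directly, and a dict lookup can stand in for the if/elif chain. A sketch whose color mapping mirrors the row's branches:

```python
import random

fruits = ['apple', 'orange', 'banana', 'water melon']
colors = {'apple': 'red', 'orange': 'orange',
          'banana': 'yellow', 'water melon': 'green'}

fruit = random.choice(fruits)          # picks one element uniformly at random
print(fruit, '->', colors[fruit])
```

The if/elif chain is still the right tool when each branch does more than map a value, but for pure value lookup the dict is shorter and harder to get out of sync.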
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
lllat = 41.0 # latitude of lower left hand corner of the desired map domain (degrees).
urlat = 52.0 # latitude of upper right hand corner of the desired map domain (degrees).
lllon = -5.0 # longitude of lower left hand corner of the desired map domain (degrees).
urlon = 9.5 # longitude of upper right hand corner of the desired map domain (degrees).
m = Basemap(ax=ax,
projection='stere',
lon_0=(urlon+lllon)/2.,
lat_0=(urlat+lllat)/2.,
llcrnrlat=lllat,
urcrnrlat=urlat,
llcrnrlon=lllon,
urcrnrlon=urlon,
resolution='l') # Can be ``c`` (crude), ``l`` (low), ``i`` (intermediate), ``h`` (high), ``f`` (full) or None.
m.drawcoastlines()
m.drawstates()
m.drawcountries()
#m.drawrivers()
#m.drawcounties()
# Eiffel tower's coordinates
pt_lat = 48.858223
pt_lon = 2.2921653
x, y = m(pt_lon, pt_lat)
print(pt_lat, pt_lon)
print(x, y)
m.plot(x, y, 'ro')
#plt.savefig("map.png")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TODO
|
12,036
|
<ASSISTANT_TASK:>
Python Code:
import openpnm as op
import matplotlib.pyplot as plt
import scipy as sp
import numpy as np
import openpnm.models.geometry as gm
import openpnm.topotools as tt
%matplotlib inline
wrk = op.Workspace()
wrk.loglevel=50
%run shared_funcs.ipynb
x_values, y_values = simulation(n=8)
plt.figure()
plt.plot(x_values, y_values, 'ro')
plt.title('normalized diffusivity versus saturation')
plt.xlabel('saturation')
plt.ylabel('normalized diffusivity')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set the workspace loglevel to not print anything
Step2: As the paper requires some lengthy calculation we have split it into parts and put the function in a separate notebook to be re-used in each part. The following code runs and loads the shared functions into this kernel
Step3: The main function runs the simulation for a given network size 'n' and number of points for the relative diffusivity curve. Setting 'npts' to 1 will return the single phase diffusivity. the network size is doubled in the z direction for percolation but the diffusion calculation is effectively only calculated on the middle square section of length 'n'. This is achieved by copying the saturation distribution from the larger network to a smaller one.
|
12,037
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from numpy.random import rand, seed
import seaborn as sns
import scipy.stats as stats
from matplotlib.pyplot import *
seed(10)
import itertools
# Create the inputs:
encoded_inputs = list( itertools.product([-1,1],[-1,1],[-1,1],[-1,1],[-1,1],[-1,1]) )
# Create the experiment design table:
doe = pd.DataFrame(encoded_inputs,columns=['x%d'%(i+1) for i in range(6)])
# "Manufacture" observed data y
doe['y1'] = doe.apply( lambda z : sum([ rand()*z["x%d"%(i)]+0.01*(0.5-rand()) for i in range(1,7) ]), axis=1)
doe['y2'] = doe.apply( lambda z : sum([ 5*rand()*z["x%d"%(i)]+0.01*(0.5-rand()) for i in range(1,7) ]), axis=1)
doe['y3'] = doe.apply( lambda z : sum([ 100*rand()*z["x%d"%(i)]+0.01*(0.5-rand()) for i in range(1,7) ]), axis=1)
print(doe[['y1','y2','y3']])
labels = {}
labels[1] = ['x1','x2','x3','x4','x5','x6']
for i in [2,3,4,5,6]:
labels[i] = list(itertools.combinations(labels[1], i))
obs_list = ['y1','y2','y3']
for k in labels.keys():
print(str(k) + " : " + str(labels[k]))
effects = {}
# Start with the constant effect: this is $\overline{y}$
effects[0] = {'x0' : [doe['y1'].mean(),doe['y2'].mean(),doe['y3'].mean()]}
print(effects[0])
effects[1] = {}
for key in labels[1]:
effects_result = []
for obs in obs_list:
effects_df = doe.groupby(key)[obs].mean()
        result = sum([ zz*effects_df.loc[zz] for zz in effects_df.index ])
effects_result.append(result)
effects[1][key] = effects_result
effects[1]
for c in [2,3,4,5,6]:
effects[c] = {}
for key in labels[c]:
effects_result = []
for obs in obs_list:
effects_df = doe.groupby(key)[obs].mean()
        result = sum([ np.prod(zz)*effects_df.loc[zz]/(2**(len(zz)-1)) for zz in effects_df.index ])
effects_result.append(result)
effects[c][key] = effects_result
def printd(d):
for k in d.keys():
print("%25s : %s"%(k,d[k]))
for i in range(1,7):
printd(effects[i])
print(len(effects))
master_dict = {}
for nvars in effects.keys():
effect = effects[nvars]
for k in effect.keys():
v = effect[k]
master_dict[k] = v
master_df = pd.DataFrame(master_dict).T
master_df.columns = obs_list
y1 = master_df['y1'].copy()
y1.sort_values(inplace=True,ascending=False)
print("Top 10 effects for observable y1:")
print(y1[:10])
y2 = master_df['y2'].copy()
y2.sort_values(inplace=True,ascending=False)
print("Top 10 effects for observable y2:")
print(y2[:10])
y3 = master_df['y3'].copy()
y3.sort_values(inplace=True,ascending=False)
print("Top 10 effects for observable y3:")
print(y3[:10])
# Quantify which effects are not normally distributed,
# to assist in identifying important variables
fig = figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
stats.probplot(y1, dist="norm", plot=ax1)
ax1.set_title('y1')
stats.probplot(y2, dist="norm", plot=ax2)
ax2.set_title('y2')
stats.probplot(y3, dist="norm", plot=ax3)
ax3.set_title('y3')
xlabs = ['x1','x2','x3','x4','x5','x6']
ylabs = ['y1','y2','y3']
ls_data = doe[xlabs+ylabs]
import statsmodels.api as sm
import numpy as np
x = ls_data[xlabs]
x = sm.add_constant(x)
y1 = ls_data['y1']
est1 = sm.OLS(y1,x).fit()
print(est1.summary())
y2 = ls_data['y2']
est2 = sm.OLS(y2,x).fit()
print(est2.summary())
y3 = ls_data['y3']
est3 = sm.OLS(y3,x).fit()
print(est3.summary())
%matplotlib inline
import seaborn as sns
import scipy.stats as stats
from matplotlib.pyplot import *
# Quantify goodness of fit
fig = figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
r1 = y1 - est1.predict(x)
r2 = y2 - est2.predict(x)
r3 = y3 - est3.predict(x)
stats.probplot(r1, dist="norm", plot=ax1)
ax1.set_title('Residuals, y1')
stats.probplot(r2, dist="norm", plot=ax2)
ax2.set_title('Residuals, y2')
stats.probplot(r3, dist="norm", plot=ax3)
ax3.set_title('Residuals, y3')
fig = figure(figsize=(10,12))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
axes = [ax1,ax2,ax3]
colors = sns.xkcd_palette(["windows blue", "amber", "faded green", "dusty purple","aqua blue"])
#resids = [r1, r2, r3]
normed_resids = [r1/y1, r2/y2, r3/y3]
for (dataa, axx, colorr) in zip(normed_resids,axes,colors):
sns.kdeplot(dataa, bw=1.0, ax=axx, color=colorr, shade=True, alpha=0.5);
ax1.set_title('Probability Distribution: Normalized Residual Error, y1')
ax2.set_title('Normalized Residual Error, y2')
ax3.set_title('Normalized Residual Error, y3')
# Our original regression variables
xlabs = ['x2','x3','x4']
doe.groupby(xlabs)[ylabs].mean()
# If we decided to go for a different variable set
xlabs = ['x2','x3','x4','x6']
doe.groupby(xlabs)[ylabs].mean()
xlabs = ['x1','x2']
doe.groupby(xlabs)[ylabs].var()
doe.groupby(xlabs)[ylabs].count()
# Histogram of mean/variance ratios of response values, grouped by xlabs
xlabs = ['x1','x2','x3','x4']
print("Grouping responses by %s"%( "-".join(xlabs) ))
dat = np.ravel(doe.groupby(xlabs)[ylabs].mean().values) / np.ravel(doe.groupby(xlabs)[ylabs].var().values)
hist(dat, 10, density=False, color=colors[3]);
xlabel(r'Relative Variance ($\mu$/$\sigma^2$)')
show()
# Histogram of variances of response values, grouped by xlabs
print("Grouping responses by %s"%( "-".join(xlabs) ))
dat = np.ravel(doe.groupby(xlabs)['y1'].var().values)
hist(dat, density=True, color=colors[4])
xlabel(r'Variance in $y_{1}$ Response')
ylabel(r'Frequency')
show()
# normal plot of residuals
fig = figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
ax1.plot(y1,r1,'o',color=colors[0])
ax1.set_xlabel('Response value $y_1$')
ax1.set_ylabel('Residual $r_1$')
ax2.plot(y2,r2,'o',color=colors[1])
ax2.set_xlabel('Response value $y_2$')
ax2.set_ylabel('Residual $r_2$')
ax2.set_title('Response vs. Residual Plots')
ax3.plot(y3,r3,'o',color=colors[2])
ax3.set_xlabel('Response value $y_3$')
ax3.set_ylabel('Residual $r_3$')
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a name="fullfactorial"></a>
Step2: <a name="varlablels"></a>
Step3: Now that we have variable labels for each main effect and interaction effect, we can actually compute those effects.
Step4: Next, compute the main effect of each variable, which quantifies the amount the response changes by when the input variable is changed from the -1 to +1 level. That is, it computes the average effect of an input variable $x_i$ on each of the three response variables $y_1, y_2, y_3$.
Step5: Our next step is to crank through each variable interaction level
Step6: We've computed the main and interaction effects for every variable combination (whew!), but now we're at a point where we want to start doing things with these quantities.
Step7: If we were only to look at the list of rankings of each variable, we would see that each response is affected by different input variables, listed below in order of descending importance
Step8: Normally, we would use the main effects that were computed, and their rankings, to eliminate any variables that don't have a strong effect on any of our variables. However, this analysis shows that sometimes we can't eliminate any variables.
Step9: The first ordinary least squares linear model is created to predict values of the first variable, $y_1$, as a function of each of our input variables, the list of which are contained in the xlabs variable. When we perform the linear regression fitting, we see much of the same information that we found in the prior two-level three-factor full factorial design, but here, everything is done automatically.
Step10: The StatsModel OLS object prints out quite a bit of useful information, in a nicely-formatted table. Starting at the top, we see a couple of important pieces of information
Step11: <a name="goodness_of_fit"></a>
Step12: Determining whether significant trends are being missed by the model depends on how many points deviate from the red line, and how significantly. If there is a single point that deviates, it does not necessarily indicate a problem; but if there is significant wiggle and most points deviate significantly from the red line, it means that there is something about the relationship between the inputs and the outputs that our model is missing.
Step13: Note that in these figures, the bumps at extreme value are caused by the fact that the interval containing the responses includes 0 and values close to 0, so the normalization factor is very tiny, leading to large values.
Step14: This functionality can also be used to determine the variance in all of the experimental observations being aggregated. For example, here we aggregate over $x_3 \dots x_6$ and show the variance broken down by $x_1, x_2$ vs $y_1, y_2, y_3$.
Step15: Or even the number of experimental observations being aggregated!
Step16: <a name="dist_variance"></a>
Step17: The distribution of variance looks mostly normal, with some outliers. These are the same outliers that showed up in our quantile-quantile plot, and they'll show up in the plots below as well.
|
12,038
|
<ASSISTANT_TASK:>
Python Code:
import os
ENDPOINT = # Enter your Kubeflow ENDPOINT here.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
CUSTOM_TFX_IMAGE
%%bash
TFX_PKG="tfx==0.22.0"
KFP_PKG="kfp==0.5.1"
pip freeze | grep $TFX_PKG || pip install -Uq $TFX_PKG
pip freeze | grep $KFP_PKG || pip install -Uq $KFP_PKG
%%bash
LOCAL_BIN="/home/jupyter/.local/bin"
SKAFFOLD_URI="https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64"
test -d $LOCAL_BIN || mkdir -p $LOCAL_BIN
which skaffold || (
curl -Lo skaffold $SKAFFOLD_URI &&
chmod +x skaffold &&
mv skaffold $LOCAL_BIN
)
!which skaffold
PIPELINE_NAME = # Your pipeline name
PROJECT_DIR = os.path.join(os.path.expanduser("."), PIPELINE_NAME)
PROJECT_DIR
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
%cd {PROJECT_DIR}
!python -m models.features_test
!python -m models.keras.model_test
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default'
GCS_BUCKET_NAME
!gsutil ls gs://{GCS_BUCKET_NAME} | grep {GCS_BUCKET_NAME} || gsutil mb gs://{GCS_BUCKET_NAME}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1. Environment setup
Step2: tfx and kfp tools setup
Step3: You may need to restart the kernel at this point.
Step4: Modify the PATH environment variable so that skaffold is available
Step5: Step 2. Copy the predefined template to your project directory.
Step6: TFX includes the taxi template with the TFX python package.
Step7: Step 3. Browse your copied source files
Step8: Step 4. Create the artifact store bucket
|
12,039
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
ds = xr.tutorial.load_dataset("air_temperature")
# to celsius
air = ds.air - 273.15
# resample from 6-hourly to daily values
air = air.resample(time="D").mean()
air
projection = ccrs.LambertConformal(central_longitude=-95, central_latitude=45)
f, ax = plt.subplots(subplot_kw=dict(projection=projection))
air.isel(time=0).plot(transform=ccrs.PlateCarree(), cbar_kwargs=dict(shrink=0.7))
ax.coastlines()
weights = np.cos(np.deg2rad(air.lat))
weights.name = "weights"
weights
air_weighted = air.weighted(weights)
air_weighted
weighted_mean = air_weighted.mean(("lon", "lat"))
weighted_mean
weighted_mean.plot(label="weighted")
air.mean(("lon", "lat")).plot(label="unweighted")
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Plot the first timestep
Step3: Creating weights
Step4: Weighted mean
Step5: Plot
|
12,040
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import statsmodels
from statsmodels.imputation import mice
import random
random.seed(10)
df = pd.read_csv("http://goo.gl/19NKXV")
df.head()
original = df.copy()
original.describe().loc['count',:]
def add_nulls(df, n):
new = df.copy()
new.iloc[random.sample(range(new.shape[0]), n), :] = np.nan
return new
df.Cholesterol = add_nulls(df[['Cholesterol']], 20)
df.Smoking = add_nulls(df[['Smoking']], 20)
df.Education = add_nulls(df[['Education']], 20)
df.Age = add_nulls(df[['Age']], 5)
df.BMI = add_nulls(df[['BMI']], 5)
df.describe()
for col in ['Gender', 'Smoking', 'Education']:
df[col] = df[col].astype('category')
df.dtypes
df = pd.get_dummies(df);
imp = mice.MICEData(df)
imp.conditional_formula['BMI']
before = imp.data.BMI.copy()
imp.perturb_params('BMI')
imp.impute('BMI')
after = imp.data.BMI
import matplotlib.pyplot as plt
plt.clf()
fig, ax = plt.subplots(1, 1)
ax.plot(before, 'or', label='before', alpha=1, ms=8)
ax.plot(after, 'ok', label='after', alpha=0.8, mfc='w', ms=8)
plt.legend();
pd.DataFrame(dict(before=before.describe(), after=after.describe()))
before[before != after]
after[before != after]
imp.update_all(2)
imp.plot_fit_obs('BMI');
imp.plot_fit_obs('Age');
original.mean()
for col in original.mean().index:
x = original.mean()[col]
y = imp.data[col].mean()
e = abs(x - y) / x
print("{:<12} mean={:>8.2f}, exact={:>8.2f}, error={:>5.2g}%".format(col, x, y, e * 100))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create data frame
Step2: Add some missing values
Step3: Confirm the presence of null values
Step4: Create categorical variables
Step5: Create dummy variables
Step6: Impute data
Step7: Imputation for one feature
Step8: The perturb_params method must be called before running the impute method, which performs the imputation. It updates the specified column in the data attribute.
Step9: Impute all
Step10: Validation
|
12,041
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.linear_model as lm
np.random.seed(1)
N = 400
𝜇 = 10
X = np.random.randn(N) + 𝜇
noise = 0.5 * np.random.randn(len(X))
Y = X + noise
model = lm.LinearRegression()
X = np.reshape(X, (len(X), 1))
M = model.fit(X, Y)
M.score(X, Y)
ϑ0 = M.intercept_
ϑ1 = M.coef_[0]
xMax = np.max(X) + 0.2
xMin = np.min(X) - 0.2
%matplotlib inline
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.scatter(X, Y, c='b') # 'b' is blue color
plt.xlabel('X values')
plt.ylabel('true values + noise')
plt.title('Influence of rounding on explained variance')
plt.plot([xMin, xMax], [ϑ0 + ϑ1 * xMin, ϑ0 + ϑ1 * xMax], c='r')
plt.show()
X = np.round(X * 0.8) / 0.8
model = lm.LinearRegression()
M = model.fit(X, Y)
M.score(X, Y)
ϑ0 = M.intercept_
ϑ1 = M.coef_[0]
xMax = max(X) + 0.2
xMin = min(X) - 0.2
plt.figure(figsize=(12, 10))
sns.set(style='darkgrid')
plt.scatter(X, Y, c='b')
plt.plot([xMin, xMax], [ϑ0 + ϑ1 * xMin, ϑ0 + ϑ1 * xMax], c='r')
plt.xlabel('rounded X values')
plt.ylabel('true X values + noise')
plt.title('Influence of rounding on explained variance')
plt.show()
X.shape
selectorX = (X > 11)
selectorY = np.reshape(selectorX, (N,))
XS = X[selectorX]
XS = np.reshape(XS, (len(XS), 1))
YS = Y[selectorY]
model = lm.LinearRegression()
M = model.fit(XS, YS)
M.score(XS, YS)
ϑ0 = M.intercept_
ϑ1 = M.coef_[0]
xMax = max(XS) + 0.2
xMin = min(XS) - 0.2
plt.figure(figsize=(12, 10))
sns.set(style='darkgrid')
plt.scatter(XS, YS, c='b')
plt.plot([xMin, xMax], [ϑ0 + ϑ1 * xMin, ϑ0 + ϑ1 * xMax], c='r')
plt.xlabel('rounded X values')
plt.ylabel('true X values + noise')
plt.title('Influence of subclassing on explained variance')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will work with artificially generated data. The independent variable X is a numpy array
Step2: The dependent variable Y is created by adding some noise to the independent variable X. This noise is normally distributed with a standard deviation of 0.5.
Step3: We build a linear model for X and Y.
Step4: In order to use SciKit-Learn we have to reshape the array X into a matrix.
Step5: We train the model and compute its score.
Step6: In order to plot the data together with the linear model, we extract the coefficients.
Step7: We plot Y versus X and the linear regression line.
Step8: As we want to study the effect of <em style="color
Step9: We create a new <em style="color
Step10: We can see that the performance of the linear model has degraded considerably.
Step11: Next, we investigate the effect of <em style="color
Step12: Again, we fit a linear model.
Step13: We see that the performance of linear regression has degraded considerably. Let's plot this.
|
12,042
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-1', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
12,043
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris.data, iris.target
print(X.shape)
print("mean : %s " % X.mean(axis=0))
print("standard deviation : %s " % X.std(axis=0))
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)
print(X_scaled.shape)
print("mean : %s " % X_scaled.mean(axis=0))
print("standard deviation : %s " % X_scaled.std(axis=0))
import numpy as np
import matplotlib.pyplot as plt

rnd = np.random.RandomState(42)
X_blob = np.dot(rnd.normal(size=(100, 2)), rnd.normal(size=(2, 2))) + rnd.normal(size=2)
plt.scatter(X_blob[:, 0], X_blob[:, 1])
plt.xlabel("feature 1")
plt.ylabel("feature 2")
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(X_blob)
X_pca = pca.transform(X_blob)
plt.scatter(X_pca[:, 0], X_pca[:, 1])
plt.xlabel("first principal component")
plt.ylabel("second principal component")
# Taken from example in scikit-learn examples
# Authors: Fabian Pedregosa <fabian.pedregosa@inria.fr>
# Olivier Grisel <olivier.grisel@ensta.org>
# Mathieu Blondel <mathieu@mblondel.org>
# Gael Varoquaux
# License: BSD 3 clause (C) INRIA 2011
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import offsetbox
# import only the scikit-learn modules used below (sklearn.lda no longer exists in recent releases)
from sklearn import manifold, datasets, decomposition
digits = datasets.load_digits(n_class=6)
n_digits = 500
X = digits.data[:n_digits]
y = digits.target[:n_digits]
n_samples, n_features = X.shape
n_neighbors = 30
def plot_embedding(X, title=None):
x_min, x_max = np.min(X, 0), np.max(X, 0)
X = (X - x_min) / (x_max - x_min)
plt.figure()
ax = plt.subplot(111)
for i in range(X.shape[0]):
plt.text(X[i, 0], X[i, 1], str(digits.target[i]),
color=plt.cm.Set1(y[i] / 10.),
fontdict={'weight': 'bold', 'size': 9})
if hasattr(offsetbox, 'AnnotationBbox'):
# only print thumbnails with matplotlib > 1.0
shown_images = np.array([[1., 1.]]) # just something big
for i in range(X.shape[0]):
dist = np.sum((X[i] - shown_images) ** 2, 1)
if np.min(dist) < 1e5:
# don't show points that are too close
# set a high threshold to basically turn this off
continue
shown_images = np.r_[shown_images, [X[i]]]
imagebox = offsetbox.AnnotationBbox(
offsetbox.OffsetImage(digits.images[i], cmap=plt.cm.gray_r),
X[i])
ax.add_artist(imagebox)
plt.xticks([]), plt.yticks([])
if title is not None:
plt.title(title)
n_img_per_row = 10
img = np.zeros((10 * n_img_per_row, 10 * n_img_per_row))
for i in range(n_img_per_row):
ix = 10 * i + 1
for j in range(n_img_per_row):
iy = 10 * j + 1
img[ix:ix + 8, iy:iy + 8] = X[i * n_img_per_row + j].reshape((8, 8))
plt.imshow(img, cmap=plt.cm.binary)
plt.xticks([])
plt.yticks([])
plt.title('A selection from the 64-dimensional digits dataset')
print("Computing PCA projection")
X_pca = decomposition.TruncatedSVD(n_components=2).fit_transform(X)
plot_embedding(X_pca, "Principal Components projection of the digits")
plt.show()
from sklearn.datasets import make_s_curve
X, y = make_s_curve(n_samples=1000)
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], X[:, 2], c=y)
ax.view_init(10, -60)
X_pca = PCA(n_components=2).fit_transform(X)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)
from sklearn.manifold import Isomap
iso = Isomap(n_neighbors=15, n_components=2)
X_iso = iso.fit_transform(X)
plt.scatter(X_iso[:, 0], X_iso[:, 1], c=y)
from sklearn.datasets import load_digits
digits = load_digits(5)
X = digits.data
# ...
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The iris dataset is not "centered"; that is, it has non-zero mean and the standard deviation is different for each component
Step2: To use a preprocessing method, we first import the estimator, here StandardScaler and instantiate it
Step3: As with the classification and regression algorithms, we call fit to learn the model from the data. As this is an unsupervised model, we only pass X, not y. This simply estimates mean and standard deviation.
Step4: Now we can rescale our data by applying the transform (not predict) method
Step5: X_scaled has the same number of samples and features, but the mean was subtracted and all features were scaled to have unit standard deviation
Step6: Principal Component Analysis
Step7: As always, we instantiate our PCA model. By default all directions are kept.
Step8: Then we fit the PCA model with our data. As PCA is an unsupervised algorithm, there is no output y.
Step9: Then we can transform the data, projected on the principal components
Step10: On the left of the plot you can see the four points that were on the top right before. PCA found the first component to be along the diagonal, and the second to be perpendicular to it. As PCA finds a rotation, the principal components are always at right angles to each other.
Step11: Note that this projection was determined without any information about the
Step12: This is a 2-dimensional dataset embedded in three dimensions, but it is embedded
Step13: Manifold learning algorithms, however, available in the sklearn.manifold
Step14: Exercise
|
12,044
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.datasets import make_blobs
# Create simulated data
X, _ = make_blobs(n_samples = 10,
n_features = 2,
centers = 1,
random_state = 1)
# Replace the first observation's values with extreme values
X[0,0] = 10000
X[0,1] = 10000
# Create detector
outlier_detector = EllipticEnvelope(contamination=.1)
# Fit detector
outlier_detector.fit(X)
# Predict outliers
outlier_detector.predict(X)
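As an aside (not part of the original notebook), the same kind of contamination-based outlier flagging can be sketched with scikit-learn's `IsolationForest`, which likewise labels outliers as -1:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest

# Same simulated data with one extreme observation
X, _ = make_blobs(n_samples=10, n_features=2, centers=1, random_state=1)
X[0, 0] = 10000
X[0, 1] = 10000

# IsolationForest labels inliers as 1 and outliers as -1
forest = IsolationForest(contamination=0.1, random_state=0)
labels = forest.fit_predict(X)
print(labels)
```

As with `EllipticEnvelope`, the `contamination` parameter sets the expected proportion of outliers in the data.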
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: Detect Outliers
|
12,045
|
<ASSISTANT_TASK:>
Python Code:
myExperiment = metatlas_objects.Experiment(name = 'QExactive_Hilic_Pos_Actinobacteria_Phylogeny')
myPath = '/global/homes/b/bpb/ExoMetabolomic_Example_Data/'
myPath = '/project/projectdirs/metatlas/data_for_metatlas_2/20150324_LPSilva_BHedlund_chloroflexi_POS_rerun/'
myFiles = glob.glob('%s*.mzML'%myPath)
myFiles.sort()
groupID = []
for f in myFiles:
groupID.append('')
i = 0
while i < len(myFiles):
a,b = os.path.split(myFiles[i])
j = raw_input('enter group id for %s [number, "x" to go back]:'%b)
if j == 'x':
i = i - 1
else:
groupID[i] = j
i = i + 1
print groupID
uGroupID = sorted(set(groupID))
print uGroupID
uGroupName = []
for u in uGroupID:
j = raw_input('enter group name for Group #%s: '%u)
uGroupName.append(j)
fsList = []
for i,g in enumerate(groupID):
for j,u in enumerate(uGroupID):
if g == u:
fs = metatlas_objects.FileSpec(polarity = 1,
group = uGroupName[j],
inclusion_order = i)
fsList.append(fs)
myExperiment.load_files([myFiles[i]],fs)
myExperiment.save()
print myExperiment.finfos[0].hdf_file
print myExperiment.finfos[0].group
print myExperiment.finfos[0].polarity
# myH5Files = []
# for f in myFiles:
# metatlas.mzml_to_hdf('%s'%(f))
# myH5Files.append(f.replace('.mzML','.h5'))
# print f
print len(myExperiment.finfos)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h1>Get a list of mzML files that you uploaded and assign them to a group</h1>
Step2: <h1>Specify the descriptive names for each group</h1>
Step3: <h1>Steps in the file description and conversion process</h1>
Step4: <h1>Convert All Your Files Manually</h1>
|
12,046
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
import seaborn as _
from bob_emploi.data_analysis.lib import read_data
from bob_emploi.data_analysis.lib import plot_helpers
data_folder = os.getenv('DATA_FOLDER')
fiche_dicts = read_data.load_fiches_from_xml(os.path.join(data_folder, 'rome/ficheMetierXml'))
rome = [read_data.fiche_extractor(f) for f in fiche_dicts]
skills_data = [dict(skill, code_rome=job['code_rome'], job_group_name=job['intitule'])
for job in rome
for skill in job['skills']]
skills_raw = pd.DataFrame(skills_data)
skills_raw.columns = ['skill_id', 'job_group_id', 'type', 'job_group_name', 'name', 'position', 'priorisation']
skills_raw.head()
skills_raw.describe().transpose()
# Mapping extracted from data/rome/csv/unix_referentiel_competence_v330_utf8.csv
COMPETENCE_TYPES = {
'1': 'theoretical_skill',
'2': 'action_skill'
}
skills = skills_raw.drop(['priorisation'], axis=1)
skills['type'] = skills.type.map(COMPETENCE_TYPES)
skills['position'] = skills.position.astype(int)
dedup_skills = skills.drop_duplicates(subset=['skill_id'])
dedup_skills['type'].value_counts(normalize=True)
skills.groupby('skill_id')['type'].nunique().value_counts()
skills.duplicated(subset=['job_group_id', 'type', 'position']).sum()
skills.duplicated(subset=['job_group_id', 'position']).sum()
skills = skills.sort_values('position')
first = skills.groupby(['job_group_name', 'type']).first().name
last = skills.groupby(['job_group_name', 'type']).last().name
pd.concat([first, last], axis=1, keys=['first', 'last'])
skills.groupby('job_group_id').skill_id.nunique().hist();
by_job_by_type = skills.groupby(['job_group_id', 'type'])
unique_job_counts = by_job_by_type.skill_id.nunique().reset_index()
unique_job_counts.hist(by='type', normed=True, sharex=True, sharey=True);
skill_frequency = skills.name.value_counts() / skills.job_group_id.nunique()
skill_frequency.index = pd.Series(range(len(skill_frequency))) / len(skill_frequency)
skill_frequency.plot(figsize=(8, 4));
counts = skills.groupby(['name', 'type']).skill_id.count().to_frame()
counts.sort_values('skill_id', ascending=False).head(10)
skills['category'] = skills.job_group_id.str.slice(0, 1)
skills.groupby('name').category.nunique().hist(bins=14);
skills.groupby(['name', 'type']).category.nunique().hist(bins=14);
skills.groupby(['name', 'type']).category.nunique().reset_index().hist(by='type', normed=True);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Before using the XML version of the data, I had a look at the CSV data. I saw that I could use unix_coherence_item_v330_utf8 to establish a mapping between job_groups and skills; however, I saw that the XML data contains this ordering of skills. I thought it could be quite useful if that actually represents an ordering from more to less specific. That is why I used the XML version of the data in this case.
Step2: Overview of dataset structure
Step3: So we got 1741 different skills for the 531 different job groups in ROME. They are of two different types where one of them makes up about 70% of all skill occurrences (3964 / 5520). The most common skill is Utilisation d'outils bureautiques.... The position column suggests that they are ordered and the priorisation column does not contain any information.
Step4: The distribution of theoretical to action skills is also present in the deduplicated list of skills, not only in the skills associated with jobs.
Step5: Good!
Step6: Skills are ordered separately for the two types of skills.
Step7: Feedback I got from Pascal
Step8: That's a neat distribution with a clear peak around 10. Florian asked in the PRD 'how many skills do we plan to collect per user'. With users having had several jobs in their past, I would expect a user to pick between 15-30 skills, as a very rough estimate. This of course also depends on the typical skill overlap between jobs and the probably higher skill overlap between jobs in the same field.
Step9: The theoretical skills resemble the above distribution, but many jobs seem to have only one or two action skills.
Step10: There seem to be only a few skills that are associated with lots of different jobs. Most of the skills seem to be very specific to a job.
Step11: Looks like very general soft skills.
|
12,047
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 3
sample_id = 100
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(imgs):
Normalize a list of sample image data in the range of 0 to 1
    : imgs: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
output=[]
for img in imgs:
output.append(np.array(\
[np.array([\
np.array([channel/255 for channel in pix ]) \
for pix in row ]) \
for row in img ]))
return(np.array(output))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
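An equivalent vectorized version of the normalization (an aside, not part of the original notebook) avoids the nested Python loops:

```python
import numpy as np

def normalize_vectorized(imgs):
    # Divide every channel value by 255 in a single vectorized operation,
    # mapping uint8 pixel values into the range [0, 1]
    return np.array(imgs) / 255.0

sample = np.array([[[[0, 255, 128]]]], dtype=np.uint8)  # one 1x1 RGB "image"
print(normalize_vectorized(sample))
```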
def one_hot_encode(labels):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : labels: List of sample labels
: return: Numpy array of one-hot encoded labels
output=[]
for label in labels:
one_hot_vector = np.zeros(10)
one_hot_vector[label]=1
output.append(one_hot_vector)
return np.array(output)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
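Similarly (an aside, not part of the original notebook), one-hot encoding can be vectorized by indexing into an identity matrix, assuming the labels are integers in 0-9:

```python
import numpy as np

def one_hot_encode_vectorized(labels):
    # Row i of the 10x10 identity matrix is exactly the one-hot vector for class i
    return np.eye(10)[np.array(labels)]

print(one_hot_encode_vectorized([0, 3, 9]))
```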
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32,[None,image_shape[0],image_shape[1],image_shape[2]],name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32,[None,n_classes],name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32,name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
weight = tf.Variable(tf.truncated_normal([conv_ksize[0],conv_ksize[1],x_tensor.shape[3].value,conv_num_outputs],stddev=0.1))
bias= tf.Variable(tf.zeros(conv_num_outputs))
conv_layer = tf.nn.conv2d(x_tensor,weight,strides=[1,conv_strides[0],conv_strides[1],1],padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer,bias)
conv_layer = tf.nn.relu(conv_layer)
conv_layer = tf.nn.max_pool(conv_layer,ksize=[1,pool_ksize[0],pool_ksize[1],1],strides=[1,pool_strides[0],pool_strides[1],1],padding='SAME')
return(conv_layer)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
return tf.reshape(x_tensor,[-1,x_tensor.shape[1].value*x_tensor.shape[2].value*x_tensor.shape[3].value])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weight = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value,num_outputs],stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
output = tf.add(tf.matmul(x_tensor,weight),bias)
output = tf.nn.relu(output)
return(output)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weight = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value,num_outputs],stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return(tf.add(tf.matmul(x_tensor,weight),bias))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
conv_layer = conv2d_maxpool(x,32,[4,4],[1,1],[4,4],[2,2])
conv_layer = conv2d_maxpool(conv_layer,128,[3,3],[1,1],[2,2],[2,2])
conv_layer = conv2d_maxpool(conv_layer,256,[3,3],[1,1],[2,2],[2,2])
flat_layer = flatten(conv_layer)
full_layer = fully_conn(flat_layer,512)
dropout_layer = tf.nn.dropout(full_layer,keep_prob)
output_layer = output(dropout_layer,10)
return output_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer,feed_dict={x:feature_batch,y:label_batch,keep_prob:keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
#print("Loss: ",session.run(cost,feed_dict={features:feature_batch,labels:label_batch}))
print("Loss:",session.run(cost,feed_dict={x:feature_batch,y:label_batch,keep_prob:1}),
"Training accuracy: ",session.run(accuracy,feed_dict={x:feature_batch,y:label_batch,keep_prob:1}),\
"Validation accuracy: ",session.run(accuracy,feed_dict={x:valid_features,y:valid_labels,keep_prob:1}))
# TODO: Tune Parameters
epochs = 50
batch_size = 128
keep_probability = 0.5
DON'T MODIFY ANYTHING IN THIS CELL
import time
start_time = time.time()
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
print("--- %s seconds ---" % (time.time() - start_time))
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
12,048
|
<ASSISTANT_TASK:>
Python Code:
# Initial import
import dcgpy
# For plotting
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
# For scientific computing and more ...
import numpy as np
from tqdm import tqdm
from sklearn.utils import shuffle
%matplotlib inline
# To plot the function we use a uniform grid
X = np.arange(-1, 1, 0.05)
Y = np.arange(-1, 1, 0.05)
n_samples = len(X) * len(Y)
points = np.zeros((n_samples, 2))
i=0
for x in X:
for y in Y:
points[i][0] = x
points[i][1] = y
i=i+1
labels = (np.sin(5 * points[:,0] * (3 * points[:,1] + 1.)) + 1. ) / 2.
points = points.reshape((n_samples,2))
labels = labels.reshape((n_samples,1))
# To plot the function
X, Y = np.meshgrid(X, Y)
Z = (np.sin(5 * X * (3 * Y + 1.)) + 1. ) / 2.
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# We shuffle the points and labels
points, labels = shuffle(points, labels, random_state=0)
# We create training and test sets
X_train = points[:800]
Y_train = labels[:800]
X_test = points[800:]
Y_test = labels[800:]
_ = plt.title("function to be learned")
# We define a 2 input 1 output dCGPANN with sigmoid nonlinearities
dcgpann = dcgpy.encode_ffnn(2,1,[50,20],["sig", "sig", "sig"], 5)
std = 1.5
# Weight/biases initialization is made using a normal distribution
dcgpann.randomise_weights(mean = 0., std = std)
dcgpann.randomise_biases(mean = 0., std = std)
# We show the initial MSE
print("Starting error:", dcgpann.loss(X_test,Y_test, "MSE"))
print("Net complexity (number of active weights):", dcgpann.n_active_weights())
print("Net complexity (number of unique active weights):", dcgpann.n_active_weights(unique=True))
print("Net complexity (number of active nodes):", len(dcgpann.get_active_nodes()))
x = dcgpann.get()
w = dcgpann.get_weights()
b = dcgpann.get_biases()
res = []
# And show a visualization of the FFNN encoded in a CGP
dcgpann.visualize(show_nonlinearities=True)
import timeit
start_time = timeit.default_timer()
lr0 = 0.3
for i in tqdm(range(5000)):
lr = lr0 #* np.exp(-0.0001 * i)
loss = dcgpann.sgd(X_train, Y_train, lr, 32, "MSE", parallel = 4)
res.append(loss)
elapsed = timeit.default_timer() - start_time
# Print the time taken to train and the final result on the test set
print("Time (s): ", elapsed)
print("End MSE: ", dcgpann.loss(X_test,Y_test, "MSE"))
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import optimizers
# We define Stochastic Gradient Descent as an optimizer
sgd = optimizers.SGD(lr=0.3)
# We define weight initialization
initializerw = keras.initializers.RandomNormal(mean=0.0, stddev=std, seed=None)
initializerb = keras.initializers.RandomNormal(mean=0.0, stddev=std, seed=None)
model = Sequential([
Dense(50, input_dim=2, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
Dense(20, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
Dense(1, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
])
# For a mean squared error regression problem
model.compile(optimizer=sgd,
loss='mse')
# Train the model, iterating on the data in batches of 32 samples
start_time = timeit.default_timer()
history = model.fit(X_train, Y_train, epochs=5000, batch_size=32, verbose=False)
elapsed = timeit.default_timer() - start_time
# Print the time taken to train and the final result on the test set
print("Time (s): ", elapsed)
print("End MSE: ", model.evaluate(X_train, Y_train))
# We plot for comparison the MSE during learning in the two cases
plt.semilogy(np.sqrt(history.history['loss']), label='Keras')
plt.semilogy(np.sqrt(res), label='dCGP')
plt.title('dCGP vs Keras')
plt.xlabel('epochs')
plt.legend()
_ = plt.ylabel('RMSE')
epochs = 5000
for i in range(10):
# dCGP
dcgpann = dcgpy.encode_ffnn(2,1,[50,20],["sig", "sig", "sig"], 5)
dcgpann.randomise_weights(mean = 0., std = std)
dcgpann.randomise_biases(mean = 0., std = std)
res = []
for i in tqdm(range(epochs)):
lr = lr0 #* np.exp(-0.0001 * i)
loss = dcgpann.sgd(X_train, Y_train, lr, 32, "MSE", parallel = 4)
res.append(loss)
# Keras
model = Sequential([
Dense(50, input_dim=2, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
Dense(20, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
Dense(1, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
])
model.compile(optimizer=sgd, loss='mse')
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=32, verbose=False)
plt.semilogy(np.sqrt(history.history['loss']), color = 'b')
plt.semilogy(np.sqrt(res), color = 'C1')
plt.title('dCGP vs Keras')
plt.xlabel('epochs')
_ = plt.ylabel('RMSE')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data set
Step2: Encoding and training an FFNN using dCGP
Step3: The same training is done using Keras (TensorFlow backend)
Step4: Repeating the same comparison ten times
|
12,049
|
<ASSISTANT_TASK:>
Python Code:
# Prints Hello, world!
print('Hello, world!')
def hello():
'''Function that prints Hello, world!
'''
print('Hello, world!')
hello()
x = 4
print(x)
# Addition
print(x+5)
# Subtraction
print(x-5)
# Multiplication
print(x*5)
# Division
print(x/5)
print(x//5) #integer
# Modulus (remainder)
print(x%5)
print(x%2)
# Exponent
print(x**2)
# Importing the module
import math
y = math.pow(x, 2)
print(y)
def exponent(n):
x = 3
# You can change the value of x here if you want, or make it a parameter of the function.
return math.pow(x,n)
exponent(3)
result = exponent(4)
print(result)
print(result + 3)
import random
# Generating random integer between 5 and 10.
z = random.randint(5,10)
print(z)
name = 'shreya'
print(name)
# Print the length of name.
print(len(name))
# Concatenation example
print(name + 'test')
print(name+x)
print(type(name))
print(type(x))
print(name + str(x))
name[0]
name[-1]
name[0:2]
example_list = [1, 2, 3, 4]
print(example_list[0])
print(example_list[-1])
print(example_list[0:2])
print(sum(example_list))
x = 5
# Prints whether x is greater or less than 10.
if x > 10:
print('Greater than 10')
else:
print('Less than 10')
# Prints type of condition.
print(type(x>10))
# Prints value of condition.
print(x>10)
if x > 5:
print('Greater than 5')
elif x < 5:
print('Less than 5')
else:
print('Equal to 5')
y = 10
if y > 2 and x > 2:
print('y greater than 2 AND x greater than 2')
if y > 5 and x > 5:
print('y greater than 2 AND x greater than 5')
if y > 5 or x > 5:
print('y greater than 2 OR x greater than 5')
def goodbye():
pass
#Implement this function so that it operates similarly to the hello function in the first cell
# Implement a conditional statement such that if the value of x is greater than 5, 'Hello, world' is printed
# If not, print 'Goodbye, world'
# Calculate average of example_list
x = 0
while x < 5:
print('Executing while loop')
x +=1
y = 5
while y < 10:
print('Executing while loop')
y+=1
if y==7:
print('y=7')
break
n = 5
for i in range(0,n):
print('Executing for loop, iteration #'+str(i))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A function is a way of reusing the same sequence of instructions without having to rewrite it.
Step2: A variable is a storage location. It represents a value. Below, the variable 'x' is assigned the integer value 4.
Step3: Below are examples of the basic math operations in Python. The ones you might not be used to are integer division (which uses // and rounds the result down to the nearest integer) and modulus (which uses % and returns the remainder).
Step4: Python provides a lot of pre-written code, called "modules" which can be imported so we don't have to reinvent the wheel. For example, the below pow function works much faster than the above implementation (only noticable for larger values).
Step5: We can convert the above cell into a function. The value that we pass in (called a "parameter") to exponent can be easily changed. Feel free to change the values of x and n and see how the output changes accordingly.
Step6: If a function returns a value, you can assign the output of the function to a variable.
Step7: Another useful module is random. Below we generate a random integer between the two parameters to randint. You will notice that if you run the below cell multiple times, the output will be different since it is generated randomly.
Step8: A string is a sequence of characters. This is the type of 'Hello, world!' which we printed earlier. Strings can be concatenated and spliced. You can use the len function to find the length.
Step9: Types are string, integer, float (decimal), etc. Types often are not compatible. What happens if you run the following?
Step10: In Python, you can easily see what type each variable is as follows
Step11: How to fix that problem
Step12: You can access parts of a string with an index value. In computer science, index values begin with 0.
Step13: You can access elements from the back of the string as follows
Step14: Get substrings by using colons.
Step15: An array or list is a list of elements accessible by index values, similar to strings. They also have some built in features like the sum function shown below.
Step16: Conditional statements execute only if the condition is true. Booleans are types which evaluate to True or False in Python.
Step17: Multiple conditions can be handled with elif.
Step18: and and or are two basic logical constructs in computer science.
Step19: Practice Exercises
Step20: Loops
Step21: The below loop ends when it hits the break when the value of y is 7.
Step22: A for loop executes for the number of times indicated.
|
12,050
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# Lotka Volterra model
# initialise parameters
k1 = 1.5
k2 = 1.
k3 = 3.
k4 = 1.
def my_dxdt(s,t):
Function returns values of derivatives of Lotka Volterra model
return [k1*s[0] - k2*s[0]*s[1], - k3*s[1]+k4*s[0]*s[1]]
def my_euler_solver(dxdt, s0, timegrid):
Implementation of a simple Euler method (constant stepsize)
# first species values are s0
s = s0
# do timesteps
for j, time in enumerate(timegrid):
# first time step, just save initial values
if j == 0:
result = [[value] for value in s0]
continue
# next time step, calculate values and save them
for i, species in enumerate(s):
hi = (timegrid[j] - timegrid[j-1])
species = species + dxdt(s,time)[i] * hi
result[i].append(species)
# update species with new values
s[0] = result[0][-1]
s[1] = result[1][-1]
return result
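As a quick sanity check of the scheme (an addition, not part of the original notebook), forward Euler applied to the scalar problem dy/dt = -y, y(0) = 1 should approximate exp(-t):

```python
import numpy as np

def euler_decay(y0, timegrid):
    # Forward Euler for dy/dt = -y: y_{j+1} = y_j + h * (-y_j)
    ys = [y0]
    for j in range(1, len(timegrid)):
        h = timegrid[j] - timegrid[j - 1]
        ys.append(ys[-1] - ys[-1] * h)
    return ys

grid = np.linspace(0, 1, 1001)
approx = euler_decay(1.0, grid)[-1]
print(approx, np.exp(-1))  # the two values agree to roughly first order in h
```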
import matplotlib.pyplot as plt
%matplotlib inline
# timegrids
timegrid_e3 = np.linspace(0,20,2000)
timegrid_e4 = np.linspace(0,20,20000)
# get solutions
s0=[5,10]
my_euler_result_e3 = my_euler_solver(my_dxdt, s0, timegrid_e3)
s0=[5,10]
my_euler_result_e4 = my_euler_solver(my_dxdt, s0, timegrid_e4)
def my_heun_solver(dxdt, s0, timegrid):
Implementation of the Heun method (constant stepsize)
# first species values are s0
s = s0
# do timesteps
for j, time in enumerate(timegrid):
# first time step, just save initial values
if j == 0:
result = [[value] for value in s0]
continue
# next time step, calculate values and save them
for i, species in enumerate(s):
hi = (timegrid[j] - timegrid[j-1])
            predictor = [s[k] + hi*dxdt(s,time)[k] for k in range(len(s))]
            species = species + (hi/2)*(dxdt(s,time)[i] + dxdt(predictor, time+hi)[i])
result[i].append(species)
# update species with new values
s[0] = result[0][-1]
s[1] = result[1][-1]
return result
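For comparison (again an addition, not from the original notebook), the same decay problem solved with a Heun step shows the second-order accuracy: the error is far smaller than the Euler error at the same step size:

```python
import numpy as np

def heun_decay(y0, timegrid):
    # Heun (explicit trapezoid) step for dy/dt = -y
    ys = [y0]
    for j in range(1, len(timegrid)):
        h = timegrid[j] - timegrid[j - 1]
        y = ys[-1]
        predictor = y - h * y               # Euler predictor step
        ys.append(y + (h / 2) * (-y - predictor))
    return ys

grid = np.linspace(0, 1, 1001)
err = abs(heun_decay(1.0, grid)[-1] - np.exp(-1))
print(err)
```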
import matplotlib.pyplot as plt
%matplotlib inline
# timegrids
timegrid_e3 = np.linspace(0,20,2000)
timegrid_e4 = np.linspace(0,20,20000)
# plot results
s0=[5,10]
my_heun_result_e3 = my_heun_solver(my_dxdt, s0, timegrid_e3)
s0=[5,10]
my_heun_result_e4 = my_heun_solver(my_dxdt, s0, timegrid_e4)
import scipy.integrate
timegrid = np.linspace(0,20,2000)
s0 = [5,10]
result = scipy.integrate.odeint(my_dxdt, s0, timegrid)
plt.figure(1)
plt.plot(timegrid_e3, my_euler_result_e3[0], label="X 0.01")
plt.plot(timegrid_e3, my_euler_result_e3[1], label="Y 0.01")
plt.plot(timegrid_e4, my_euler_result_e4[0], label="X 0.001")
plt.plot(timegrid_e4, my_euler_result_e4[1], label="Y 0.001")
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('Euler method')
plt.figure(2)
plt.plot(timegrid_e3, my_heun_result_e3[0], label="X 0.01")
plt.plot(timegrid_e3, my_heun_result_e3[1], label="Y 0.01")
plt.plot(timegrid_e4, my_heun_result_e4[0], label="X 0.001")
plt.plot(timegrid_e4, my_heun_result_e4[1], label="Y 0.001")
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('Heun method')
plt.figure(3)
plt.plot(timegrid, result.T[0], label='X')
plt.plot(timegrid, result.T[1], label='Y')
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('odeint')
plt.plot(timegrid, result.T[0], label='X odeint')
plt.plot(timegrid, result.T[1], label='Y odeint')
plt.legend(loc=2)
plt.plot(timegrid_e4, my_heun_result_e4[0], label="X Heun")
plt.plot(timegrid_e4, my_heun_result_e4[1], label="Y Heun")
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('Comparison odeint & Heun method')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: ODE to joy
Step3: To test the accuracy, we run the simulation with 2 different time grids, one with a step size of 0.01 and one with step size 0.001
Step5: Heun's method
Step6: Let's simulate the same system also with odeint, the standard ODE solver of the scipy Python package.
Step7: And now we compare the results. I marked the amplitude and position of the maxima with red dotted lines.
Step8: As you can see, Heun's method already achieves remarkable accuracy, even though it is a very simple method. Let's compare the results of odeint and Heun's method directly
|
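The predictor-corrector idea behind `my_heun_solver` above can be shown in isolation. A minimal sketch of a single Heun step for the scalar ODE dy/dt = -y (the test function, step size, and step count here are illustrative assumptions, not taken from the notebook):

```python
def heun_step(f, y, t, h):
    # Predictor: plain Euler estimate of y at t + h
    y_pred = y + h * f(y, t)
    # Corrector: average the slopes at the two endpoints
    return y + (h / 2.0) * (f(y, t) + f(y_pred, t + h))

# Integrate dy/dt = -y from y(0) = 1 over [0, 1]
f = lambda y, t: -y
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = heun_step(f, y, t, h)
    t += h
```

For this linear test problem the global error shrinks quadratically with the step size, which is why the 0.001 grid above tracks odeint so closely.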
12,051
|
<ASSISTANT_TASK:>
Python Code:
#@title Imports and Definitions
import numpy as np
import os
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import gin
from rigl import sparse_utils
from rigl.rigl_tf2 import init_utils
from rigl.rigl_tf2 import utils
from rigl.rigl_tf2 import train
from rigl.rigl_tf2 import networks
from rigl.rigl_tf2 import mask_updaters
import functools
pruning_params = utils.get_pruning_params(mode='constant', final_sparsity = 0., begin_step=int(1e10))
INPUT_SHAPE = (28, 28, 3)
class Lenet5(tf.keras.Model):
def __init__(self,
input_shape,
num_classes,
activation: str,
hidden_sizes = (6, 16, 120, 84)):
super(Lenet5, self).__init__()
l = tf.keras.layers
kwargs = {'activation': activation}
filter_fn = lambda _: True
wrap_fn = functools.partial(utils.maybe_prune_layer, params=pruning_params, filter_fn=filter_fn)
self.conv1 = wrap_fn(l.Conv2D(hidden_sizes[0], 5, input_shape=input_shape, **kwargs))
self.pool1 = l.MaxPool2D(pool_size=(2, 2))
self.conv2 = wrap_fn(l.Conv2D(hidden_sizes[1], 5, input_shape=input_shape, **kwargs))
self.pool2 = l.MaxPool2D(pool_size=(2, 2))
self.flatten = l.Flatten()
self.dense1 = wrap_fn(l.Dense(hidden_sizes[2], **kwargs))
self.dense2 = wrap_fn(l.Dense(hidden_sizes[3], **kwargs))
self.dense3 = wrap_fn(l.Dense(num_classes, **kwargs))
self.build((1,)+input_shape)
def call(self, inputs):
x = inputs
results = {}
for l_name in ['conv1', 'pool1', 'conv2', 'pool2', 'flatten', 'dense1', 'dense2', 'dense3']:
x = getattr(self, l_name)(x)
results[l_name] = x
return results
def get_mask_random_numpy(mask_shape, sparsity):
    '''Creates a random sparse mask with deterministic sparsity.

    Args:
      mask_shape: list, used to obtain shape of the random mask.
      sparsity: float, between 0 and 1.

    Returns:
      numpy.ndarray
    '''
all_ones = np.abs(np.ones(mask_shape))
n_zeros = int(np.floor(sparsity * all_ones.size))
rand_vals = np.random.uniform(size=mask_shape, high=range(1,mask_shape[-1]+1))
randflat=rand_vals.flatten()
randflat.sort()
t = randflat[n_zeros]
all_ones[rand_vals<=t] = 0
return all_ones
def create_convnet(sparsity=0, weight_init_method = None, scale=2, method='fanin_normal'):
model = Lenet5(INPUT_SHAPE, num_classes, 'relu')
if sparsity > 0:
all_masks = [layer.pruning_vars[0][1] for layer in model.layers if isinstance(layer, utils.PRUNING_WRAPPER)]
for mask in all_masks:
new_mask = tf.cast(get_mask_random_numpy(mask.shape, sparsity), dtype=mask.dtype)
mask.assign(new_mask)
if weight_init_method:
all_weights = [layer.pruning_vars[0][0] for layer in model.layers if isinstance(layer, utils.PRUNING_WRAPPER)]
for mask, param in zip(all_masks, all_weights):
if weight_init_method == 'unit':
new_init = init_utils.unit_scaled_init(mask, method=method, scale=scale)
elif weight_init_method == 'layer':
new_init = init_utils.layer_scaled_init(mask, method=method, scale=scale)
else:
raise ValueError
param.assign(new_init)
return model
# Let's create a 95% sparse Lenet-5.
model = create_convnet(sparsity=0.95, weight_init_method='layer', scale=2, method='fanin_normal')
# Random input signal
random_input = tf.random.normal((1000,) + INPUT_SHAPE)
output_dict = model(random_input)
all_stds = []
for k in ['dense1', 'dense2', 'dense3']:
out_dim = output_dict[k].shape[-1]
stds = np.std(np.reshape(output_dict[k], (-1,out_dim)),axis=0)
all_stds.append(stds)
print('Mean deviation per neuron', np.mean(np.concatenate(all_stds, axis=0)))
print('Mean deviation per output neuron', np.mean(all_stds[-1]))
print('Deviation at output', np.std(random_input))
def propagate_signal(sparsity, init_method, batch_size=500):
model = create_convnet(sparsity=sparsity, weight_init_method=init_method)
random_input = tf.random.normal((batch_size,) + INPUT_SHAPE)
# print(np.mean(random_input), np.std(random_input))
output_dict = model(random_input)
out_std = np.std(output_dict['dense3'])
all_stds = []
for k in ['dense1', 'dense2', 'dense3']:
out_dim = output_dict[k].shape[-1]
stds = np.std(np.reshape(output_dict[k], (-1,out_dim)),axis=0)
all_stds.append(stds)
meanstd = np.mean(np.concatenate(all_stds, axis=0))
return meanstd, out_std
import itertools, collections
import numpy as np
all_results = collections.defaultdict(dict)
N_EXP = 3
for s in np.linspace(0.8,0.98,5):
print(s)
for method, name in zip((None, 'unit', 'layer'), ('Masked Dense', 'Ours', 'Scaled-Init')):
all_results[name][s] = [propagate_signal(s, method) for _ in range(N_EXP)]
import matplotlib.pyplot as plt
for k, v in all_results.items():
# if k == 'Masked Dense':
# continue
x = sorted(v.keys())
y = [np.mean([vv[1] for vv in v[kk]])+1e-5 for kk in x]
plt.plot(x, y, label=k)
plt.hlines(y=1, color='r', xmin=0, xmax=1)
plt.yscale('log')
plt.title('std(output)')
plt.legend()
plt.show()
for k, v in all_results.items():
# if k == 'Masked Dense':
# continue
x = sorted(v.keys())
y = [np.mean([vv[0] for vv in v[kk]])+1e-5 for kk in x]
plt.plot(x, y, label=k)
plt.yscale('log')
plt.hlines(y=1, color='r', xmin=0, xmax=1)
plt.title('mean(std_per_neuron)')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Copyright 2020 Google LLC.
Step2: Here we demonstrate how we can calculate the standard deviation of random noise at initialization for the layer-wise scaled initialization of Liu et al.
Step3: Now we define the code above as a function and use it on a grid to plot signal propagation at different sparsities.
|
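The variance-preservation idea probed above can be checked without TensorFlow. A minimal NumPy sketch (layer sizes, seed, and density are illustrative assumptions): with fan-in scaled normal init a unit-variance input keeps a stable signal scale through a dense layer, and rescaling by the keep density does the same for a randomly sparsified layer.

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out, batch = 512, 512, 2000

# Fan-in scaled ("He"-style) init: Var(w) = 2 / fan_in
w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
x = rng.normal(0.0, 1.0, size=(batch, fan_in))

pre = x @ w                               # pre-activation variance ~ 2
post = np.maximum(pre, 0.0)               # ReLU keeps half the second moment
pre_std = pre.std()
post_rms = np.sqrt(np.mean(post ** 2))    # ~ 1 again after the ReLU

# With random sparsity the effective fan-in shrinks, so the init std
# must grow by 1/sqrt(density) to compensate (the scaled-init idea):
density = 0.1
mask = rng.random(size=(fan_in, fan_out)) < density
w_sparse = rng.normal(0.0, np.sqrt(2.0 / (density * fan_in)),
                      size=(fan_in, fan_out)) * mask
pre_sparse_std = (x @ w_sparse).std()
```

Without the 1/sqrt(density) correction the sparse pre-activation std would collapse toward sqrt(2 * density), which is the signal decay the notebook's plots exhibit for the unscaled masked-dense baseline.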
12,052
|
<ASSISTANT_TASK:>
Python Code:
def imprime(i):
print (i)
def imprimeLista(l):
for e in l:
imprime (e)
imprimeLista([1, 3, 5, 7])
def fatorial(n):
fat = 1
while n > 1:
fat *= n
n -= 1
return fat
print(fatorial(3))
print(fatorial(6))
import sys
sys.setrecursionlimit(50)
# When this function runs, Python will keep recursing until
# a stack overflow occurs.
def fatorial(n):
return n * fatorial(n-1)
print(fatorial(6))
sys.setrecursionlimit(1000)
def fatorial(n):
    # TRIVIAL part
if n == 0:
return 1
    # GENERAL (recursive) part
else:
return n * fatorial(n-1)
print(fatorial(3))
print(fatorial(6))
print(fatorial(20))
def hanoi(num_discos, hasteOrigem, hasteDestino, hasteAuxiliar):
    # TRIVIAL part
    if num_discos == 1:
        print ("Move the disk from peg " + hasteOrigem + " to peg " + hasteDestino + ".")
    # GENERAL part
    else:
        hanoi(num_discos-1, hasteOrigem, hasteAuxiliar, hasteDestino)
        print ("Move the disk from peg " + hasteOrigem + " to peg " + hasteDestino + ".")
        hanoi(num_discos-1, hasteAuxiliar, hasteDestino, hasteOrigem)
hanoi(4, "A", "C", "B")
from time import sleep
def contadorRegressivo(n):
if n == 0:
print ("BOOM!")
else:
print (str(n) + "s")
sleep(1)
contadorRegressivo(n-1)
contadorRegressivo(3)
space = ""
from time import sleep
def contadorRegressivo(n):
space = "||||" * (3-n)
    print(space + "contadorRegressivo(n = {}) # Call number {}!".format(n,4-n))
if n == 0:
        print(space + " if {} == 0: (TRUE)".format(n))
print(space + " print (\"BOOM!\")")
else:
        print(space + " if {} == 0: (FALSE)".format(n))
print(space + " else:")
print(space + " print (str(n = {}) + \"s\")".format(n))
print(space + " sleep(1)")
        print(space + " contadorRegressivo(n-1) # Recursion!")
contadorRegressivo(n-1)
if n < 3:
        print(space + " # Returning to contadorRegressivo(n = {})".format(n+1))
else:
        print(" # End of execution")
contadorRegressivo(3)
n1 = 1
n2 = 1
for i in range(1, 20):
n1, n2 = n2, (n1+n2)
print (n1)
print ("...")
def Fibonacci(n):
    # Exercise solution sketch:
    # TRIVIAL part
    if n <= 2:
        return 1
    # GENERAL (recursive) part
    return Fibonacci(n-1) + Fibonacci(n-2)
Fibonacci(5)
Fibonacci(7)
def Euclides (a, b):
while b != 0:
a, b = b, a % b
return a
print(Euclides(10, 8))
print(Euclides(21, 13))
print(Euclides(63, 108))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: However, nothing prevents a function from calling itself!
Step2: Another way to compute the factorial is
Step3: The main problem with the previous example is that the function does not know when to stop!
Step4: Although recursion may seem to complicate the solution, as shown with the factorial (previous example), a recursive function can simplify the implementation of many problems, since it allows splitting a problem into smaller subproblems.
Step5: Did it work? Check
Step6: In this example, when calling contadorRegressivo(n = 3), Python will check that n is different from 0, print "3s", sleep for one second, and call contadorRegressivo(n = 2).
Step7: Exercise 1
Step9: Write code that computes the Fibonacci sequence using recursion.
Step10: Exercise 2
|
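The row above closes with an iterative Euclidean algorithm (`Euclides`). A recursive variant — a sketch for the exercise, not code from the notebook — makes the trivial/general split used throughout this lesson explicit:

```python
def gcd_recursive(a, b):
    # TRIVIAL part: gcd(a, 0) = a
    if b == 0:
        return a
    # GENERAL (recursive) part: gcd(a, b) = gcd(b, a mod b)
    return gcd_recursive(b, a % b)
```

Both versions agree on the notebook's test cases (10, 8), (21, 13) and (63, 108).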
12,053
|
<ASSISTANT_TASK:>
Python Code:
def plasma_frequency(n, q, m):
'''
Returns the plasma angular frequency for a given species.
'''
omega_p = sqrt(n*q**2/(m*epsilon_0))
return omega_p
def cyclotron_frequency(q, m, B0):
'''
Returns the cyclotron angular frequency for a given species.
'''
omega_c = np.abs(q)*B0/m
return omega_c
class Species:
def __init__(self, m, q, description=None):
self.m = m
self.q = q
self.description = description
def omega_p(self, n):
return plasma_frequency(n, self.q, self.m)
def omega_c(self, B0):
return cyclotron_frequency(self.q, self.m, B0)
def __repr__(self):
return 'Specie:{}. Mass:{} kg, charge:{} C'.format(self.description, self.m, self.q)
electron = Species(electron_mass, -elementary_charge, description='Electron')
print(electron)
deuterium = Species(physical_constants['deuteron mass'][0], +elementary_charge, description='Deuterium')
print(deuterium)
def K_perp(species, n, B0, f):
K_perp = 1
omega = 2*np.pi*f
for k, specie in enumerate(species):
K_perp -= specie.omega_p(n[k])**2 / (omega**2 - specie.omega_c(B0)**2)
return K_perp
def K_parallel(species, n, f):
K_parallel = 1
omega = 2*np.pi*f
for k,specie in enumerate(species):
K_parallel -= specie.omega_p(n[k])**2 / omega**2
return K_parallel
def K_cross(species, n, B0, f):
K_cross = 0
omega = 2*np.pi*f
for k, specie in enumerate(species):
K_cross += np.sign(specie.q) * specie.omega_c(B0) * specie.omega_p(n[k])**2 / (omega*(omega**2 - specie.omega_c(B0)**2))
return -1j*K_cross
plasma = (electron, deuterium)
n_e = 1e17 # m^-3
n_D = 1e17 # m^-3
n = (n_e, n_D)
B0 = 1 # T
f = 5e9 # Hz
print(K_perp(plasma, n, B0, f))
print(K_parallel(plasma, n, f))
print(K_cross(plasma, n, B0, f))
np.sign(electron.q)
freqs = np.logspace(6, 11, 1001)
loglog(freqs, abs(K_parallel(plasma, n, freqs)), lw=2)
loglog(freqs, abs(K_perp(plasma, n, B0, freqs)), lw=2)
loglog(freqs, abs(1j*K_cross(plasma, n, B0, freqs)), lw=2)
xlabel('f [Hz]', fontsize=16)
yticks(fontsize=16)
xticks(fontsize=16)
grid(True)
legend(('$K_\parallel$', '$K_\perp$', '$K_X$' ), fontsize=16)
axvline(deuterium.omega_c(B0)/(2*pi), lw=2, ls='--', color='k')
text(x=2.5e6, y=1e4, s='$\omega_{c,D}$', fontsize=16)
axvline(deuterium.omega_p(n_e)/(2*pi), lw=2, ls='--', color='g')
text(x=1e8, y=1e5, s='$\omega_{p,D}$', fontsize=16)
axvline(electron.omega_p(n_e)/(2*pi), lw=2, ls='--', color='g')
text(x=1e9, y=1e5, s='$\omega_{p,e}$', fontsize=16)
axvline(electron.omega_c(B0)/(2*pi), lw=2, ls='--', color='k')
text(x=1e10, y=1e1, s='$\omega_{c,e}$', fontsize=16)
def solve_dispersion_relation(plasma, n, B0, f, theta):
S = K_perp(plasma, n, B0, f)
P = K_parallel(plasma, n, f)
D = 1j*K_cross(plasma, n, B0, f)
R = S+D
L = S-D
A = S*np.sin(theta)**2 + P*np.cos(theta)**2
B = R*L*np.sin(theta)**2 + P*S*(1+np.cos(theta)**2)
C = P*R*L
p = (A,B,C)
n = np.roots(p)
return n
diel_index = np.array([solve_dispersion_relation(plasma, n, B0=3, f=f, theta=0) for f in freqs])
loglog(freqs, real(diel_index[:,0]), lw=2)
loglog(freqs, real(diel_index[:,1]), lw=2)
grid(True)
xlabel('f [Hz]', fontsize=16)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's define a convenient object
Step2: The cold plasma tensor
|
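The `omega_p` used above has a handy numerical anchor. A dependency-free sketch (constants hard-coded here rather than taken from scipy.constants) evaluating the electron plasma frequency for n_e = 1e18 m^-3:

```python
import math

# Physical constants (SI), hard-coded to keep the sketch self-contained
EPSILON_0 = 8.8541878128e-12   # vacuum permittivity [F/m]
E_CHARGE = 1.602176634e-19     # elementary charge [C]
M_E = 9.1093837015e-31         # electron mass [kg]

def electron_plasma_frequency(n_e):
    '''Electron plasma angular frequency [rad/s] for density n_e [m^-3].'''
    return math.sqrt(n_e * E_CHARGE**2 / (M_E * EPSILON_0))

omega_pe = electron_plasma_frequency(1e18)   # ~5.64e10 rad/s
f_pe = omega_pe / (2.0 * math.pi)            # ~8.98e9 Hz
```

This reproduces the familiar rule of thumb f_pe ≈ 8.98 kHz · sqrt(n_e [cm^-3]).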
12,054
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu")(inputs)
x2 = layers.Dense(64, activation="relu")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))
# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
# Prepare the training dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(batch_size)
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# Open a GradientTape to record the operations run
# during the forward pass, which enables auto-differentiation.
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
logits = model(x_batch_train, training=True) # Logits for this minibatch
# Compute the loss value for this minibatch.
loss_value = loss_fn(y_batch_train, logits)
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, model.trainable_weights)
# Run one step of gradient descent by updating
# the value of the variables to minimize the loss.
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %s samples" % ((step + 1) * batch_size))
# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train, training=True)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Update training metric.
train_acc_metric.update_state(y_batch_train, logits)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * batch_size))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val, training=False)
# Update val metrics
val_acc_metric.update_state(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
@tf.function
def test_step(x, y):
val_logits = model(x, training=False)
val_acc_metric.update_state(y, val_logits)
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_value = train_step(x_batch_train, y_batch_train)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * batch_size))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
test_step(x_batch_val, y_batch_val)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(1e-2 * tf.reduce_sum(inputs))
return inputs
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
# Add any extra losses created during the forward pass.
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
discriminator = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.GlobalMaxPooling2D(),
layers.Dense(1),
],
name="discriminator",
)
discriminator.summary()
latent_dim = 128
generator = keras.Sequential(
[
keras.Input(shape=(latent_dim,)),
# We want to generate 128 coefficients to reshape into a 7x7x128 map
layers.Dense(7 * 7 * 128),
layers.LeakyReLU(alpha=0.2),
layers.Reshape((7, 7, 128)),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
],
name="generator",
)
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)
# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
@tf.function
def train_step(real_images):
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Decode them to fake images
generated_images = generator(random_latent_vectors)
# Combine them with real images
combined_images = tf.concat([generated_images, real_images], axis=0)
# Assemble labels discriminating real from fake images
labels = tf.concat(
[tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0
)
# Add random noise to the labels - important trick!
labels += 0.05 * tf.random.uniform(labels.shape)
# Train the discriminator
with tf.GradientTape() as tape:
predictions = discriminator(combined_images)
d_loss = loss_fn(labels, predictions)
grads = tape.gradient(d_loss, discriminator.trainable_weights)
d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Assemble labels that say "all real images"
misleading_labels = tf.zeros((batch_size, 1))
# Train the generator (note that we should *not* update the weights
# of the discriminator)!
with tf.GradientTape() as tape:
predictions = discriminator(generator(random_latent_vectors))
g_loss = loss_fn(misleading_labels, predictions)
grads = tape.gradient(g_loss, generator.trainable_weights)
g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))
return d_loss, g_loss, generated_images
import os
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 1 # In practice you need at least 20 epochs to generate nice digits.
save_dir = "./"
for epoch in range(epochs):
print("\nStart epoch", epoch)
for step, real_images in enumerate(dataset):
# Train the discriminator & generator on one batch of real images.
d_loss, g_loss, generated_images = train_step(real_images)
# Logging.
if step % 200 == 0:
# Print metrics
print("discriminator loss at step %d: %.2f" % (step, d_loss))
print("adversarial loss at step %d: %.2f" % (step, g_loss))
# Save one generated image
img = tf.keras.preprocessing.image.array_to_img(
generated_images[0] * 255.0, scale=False
)
img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png"))
# To limit execution time we stop after 10 steps.
# Remove the lines below to actually train the model!
if step > 10:
break
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Writing a training loop from scratch
Step2: Introduction
Step3: Let's train it using mini-batch gradient descent with a custom training loop.
Step4: Here's our training loop
Step5: Low-level handling of metrics
Step6: Here's our training & evaluation loop
Step7: Speeding-up your training step with tf.function
Step8: Let's do the same with the evaluation step
Step9: Now, let's re-run our training loop with this compiled training step
Step10: Much faster, isn't it?
Step11: Let's build a really simple model that uses it
Step12: Here's what our training step should look like now
Step13: Summary
Step14: Then let's create a generator network,
Step15: Here's the key bit
Step16: Let's train our GAN, by repeatedly calling train_step on batches of images.
|
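Stripped of the framework, each GradientTape step above is just forward pass → gradients → parameter update. A minimal NumPy sketch of the same loop for linear regression with an MSE loss (the data, learning rate, and iteration count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2*x + 1, no noise
X = rng.normal(size=(256, 1))
y_true = 2.0 * X[:, 0] + 1.0

w, b = 0.0, 0.0
lr = 0.1
for _ in range(200):
    # Forward pass and loss (what model(...) and loss_fn(...) do above)
    y_pred = w * X[:, 0] + b
    err = y_pred - y_true
    loss = np.mean(err ** 2)
    # Analytic gradients of the MSE (what GradientTape automates)
    grad_w = 2.0 * np.mean(err * X[:, 0])
    grad_b = 2.0 * np.mean(err)
    # Equivalent of optimizer.apply_gradients(...)
    w -= lr * grad_w
    b -= lr * grad_b
```

Everything tf.function adds on top of this — tracing, graph compilation — changes how fast the step runs, not what it computes.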
12,055
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import sys
from pprint import pprint
from astropy.time import Time
from itertools import izip
import matplotlib.pyplot as plt
import tables
import numpy as np
from astropy.table import Table, join
%matplotlib inline
sys.path.insert(0, '/home/aldcroft/git/starcheck')
from starcheck import star_probs
tstart = Time('2014:001').cxcsec
filename = '/proj/sot/ska/data/acq_stats/acq_stats.h5'
with tables.openFile(filename) as h5:
acq_stats = h5.root.data.readWhere('guide_tstart > {}'.format(tstart))
acq_stats = Table(acq_stats)
acq_stats[:3]
acq_probs = star_probs.acq_success_prob(date=acq_stats['guide_tstart'],
t_ccd=acq_stats['ccd_temp'],
mag=acq_stats['mag_aca'],
color=acq_stats['color1'])
acq_stats['acq_prob'] = acq_probs
def acqid_colname(without_ms):
return 'acqid_no_MS' if without_ms else 'acqid'
probs = acq_stats.copy() # for compatibility with previous convention in notebook
probs[acqid_colname(True)] = probs['acqid'] | ((probs['img_func'] == 'star')
& ~probs['ion_rad'] & ~probs['sat_pix'])
# Group by obsid for later
pg = probs.group_by('obsid')
def get_probs_groups_dict():
return {grp['obsid'][0].tolist(): grp for grp in pg.groups}
def get_n_expected(without_ms=False):
obsids = pg['obsid'].groups.aggregate(lambda x: x[0])
n_expected = pg['acq_prob'].groups.aggregate(np.sum)
n_acqs = pg[acqid_colname(without_ms)].groups.aggregate(np.sum)
return n_acqs, n_expected, obsids
def get_n_or_fewer_expected(n_or_fewer, without_ms=False):
lte_probs = []
for i0, i1 in izip(pg.groups.indices[:-1], pg.groups.indices[1:]):
n_acq_probs, n_or_fewer_probs = star_probs.prob_n_acq(pg['acq_prob'][i0:i1])
lte_probs.append(n_or_fewer_probs[n_or_fewer])
return np.array(lte_probs)
def plot_n_expected(n_acqs, n_expected):
plt.clf()
plt.plot(n_expected, n_acqs + np.random.uniform(-0.3, 0.3, size=len(n_acqs)), '.', alpha=0.3)
plt.plot([0,8], [0,8], '--m')
plt.ylim(-0.1, 8.5)
plt.xlim(-0.1, 8.5)
plt.grid()
def plot_acq_prob_vs_actual(data=probs, verbose=False, without_ms=False):
bins = np.array([0, 0.4, 0.5, 0.6, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0])
p_actuals = []
p_acq_probs = []
for p0, p1 in zip(bins[:-1], bins[1:]):
ok = (data['acq_prob'] >= p0) & (data['acq_prob'] < p1)
n = np.sum(ok)
if n > 10:
pok = data[acqid_colname(without_ms)][ok]
p_actuals.append(np.sum(pok) / n)
pok = data['acq_prob'][ok]
p_acq_probs.append(np.mean(pok))
if verbose:
print(p0, p1, len(pok))
plt.clf()
plt.plot(p_acq_probs, p_actuals, '.-')
plt.grid()
plt.plot([0, 1], [0, 1], '--r')
plt.title('Actual vs predicted ACQ success{}'
.format(' (without MS)' if without_ms else ''))
plt.xlabel('Mean predicted acq probability')
plt.ylabel('Actual acq success rate')
plot_acq_prob_vs_actual();
plot_acq_prob_vs_actual(without_ms=True);
def plot_n_expected_per_obsid(without_ms=False):
n_acqs, n_expected, obsids = get_n_expected(without_ms)
plt.plot(n_expected, n_acqs + np.random.uniform(-0.1, 0.1, size=len(n_acqs)), '.', alpha=0.5)
plt.xlim(0, 8.5)
plt.ylim(0, 8.5)
plt.grid()
plt.plot([0,8.5], [0, 8.5], '--r')
plt.plot([2,8.5], [0, 6.5], '--m')
plt.xlabel('Predicted acq stars')
plt.ylabel('Actual acq stars')
plt.title('Acq stars per obsid');
plot_n_expected_per_obsid()
plot_n_expected_per_obsid(without_ms=True)
def get_acq_probs_table(probs):
n_acq_probs, n_or_fewer_probs = star_probs.prob_n_acq(probs)
dat = Table([np.arange(9), n_acq_probs * 100, n_or_fewer_probs * 100],
names=['n_stars', 'P(n)', 'P(n_or_fewer)'])
dat['P(n)'].format = '{0:.1f}'
dat['P(n_or_fewer)'].format = '{0:.1f}'
return dat
get_acq_probs_table([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
get_acq_probs_table([0.985, 0.985, 0.985, 0.985, 0.015, 0.015, 0.015, 0.015])
def plot_lte_per_obsid(n_lte, without_ms=False):
n_acqs, n_expected, obsids = get_n_expected(without_ms)
lte_probs = get_n_or_fewer_expected(n_lte, without_ms)
plt.semilogx(lte_probs,
n_acqs + np.random.uniform(-0.1, 0.1, size=len(n_acqs)),
'.', alpha=0.5)
# plt.xlim(0, 1)
plt.ylim(0, 8.5)
plt.grid()
plt.xlabel('Probability for {} or fewer stars'.format(n_lte))
plt.ylabel('Actual acq stars')
plt.title('P({} or fewer stars) vs. N actual acq stars'
.format(n_lte));
print('Expected # of {} or fewer: {:.2f}'.format(n_lte, np.sum(lte_probs)))
print('Actual # of {} or fewer: {:d}'.format(n_lte, np.count_nonzero(n_acqs <= n_lte)))
plot_lte_per_obsid(2, without_ms=False)
plot_lte_per_obsid(3, without_ms=False)
plot_lte_per_obsid(2, without_ms=True)
plot_lte_per_obsid(1, without_ms=False)
lte_2_probs = get_n_or_fewer_expected(2, without_ms=False)
n_acqs, n_expected, obsids = get_n_expected(without_ms=False)
ok = lte_2_probs > 0.01
Table([obsids[ok], n_acqs[ok], n_expected[ok], lte_2_probs[ok]],
names=['obsid', 'n_acq_actual', 'n_expected', 'P(lte2)'])
import webbrowser
def open_mica(obsid):
webbrowser.open_new_tab('http://kadi.cfa.harvard.edu/mica/?obsid_or_date={}'.format(obsid))
open_mica(15662)
open_mica(17631)
twos = n_acqs == 2
print(obsids[twos])
pg.groups[twos]
threes = n_acqs == 3
print(obsids[threes])
low_n_exp = n_expected < 4.5
Table([obsids[low_n_exp], n_expected[low_n_exp], n_acqs[low_n_exp]],
names=['obsid', 'n_expected', 'n_acq_actual'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mean actual success rate vs. predicted within bins
Step2: Actual number of stars per obsid vs. expected
Step3: Why mean expected number of stars is not ideal
Step4: Eight faint stars with P_acq = 0.5
Step5: Four bright stars and four very faint stars
Step6: Actual number of stars per obsid vs. P(2 or fewer)
|
12,056
|
<ASSISTANT_TASK:>
Python Code:
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Roman Goj <roman.goj@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Set picks
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# Read epochs
event_id, tmin, tmax = 1, -0.2, 0.5
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, mag=4e-12))
evoked = epochs.average()
# Read forward operator
forward = mne.read_forward_solution(fname_fwd)
csd = csd_morlet(epochs, tmin=0, tmax=0.5, decim=20,
frequencies=np.linspace(6, 10, 4),
n_cycles=2.5) # short signals, must live with few cycles
# Compute DICS spatial filter and estimate source power.
filters = make_dics(epochs.info, forward, csd, reg=0.5, verbose='error')
print(filters)
stc, freqs = apply_dics_csd(csd, filters)
message = 'DICS source power in the 8-12 Hz frequency band'
brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,
time_label=message)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the raw data
Step2: Computing the cross-spectral density matrix at 4 evenly spaced frequencies
|
12,057
|
<ASSISTANT_TASK:>
Python Code:
from pymc3 import Model, Normal, Uniform, NUTS, sample, find_MAP, traceplot, summary, df_summary, trace_to_dataframe
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
x = np.array([1,2,3,4,5,6,7,8,9,10])
y =np.array([5.19, 6.56, 9.19, 8.09, 7.6, 7.08, 6.74, 9.3, 9.98, 11.5])
df = pd.DataFrame({'x':x, 'y':y})
plt.scatter(x,y)
plt.xlabel('x')
plt.ylabel('y')
basic_model = Model()
with basic_model:
# Priors for unknown model parameters
alpha = Normal('alpha', mu=0, sd=100)
beta = Normal('beta', mu=0, sd=100)
sigma = Uniform('sigma', lower=0, upper=10000)
# Expected value of outcome
mu = alpha + beta*x
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=y)
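As a quick sanity check (not part of the original analysis), ordinary least squares on the same data should land close to the fitted alpha (intercept) and beta (slope):

```python
# Sanity-check sketch: OLS estimates should be close to the
# MAP estimates of alpha and beta for this linear model.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.array([5.19, 6.56, 9.19, 8.09, 7.6, 7.08, 6.74, 9.3, 9.98, 11.5])
beta_ols, alpha_ols = np.polyfit(x, y, 1)  # slope, intercept
print(alpha_ols, beta_ols)
```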
#find_MAP can be used to establish a useful starting point for the sampling
map_estimate = find_MAP(model=basic_model)
print(map_estimate)
import scipy
with basic_model:
# obtain starting values via MAP
start = find_MAP(fmin=scipy.optimize.fmin_powell)
# draw 2000 posterior samples
trace = sample(2000, start=start)
traceplot(trace, lines={k: v['mean'] for k, v in df_summary(trace).iterrows()})
summary(trace)
df_summary(trace, alpha =0.1)
def plot_posterior_cr(mdl, trc, rawdata, xlims, npoints=1000):
'''
Convenience fn: plot the posterior predictions from mdl given trcs
'''
## extract traces
trc_mu = trace_to_dataframe(trc)[['alpha','beta']]
trc_sd = trace_to_dataframe(trc)['sigma']
## recreate the likelihood
x = np.linspace(xlims[0], xlims[1], npoints).reshape((npoints,1))
X = x ** np.ones((npoints,2)) * np.array([0, 1])
like_mu = np.dot(X,trc_mu.T) + trc_mu.median()['alpha']
like_sd = np.tile(trc_sd.T,(npoints,1))
like = np.random.normal(like_mu, like_sd)
## Calculate credible regions and plot over the datapoints
dfp = pd.DataFrame(np.percentile(like,[2.5, 25, 50, 75, 97.5], axis=1).T
,columns=['025','250','500','750','975'])
dfp['x'] = x
pal = sns.color_palette('Purples')
f, ax1d = plt.subplots(1,1, figsize=(7,7))
ax1d.fill_between(dfp['x'], dfp['025'], dfp['975'], alpha=0.5
,color=pal[1], label='CR 95%')
ax1d.fill_between(dfp['x'], dfp['250'], dfp['750'], alpha=0.4
,color=pal[4], label='CR 50%')
ax1d.plot(dfp['x'], dfp['500'], alpha=0.5, color=pal[5], label='Median')
_ = plt.legend()
_ = ax1d.set_xlim(xlims)
_ = sns.regplot(x='x', y='y', data=rawdata, fit_reg=False
,scatter_kws={'alpha':0.8,'s':80, 'lw':2,'edgecolor':'w'}, ax=ax1d)
xlims = (df['x'].min() - np.ptp(df['x'])/10
,df['x'].max() + np.ptp(df['x'])/10)
plot_posterior_cr(basic_model, trace, df, xlims)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create dataset and plot
Step2: Create the model
Step3: Priors....
Step4: Find and assign the priors, then perform MCMC using the NUTS sampler for 2000 iterations
Step5: Inspect the traces
|
12,058
|
<ASSISTANT_TASK:>
Python Code:
import sys
fishes = ["teri", "kakap", "tongkol"]
items = (" sword"," halberd"," axe"," knife"," blade")
for fish in fishes:
print("ini ikan "+fish)
for item in items:
print(item)
print(fishes[0])
print(items[-1])
del(fishes[-1])
for fish in fishes:
print("ini ikan "+fish)
name = "John"
age = 21
cars = "Nissan GTR"
years = 2018
print("My name is %s" % name)
print("Now, I am %d years old" % age)
print("I ride a %s %d" % (cars, years))
pii = "3.141592653589793238462643"
fishes = ("teri", "kakap", "tongkol")
# Get a data
print(pii[0])
print(pii[-2])
print(pii[-3])
print(fishes[2])
# Slice
print(pii[10:15])
print(fishes[1:2])
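Slices also take an optional step, and either endpoint may be omitted — a small extra sketch:

```python
pii = "3.141592653589793238462643"
print(pii[:4])    # "3.14" — from the start up to (not including) index 4
print(pii[::5])   # every 5th character
print(pii[::-1])  # the string reversed
```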
#!/usr/bin/python
# Filename: using_dict.py
# 'AB' is short for 'A'ddress'B'ook
AB = { 'Swaroop' : 'swaroopch@byteofpython.info',
'Larry' : 'larry@wall.org',
'Matsumoto' : 'matz@ruby-lang.org',
'Spammer' : 'spammer@hotmail.com'
}
print("Swaroop's address is %s" % AB['Swaroop'])
# Adding a key/value pair
AB['Guido'] = 'guido@python.org'
# Deleting a key/value pair
del AB['Spammer']
print('\nThere are %d contacts in the address-book\n' % len(AB))
for name, address in AB.items():
print('Contact %s at %s' % (name, address))
if 'Guido' in AB: # OR AB.has_key('Guido')
print("\nGuido's address is %s" % AB['Guido'])
cities = {'CA':'San Francisco','MI':'Detroit','FL':'Jacksonville'}
cities['NY'] = 'New York'
cities['OR'] = 'Portland'
def find_city(themap, state):
if state in themap:
return themap[state]
else:
return "Not found."
cities['_find'] = find_city
while True:
print("State? (ENTER to quit)")
state = input("> ")
if not state:
break
# this line is the most important ever! study!
city_found = cities['_find'](cities, state)
print(city_found)
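Storing the lookup function inside the dict works, but note that `dict.get` already provides the same default-if-missing behavior — a quick sketch:

```python
# dict.get gives the same "Not found." fallback without a helper function.
cities = {'CA': 'San Francisco', 'MI': 'Detroit', 'FL': 'Jacksonville'}
print(cities.get('MI', 'Not found.'))  # "Detroit"
print(cities.get('TX', 'Not found.'))  # "Not found."
```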
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tuple
Step2: Sequences
Step3: Dictionary
Step4: With it you can see that if we know the key, we can retrieve the associated value. Now look at the following example (Learn Python the Hard Way). We can also add another entry to our dictionary with dictionary['key'] = 'value'.
|
12,059
|
<ASSISTANT_TASK:>
Python Code:
def NumberOfSolutions(x, y, z, n):
    ans = 0
    for i in range(x + 1):
        for j in range(y + 1):
            temp = n - i - j
            if temp >= 0:
                temp = min(temp, z)
                ans += temp + 1
    return ans
if __name__ == "__main__":
    x, y, z, n = 1, 2, 3, 4
    print(NumberOfSolutions(x, y, z, n))
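The loop above counts non-negative solutions of i + j + k <= n with i <= x, j <= y, k <= z. A brute-force cross-check over all triples (a sketch, fine for small bounds) confirms the count:

```python
from itertools import product

def number_of_solutions_brute(x, y, z, n):
    # Enumerate every triple and count those satisfying i + j + k <= n.
    return sum(1 for i, j, k in product(range(x + 1),
                                        range(y + 1),
                                        range(z + 1))
               if i + j + k <= n)

print(number_of_solutions_brute(1, 2, 3, 4))  # → 20, same as NumberOfSolutions
```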
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,060
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
model.evaluate(x_test, y_test)
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_raw(bytes_input, tf.uint8)
resized = tf.reshape(decoded, shape=(32, 32, 3))
recast = tf.cast(resized, tf.float32)
rescale = tf.cast(recast / 255.0, tf.float32)
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
tf.saved_model.save(
model,
model_path_to_deploy,
signatures={
"serving_default": serving_fn,
},
)
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"cifar10-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
ENDPOINT_NAME = "cifar10_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
MIN_NODES = 1
MAX_NODES = 1
DEPLOYED_NAME = "cifar10_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
test_image = x_test[0]
test_label = y_test[0]
print(test_image.shape)
import base64
image_bytes = (test_image * 255).astype(np.uint8).tobytes()
b64str = base64.b64encode(np.ascontiguousarray(image_bytes)).decode("utf-8")
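Before sending the request it can be worth verifying that the base64 payload round-trips back to the original raw bytes. A stdlib-only sketch, using a small stand-in buffer instead of the real image bytes:

```python
import base64

raw = bytes(range(12))                       # stand-in for the image bytes
b64 = base64.b64encode(raw).decode("utf-8")
assert base64.b64decode(b64) == raw          # lossless round trip
print(len(raw), len(b64))                    # base64 inflates size by ~4/3
```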
def predict_image(image, endpoint, parameters_dict):
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{serving_input: {"b64": image}}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters_dict
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", prediction)
predict_image(b64str, endpoint_id, None)
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Vertex constants
Step12: Hardware Accelerators
Step13: Container (Docker) image
Step14: Machine Type
Step15: Tutorial
Step16: Train a model
Step17: Prepare your disk specification
Step18: Define the worker pool specification
Step19: Assemble a job specification
Step20: Examine the training package
Step21: Task.py contents
Step22: Store training script on your Cloud Storage bucket
Step23: Train the model
Step24: Now get the unique identifier for the custom job you created.
Step25: Get information on a custom job
Step26: Deployment
Step27: Load the saved model
Step28: Evaluate the model
Step29: Perform the model evaluation
Step30: Upload the model for serving
Step31: Get the serving function signature
Step32: Upload the model
Step33: Get Model resource information
Step34: Deploy the Model resource
Step35: Now get the unique identifier for the Endpoint resource you created.
Step36: Compute instance scaling
Step37: Deploy Model resource to the Endpoint resource
Step38: Make a online prediction request
Step39: Send the prediction request
Step40: Undeploy the Model resource
Step41: Cleaning up
|
12,061
|
<ASSISTANT_TASK:>
Python Code:
import cobra.test
import os
from os.path import join
data_dir = cobra.test.data_dir
print("mini test files: ")
print(", ".join(i for i in os.listdir(data_dir) if i.startswith("mini")))
textbook_model = cobra.test.create_test_model("textbook")
ecoli_model = cobra.test.create_test_model("ecoli")
salmonella_model = cobra.test.create_test_model("salmonella")
cobra.io.read_sbml_model(join(data_dir, "mini_fbc2.xml"))
cobra.io.write_sbml_model(textbook_model, "test_fbc2.xml")
cobra.io.read_sbml_model(join(data_dir, "mini_cobra.xml"))
cobra.io.write_sbml_model(
textbook_model, "test_cobra.xml", use_fbc_package=False)
cobra.io.load_json_model(join(data_dir, "mini.json"))
cobra.io.save_json_model(textbook_model, "test.json")
cobra.io.load_yaml_model(join(data_dir, "mini.yml"))
cobra.io.save_yaml_model(textbook_model, "test.yml")
cobra.io.load_matlab_model(
join(data_dir, "mini.mat"), variable_name="mini_textbook")
cobra.io.load_matlab_model(join(data_dir, "mini.mat"))
cobra.io.save_matlab_model(textbook_model, "test.mat")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SBML
Step2: There are other dialects of SBML prior to FBC 2 which have previously been used to encode COBRA models. The primary one is the "COBRA" dialect, which used the "notes" fields in SBML files.
Step3: JSON
Step4: YAML
Step5: MATLAB
Step6: If the mat file contains only a single model, cobra can figure out which variable to read from, and the variable_name parameter is unnecessary.
Step7: Saving models to mat files is also relatively straightforward
|
12,062
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from qutip.qip.circuit import QubitCircuit
from qutip.qip.operations import gate_sequence_product
import numpy as np
from qutip.qip.models.spinchain import CircularSpinChain
from qutip.qip.models.spinchain import LinearSpinChain
from qutip.qip.device import CircularSpinChain, LinearSpinChain
from qutip.qip.noise import RandomNoise
N = 3
qc = QubitCircuit(N)
qc.add_gate("CNOT", targets=[0], controls=[2])
U_ideal = gate_sequence_product(qc.propagators())
U_ideal
p1 = CircularSpinChain(N, correct_global_phase=True)
U_list = p1.run(qc)
U_physical = gate_sequence_product(U_list)
U_physical.tidyup(atol=1e-5)
(U_ideal - U_physical).norm()
p1.qc0.gates
p1.qc1.gates
p1.qc2.gates
p1.get_full_tlist()
p1.plot_pulses();
p2 = LinearSpinChain(N, correct_global_phase=True)
U_list = p2.run(qc)
U_physical = gate_sequence_product(U_list)
U_physical.tidyup(atol=1e-5)
(U_ideal - U_physical).norm()
p2.qc0.gates
p2.qc1.gates
p2.qc2.gates
p2.get_full_tlist()
p2.plot_pulses();
from qutip import basis, fidelity
N = 1
plus_state = (basis(2,0) + basis(2,1)).unit()
qc = QubitCircuit(N=N)
qc.add_gate("SNOT", targets=0)
processor = LinearSpinChain(N=N)
processor.load_circuit(qc)
end_state = processor.run_state(init_state=basis(2, 0), analytical=False).states[-1]
fidelity(end_state, plus_state)
processor.add_noise(RandomNoise(rand_gen=np.random.normal, dt=0.1, loc=0.1, scale=0.2))
end_state = processor.run_state(init_state=basis(2, 0), analytical=False).states[-1]
fidelity(end_state, plus_state)
from qutip.bloch import Bloch
b = Bloch()
b.add_states([end_state, plus_state])
b.make_sphere()
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If your qutip version is lower than 4.4.1 please run the following cell
Step2: Otherwise please run this cell
Step3: Hamiltonian
Step4: The non-adjacent interactions are broken into a series of adjacent ones by the program automatically.
Step5: Circular Spin Chain Model Implementation
Step6: The results obtained from the physical implementation agree with the ideal result.
Step7: The gates are first converted to gates with adjacent interactions, moving in the direction with the least number of qubits in between.
Step8: They are then converted into the basis [ISWAP, RX, RZ]
Step9: The time for each applied gate
Step10: The pulse can be plotted as
Step11: Linear Spin Chain Model Implementation
Step12: The results obtained from the physical implementation agree with the ideal result.
Step13: The gates are first converted to gates with adjacent interactions, moving in the direction with the least number of qubits in between.
Step14: They are then converted into the basis [ISWAP, RX, RZ]
Step15: The time for each applied gate
Step16: The pulse can be plotted as
Step17: Numerical simulation
Step18: As the control noise is coherent noise, the result of this noise is still a pure state. Therefore, we can visualize it on a Bloch sphere.
Step19: Software versions
|
12,063
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
a = torch.rand((10, 1000, 23))
lengths = torch.randint(1000, (10,))
for i_batch in range(10):
a[i_batch, :lengths[i_batch], :] = 2333
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,064
|
<ASSISTANT_TASK:>
Python Code:
A = np.eye(5)
print(A)
datafile = 'ex1\\ex1data1.txt'
df = pd.read_csv(datafile, header=None, names=['Population', 'Profit'])
def plot_data(x, y):
plt.figure(figsize=(10, 6))
plt.plot(x, y, '.', label='Training Data')
plt.xlabel("Population of City in 10,000s", fontsize=16)
plt.ylabel("Profit in $10,000s", fontsize=16)
import os
import sys
import datetime as dt
fp_list_master = ['C:', 'Users', 'szahn', 'Dropbox', 'Statistics & Machine Learning', 'coursera_ml_notes']
fp = os.sep.join(fp_list_master)
fp_fig = fp + os.sep + 'LaTeX Notes' + os.sep + 'Figures'
print(os.path.isdir(fp), os.path.isdir(fp_fig))
plot_data(df['Population'], df['Profit'])
#plt.savefig(fp_fig + os.sep + 'linreg_hw_2_1_plot_data.pdf')
# set the number of training examples
m = len(df['Population'])
# create an array from the dataframe (missing column for x_0 values)
X = df['Population'].values
# add in the first column of the array for x_0 values
X = X[:, np.newaxis]
X = np.insert(X, 0, 1, axis=1)
y = df['Profit'].values
y = y[:, np.newaxis]
theta_values = np.array([[0.], [0]])
print(theta_values.shape)
print(X.shape, end='\n\n')
_ = np.dot(X, theta_values)
print(_.shape)
# define the hypothesis
def h(theta, X):
"""Takes the dot product of the matrix X and the vector theta,
yielding a predicted result."""
return np.dot(X, theta)
def compute_cost(X, y, theta):
"""Takes the design matrix X and output vector y, and computes the cost of
the parameters stored in the vector theta.
The dimensions must be as follows:
- X must be m x n
- y must be m x 1
- theta must be n x 1
"""
m = len(y)
J = 1 / (2*m) * np.dot((np.dot(X, theta) - y).T, (np.dot(X, theta) - y))
return J
# define column vector theta = [[0], [0]]
theta = np.zeros((2, 1))
# compute the cost function for our existing X and y, with our new theta vector
# verify that the cost for our theta of zeros is 32.07
compute_cost(X, y, theta)
def gradient_descent(X, y, theta, alpha, num_iters):
m = len(y)
J_history = []
theta_history = []
for i in range(num_iters):
J_history.append(float(compute_cost(X, y, theta)))
theta_history.append(theta)
theta = theta - (alpha / m) * np.dot(X.T, (np.dot(X, theta) - y))
return theta, J_history, theta_history
# set up some initial parameters for gradient descent
theta_initial = np.zeros((2, 1))
iterations = 1500
alpha = 0.01
theta_final, J_hist, theta_hist = gradient_descent(X, y,
theta_initial,
alpha, iterations)
def plot_cost_convergence(J_history):
abscissa = list(range(len(J_history)))
ordinate = J_history
plt.figure(figsize=(10, 6))
plt.plot(abscissa, ordinate, '.')
plt.title('Convergence of the Cost Function', fontsize=24)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.xlabel('Iteration Number', fontsize=18)
plt.ylabel('Cost Function', fontsize=18)
plt.xlim(min(abscissa) - max(abscissa) * 0.05, 1.05 * max(abscissa))
plot_cost_convergence(J_hist)
plt.ylim(4.3, 6.9)
#plt.savefig(fp_fig + os.sep + 'linreg_hw_2_4_viz_j_of_theta.pdf')
plot_data(df['Population'], df['Profit'])
x_min = min(df.Population)
x_max = max(df.Population)
abscissa = np.linspace(x_min, x_max, 50)
hypot = lambda x: theta_final[0] + theta_final[1] * x
ordinate = [hypot(x) for x in abscissa]
plt.plot(abscissa, ordinate, label='Hypothesis h(x) = {:.2f} + {:.2f}x'.format(
float(theta_final[0]), float(theta_final[1])), color='indianred')
plt.legend(loc=4, frameon=True, fontsize=16)
# plt.savefig(fp_fig + os.sep + 'linreg_hw_2_3_plot_lin_reg.pdf')
from mpl_toolkits.mplot3d import axes3d, Axes3D
from matplotlib import cm
fig = plt.figure(figsize=(12, 12))
ax = fig.gca(projection='3d')
theta_0_vals = np.linspace(-10, 10, 100)
theta_1_vals = np.linspace(-1, 4, 100)
theta1, theta2, cost = [], [], []
for t0 in theta_0_vals:
for t1 in theta_1_vals:
theta1.append(t0)
theta2.append(t1)
theta_array = np.array([[t0], [t1]])
cost.append(compute_cost(X, y, theta_array))
scat = ax.scatter(theta1, theta2, cost,
c=np.abs(cost), cmap=plt.get_cmap('rainbow'))
plt.xlabel(r'$\theta_0$', fontsize=24)
plt.ylabel(r'$\theta_1$', fontsize=24)
plt.title(r'Cost Function by $\theta_0$ and $\theta_1$', fontsize=24)
theta_0_hist = [x[0] for x in theta_hist]
theta_1_hist = [x[1] for x in theta_hist]
theta_hist_end = len(theta_0_hist) - 1
fig = plt.figure(figsize=(12, 12))
ax = fig.gca(projection='3d')
theta_0_vals = np.linspace(-10, 10, 100)
theta_1_vals = np.linspace(-1, 4, 100)
theta1, theta2, cost = [], [], []
for t0 in theta_0_vals:
for t1 in theta_1_vals:
theta1.append(t0)
theta2.append(t1)
theta_array = np.array([[t0], [t1]])
cost.append(compute_cost(X, y, theta_array))
scat = ax.scatter(theta1, theta2, cost,
c=np.abs(cost), cmap=plt.get_cmap('rainbow'))
plt.plot(theta_0_hist, theta_1_hist, J_hist, 'r',
label='Cost Minimization Path')
plt.plot(theta_0_hist[0], theta_1_hist[0], J_hist[0], 'ro',
label='Cost Minimization Start')
plt.plot(theta_0_hist[theta_hist_end],
theta_1_hist[theta_hist_end],
J_hist[theta_hist_end], 'co', label='Cost Minimization Finish')
plt.xlabel(r'$\theta_0$', fontsize=24)
plt.ylabel(r'$\theta_1$', fontsize=24)
plt.title(r'Cost Function Minimization', fontsize=24)
plt.legend(fontsize=12)
plt.savefig(fp_fig + os.sep + 'linreg_hw_2_4_plot_surface_plot.pdf')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 Linear Regression with One Variable
Step2: 2.2 Gradient Descent
Step3: Let's make the (totally random) guess that $\theta_0$ = 0 and $\theta_1$ = 0. In that case, we have the following output from the hypothesis function.
Step6: 2.2.3 Computing the Cost $J(\theta)$
Step8: Gradient Descent
Step9: After running the batch gradient descent algorithm, we can plot the convergence of $J(\theta)$ over the number of iterations.
Step10: 2.4 Visualizing $J(\theta)$
|
12,065
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from collections import Counter
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set()
# Importing Games
from game_types import PrisonersDilemma
from game_types import Coordination
# Importing Strategies
from strategies import chaos as c
from strategies import defect as d
from strategies import machine_learning as ml
from strategies import pavlov as p
from strategies import grim as g
# Create agents and play the game for 10000 iteratations
agent1 = c.Chaos()
agent2 = d.Defect()
game = PrisonersDilemma(agent1, agent2)
game.play(10000)
# Grab Data
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('Chaos Agent Vs Defect Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add([1,2,5],width-.05))
_ = ax.set_xticklabels(('1', '2', '5'))
_ = ax.legend((a1[0], a2[0]), ('Chaos Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'Defect Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
# play the game
agent1 = g.Grim()
agent2 = p.Pavlov()
game = PrisonersDilemma(agent1, agent2)
game.play(10000)
# get data from game
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('Grim Agent Vs Pavlov Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add([0,4,5],width/2))
_ = ax.set_xticklabels(('0', '4', '5'))
_ = ax.legend((a1[0], a2[0]), ('Grim Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'Pavlov Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
# Play the Game
agent1 = ml.QLearn()
agent2 = p.Pavlov()
game = PrisonersDilemma(agent1, agent2)
game.play(10000)
# Get Data from Game
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('QLearning Agent Vs Pavlov Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add([1,2,4,5],width/2))
_ = ax.set_xticklabels(('1', '2', '4', '5'))
_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'Pavlov Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
print(agent1_util_vals, agent2_util_vals)
# Play the Game
N = 10000
agent1 = ml.QLearn()
agent2 = c.Chaos()
game = PrisonersDilemma(agent1, agent2)
game.play(N)
# Get Data from Game
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('QLearning Agent Vs Chaos Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add(x2,width/2))
_ = ax.set_xticklabels(('1', '2', '4', '5'))
_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'Chaos Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
# Play the Game
N = 10000
agent1 = ml.QLearn()
agent2 = ml.QLearn()
game = PrisonersDilemma(agent1, agent2)
game.play(N)
# Get Data from Game
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('QLearning Agent Vs QLearning Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add(x2,width/2))
_ = ax.set_xticklabels(('1', '2', '4', '5'))
_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'QLearning Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
# Play the Game
N = 200000 # Play a longer game
# agent 1's parameters are bit more short sighted
agent1 = ml.QLearn(decay=0.4, lr=0.03, explore_period=30000, explore_random_prob=0.4, exploit_random_prob=0.2)
# agent 2's parameters think more about the future
agent2 = ml.QLearn(decay=0.6, lr=0.2, explore_period=40000, explore_random_prob=0.4, exploit_random_prob=0.1)
game = PrisonersDilemma(agent1, agent2)
game.play(N)
# Get Data from Game
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('QLearning Agent Vs QLearning Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add(x2,width/2))
_ = ax.set_xticklabels(('1', '2', '4', '5'))
_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'QLearning Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
print(agent1_util_vals, agent2_util_vals)
# Create agents and play the game for 10000 iteratations
agent1 = c.Chaos()
agent2 = d.Defect()
game = Coordination(agent1, agent2)
game.play(10000)
# Grab Data
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('Chaos Agent Vs Defect Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add([0,1, 2],width-.05))
_ = ax.set_xticklabels(('0','1','2'))
_ = ax.legend((a1[0], a2[0]), ('Chaos Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'Defect Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
print(agent1_util_vals,agent2_util_vals)
# play the game
agent1 = g.Grim()
agent2 = p.Pavlov()
game = Coordination(agent1, agent2)
game.play(10000)
# get data from game
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('Grim Agent Vs Pavlov Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add([0,1,2],width/2))
_ = ax.set_xticklabels(('0', '1', '2'))
_ = ax.legend((a1[0], a2[0]), ('Grim Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'Pavlov Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
print(agent1_util_vals, agent2_util_vals)
# Play the Game
N = 10000
agent1 = ml.QLearn()
agent2 = c.Chaos()
game = Coordination(agent1, agent2)
game.play(N)
# Get Data from Game
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('QLearning Agent Vs Chaos Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add(x2,width/2))
_ = ax.set_xticklabels(('0', '1', '2'))
_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'Chaos Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
# Play the Game
N = 10000
agent1 = ml.QLearn()
agent2 = ml.QLearn()
game = Coordination(agent1, agent2)
game.play(N)
# Get Data from Game
agent1_util_vals = Counter(game.data['A'])
agent2_util_vals = Counter(game.data['B'])
a1_total_score = sum(game.data['A'])
a2_total_score = sum(game.data['B'])
# Plot the results
x1, y1, x2, y2 = [], [], [], []
for i, j in zip(agent1_util_vals, agent2_util_vals):
x1.append(i)
y1.append(agent1_util_vals[i])
x2.append(j)
y2.append(agent2_util_vals[j])
fig, ax = plt.subplots(figsize=(12,6))
width = 0.35
a1 = ax.bar(x1, y1, width, color='#8A9CEF')
a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange')
_ = ax.set_title('QLearning Agent Vs QLearning Agent')
_ = ax.set_ylabel('Number of Games')
_ = ax.set_xlabel('Utility Values')
ax.set_xticks(np.add(x2,width/2))
_ = ax.set_xticklabels(('0','1', '2'))
_ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)),
'QLearning Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1))
plt.show()
print(agent1_util_vals, agent2_util_vals)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulating Games
Step2: In this scenario defecting is the dominant strategy: the agent is better off defecting no matter what other agents do.
Step3: Both strategies start out cooperating. Grim never defects because Pavlov never defects; Pavlov never loses a round, so it doesn't change its strategy.
Step4: Pavlov's simple rules outperform Q Learning here, which is interesting.
Step5: Q-Learning VS Chaos
Step6: Q Learning significantly outperforms the Chaos Agent because the Q Learning Agent learns pretty quickly that defecting yields the highest expected utility (discussed more in the appendix).
Step7: Here both QLearning Agents tend to mirror each other. I assume this is because they have the same initial parameters, which will yield the same expected utility.
Step8: (I haven't had the time to look through the actions of both agents, but one is short-sighted and the other is not, which yields the orange QLearning agent a higher total utility score.)
Step9: Iterated Coordination Game
Step10: Here Defect isn't a dominant strategy. The Defect agent only receives a non-zero utility value if the Chaos agent sees the movie it intended to see. A mixed strategy is needed.
Step11: Grim VS Pavlov
Step12: Grim loses in the first round and always goes to the other movie; the Pavlov Agent even won a round where they both ended up at the same movie and never changed its strategy.
Step13: Q-Learning Vs Chaos
Step14: This is different from Prisoner's Dilemma: the QLearning Agent is trying to cooperate with the Chaos agent but can never predict which movie it is going to.
Step15: Still playing around with this one, but both do pretty badly here.
|
12,066
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import scipy.interpolate
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
xmin, xmax = 0., 4*np.pi
x = np.linspace(xmin, xmax, 10)
y = np.sin(x)
x2 = np.linspace(xmin, xmax, 100)
# Linear interpolation with extrapolation
f = scipy.interpolate.interp1d(x, y,
kind='linear',
fill_value="extrapolate")
y2 = f(x2)
plt.plot(x, y, "o:b", label="original")
plt.plot(x2, y2, ".-r", label="interpolated")
plt.legend();
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splrep.html#scipy.interpolate.splrep
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splprep.html#scipy.interpolate.splprep
spl = scipy.interpolate.splrep(x, y)
y2 = scipy.interpolate.splev(x2, spl)
plt.plot(x, y, "o:b", label="original")
plt.plot(x2, y2, ".-r", label="interpolated")
plt.legend();
spl = scipy.interpolate.splrep(x, y,
xb=x[0], xe=x[-1], # The interval to fit
#s=0., # A smoothing factor
k=1) # The degree fo the spline fit
y2 = scipy.interpolate.splev(x2, spl)
plt.plot(x, y, "o:b", label="original")
plt.plot(x2, y2, ".-r", label="interpolated")
plt.legend();
# Spline linear interpolation with extrapolation (should be the same than spline1...)
f = scipy.interpolate.interp1d(x, y,
kind='slinear',
fill_value="extrapolate")
y2 = f(x2)
plt.plot(x, y, "o:b", label="original")
plt.plot(x2, y2, ".-r", label="interpolated")
plt.legend();
# Build data
x = np.arange(-1*np.pi, 1*np.pi, np.pi/4)
y = np.arange(-1*np.pi, 1*np.pi, np.pi/4)
xx, yy = np.meshgrid(x, y)
z = np.sin(xx) + np.sin(yy)
# Plot data
fig = plt.figure(figsize=(12, 8))
ax = axes3d.Axes3D(fig)
#ax.plot_wireframe(xx, yy, z)
surf = ax.plot_surface(xx,
yy,
z,
cmap='gnuplot2', # 'jet' # 'gnuplot2'
rstride=1,
cstride=1,
shade=False)
plt.title("Original data")
plt.show();
f = scipy.interpolate.interp2d(x, y, z, kind='linear',
bounds_error=True) # Let 'f' raise an exception when the requested point is outside the range defined by x and y
# Build data
x_hd = np.arange(-1*np.pi, 1*np.pi-np.pi/4, np.pi/32)
y_hd = np.arange(-1*np.pi, 1*np.pi-np.pi/4, np.pi/32)
xx_hd,yy_hd = np.meshgrid(x_hd, y_hd)
z_hd = np.zeros(xx_hd.shape)
for xi in range(z_hd.shape[0]):
for yi in range(z_hd.shape[1]):
z_hd[xi, yi] = f(x_hd[xi], y_hd[yi])
# Plot data
fig = plt.figure(figsize=(12, 8))
ax = axes3d.Axes3D(fig)
surf = ax.plot_surface(xx_hd,
yy_hd,
z_hd,
cmap='gnuplot2', # 'jet' # 'gnuplot2'
rstride=1,
cstride=1,
shade=False)
plt.show();
# Build data
x_nu = np.arange(-1*np.pi, 1*np.pi, np.pi/4)
y_nu = np.arange(-1*np.pi, 1*np.pi, np.pi/4)
x_nu = x_nu.tolist()
y_nu = y_nu.tolist()
del x_nu[2]
del y_nu[2]
xx, yy = np.meshgrid(x_nu, y_nu)
z_nu = np.sin(xx) + np.sin(yy)
f = scipy.interpolate.interp2d(x_nu, y_nu, z_nu, kind='linear',
bounds_error=True) # Let 'f' raise an exception when the requested point is outside the range defined by x and y
# Build data
x_nu_hd = np.arange(-1*np.pi, 1*np.pi-np.pi/4, np.pi/32)
y_nu_hd = np.arange(-1*np.pi, 1*np.pi-np.pi/4, np.pi/32)
xx_nu_hd,yy_nu_hd = np.meshgrid(x_nu_hd, y_nu_hd)
z_nu_hd = np.zeros(xx_nu_hd.shape)
for xi in range(z_nu_hd.shape[0]):
for yi in range(z_nu_hd.shape[1]):
z_nu_hd[xi, yi] = f(x_nu_hd[xi], y_nu_hd[yi])
# Plot data
fig = plt.figure(figsize=(12, 8))
ax = axes3d.Axes3D(fig)
surf = ax.plot_surface(xx_nu_hd,
yy_nu_hd,
z_nu_hd,
cmap='gnuplot2', # 'jet' # 'gnuplot2'
rstride=1,
cstride=1,
shade=False)
surf = ax.plot_surface(xx_hd,
yy_hd,
z_hd,
cmap='gnuplot2', # 'jet' # 'gnuplot2'
rstride=1,
cstride=1,
alpha=0.5,
shade=False)
plt.show();
f = scipy.interpolate.interp2d(x, y, z, kind='cubic',
bounds_error=True) # Let 'f' raise an exception when the requested point is outside the range defined by x and y
# Build data
x_hd = np.arange(-1*np.pi, 1*np.pi-np.pi/4, np.pi/32)
y_hd = np.arange(-1*np.pi, 1*np.pi-np.pi/4, np.pi/32)
xx_hd,yy_hd = np.meshgrid(x_hd, y_hd)
z_hd = np.zeros(xx_hd.shape)
for xi in range(z_hd.shape[0]):
for yi in range(z_hd.shape[1]):
z_hd[xi, yi] = f(x_hd[xi], y_hd[yi])
# Plot data
fig = plt.figure(figsize=(12, 8))
ax = axes3d.Axes3D(fig)
surf = ax.plot_surface(xx_hd,
yy_hd,
z_hd,
cmap='gnuplot2', # 'jet' # 'gnuplot2'
rstride=1,
cstride=1,
shade=False)
plt.show();
def func(x, y):
return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2
grid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j]
points = np.random.rand(1000, 2)
values = func(points[:,0], points[:,1])
grid_z0 = scipy.interpolate.griddata(points, values, (grid_x, grid_y), method='nearest')
grid_z1 = scipy.interpolate.griddata(points, values, (grid_x, grid_y), method='linear')
grid_z2 = scipy.interpolate.griddata(points, values, (grid_x, grid_y), method='cubic')
plt.subplot(221)
plt.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin='lower')
plt.plot(points[:,0], points[:,1], 'k.', ms=1)
plt.title('Original')
plt.subplot(222)
plt.imshow(grid_z0.T, extent=(0,1,0,1), origin='lower')
plt.title('Nearest')
plt.subplot(223)
plt.imshow(grid_z1.T, extent=(0,1,0,1), origin='lower')
plt.title('Linear')
plt.subplot(224)
plt.imshow(grid_z2.T, extent=(0,1,0,1), origin='lower')
plt.title('Cubic')
plt.gcf().set_size_inches(6, 6)
plt.tight_layout()
plt.show()
class InterpoledGridData:
def __init__(self, x, y, z, interpolation_method='linear', fill_value=float('nan')):
self.x = x
self.y = y
self.z = z
self.fill_value = fill_value
self.interpolation_method = interpolation_method
def __call__(self, x1_mesh, x2_mesh):
z = scipy.interpolate.griddata(points = (self.x, self.y),
values = self.z,
xi = (x1_mesh, x2_mesh),
fill_value=self.fill_value,
method = self.interpolation_method)
if z.ndim == 0:
z = float(z)
return z
x = np.random.rand(1000)
y = np.random.rand(1000)
z = func(x, y)
f = InterpoledGridData(x, y, z, interpolation_method='cubic')
f(0.5, 0.5)
x_hd = np.linspace(x.min(),
x.max(),
100)
y_hd = np.linspace(y.min(),
y.max(),
100)
xx_hd, yy_hd = np.meshgrid(x_hd, y_hd)
z_hd = f(xx_hd, yy_hd)
# Plot data
fig = plt.figure(figsize=(12, 8))
ax = axes3d.Axes3D(fig, azim=150, elev=30)
surf = ax.plot_surface(xx_hd,
yy_hd,
z_hd,
cmap='gnuplot2',
rstride=1,
cstride=1,
vmin=np.nanmin(z_hd),
vmax=np.nanmax(z_hd),
shade=False)
plt.show();
fig, ax = plt.subplots(figsize=(12, 8))
im = ax.pcolormesh(xx_hd, yy_hd, z_hd,
#shading='gouraud',
cmap='gnuplot2')
plt.colorbar(im, ax=ax)
plt.show();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Official documentation
Step2: Interpolate 1D functions
Step3: Linear interpolation
Step4: B-Splines interpolation
Step5: Spline linear interpolation
Step6: Interpolate 2D functions
Step7: Linear interpolation
Step8: Non uniform grid data
Step9: Cubic splines
Step10: Interpolate unstructured D-dimensional data
Step11: Callable version
|
12,067
|
<ASSISTANT_TASK:>
Python Code:
# dependencies
import matplotlib.pyplot as plt
import pickle
import numpy as np
f = open('final_dataset.pickle','rb')
dataset = pickle.load(f)
sample_image = dataset['train_dataset'][0]
sample_label = dataset['train_labels'][0]
print(sample_label)
plt.figure()
plt.imshow(sample_image)
plt.show()
# lets make Wxx and Mxx for this images
# Wxx
x = np.array([0,0,0])
y = np.array([0,1,0])
final = np.subtract.outer(y,x)
print(final)
# 1. flatten the whole image into n-pixels
x_vectors = sample_image.flatten()
# dimensions must be of the form img_len,1
# for this x_vector the weights must be of the order 1,num_perceptrons
# but this gives me a sparse matrix of the order img_len,num_perceptrons
# what we do here is take sum row wise we will get like [1,0,0,1] will become [2]
# and therefore we will have outputs as img_len,1
# so here to get the weights for x_vectors we have to multiply matrices of the order
# img_len,1 and 1,img_len
#x_vectors = np.array([1,2,5,1])
weights = np.subtract.outer(x_vectors, x_vectors)
print(weights)
add_individual = np.add(weights, x_vectors)
# pg-6 now perform row wise max
result = [max(row) for row in add_individual]
np.testing.assert_array_almost_equal(x_vectors, result)
print('done')
# for k=1 dimesions of Mxx and Wxx are same
import cv2
erode_img = sample_image
# kernel is a pixel set like a cross( or any shape) which convolves and erodes
kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(erode_img,kernel,iterations = 1)
plt.figure()
plt.imshow(erosion)
plt.show()
# Now lets try to do some recall
x_eroded = erosion
x_eroded_vector = x_eroded.flatten()
add_individual = np.add(weights, x_eroded_vector)
result = np.array([max(row) for row in add_individual])
# now lets reshape the result to 128 x 128
result.shape = (128, 128)
plt.figure()
plt.imshow(result)
plt.show()
# now lets see the amount of recall error
result = result.flatten()
np.testing.assert_array_almost_equal(result, x_vectors)
print('done 0%')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's add some erosive noise to the image and then see the recall
|
12,068
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import sys # Required for system access (below)
import os # Required for os access (below)
sys.path.append(os.path.join(os.path.dirname(os.getcwd()), 'dependencies'))
from neuron_readExportedGeometry import * # Required to interpret hoc files
# Convert the given hoc file into a geo object
geo = demoReadsilent('/home/cosmo/marderlab/test/878_043_GM_scaled.hoc')
tips, ends = geo.getTips() # Store all the tip segments in a list, "tips"
# Also store the associated ends in "ends"
find = PathDistanceFinder(geo, geo.soma) # Set up a PDF object for the
# given geo object, anchored at
# the soma
paths = [find.pathTo(seg) for seg in tips] # List of all paths
counted = [] # Initialize a list for keeping track of which segments have
# already been measured
cablelength = 0 # Initialize a running total of cable length
for path in paths: # Sort through each path
pruned = [seg for seg in path if seg not in counted] # Limit the paths
# we work with to
# those which have
# not already been
# measured
forfind = PathDistanceFinder(geo, pruned[0]) # Initialize a PDF
# anchored at the earliest
# unmeasured segment
cablelength += forfind.distanceTo(pruned[-1]) # Add the distance
# between the anchor and
# the tip segment to the
# running total
for seg in pruned: # Add all of the measured segments to "counted"
counted.append(seg)
print(cablelength)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, load up a neuron hoc file as a geo object
Step2: Now that we have a geo object ready to go, let's make a list of all the neurite paths from soma to tip
Step3: Finally, it's time to calculate the cable length! Let's create a for loop that keeps a running list of which paths have already been measured while adding everything together
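As a minimal sketch of the bookkeeping described in Step3 — count each segment's length exactly once even when soma-to-tip paths overlap — with invented segment names and lengths (not the hoc geometry used above):

```python
# Each path lists segments from soma to a tip; shared proximal segments
# must contribute to the total cable length only once.
paths = [
    ["soma", "a", "b", "tip1"],   # shares "soma" and "a" with the next path
    ["soma", "a", "c", "tip2"],
]
seg_length = {"soma": 10.0, "a": 5.0, "b": 2.0, "tip1": 1.0, "c": 3.0, "tip2": 1.5}

counted = set()            # segments already measured
cable_length = 0.0
for path in paths:
    pruned = [seg for seg in path if seg not in counted]
    cable_length += sum(seg_length[s] for s in pruned)
    counted.update(pruned)

print(cable_length)  # 22.5 -- every segment counted exactly once
```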
|
12,069
|
<ASSISTANT_TASK:>
Python Code:
# this is code cell -> click on it, then press Shift+Enter
from IPython.display import YouTubeVideo
YouTubeVideo('vasBnRS3tQk')
import packages.initialization
import pioneer3dx as p3dx
p3dx.init()
import motion_widget
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialization
Step2: Motion
|
12,070
|
<ASSISTANT_TASK:>
Python Code:
PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%%bash
# Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: serverlessml"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:serverlessml
  echo -e "\nHere are your current datasets:"
bq ls
fi
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
  echo -e "\nHere are your current buckets:"
gsutil ls
fi
%%bigquery
CREATE OR REPLACE MODEL
serverlessml.model1_rawdata
OPTIONS(input_label_cols=['fare_amount'],
model_type='linear_reg') AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model1_rawdata)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model1_rawdata, (
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
))
%%bigquery
CREATE OR REPLACE TABLE
serverlessml.cleaned_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
%%bigquery
-- LIMIT 0 is a free query, this allows us to check that the table exists.
SELECT * FROM serverlessml.cleaned_training_data
LIMIT 0
%%bigquery
CREATE OR REPLACE MODEL
serverlessml.model2_cleanup
OPTIONS(input_label_cols=['fare_amount'],
model_type='linear_reg') AS
SELECT
*
FROM
serverlessml.cleaned_training_data
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model2_cleanup)
%%bigquery
-- This training takes on the order of 15 minutes.
CREATE OR REPLACE MODEL
serverlessml.model3b_dnn
OPTIONS(input_label_cols=['fare_amount'],
model_type='dnn_regressor', hidden_units=[32, 8]) AS
SELECT
*
FROM
serverlessml.cleaned_training_data
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model3b_dnn)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model3b_dnn, (
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers,
'unused' AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a BigQuery Dataset and Google Cloud Storage Bucket
Step2: Model 1
Step3: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Step4: Let's report just the error we care about, the Root Mean Squared Error (RMSE)
Step5: We told you it was not going to be good! Recall that our heuristic got 8.13, and our target is $6.
Step6: Model 2
Step7: Model 3
Step8: Nice!
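The queries in the steps above sample data repeatably by hashing pickup_datetime with FARM_FINGERPRINT and taking a modulo; the same idea can be sketched in Python (MD5 is only a stand-in for FARM_FINGERPRINT here — an assumption for illustration):

```python
import hashlib

# Deterministic sampling: hash a key, take a modulo, and keep rows with a
# fixed remainder. Any stable hash works; BigQuery uses FARM_FINGERPRINT.
def in_split(key, modulus, remainder):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % modulus == remainder

rows = ["2015-01-01 00:0%d:00" % i for i in range(10)]
train = [r for r in rows if in_split(r, 5, 1)]   # ~20% sample

# The selection is repeatable: re-running yields the identical subset.
assert train == [r for r in rows if in_split(r, 5, 1)]
print(len(train), "of", len(rows), "rows selected")
```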
|
12,071
|
<ASSISTANT_TASK:>
Python Code:
# Setup code. Make sure you run this first!
import os
import random
import math
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from tensorflow import keras
from learntools.core import binder; binder.bind(globals())
from learntools.embeddings.ex1_embedding_layers import *
#_RM_
input_dir = '../input/movielens_preprocessed'
#_UNCOMMENT_
#input_dir = '../input'
# Load a 10% subset of the full MovieLens data.
df = pd.read_csv(os.path.join(input_dir, 'mini_rating.csv'))
# Some hyperparameters. (You might want to play with these later)
LR = .005 # Learning rate
EPOCHS = 8 # Default number of training epochs (i.e. cycles through the training data)
hidden_units = (32,4) # Size of our hidden layers
def build_and_train_model(movie_embedding_size=8, user_embedding_size=8, verbose=2, epochs=EPOCHS):
tf.set_random_seed(1); np.random.seed(1); random.seed(1) # Set seeds for reproducibility
user_id_input = keras.Input(shape=(1,), name='user_id')
movie_id_input = keras.Input(shape=(1,), name='movie_id')
user_embedded = keras.layers.Embedding(df.userId.max()+1, user_embedding_size,
input_length=1, name='user_embedding')(user_id_input)
movie_embedded = keras.layers.Embedding(df.movieId.max()+1, movie_embedding_size,
input_length=1, name='movie_embedding')(movie_id_input)
concatenated = keras.layers.Concatenate()([user_embedded, movie_embedded])
out = keras.layers.Flatten()(concatenated)
# Add one or more hidden layers
for n_hidden in hidden_units:
out = keras.layers.Dense(n_hidden, activation='relu')(out)
# A single output: our predicted rating
out = keras.layers.Dense(1, activation='linear', name='prediction')(out)
model = keras.Model(
inputs = [user_id_input, movie_id_input],
outputs = out,
)
model.compile(
tf.train.AdamOptimizer(LR),
loss='MSE',
metrics=['MAE'],
)
history = model.fit(
[df.userId, df.movieId],
df.y,
batch_size=5 * 10**3,
epochs=epochs,
verbose=verbose,
validation_split=.05,
)
return history
# Train two models with different embedding sizes and save the training statistics.
# We'll be using this later in the exercise.
history_8 = build_and_train_model(verbose=0)
history_32 = build_and_train_model(32, 32, verbose=0)
print("Setup complete!")
# embedding_variables should contain all the variables you would use an embedding layer for
# For your convenience, we've initialized it with all variables in the dataset, so you can
# just delete or comment out the variables you want to exclude.
embedding_variables = {
'stream_id',
'user_id',
'song_id',
'timestamp',
'artist_id',
'song_duration',
'explicit',
'user_country',
}
part1.check()
#%%RM_IF(PROD)%%
# Incorrect
embedding_variables = {
'stream_id',
}
part1.check()
#%%RM_IF(PROD)%%
# Correct 1
embedding_variables = {
'user_id',
'song_id',
'artist_id',
}
part1.check()
#%%RM_IF(PROD)%%
# Correct 2 (optional extra val)
embedding_variables = {
'user_id',
'song_id',
'artist_id',
'user_country',
}
part1.check()
#_COMMENT_IF(PROD)_
part1.solution()
history_FS = (15, 5)
def plot_history(histories, keys=('mean_absolute_error',), train=True, figsize=history_FS):
if isinstance(histories, tf.keras.callbacks.History):
histories = [ ('', histories) ]
for key in keys:
        plt.figure(figsize=figsize)
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=str(name).title()+' Val')
if train:
plt.plot(history.epoch, history.history[key], color=val[0].get_color(), alpha=.5,
label=str(name).title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.title(key)
plt.xlim([0,max(max(history.epoch) for (_, history) in histories)])
plot_history([
('base model', history_8),
])
plot_history([
('8-d embeddings', history_8),
('32-d embeddings', history_32),
])
# Example: shrinking movie embeddings and growing user embeddings
#history_biguser_smallmovie = build_and_train_model(movie_embedding_size=4, user_embedding_size=16)
#%%RM_IF(PROD)%%
history_biguser_smallmovie = build_and_train_model(movie_embedding_size=4, user_embedding_size=16)
#%%RM_IF(PROD)%%
plot_history([
('8-d embeddings', history_8),
('32-d embeddings', history_32),
('m4u16', history_biguser_smallmovie),
])
#_COMMENT_IF(PROD)_
part2.solution()
#_COMMENT_IF(PROD)_
part3.a.solution()
#_COMMENT_IF(PROD)_
part3.b.solution()
user_embedding_size = movie_embedding_size = 8
user_id_input = keras.Input(shape=(1,), name='user_id')
movie_id_input = keras.Input(shape=(1,), name='movie_id')
user_embedded = keras.layers.Embedding(df.userId.max()+1, user_embedding_size,
input_length=1, name='user_embedding')(user_id_input)
movie_embedded = keras.layers.Embedding(df.movieId.max()+1, movie_embedding_size,
input_length=1, name='movie_embedding')(movie_id_input)
concatenated = keras.layers.Concatenate()([user_embedded, movie_embedded])
out = keras.layers.Flatten()(concatenated)
# Add one or more hidden layers
for n_hidden in hidden_units:
out = keras.layers.Dense(n_hidden, activation='relu')(out)
# A single output: our predicted rating (before adding bias)
out = keras.layers.Dense(1, activation='linear', name='prediction')(out)
################################################################################
############################# YOUR CODE GOES HERE! #############################
# TODO: you need to create the variable movie_bias. Its value should be the output of calling a layer.
# I recommend giving the layer that holds your biases a distinctive name (this will help in an upcoming question)
#movie_bias =
################################################################################
out = keras.layers.Add()([out, movie_bias])
model_bias = keras.Model(
inputs = [user_id_input, movie_id_input],
outputs = out,
)
model_bias.compile(
tf.train.AdamOptimizer(LR),
loss='MSE',
metrics=['MAE'],
)
model_bias.summary()
#_COMMENT_IF(PROD)_
part3.c.hint()
#_COMMENT_IF(PROD)_
part3.c.solution()
#%%RM_IF(PROD)%%
# Bad solution (no bias adding)
user_id_input = keras.Input(shape=(1,), name='user_id')
movie_id_input = keras.Input(shape=(1,), name='movie_id')
user_embedded = keras.layers.Embedding(df.userId.max()+1, user_embedding_size,
input_length=1, name='user_embedding')(user_id_input)
movie_embedded = keras.layers.Embedding(df.movieId.max()+1, movie_embedding_size,
input_length=1, name='movie_embedding')(movie_id_input)
concatenated = keras.layers.Concatenate()([user_embedded, movie_embedded])
out = keras.layers.Flatten()(concatenated)
# Add one or more hidden layers
for n_hidden in hidden_units:
out = keras.layers.Dense(n_hidden, activation='relu')(out)
# A single output: our predicted rating
out = keras.layers.Dense(1, activation='linear', name='prediction')(out)
bias_embedded = keras.layers.Embedding(df.movieId.max()+1, 1, input_length=1, name='bias',
)(movie_id_input)
movie_bias = keras.layers.Flatten()(bias_embedded)
#out = keras.layers.Add()([out, movie_bias])
model_bias = keras.Model(
inputs = [user_id_input, movie_id_input],
outputs = out,
)
model_bias.compile(
tf.train.AdamOptimizer(LR),
loss='MSE',
metrics=['MAE'],
)
part3.c.check()
#%%RM_IF(PROD)%%
# Canonical correct solution
user_embedding_size = movie_embedding_size = 8
user_id_input = keras.Input(shape=(1,), name='user_id')
movie_id_input = keras.Input(shape=(1,), name='movie_id')
user_embedded = keras.layers.Embedding(df.userId.max()+1, user_embedding_size,
input_length=1, name='user_embedding')(user_id_input)
movie_embedded = keras.layers.Embedding(df.movieId.max()+1, movie_embedding_size,
input_length=1, name='movie_embedding')(movie_id_input)
concatenated = keras.layers.Concatenate()([user_embedded, movie_embedded])
out = keras.layers.Flatten()(concatenated)
# Add one or more hidden layers
for n_hidden in hidden_units:
out = keras.layers.Dense(n_hidden, activation='relu')(out)
# A single output: our predicted rating (before adding bias)
out = keras.layers.Dense(1, activation='linear', name='prediction')(out)
################################################################################
############################# YOUR CODE GOES HERE! #############################
# TODO: you need to create the variable movie_bias. Its value should be the output of calling a layer.
# I recommend giving the layer that holds your biases a distinctive name (this will help in an upcoming question)
bias_embedded = keras.layers.Embedding(df.movieId.max()+1, 1, input_length=1, name='bias',
)(movie_id_input)
movie_bias = keras.layers.Flatten()(bias_embedded)
################################################################################
out = keras.layers.Add()([out, movie_bias])
model_bias = keras.Model(
inputs = [user_id_input, movie_id_input],
outputs = out,
)
model_bias.compile(
tf.train.AdamOptimizer(LR),
loss='MSE',
metrics=['MAE'],
)
part3.c.check()
history_bias = model_bias.fit(
[df.userId, df.movieId],
df.y,
batch_size=5 * 10**3,
epochs=EPOCHS,
verbose=2,
validation_split=.05,
);
plot_history([
('no_bias', history_8),
('bias', history_bias),
]);
bias_layer = None
part3.d.check()
(b,) = bias_layer.get_weights()
print("Loaded biases with shape {}".format(b.shape))
#_COMMENT_IF(PROD)_
part3.d.solution()
#%%RM_IF(PROD)%%
# Bad solution (wrong layer)
bias_layer = model_bias.get_layer(index=10)
part3.d.check()
(b,) = bias_layer.get_weights()
print("Loaded biases with shape {}".format(b.shape))
#%%RM_IF(PROD)%%
# Canonical correct solution
bias_layer = model_bias.get_layer('bias')
part3.d.check()
(b,) = bias_layer.get_weights()
print("Loaded biases with shape {}".format(b.shape))
movies = pd.read_csv(os.path.join(input_dir, 'movie.csv'), index_col=0,
usecols=['movieId', 'title', 'genres', 'year'])
ntrain = math.floor(len(df) * .95)
df_train = df.head(ntrain)
# Mapping from original movie ids to canonical ones
mids = movieId_to_canon = df.groupby('movieId')['movieId_orig'].first()
# Add bias column
movies.loc[mids.values, 'bias'] = b
# Add columns for number of ratings and average rating
g = df_train.groupby('movieId_orig')
movies.loc[mids.values, 'n_ratings'] = g.size()
movies.loc[mids.values, 'mean_rating'] = g['rating'].mean()
movies.dropna(inplace=True)
movies.head()
from IPython.display import display
n = 10
display(
"Largest biases...",
movies.sort_values(by='bias', ascending=False).head(n),
"Smallest biases...",
movies.sort_values(by='bias').head(n),
)
n = 1000
mini = movies.sample(n, random_state=1)
fig, ax = plt.subplots(figsize=(13, 7))
ax.scatter(mini['mean_rating'], mini['bias'], alpha=.4)
ax.set_xlabel('Mean rating')
ax.set_ylabel('Bias');
#%%RM_IF(PROD)%%
# Experiment: Draw line of expected fit
n = 1000
mini = movies.sample(n, random_state=1)
fig, ax = plt.subplots(figsize=(13, 7))
ax.scatter(mini['mean_rating'], mini['bias'], alpha=.4)
row = df.iloc[0]
global_mean_rating = row.rating - row.y
lw = 1
ax.axhline(0, ls='--', color='grey', lw=lw)
ax.axvline(global_mean_rating, ls='--', label='Overall mean rating', color='indigo', lw=lw)
# Draw line of expected fit (if biases were to match data means)
pt = (global_mean_rating, 0)
x0, y0 = 0.5, (0.5 - global_mean_rating)
x1, y1 = 5, (5 - global_mean_rating)
ylim0 = ax.get_ylim()
ax.plot([x0, x1], [y0, y1], linestyle='-', lw=3, color='lime', alpha=.5)
ax.set_ylim(ylim0)
ax.set_xlabel('Mean rating')
ax.set_ylabel('Bias');
#%%RM%%
# (Discussion in solution text doesn't really make sense anymore now that we're thresholding obscure movies
# for this exercise.)
#_COMMENT_IF(PROD)_
part3.e.solution()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: Part 2
Step3: At the start of the notebook we also trained a model with 64-dimensional movie and user embeddings. How do you expect the results to compare? Make a prediction, then run the cell below to see.
Step4: If you're feeling experimental, feel free to try some other configurations. So far we've varied movie and user embedding size in lock step, but there's no reason they have to be the same. Do you have an intuition about whether one or the other should be bigger?
Step5: When you're ready, uncomment the cell below for some explanation of what's going on.
Step6: Part 3
Step7: Do you have an idea about what the bias values will look like? Are there certain movies you expect will have high or low biases?
Step8: Coding up biases
Step9: No idea where to start? Don't panic! Uncomment and run the line below for some hints.
Step10: Training
Step11: How does it compare to the results we got from the model without biases? Run the code cell below to compare their loss over the course of training.
Step12: How did adding biases affect our results?
Step13: Once you've successfully set the value of bias_layer, run the cell below which loads a dataframe containing movie metadata and adds the biases found in the previous step as a column.
Step14: Which movies have the lowest and highest learned biases? Run the cell below to find out.
Step15: Run the cell below to generate a scatter plot of movies' average ratings against the biases learned for those movies.
Step16: Considering this plot and the list of our highest and lowest bias movies, do our model's learned biases agree with what you expected?
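Conceptually, the movie-bias layer trained above is a learned lookup table whose entry is added to the network's output; a framework-free sketch with invented values (not trained weights):

```python
# A 1-d embedding layer is just a learned lookup table: one scalar per id.
# These bias values are made up for illustration.
movie_bias = [0.0, 0.3, -0.2, 0.1]        # biases for movie ids 0..3

def predict_with_bias(base_prediction, movie_id):
    # Mirrors keras.layers.Add()([out, movie_bias]) for a single example.
    return base_prediction + movie_bias[movie_id]

print(predict_with_bias(3.5, 2))  # base rating pulled down by movie 2's bias
```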
|
12,072
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def well2d(x, y, nx, ny, L=1.0):
    """Compute the 2d quantum well wave function."""
    return (2/L) * np.sin((nx * np.pi * x)/L) * np.sin((ny * np.pi * y)/L)
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
print(well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1))
# YOUR CODE HERE
x, y = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
plt.contourf(well2d(x, y, 3, 2, 1), 20, cmap="gnuplot")
plt.colorbar()
plt.tick_params(axis = "x", direction = "out", length = 5)
plt.tick_params(axis = "y", direction = "out", length = 5)
plt.box(False)
assert True # use this cell for grading the contour plot
# YOUR CODE HERE
x, y = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
plt.pcolormesh(well2d(x, y, 3, 2, 1), cmap="gnuplot", alpha=0.9)
plt.colorbar()
plt.tick_params(axis = "x", direction = "out", length = 5)
plt.tick_params(axis = "y", direction = "out", length = 5)
plt.box(False)
assert True # use this cell for grading the pcolor plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Contour plots of 2d wavefunctions
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction.
Step4: Next make a visualization using one of the pcolor functions
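A minimal sketch of the meshgrid step mentioned in Step3 (the coordinate grids that contourf/pcolormesh expect), using the nx = ny = 1 well from above:

```python
import numpy as np

# meshgrid expands two 1-d coordinate arrays into 2-d coordinate grids,
# giving a scalar field Z(x, y) that contourf/pcolormesh can draw.
x = np.linspace(0, 1, 10)
y = np.linspace(0, 1, 10)
X, Y = np.meshgrid(x, y)
Z = (2 / 1.0) * np.sin(np.pi * X) * np.sin(np.pi * Y)  # nx = ny = 1, L = 1

print(X.shape, Z.shape)  # (10, 10) (10, 10)
# plt.contourf(X, Y, Z, 20) would then render the filled contours.
```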
|
12,073
|
<ASSISTANT_TASK:>
Python Code:
%pylab notebook
r1 = 0.641 # Stator resistance
x1 = 0.750 # Stator reactance
r2 = 0.300 # Rotor resistance for single cage motor
r2i = 0.400 # Rotor resistance for inner cage of double-cage motor
r2o = 3.200 # Rotor resistance for outercage of double-cage motor
x2 = 0.500 # Rotor reactance for single cage motor
x2i = 3.300 # Rotor reactance for inner cage of double-cage motor
x2o = 0.500 # Rotor reactance for outer cage of double-cage motor
xm = 26.3 # Magnetization branch reactance
v_phase = 460 / sqrt(3) # Phase voltage
n_sync = 1800 # Synchronous speed (r/min)
w_sync = n_sync * 2*pi/60 # Synchronous speed (rad/s)
v_th = v_phase * ( xm / sqrt(r1**2 + (x1 + xm)**2) )
z_th = ((1j*xm) * (r1 + 1j*x1)) / (r1 + 1j*(x1 + xm))
r_th = real(z_th)
x_th = imag(z_th)
s = linspace(0, 1, 50) # slip
s[0] = 0.001 # avoid divide-by-zero problems
nm = (1 - s) * n_sync # mechanical speed
t_ind1 = ((3 * v_th**2 * r2/s) /
(w_sync * ((r_th + r2/s)**2 + (x_th + x2)**2)))
y_r = 1/(r2i + 1j*s*x2i) + 1/(r2o + 1j*s*x2o)
z_r = 1/y_r # Effective rotor impedance
r2eff = real(z_r) # Effective rotor resistance
x2eff = imag(z_r) # Effective rotor reactance
t_ind2 = ((3 * v_th**2 * (r2eff) / s) /
(w_sync * ((r_th + (r2eff)/s)**2 + (x_th + x2eff)**2)))
rc('text', usetex=True) # enable LaTeX commands for plot
plot(nm, t_ind1,'b',
nm, t_ind2,'k--',
lw=2)
xlabel(r'$\mathbf{n_{m}}\ [rpm]$')
ylabel(r'$\mathbf{\tau_{ind}}\ [Nm]$')
title ('Induction motor torque-speed characteristic')
legend ((r'Single-cage design','Double-cage design'), loc = 3);
grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, initialize the values needed in this program.
Step2: Calculate the Thevenin voltage and impedance from Equations 7-41a
Step3: Now calculate the torque-speed characteristic for many slips between 0 and 1.
Step4: Calculate torque for the single-cage rotor using
Step5: Calculate resistance and reactance of the double-cage rotor at this slip, and then use those values to calculate the induced torque.
Step6: Calculate induced torque for double-cage rotor.
Step7: Plot the torque-speed curve
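A minimal standalone sketch of the Thevenin step (Step2) with the circuit constants from this example:

```python
import math

# Thevenin equivalent seen from the rotor: stator branch (r1 + j*x1)
# in parallel with the magnetizing reactance xm (values from above).
r1, x1, xm = 0.641, 0.750, 26.3
v_phase = 460 / math.sqrt(3)

v_th = v_phase * xm / math.sqrt(r1**2 + (x1 + xm)**2)
z_th = (1j * xm) * (r1 + 1j * x1) / (r1 + 1j * (x1 + xm))
r_th, x_th = z_th.real, z_th.imag

# Thevenin voltage (V), resistance and reactance (ohms)
print(round(v_th, 1), round(r_th, 3), round(x_th, 3))
```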
|
12,074
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
boston = load_boston()
#print boston.DESCR
X = boston.data
y = boston.target
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.25, random_state=42) # Complete
from sklearn.neural_network import MLPRegressor
clf = MLPRegressor(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(Xtrain, ytrain)
# Look at the weights
print([coef.shape for coef in clf.coefs_])
ypred = clf.predict(Xtest)
#print(ypred, ytest)
fig = plt.figure(figsize=(6, 6))
plt.scatter(ytest,ypred)
plt.xlabel("Actual Value [x$1000]")
plt.ylabel("Predicted Value [x$1000]")
plt.show()
from sklearn.model_selection import cross_val_predict
yCVpred = cross_val_predict(clf, X, y, cv=5) # Complete
fig = plt.figure(figsize=(6, 6))
plt.scatter(y,yCVpred)
plt.xlabel("Actual Value [x$1000]")
plt.ylabel("Predicted Value [x$1000]")
plt.show()
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets, neural_network, svm, metrics
from sklearn.neural_network import MLPClassifier
digits = datasets.load_digits()
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:4]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Training: %i' % label)
plt.show()
# To apply a classifier on this data, we need to flatten the image, to
# turn the data in a (samples, feature) matrix:
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
# Create a classifier: a multi-layer perceptron
classifier = MLPClassifier(solver='lbfgs', alpha=1e-5, random_state=0, hidden_layer_sizes=(15,) )
# We learn the digits on the first half of the digits
classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])
print("Training set score: %f" % classifier.score(data[n_samples // 2:], digits.target[n_samples // 2:]))
# Now predict the value of the digit on the second half:
expected = digits.target[n_samples // 2:]
predicted = classifier.predict(data[n_samples // 2:])
print("Classification report for classifier %s:\n%s\n"
% (classifier, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
from sklearn.model_selection import cross_val_score
hidden_size = np.arange(5,20)
scores = np.array([])
for sz in hidden_size:
classifier = MLPClassifier(solver='lbfgs', alpha=1e-5, random_state=0, hidden_layer_sizes=(sz,) )
#classifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2])
scores = np.append(scores, np.mean(cross_val_score(classifier, data, digits.target, cv=5)))
#plt.plot(hidden_size,scores)
fig = plt.figure()
ax = plt.gca()
ax.plot(hidden_size,scores,'x-')
plt.show()
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets, neural_network, svm, metrics
from sklearn.neural_network import MLPClassifier
digits = datasets.load_digits()
resize = 32 #Size of larger image to embed the digits
images_ex = np.zeros((digits.target.size,resize,resize))
for index, image in enumerate(digits.images):
offrow = np.random.randint(low=0,high=resize-8,size=1)
offcol = np.random.randint(low=0,high=resize-8,size=1)
images_ex[index,offrow:offrow+8,offcol:offcol+8] = digits.images[index,:,:]
for jj in range(1,4):
fig = plt.figure()
ax1 = fig.add_subplot(1,2,2)
ax1.imshow(images_ex[jj,:,:],aspect='auto',origin='lower',cmap=plt.cm.gray_r, interpolation='nearest')
ax2 = fig.add_subplot(1,2,1)
ax2.imshow(digits.images[jj,:,:],aspect='auto',origin='lower',cmap=plt.cm.gray_r, interpolation='nearest')
plt.title(digits.target[jj])
plt.show()
# To apply a classifier on this data, we need to flatten the image, to
# turn the data in a (samples, feature) matrix:
n_samples = len(digits.images)
data_ex = images_ex.reshape((n_samples,-1))
# Create a classifier: Multi-layer perceptron
classifier = MLPClassifier(solver='lbfgs', alpha=1e-5, random_state=0, hidden_layer_sizes=(64,) )
classifier.fit(data_ex[:n_samples // 2], digits.target[:n_samples // 2])
# Now predict the value of the digit on the second half:
expected = digits.target[n_samples // 2:]
predicted = classifier.predict(data_ex[n_samples // 2:])
print("Classification report for classifier %s:\n%s\n"
% (classifier, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
#Create a model
model = Sequential()
#Use two sparse layers to learn useful, translation-invariant features
model.add(Convolution2D(32,7,7,border_mode='valid',input_shape=(32,32,1)))
model.add(Activation('relu'))
model.add(Convolution2D(32,5,5))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
#Add dense layers to do the actual classification
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',metrics=['accuracy'])
model.summary()
#Keras has some particular requirements for data formats...
dataX = images_ex.reshape(images_ex.shape[0],images_ex.shape[1],images_ex.shape[2],1)
dataY = np_utils.to_categorical(digits.target)
#Train the model. We get a summary of performance after each training epoch
model.fit(dataX, dataY, validation_split=0.1, batch_size=128, nb_epoch=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Of course, that only predicts the value for a fraction of the data set. I don't think that I have made it entirely clear how to use cross-validation to get a prediction for the full training set, so let's do that now. We'll use Scikit-Learn's cross_val_predict.
Step2: Let's try to use the multi-layer perceptron classifier on the digits data set. We will use a single hidden layer to keep the training time reasonable.
Step3: This looks pretty good! In general increasing the size of the hidden layer will improve performance at the cost of longer training time. Now try training networks with a hidden layer size of 5 to 20. At what point does performance stop improving?
Step4: Our basic perceptron can do a pretty good job recognizing handwritten digits, assuming the digits are all centered in an 8x8 image. What happens if we embed the digit images at random locations within a 32x32 image? Try increasing the size of the hidden layer and see if we can improve the performance.
Step5: Well that fell apart quickly! We're at roughly the point where neural networks faded from popularity in the 90s. Perceptrons generated intense interest because they were biologically inspired and could be applied generically to any supervised learning problem. However they weren't extensible to more realistic problems, and for supervised learning there were techniques such as support vector machines that provided better performance and avoided the explosion in training time seen for large perceptrons.
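The idea behind cross_val_predict from Step1 — every sample is predicted by a model that never saw it — can be sketched without scikit-learn; a trivial training-fold mean stands in for the regressor:

```python
# Toy out-of-fold prediction: split into k folds; predict each fold with a
# "model" (here, the mean of the other folds) that never saw that fold.
y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
k = 3
fold_size = len(y) // k
preds = [0.0] * len(y)
for f in range(k):
    test_idx = range(f * fold_size, (f + 1) * fold_size)
    train_vals = [y[i] for i in range(len(y)) if i not in test_idx]
    fold_model = sum(train_vals) / len(train_vals)   # stand-in for fit/predict
    for i in test_idx:
        preds[i] = fold_model

print(preds)  # [4.5, 4.5, 3.5, 3.5, 2.5, 2.5]
```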
|
12,075
|
<ASSISTANT_TASK:>
Python Code:
import sys
from os.path import *
import os
from astropy.io import fits
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from matplotlib.colors import LogNorm
from pylab import figure, cm
import astropy.time
import astropy.units as u
from astropy.coordinates import get_sun
import sunpy.map
from sunpy import sun
import numpy as np
%matplotlib inline
infile = 'data/Sol_16208/20201001001/event_cl/nu20201001001B06_chu3_N_cl.evt'
hdulist = fits.open(infile)
evtdata = hdulist[1].data
x = np.array([])
y = np.array([])
hdr=hdulist[1].header
for field in hdr.keys():
    if field.find('TYPE') != -1:
        if hdr[field] == 'X':
            print(field[5:8])
            xval = field[5:8]
        if hdr[field] == 'Y':
            print(field[5:8])
            yval = field[5:8]
ra_ref = hdr['TCRVL'+xval]*u.deg
x0 = hdr['TCRPX'+xval]
delx = hdr['TCDLT'+xval] * u.deg
dec_ref = hdr['TCRVL'+yval]*u.deg
y0 = hdr['TCRPX'+yval]
dely = hdr['TCDLT'+yval]*u.deg
x = evtdata['X']
y = evtdata['Y']
pi =evtdata['PI']
met = evtdata['TIME']*u.s
# Convert the NuSTAR epoch times to astropy datetime objects
mjdref=hdulist[1].header['MJDREFI']
time = astropy.time.Time(mjdref*u.d+met, format = 'mjd')
# Convert X and Y to RA/DEC
ra_x = ra_ref + (x - x0) * delx / np.cos(dec_ref)
dec_y = dec_ref + (y - y0) * dely
print("Loaded: ", len(ra_x), " counts.")
hdulist.close()
def get_sun_pos(last_met):
sun_time = astropy.time.Time(mjdref*u.d+last_met, format = 'mjd')
astro_sun_pos = get_sun(sun_time)
# Get the center of the Sun, and assign it degrees.
sun_pos = np.array([astro_sun_pos.ra.deg, astro_sun_pos.dec.deg])* u.deg
# Solar NP roll angle:
sun_np=sun.solar_north(t=sun_time).cgs
return sun_pos, sun_np;
# How often you want to update the solar ephemeris:
tstep = 5. * u.s
last_met = met[0] - tstep * 2.
last_i = 0
sun_x = np.zeros_like(ra_x)
sun_y = np.zeros_like(dec_y)
# tic()  # timing helper for the diagnostics below (left disabled, as they are)
for i in np.arange(len(ra_x)):
if ( (met[i] - last_met) > tstep ):
(sun_pos, sun_np) = get_sun_pos(last_met)
last_met = met[i]
# Rotation matrix for a counter-clockwise rotation since we're going
# back to celestial north from solar north
rotMatrix = np.array([[np.cos(sun_np), np.sin(sun_np)],
[-np.sin(sun_np), np.cos(sun_np)]])
# Diagnostics
# di = (i -last_i)
# print("Updating Sun position...")
# if di > 0:
# print(i, di)
# dt = toc()
# tic()
# last_i = i
# print("Time per event: ",dt / float(di) )
# From here on we do things for every photon:
ph_pos = np.array([ra_x[i].value, dec_y[i].value]) * u.deg
offset = sun_pos - ph_pos
# Account for East->West conversion for +X direction in heliophysics coords
offset = offset*[-1., 1.]
# Project the offset onto the Sun
delta_offset = ((np.dot(offset, rotMatrix)).to(u.arcsec))
sun_x[i] = delta_offset[0]
sun_y[i] = delta_offset[1]
print("Processed: ", i, " of ", len(ra_x))
hdulist = fits.open(infile)
tbldata=hdulist[1].data
hdr=hdulist[1].header
hdulist.close()
# change to 0-3000 pixels:
maxX = 3000
maxY = 3000
x0 = maxX / 2.
y0 = maxY / 2.
# Header keywords
for field in hdr.keys():
if field.find('TYPE') != -1:
if hdr[field] == 'X':
            print(field[5:8])
xval = field[5:8]
if hdr[field] == 'Y':
            print(field[5:8])
yval = field[5:8]
delx = hdr['TCDLT'+xval] * u.deg
dely = hdr['TCDLT'+yval]*u.deg
out_sun_x=1.0*(sun_x / delx) + x0
out_sun_y=(sun_y / dely) + y0
newdelx = delx.to(u.arcsec).value
newdely = dely.to(u.arcsec).value
tbldata['X'] = out_sun_x
tbldata['Y'] = out_sun_y
hdr['TCRVL'+xval] = 0.
hdr['TCRPX'+xval] = x0
hdr['TCDLT'+xval] = 1.0*delx.to(u.arcsec).value
hdr['TLMAX'+xval] = maxX
hdr['TCRVL'+yval] = 0.
hdr['TCRPX'+yval] = y0
hdr['TCDLT'+yval] = dely.to(u.arcsec).value
hdr['TLMAX'+yval] = maxY
# Make the new filename:
(sfile, ext)=splitext(infile)
outfile=sfile+'_sunpos.evt'
# Remove output file if necessary
if isfile(outfile):
print(outfile, 'exists! Removing old version...')
os.remove(outfile)
fits.writeto(outfile, tbldata, hdr)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1
Step1: Get the data from the FITS file.
Step2: Rotate to solar coordinates
Step3: Write the output to a new FITS file.
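The per-photon rotation in Step 2 can be sketched in isolation with numpy; the angle and offset below are made-up illustrative values, not taken from the event file:

```python
import numpy as np

sun_np = np.deg2rad(10.0)        # hypothetical solar north-pole angle, radians
offset = np.array([30.0, 40.0])  # hypothetical (RA, Dec) offset, arcsec

# Counter-clockwise rotation matrix, as applied per photon in the conversion loop
rotMatrix = np.array([[np.cos(sun_np),  np.sin(sun_np)],
                      [-np.sin(sun_np), np.cos(sun_np)]])

delta_offset = offset @ rotMatrix  # offset projected onto solar coordinates
```

Because the matrix is a pure rotation, the length of the offset is preserved; only its decomposition into solar-x and solar-y changes.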
|
12,076
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import time
import helper
source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'
source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
source_sentences[:50].split('\n')
target_sentences[:50].split('\n')
def extract_character_vocab(data):
special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']
set_words = set([character for line in data.split('\n') for character in line])
int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}
return int_to_vocab, vocab_to_int
# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)
# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\n')]
print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
from distutils.version import LooseVersion
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 15
decoding_embedding_size = 15
# Learning Rate
learning_rate = 0.001
def get_model_inputs():
input_data = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length
def encoding_layer(input_data, rnn_size, num_layers,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)
# RNN cell
def make_cell(rnn_size):
enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return enc_cell
enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)
return enc_output, enc_state
# Process the input we'll feed to the decoder
def process_decoder_input(target_data, vocab_to_int, batch_size):
'''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)
return dec_input
def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,
target_sequence_length, max_target_sequence_length, enc_state, dec_input):
# 1. Decoder Embedding
target_vocab_size = len(target_letter_to_int)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# 2. Construct the decoder cell
def make_cell(rnn_size):
dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return dec_cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# 3. Dense layer to translate the decoder's output at each time
# step into a choice from the target vocabulary
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
# 4. Set up a training decoder and an inference decoder
# Training Decoder
with tf.variable_scope("decode"):
# Helper for the training process. Used by BasicDecoder to read inputs.
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
# Basic decoder
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
enc_state,
output_layer)
# Perform dynamic decoding using the decoder
training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)[0]
# 5. Inference Decoder
# Reuses the same parameters trained by the training process
with tf.variable_scope("decode", reuse=True):
start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')
# Helper for the inference process.
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
target_letter_to_int['<EOS>'])
# Basic decoder
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
enc_state,
output_layer)
# Perform dynamic decoding using the decoder
inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)[0]
return training_decoder_output, inference_decoder_output
def seq2seq_model(input_data, targets, lr, target_sequence_length,
max_target_sequence_length, source_sequence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers):
# Pass the input data through the encoder. We'll ignore the encoder output, but use the state
_, enc_state = encoding_layer(input_data,
rnn_size,
num_layers,
source_sequence_length,
source_vocab_size,
encoding_embedding_size)
# Prepare the target sequences we'll feed to the decoder in training mode
dec_input = process_decoder_input(targets, target_letter_to_int, batch_size)
# Pass encoder state and decoder inputs to the decoders
training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int,
decoding_embedding_size,
num_layers,
rnn_size,
target_sequence_length,
max_target_sequence_length,
enc_state,
dec_input)
return training_decoder_output, inference_decoder_output
# Build the graph
train_graph = tf.Graph()
# Set the graph to default to ensure that it is ready for training
with train_graph.as_default():
# Load the model inputs
input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()
# Create the training and inference logits
training_decoder_output, inference_decoder_output = seq2seq_model(input_data,
targets,
lr,
target_sequence_length,
max_target_sequence_length,
source_sequence_length,
len(source_letter_to_int),
len(target_letter_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers)
# Create tensors for the training logits and inference logits
training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')
inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')
# Create the weights for sequence_loss
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
def pad_sentence_batch(sentence_batch, pad_int):
    '''Pad sentences with <PAD> so that each sentence of a batch has the same length'''
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(targets, sources, batch_size, source_pad_int, target_pad_int):
    '''Batch targets, sources, and the lengths of their sentences together'''
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths
# Split data to training and validation sets
train_source = source_letter_ids[batch_size:]
train_target = target_letter_ids[batch_size:]
valid_source = source_letter_ids[:batch_size]
valid_target = target_letter_ids[:batch_size]
(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,
source_letter_to_int['<PAD>'],
target_letter_to_int['<PAD>']))
display_step = 20 # Check training loss after every 20 batches
checkpoint = "best_model.ckpt"
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(1, epochs+1):
for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(
get_batches(train_target, train_source, batch_size,
source_letter_to_int['<PAD>'],
target_letter_to_int['<PAD>'])):
# Training step
_, loss = sess.run(
[train_op, cost],
{input_data: sources_batch,
targets: targets_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths})
# Debug message updating us on the status of the training
if batch_i % display_step == 0 and batch_i > 0:
# Calculate validation cost
validation_loss = sess.run(
[cost],
{input_data: valid_sources_batch,
targets: valid_targets_batch,
lr: learning_rate,
target_sequence_length: valid_targets_lengths,
source_sequence_length: valid_sources_lengths})
print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'
.format(epoch_i,
epochs,
batch_i,
len(train_source) // batch_size,
loss,
validation_loss[0]))
# Save Model
saver = tf.train.Saver()
saver.save(sess, checkpoint)
print('Model Trained and Saved')
def source_to_seq(text):
'''Prepare the text for the model'''
sequence_length = 7
return [source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text))
input_sentence = 'hello'
text = source_to_seq(input_sentence)
checkpoint = "./best_model.ckpt"
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(checkpoint + '.meta')
loader.restore(sess, checkpoint)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
#Multiply by batch_size to match the model's input parameters
answer_logits = sess.run(logits, {input_data: [text]*batch_size,
target_sequence_length: [len(text)]*batch_size,
source_sequence_length: [len(text)]*batch_size})[0]
pad = source_letter_to_int["<PAD>"]
print('Original Text:', input_sentence)
print('\nSource')
print(' Word Ids: {}'.format([i for i in text]))
print(' Input Words: {}'.format(" ".join([source_int_to_letter[i] for i in text])))
print('\nTarget')
print(' Word Ids: {}'.format([i for i in answer_logits if i != pad]))
print(' Response Words: {}'.format(" ".join([target_int_to_letter[i] for i in answer_logits if i != pad])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
Step2: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to a line in source_sentences and contains that line's characters in sorted order.
Step3: Preprocess
Step4: This is the final shape we need them to be in. We can now proceed to building the model.
Step5: Hyperparameters
Step6: Input
Step7: Sequence to Sequence Model
Step8: 2.2 Decoder
Step9: Set up the decoder components
Step10: 2.3 Seq2seq model
Step11: Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor.
Step14: Get Batches
Step15: Train
Step16: Prediction
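The source→target relation and the batch padding described in the steps above can be sketched in plain Python (toy words and a zero pad id, both illustrative):

```python
# Each target line is the sorted characters of its source line
sources = ["bsaqq", "npy", "lbwuj"]
targets = ["".join(sorted(s)) for s in sources]

# Pad every id sequence in a batch to the length of its longest member,
# mirroring pad_sentence_batch above
def pad_batch(batch, pad_id=0):
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in batch]

ids = [[ord(c) for c in s] for s in sources]
padded = pad_batch(ids)
```

After padding, every sequence in the batch has the same length, which is what lets the batch be stacked into a single rectangular array for the RNN.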
|
12,077
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# import all shogun classes
from modshogun import *
import random
import numpy as np
import matplotlib.pyplot as plt
from math import exp
# plot likelihood for three different noise levels $\sigma$ (which is not yet squared)
sigmas=np.array([0.5,1,2])
# likelihood instance
lik=GaussianLikelihood()
# A set of labels to consider
lab=RegressionLabels(np.linspace(-4.0,4.0, 200))
# A single 1D Gaussian response function, repeated once for each label
# this avoids doing a loop in python which would be slow
F=np.zeros(lab.get_num_labels())
# plot likelihood for all observations noise levels
plt.figure(figsize=(12, 4))
for sigma in sigmas:
# set observation noise, this is squared internally
lik.set_sigma(sigma)
# compute log-likelihood for all labels
log_liks=lik.get_log_probability_f(lab, F)
# plot likelihood functions, exponentiate since they were computed in log-domain
plt.plot(lab.get_labels(), map(exp,log_liks))
plt.ylabel("$p(y_i|f_i)$")
plt.xlabel("$y_i$")
plt.title("Regression Likelihoods for different observation noise levels")
_=plt.legend(["sigma=$%.1f$" % sigma for sigma in sigmas])
def generate_regression_toy_data(n=50, n_test=100, x_range=15, x_range_test=20, noise_var=0.4):
# training and test sine wave, test one has more points
X_train = np.random.rand(n)*x_range
X_test = np.linspace(0,x_range_test, 500)
# add noise to training observations
y_test = np.sin(X_test)
y_train = np.sin(X_train)+np.random.randn(n)*noise_var
return X_train, y_train, X_test, y_test
X_train, y_train, X_test, y_test = generate_regression_toy_data()
plt.figure(figsize=(16,4))
plt.plot(X_train, y_train, 'ro')
plt.plot(X_test, y_test)
plt.legend(["Noisy observations", "True model"])
plt.title("One-Dimensional Toy Regression Data")
plt.xlabel("$\mathbf{x}$")
_=plt.ylabel("$\mathbf{y}$")
# bring data into shogun representation (features are 2d-arrays, organised as column vectors)
feats_train=RealFeatures(X_train.reshape(1,len(X_train)))
feats_test=RealFeatures(X_test.reshape(1,len(X_test)))
labels_train=RegressionLabels(y_train)
# compute covariances for different kernel parameters
taus=np.asarray([.1,4.,32.])
Cs=np.zeros(((len(X_train), len(X_train), len(taus))))
for i in range(len(taus)):
    # compute unscaled kernel matrix (first parameter is maximum size in memory and not very important)
kernel=GaussianKernel(10, taus[i])
kernel.init(feats_train, feats_train)
Cs[:,:,i]=kernel.get_kernel_matrix()
# plot
plt.figure(figsize=(16,5))
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
plt.imshow(Cs[:,:,i], interpolation="nearest")
plt.xlabel("Covariate index")
plt.ylabel("Covariate index")
_=plt.title("tau=%.1f" % taus[i])
plt.figure(figsize=(16,5))
plt.suptitle("Random Samples from GP prior")
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
# sample a bunch of latent functions from the Gaussian Process
# note these vectors are stored row-wise
F=Statistics.sample_from_gaussian(np.zeros(len(X_train)), Cs[:,:,i], 3)
for j in range(len(F)):
# sort points to connect the dots with lines
sorted_idx=X_train.argsort()
plt.plot(X_train[sorted_idx], F[j,sorted_idx], '-', markersize=6)
plt.xlabel("$\mathbf{x}_i$")
plt.ylabel("$f(\mathbf{x}_i)$")
_=plt.title("tau=%.1f" % taus[i])
plt.figure(figsize=(16,5))
plt.suptitle("Random Samples from GP posterior")
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
    # create inference method instance (the observation noise controls how closely posterior samples track the data)
inf=ExactInferenceMethod(GaussianKernel(10, taus[i]), feats_train, ZeroMean(), labels_train, GaussianLikelihood())
C_post=inf.get_posterior_covariance()
m_post=inf.get_posterior_mean()
# sample a bunch of latent functions from the Gaussian Process
# note these vectors are stored row-wise
F=Statistics.sample_from_gaussian(m_post, C_post, 5)
for j in range(len(F)):
# sort points to connect the dots with lines
sorted_idx=sorted(range(len(X_train)),key=lambda x:X_train[x])
plt.plot(X_train[sorted_idx], F[j,sorted_idx], '-', markersize=6)
plt.plot(X_train, y_train, 'r*')
plt.xlabel("$\mathbf{x}_i$")
plt.ylabel("$f(\mathbf{x}_i)$")
_=plt.title("tau=%.1f" % taus[i])
# helper function that plots predictive distribution and data
def plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances):
# evaluate predictive distribution in this range of y-values and preallocate predictive distribution
y_values=np.linspace(-3,3)
D=np.zeros((len(y_values), len(X_test)))
# evaluate normal distribution at every prediction point (column)
for j in range(np.shape(D)[1]):
        # create Gaussian distribution instance; expects a mean vector and covariance matrix, hence the reshapes
gauss=GaussianDistribution(np.array(means[j]).reshape(1,), np.array(variances[j]).reshape(1,1))
# evaluate predictive distribution for test point, method expects matrix
D[:,j]=np.exp(gauss.log_pdf_multiple(y_values.reshape(1,len(y_values))))
plt.pcolor(X_test,y_values,D)
plt.colorbar()
plt.contour(X_test,y_values,D)
plt.plot(X_test,y_test, 'b', linewidth=3)
plt.plot(X_test,means, 'm--', linewidth=3)
plt.plot(X_train, y_train, 'ro')
plt.legend(["Truth", "Prediction", "Data"])
plt.figure(figsize=(18,10))
plt.suptitle("GP inference for different kernel widths")
for i in range(len(taus)):
plt.subplot(len(taus),1,i+1)
# create GP instance using inference method and train
# use Shogun objects from above
inf.set_kernel(GaussianKernel(10,taus[i]))
gp=GaussianProcessRegression(inf)
gp.train()
# predict labels for all test data (note that this produces the same as the below mean vector)
means = gp.apply(feats_test)
# extract means and variance of predictive distribution for all test points
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
# note: y_predicted == means
# plot predictive distribution and training data
plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances)
_=plt.title("tau=%.1f" % taus[i])
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = ExactInferenceMethod(GaussianKernel(10, taus[i]), feats_train, ZeroMean(), labels_train, GaussianLikelihood())
gp = GaussianProcessRegression(inf)
# evaluate our inference method for its derivatives
grad = GradientEvaluation(gp, feats_train, labels_train, GradientCriterion(), False)
grad.set_function(inf)
# handles all of the above structures in memory
grad_search = GradientModelSelection(grad)
# search for best parameters and store them
best_combination = grad_search.select_model()
# apply best parameters to GP, train
best_combination.apply_to_machine(gp)
# we have to "cast" objects to the specific kernel interface we used (soon to be easier)
best_width=GaussianKernel.obtain_from_generic(inf.get_kernel()).get_width()
best_scale=inf.get_scale()
best_sigma=GaussianLikelihood.obtain_from_generic(inf.get_model()).get_sigma()
print "Selected tau (kernel bandwidth):", best_width
print "Selected gamma (kernel scaling):", best_scale
print "Selected sigma (observation noise):", best_sigma
# train gp
gp.train()
# extract means and variance of predictive distribution for all test points
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
# plot predictive distribution
plt.figure(figsize=(18,5))
plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances)
_=plt.title("Maximum Likelihood II based inference")
# two classification likelihoods in Shogun
logit=LogitLikelihood()
probit=ProbitLikelihood()
# A couple of Gaussian response functions, 1-dimensional here
F=np.linspace(-5.0,5.0)
# Single observation label with +1
lab=BinaryLabels(np.array([1.0]))
# compute log-likelihood for all values in F
log_liks_logit=np.zeros(len(F))
log_liks_probit=np.zeros(len(F))
for i in range(len(F)):
# Shogun expects a 1D array for f, not a single number
f=np.array(F[i]).reshape(1,)
log_liks_logit[i]=logit.get_log_probability_f(lab, f)
log_liks_probit[i]=probit.get_log_probability_f(lab, f)
# in fact, loops are slow and Shogun offers a method to compute the likelihood for many f. Much faster!
log_liks_logit=logit.get_log_probability_fmatrix(lab, F.reshape(1,len(F)))
log_liks_probit=probit.get_log_probability_fmatrix(lab, F.reshape(1,len(F)))
# plot the sigmoid functions, note that Shogun computes it in log-domain, so we have to exponentiate
plt.figure(figsize=(12, 4))
plt.plot(F, np.exp(log_liks_logit))
plt.plot(F, np.exp(log_liks_probit))
plt.ylabel("$p(y_i|f_i)$")
plt.xlabel("$f_i$")
plt.title("Classification Likelihoods")
_=plt.legend(["Logit", "Probit"])
def generate_classification_toy_data(n_train=100, mean_a=np.asarray([0, 0]), std_dev_a=1.0, mean_b=3, std_dev_b=0.5):
# positive examples are distributed normally
X1 = (np.random.randn(n_train, 2)*std_dev_a+mean_a).T
# negative examples have a "ring"-like form
r = np.random.randn(n_train)*std_dev_b+mean_b
angle = np.random.randn(n_train)*2*np.pi
X2 = np.array([r*np.cos(angle)+mean_a[0], r*np.sin(angle)+mean_a[1]])
# stack positive and negative examples in a single array
X_train = np.hstack((X1,X2))
# label positive examples with +1, negative with -1
y_train = np.zeros(n_train*2)
y_train[:n_train] = 1
y_train[n_train:] = -1
return X_train, y_train
def plot_binary_data(X_train, y_train):
plt.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
plt.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')
X_train, y_train=generate_classification_toy_data()
plot_binary_data(X_train, y_train)
_=plt.title("2D Toy classification problem")
# for building combinations of arrays
from itertools import product
# convert training data into Shogun representation
train_features = RealFeatures(X_train)
train_labels = BinaryLabels(y_train)
# generate all pairs in 2d range of testing data (full space), discretisation resolution is n_test
n_test=50
x1 = np.linspace(X_train[0,:].min()-1, X_train[0,:].max()+1, n_test)
x2 = np.linspace(X_train[1,:].min()-1, X_train[1,:].max()+1, n_test)
X_test = np.asarray(list(product(x1, x2))).T
# convert testing features into Shogun representation
test_features = RealFeatures(X_test)
# create Gaussian kernel with width = 2.0
kernel = GaussianKernel(10, 2)
# create zero mean function
zero_mean = ZeroMean()
# you can easily switch between probit and logit likelihood models
# by uncommenting/commenting the following lines:
# create probit likelihood model
# lik = ProbitLikelihood()
# create logit likelihood model
lik = LogitLikelihood()
# you can easily switch between Laplace and EP approximation by
# uncommenting/commenting the following lines:
# specify Laplace approximation inference method
#inf = LaplacianInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# specify EP approximation inference method
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
# create and train GP classifier, which uses Laplace approximation
gp = GaussianProcessClassification(inf)
gp.train()
test_labels=gp.apply(test_features)
# plot data and decision boundary
plot_binary_data(X_train, y_train)
plt.pcolor(x1, x2, test_labels.get_labels().reshape(n_test, n_test))
_=plt.title('Decision boundary')
# obtain probabilities for all test points
p_test = gp.get_probabilities(test_features)
# create figure
plt.title('Training data, predictive probability and decision boundary')
# plot training data
plot_binary_data(X_train, y_train)
# plot decision boundary
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
# plot probabilities
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
# generate some non-negative kernel widths
widths=2**np.linspace(-5,6,20)
# compute marginal likelihood under Laplace approximation for every width
# use Shogun objects from above
marginal_likelihoods=np.zeros(len(widths))
for i in range(len(widths)):
# note that GP training is automatically done/updated if a parameter is changed. No need to call train again
kernel.set_width(widths[i])
marginal_likelihoods[i]=-inf.get_negative_log_marginal_likelihood()
# plot marginal likelihoods as a function of kernel width
plt.plot(np.log2(widths), marginal_likelihoods)
plt.title("Log Marginal likelihood for different kernels")
plt.xlabel("Kernel Width in log-scale")
_=plt.ylabel("Log-Marginal Likelihood")
print "Width with largest marginal likelihood:", widths[marginal_likelihoods.argmax()]
# again, use Shogun objects from above, but a few extremal widths
widths_subset=np.array([widths[0], widths[marginal_likelihoods.argmax()], widths[len(widths)-1]])
plt.figure(figsize=(18, 5))
for i in range(len(widths_subset)):
plt.subplot(1,len(widths_subset),i+1)
kernel.set_width(widths_subset[i])
# obtain and plot predictive distribution
p_test = gp.get_probabilities(test_features)
title_str="Width=%.2f, " % widths_subset[i]
    if i == 0:
title_str+="too complex, overfitting"
    elif i == 1:
title_str+="just right"
else:
title_str+="too smooth, underfitting"
plt.title(title_str)
plot_binary_data(X_train, y_train)
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
gp = GaussianProcessClassification(inf)
# evaluate our inference method for its derivatives
grad = GradientEvaluation(gp, train_features, train_labels, GradientCriterion(), False)
grad.set_function(inf)
# handles all of the above structures in memory
grad_search = GradientModelSelection(grad)
# search for best parameters and store them
best_combination = grad_search.select_model()
# apply best parameters to GP
best_combination.apply_to_machine(gp)
# we have to "cast" objects to the specific kernel interface we used (soon to be easier)
best_width=GaussianKernel.obtain_from_generic(inf.get_kernel()).get_width()
best_scale=inf.get_scale()
print "Selected kernel bandwidth:", best_width
print "Selected kernel scale:", best_scale
# train gp
gp.train()
# visualise predictive distribution
p_test = gp.get_probabilities(test_features)
plot_binary_data(X_train, y_train)
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
# parameter space, increase resolution if you want finer plots, takes long though
resolution=5
widths=2**np.linspace(-4,10,resolution)
scales=2**np.linspace(-5,10,resolution)
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
gp = GaussianProcessClassification(inf)
inf.set_tolerance(1e-3)
# compute marginal likelihood for every parameter combination
# use Shogun objects from above
marginal_likelihoods=np.zeros((len(widths), len(scales)))
for i in range(len(widths)):
for j in range(len(scales)):
kernel.set_width(widths[i])
inf.set_scale(scales[j])
marginal_likelihoods[i,j]=-inf.get_negative_log_marginal_likelihood()
# contour plot of marginal likelihood as a function of kernel width and scale
plt.contour(np.log2(widths), np.log2(scales), marginal_likelihoods)
plt.colorbar()
plt.xlabel("Kernel width (log-scale)")
plt.ylabel("Kernel scale (log-scale)")
_=plt.title("Log Marginal Likelihood")
# plot our found best parameters
_=plt.plot([np.log2(best_width)], [np.log2(best_scale)], 'r*', markersize=20)
# for measuring runtime
import time
# simple regression data
X_train, y_train, X_test, y_test = generate_regression_toy_data(n=1000)
# bring data into shogun representation (features are 2d-arrays, organised as column vectors)
feats_train=RealFeatures(X_train.reshape(1,len(X_train)))
feats_test=RealFeatures(X_test.reshape(1,len(X_test)))
labels_train=RegressionLabels(y_train)
# inducing features (here: a random grid over the input space, try out others)
n_inducing=10
#X_inducing=linspace(X_train.min(), X_train.max(), n_inducing)
X_inducing=X_train.min() + np.random.rand(n_inducing)*(X_train.max()-X_train.min())
feats_inducing=RealFeatures(X_inducing.reshape(1,len(X_inducing)))
# create FITC inference method and GP instance
inf = FITCInferenceMethod(GaussianKernel(10, best_width), feats_train, ZeroMean(), labels_train, \
GaussianLikelihood(best_sigma), feats_inducing)
gp = GaussianProcessRegression(inf)
start=time.time()
gp.train()
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
print "FITC inference took %.2f seconds" % (time.time()-start)
# exact GP
start=time.time()
inf_exact = ExactInferenceMethod(GaussianKernel(10, best_width), feats_train, ZeroMean(), labels_train, \
GaussianLikelihood(best_sigma))
inf_exact.set_scale(best_scale)
gp_exact = GaussianProcessRegression(inf_exact)
gp_exact.train()
means_exact = gp_exact.get_mean_vector(feats_test)
variances_exact = gp_exact.get_variance_vector(feats_test)
print "Exact inference took %.2f seconds" % (time.time()-start)
# comparison plot FITC and exact inference, plot 95% confidence of both predictive distributions
plt.figure(figsize=(18,5))
plt.plot(X_test, y_test, color="black", linewidth=3)
plt.plot(X_test, means, 'r--', linewidth=3)
plt.plot(X_test, means_exact, 'b--', linewidth=3)
plt.plot(X_train, y_train, 'ro')
plt.plot(X_inducing, np.zeros(len(X_inducing)), 'g*', markersize=15)
# tube plot of 95% confidence
error=1.96*np.sqrt(variances)
plt.plot(X_test,means-error, color='red', alpha=0.3, linewidth=3)
plt.fill_between(X_test,means-error,means+error,color='red', alpha=0.3)
error_exact=1.96*np.sqrt(variances_exact)
plt.plot(X_test,means_exact-error_exact, color='blue', alpha=0.3, linewidth=3)
plt.fill_between(X_test,means_exact-error_exact,means_exact+error_exact,color='blue', alpha=0.3)
# plot upper confidence lines later due to legend
plt.plot(X_test,means+error, color='red', alpha=0.3, linewidth=3)
plt.plot(X_test,means_exact+error_exact, color='blue', alpha=0.3, linewidth=3)
plt.legend(["True", "FITC prediction", "Exact prediction", "Data", "Inducing points", "95% FITC", "95% Exact"])
_=plt.title("Comparison FITC and Exact Regression")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some Formal Background (Skip if you just want code examples)
Step2: Apart from its appealing form, this curve has the nice property of giving rise to analytical solutions to the required integrals. Recall these are given by
Step3: First, we compute the kernel matrix $\mathbf{C}_\boldsymbol{\theta}$ using the <a href="http
Step4: This matrix, as any kernel or covariance matrix, is positive semi-definite and symmetric. It can be viewed as a similarity matrix. Here, elements on the diagonal (corresponding to $\mathbf{x}=\mathbf{x}'$) have largest similarity. For increasing kernel bandwidth $\tau$, more and more elements are similar. This matrix fully specifies a distribution over functions $f(\mathbf{x})$
Step5: Note how the functions are exactly evaluated at the training covariates $\mathbf{x}_i$ which are randomly distributed on the x-axis. Even though these points do not visualise the full functions (we can only evaluate them at a finite number of points, but we connected the points with lines to make it clearer), this reveals that larger values of the kernel bandwidth $\tau$ lead to smoother latent Gaussian functions.
Step6: Note how the above function samples are constrained to go through our training data labels (up to observation noise), as much as their smoothness allows them. In fact, these are already samples from the predictive distribution, which gives a probability for a label $\mathbf{y}^*$ for any covariate $\mathbf{x}^*$. These distributions are Gaussian (!), nice to look at and extremely useful to understand the GP's underlying model. Let's plot them. We finally use the Shogun class <a href="http
Step7: The question now is
Step8: Now we can output the best parameters and plot the predictive distribution for those.
Step9: Now the predictive distribution is very close to the true data generating process.
Step10: Note how the logit function maps any input value to $[0,1]$ in a continuous way. The other plot above is for another classification likelihood is implemented in Shogun is the Gaussian CDF function
Step11: We will now pass this data into Shogun representation, and use the standard Gaussian kernel (or squared exponential covariance function (<a href="http
Step12: This is already quite nice. The nice thing about Gaussian Processes now is that they are Bayesian, which means that have a full predictive distribution, i.e., we can plot the probability for a point belonging to a class. These can be obtained via the interface of <a href="http
Step13: If you are interested in the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$, for example for the sake of comparing different model parameters $\boldsymbol{\theta}$ (more in model-selection later), it is very easy to compute it via the interface of <a href="http
Step14: This plot clearly shows that there is one kernel width (aka hyper-parameter element $\theta$) for which the marginal likelihood is maximised. If one was interested in the single best parameter, the above concept can be used to learn the best hyper-parameters of the GP. In fact, this is possible in a very efficient way since we have a lot of information about the geometry of the marginal likelihood function, as for example its gradient
Step15: In the above plots, it is quite clear that the maximum of the marginal likelihood corresponds to the best single setting of the parameters. To give some more intuition
Step16: This now gives us a trained Gaussian Process with the best hyper-parameters. In the above setting, this is the s <a href="http
Step17: Note how nicely this predictive distribution matches the data generating distribution. Also note that the best kernel bandwidth is different from the one we saw in the above plot. This is caused by the different kernel scaling that was also learned automatically. The kernel scaling, roughly speaking, corresponds to the sharpness of the changes in the surface of the predictive likelihood. Since we have two hyper-parameters, we can plot the surface of the marginal likelihood as a function of both of them. This is sometimes interesting, for example when this surface has multiple maxima (corresponding to multiple "best" parameter settings), and thus might be useful for analysis. It is expensive however.
Step18: Our found maximum nicely matches the result of the "grid-search". The take home message for this is
|
12,078
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.metrics import confusion_matrix
y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
confusion_matrix(y_true, y_pred)
y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"])
from sklearn.metrics import classification_report
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))
y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
print(classification_report(y_true, y_pred, target_names=["ant", "bird", "cat"]))
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_features=1, n_redundant=0, n_informative=1, n_clusters_per_class=1, random_state=4)
model = LogisticRegression().fit(X, y)
print(confusion_matrix(y, model.predict(X)))
print(classification_report(y, model.predict(X)))
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y, model.decision_function(X))
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], 'k--', label="random guess")
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('Receiver operating characteristic example')
plt.show()
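Each point on the ROC curve corresponds to one decision threshold. As a sanity check, here is a minimal sketch (with hypothetical toy scores, not the data above) verifying one curve point against the confusion-matrix definitions of TPR and FPR:

```python
import numpy as np
from sklearn.metrics import roc_curve

# hypothetical toy labels and scores, for illustration only
y_toy = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

fpr_toy, tpr_toy, thr_toy = roc_curve(y_toy, scores)

# at the highest real threshold (0.8) only the last sample is predicted
# positive: TP=1, FN=1, FP=0, TN=2  ->  TPR = 0.5, FPR = 0.0
print(fpr_toy, tpr_toy, thr_toy)
```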
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
iris = load_iris()
model = LogisticRegression().fit(iris.data, iris.target)
from sklearn.metrics import roc_curve
fpr0, tpr0, thresholds0 = roc_curve(iris.target, model.decision_function(iris.data)[:, 0], pos_label=0)
fpr1, tpr1, thresholds1 = roc_curve(iris.target, model.decision_function(iris.data)[:, 1], pos_label=1)
fpr2, tpr2, thresholds2 = roc_curve(iris.target, model.decision_function(iris.data)[:, 2], pos_label=2)
fpr0, tpr0, thresholds0
plt.plot(fpr0, tpr0, "r-", label="class 0 ")
plt.plot(fpr1, tpr1, "g-", label="class 1")
plt.plot(fpr2, tpr2, "b-", label="class 2")
plt.plot([0, 1], [0, 1], 'k--', label="random guess")
plt.xlim(-0.05, 1.0)
plt.ylim(0, 1.05)
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
print(confusion_matrix(iris.target, model.predict(iris.data)))
print(classification_report(iris.target, model.predict(iris.data)))
from sklearn.preprocessing import label_binarize
yb0 = label_binarize(iris.target, classes=[0, 1, 2])
yb1 = label_binarize(model.predict(iris.data), classes=[0, 1, 2])
print(yb0[:, 0].sum(), yb1[:, 0].sum())
plt.plot(yb0[:, 0], 'ro-', markersize=10, alpha=0.4, label="actual class 0")
plt.plot(yb1[:, 0], 'bs-', markersize=10, alpha=0.4, label="predicted class 0")
plt.legend()
plt.xlim(0, len(iris.target)-1);
plt.ylim(-0.1, 1.1);
print(yb0[:, 1].sum(), yb1[:, 1].sum())
plt.plot(yb0[:, 1], 'ro-', markersize=10, alpha=0.6, label="actual class 1")
plt.plot(yb1[:, 1], 'bs-', markersize=10, alpha=0.6, label="predicted class 1")
plt.legend()
plt.xlim(45, 145);
plt.ylim(-0.1, 1.1);
from sklearn.metrics import auc
auc(fpr0, tpr0), auc(fpr1, tpr1), auc(fpr2, tpr2)
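sklearn's auc simply applies the trapezoidal rule to the curve points; a short sketch (with hypothetical ROC points) confirming the equivalence with np.trapz:

```python
import numpy as np
from sklearn.metrics import auc

# hypothetical monotone ROC points, for illustration only
fpr_pts = np.array([0.0, 0.0, 0.5, 1.0])
tpr_pts = np.array([0.0, 0.5, 1.0, 1.0])

area = auc(fpr_pts, tpr_pts)             # sklearn: trapezoidal rule
area_trapz = np.trapz(tpr_pts, fpr_pts)  # same rule, applied directly
print(area)
```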
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_features=1, n_redundant=0, n_informative=1, n_clusters_per_class=1, weights=[0.9, 0.1], random_state=4)
model = LogisticRegression().fit(X, y)
from sklearn.metrics import precision_recall_curve
pre, rec, thresholds = precision_recall_curve(y, model.decision_function(X))
plt.plot(rec, pre)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curve')
plt.show()
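As with ROC, each (recall, precision) pair comes from one threshold. A minimal standalone sketch (hypothetical toy labels and scores, not the imbalanced data above):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# hypothetical toy data, for illustration only
y_toy = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

prec_toy, rec_toy, thr_toy = precision_recall_curve(y_toy, scores)

# at the lowest threshold (0.35) three samples are predicted positive:
# TP=2, FP=1  ->  precision = 2/3, recall = 2/2 = 1.0
print(prec_toy, rec_toy, thr_toy)
```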
y.sum(), len(y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 이진 분류 결과표 Binary Confusion Matrix
Step2: ROC 커브
Step3: Multi-Class 예제
Step4: AUC (Area Under the Curve)
Step5: Precision-Recall 커브
|
12,079
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_constraint('semidetached', 'primary')
b['requiv@constraint@primary']
b['requiv_max@constraint@primary']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Semi-Detached Systems
Step3: We can view the constraint on requiv by accessing the constraint
Step4: Now whenever any of the relevant parameters (q, ecc, syncpar, sma) are changed, the value of requiv will change to match the critical value as defined by requiv_max.
|
12,080
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../../metal')
import metal
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pickle
with open("data/basics_tutorial.pkl", 'rb') as f:
X, Y, L, D = pickle.load(f)
from metal.utils import split_data
Ls = split_data(L, splits=[0.8, 0.1, 0.1], seed=123)
import os
from metal.contrib.visualization.analysis import view_label_matrix
# This if statement and others like it are for our continuous integration tests; you can ignore them.
if 'CI' not in os.environ:
view_label_matrix(Ls[0])
from metal.contrib.visualization.analysis import view_overlaps
if 'CI' not in os.environ:
view_overlaps(Ls[0])
from metal.contrib.visualization.analysis import view_conflicts
if 'CI' not in os.environ:
view_conflicts(Ls[0], normalize=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We reload the synthetic data from the basics tutorial.
Step2: In this case, there isn't a lot to see, since these label matrices were created synthetically. However, these visualizations can often help you to discover unexpected phenomena in your label matrices, such as unexpectedly high or low conflict rates between certain labeling functions, or portions of your dataset with surprisingly low or high coverage.
|
12,081
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
import pandas
df = pandas.read_csv('muct76.csv', header=0, usecols=np.arange(2,154), dtype=float)
df.head()
X = df.iloc[:, ::2].values
Y = df.iloc[:, 1::2].values
print(X.shape, Y.shape)
plt.plot(X[0,:], Y[0,:])
plt.show()
xmeans = np.mean(X, axis=1)
ymeans = np.mean(Y, axis=1)
xmeans.shape
## mean-centering each image
X = (X.T - xmeans).T
Y = (Y.T - ymeans).T
for i, (fx, fy) in enumerate(zip(X, Y)):
interpupils_dist = np.abs(fx[31] - fx[36])
X[i,:] = X[i,:] * 100 / interpupils_dist
Y[i,:] = Y[i,:] * 100 / interpupils_dist
plt.figure(figsize=(10,10))
plt.subplot(2, 2, 1)
plt.plot(X[0,:], Y[0,:])
plt.subplot(2, 2, 2)
plt.plot(X[100,:], Y[100,:])
plt.subplot(2, 2, 3)
plt.plot(X[200,:], Y[200,:])
plt.subplot(2, 2, 4)
plt.plot(X[300,:], Y[300,:])
plt.show()
mean_face_x = np.mean(X, axis=0)
mean_face_y = np.mean(Y, axis=0)
plt.plot(mean_face_x, mean_face_y)
plt.show()
D = np.concatenate((X, Y), axis=1)
D.shape
cov_mat = np.cov(D.T)
cov_mat.shape
eig_values, eig_vectors = np.linalg.eig(cov_mat)
print(eig_values.shape, eig_vectors.shape)
plt.plot(eig_values)
num_eigs = 20
Phi_matrix = eig_vectors[:,:num_eigs]
Phi_matrix.shape
def construct_newface(meanface, Phi_matrix, b):
face = meanface + np.dot(Phi_matrix, b )
return (face[:76], face[76:])
meanface = np.concatenate((mean_face_x, mean_face_y))
plt.figure(figsize=(10,10))
for i in range(4):
plt.subplot(2, 2, i+1)
b = np.zeros(shape=num_eigs)
for j in (-0.025, -0.02, -0.015, -0.01, 0, 0.01, 0.015, 0.02, 0.025):
b[i] = j*eig_values[i]
xnew, ynew = construct_newface(meanface, Phi_matrix, b=b)
plt.plot(xnew, ynew)
plt.show()
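The generation rule used above, new_face = mean_face + Φ·b, can be sketched in isolation on synthetic data (all names and shapes here are illustrative, not taken from the MUCT code):

```python
import numpy as np

rng = np.random.RandomState(0)
shapes = rng.randn(50, 6)            # 50 synthetic "shapes", 6 coordinates each

mean_shape = shapes.mean(axis=0)
eig_values, eig_vectors = np.linalg.eigh(np.cov(shapes.T))  # cov is symmetric
order = np.argsort(eig_values)[::-1]  # sort modes by decreasing variance
Phi = eig_vectors[:, order[:2]]       # keep the 2 strongest modes

b = np.array([0.5, -0.3])             # distortion coefficients
new_shape = mean_shape + Phi @ b      # x = x_bar + Phi * b
```

Setting b to zero recovers the mean shape exactly, which is a quick way to validate the reconstruction.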
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After downloading the dataset, unzip the landmarks, and use a file named muct76.csv.
Step2: The df contains both $x$ and $y$ coordinates of landmark points. There are 3755 images in the dataset, but the reverse (mirrored) images are also considered here, so in total we have 7510 rows. Next, let us split the $x$, $y$ coordinates into separate numpy arrays for convenience.
Step3: Now, let us visualize it to get more familiar with the data. We can plot the landmarks for one image using matplotlib. Landmarks identify eyebrows, eyes, face boundary, nose, upper and lower lips. These facial points are extremely useful for a lot of computer vision applications.
Step4: Step 1
Step5: Scaling faces
Step6: Compute the mean face
Step7: Compute the eigen faces
Step8: Eigenvectors and eigenvalues of the covariance matrix
Step9: Finally, everything is ready to generate new faces. Applying a distortion along those eigen faces generates a new face according to the following formula
Step10: For clarity, we generate distortion vectors of all zero values except one non-zero element. That non-zero element is the corresponding eigenvalue multiplied by a small factor.
|
12,082
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import scipy as sp
import scipy.stats as stats
import matplotlib.pyplot as plt
import cPickle as pickle
import os
%matplotlib inline
covSim = np.array([[1.0, .8, .2, .2],
                   [.8, 1.0, .3, .3],
                   [.2, .3, 1.0, .8],
                   [.2, .3, .8, 1.0]])  # symmetric, as a covariance matrix must be
np.random.seed(111)
datSim = np.random.multivariate_normal([2, 3, 8, 9], covSim, size=1000)
dfSim = pd.DataFrame(data=datSim, columns=['sample1', 'sample2', 'sample3', 'sample4'])
# Save for comparing in sas
dfSim.to_csv('/home/jfear/tmp/dfSim.csv', index=False)
dfSim.head()
# Calculate the covaranice matrix from the data
covHat = dfSim.cov()
covHat
# Get the inverse of the covarance matrix
covHatInv = np.linalg.inv(covHat)
covHatInv
# Calculate the column means
colMean = dfSim.mean(axis=0)
colMean
# Subtract the mean from each value
dfSimCenter = (dfSim - colMean).T
dfSimCenter.head()
# Calculate the mahalanobis distance
MD = np.sqrt(np.dot(np.dot(dfSimCenter.T, covHatInv), dfSimCenter))
MDval = np.diag(MD)
plt.scatter(x=range(len(MDval)), y=MDval)
plt.axhline(np.percentile(MDval, 95), ls='--', lw=2)
plt.show()
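The hand-rolled distance above can be cross-checked against SciPy's reference implementation; a minimal sketch on made-up data (variable names are illustrative):

```python
import numpy as np
from scipy.spatial import distance

rng = np.random.RandomState(0)
data = rng.randn(200, 3)              # synthetic observations, 3 variables

cov_inv = np.linalg.inv(np.cov(data.T))
center = data.mean(axis=0)

# manual Mahalanobis distance of the first point, as in the cell above
d = data[0] - center
md_manual = np.sqrt(d @ cov_inv @ d)

# scipy's reference implementation of the same quantity
md_scipy = distance.mahalanobis(data[0], center, cov_inv)
```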
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generation of a simulated data set
Step2: Calculate Mahalanobis distance
|
12,083
|
<ASSISTANT_TASK:>
Python Code:
# Load the required libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Load the student dataset
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
# TODO: Calculate the number of students
n_students = None
# TODO: Calculate the number of features
n_features = None
# TODO: Calculate the number of students who passed
n_passed = None
# TODO: Calculate the number of students who failed
n_failed = None
# TODO: Calculate the graduation rate
grad_rate = None
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
# Extract the feature columns
feature_cols = list(student_data.columns[:-1])
# Extract the target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (i.e. X_all and y_all)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print "\nFeature values:"
print X_all.head()
def preprocess_features(X):
''' Preprocess the student data: convert non-numeric binary features into
binary values (0 or 1), and convert categorical variables into dummy variables
'''
# Initialize a new DataFrame for the output
output = pd.DataFrame(index = X.index)
# Iterate over every feature column of the data
for col, col_data in X.iteritems():
# If the data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If the data type is categorical, convert it into dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the converted columns
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
# TODO: Import any additional functionality you may need here
# TODO: Set the number of training points
num_train = None
# TODO: Set the number of testing points
num_test = X_all.shape[0] - num_train
# TODO: Shuffle and split the dataset into the training and testing subsets defined above
X_train = None
X_test = None
y_train = None
y_test = None
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
def train_classifier(clf, X_train, y_train):
''' Train a classifier using the training set '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Make predictions using a trained classifier and output the F1 score '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return the results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifier, and output the F1 score '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the prediction results for both training and testing
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
# TODO: Import three supervised learning models from sklearn
# from sklearn import model_A
# from sklearn import model_B
# from sklearn import model_C
# TODO: Initialize the three models
clf_A = None
clf_B = None
clf_C = None
# TODO: Set up the training set sizes
X_train_100 = None
y_train_100 = None
X_train_200 = None
y_train_200 = None
X_train_300 = None
y_train_300 = None
# TODO: Run 'train_predict' for each classifier and each training set size
# train_predict(clf, X_train, y_train, X_test, y_test)
# TODO: Import 'GridSearchCV' and 'make_scorer'
# TODO: Create the list of parameters you wish to tune
parameters = None
# TODO: Initialize the classifier
clf = None
# TODO: Create an f1 scoring function using 'make_scorer'
f1_scorer = None
# TODO: Perform grid search on the classifier using f1_scorer as the scoring function
grid_obj = None
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj = None
# Get the best estimator
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Data Preparation
Step3: Preprocessing the Feature Columns
Step4: Implementation
Step5: Training and Evaluating Models
Step6: Exercise
Step7: Results Tables
|
12,084
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib import pyplot
pyplot.rc("figure", figsize=(10, 10))
from fuzzylogic.classes import Domain
from fuzzylogic.functions import R, S, alpha
T = Domain("test", 0, 30, res=0.1)
T.up = R(1,10)
T.up.plot()
T.down = S(20, 29)
T.down.plot()
T.polygon = T.up & T.down
T.polygon.plot()
T.inv_polygon = ~T.polygon
T.inv_polygon.plot()
from fuzzylogic.classes import Domain, Set
from fuzzylogic.functions import (sigmoid, gauss, trapezoid,
triangular_sigmoid, rectangular)
T = Domain("test", 0, 70, res=0.1)
T.sigmoid = sigmoid(1,1,20)
T.sigmoid.plot()
T.gauss = gauss(10, 0.01, c_m=0.9)
T.gauss.plot()
T.trapezoid = trapezoid(25, 30, 35, 40, c_m=0.9)
T.trapezoid.plot()
T.triangular_sigmoid = triangular_sigmoid(40, 70, c=55)
T.triangular_sigmoid.plot()
from fuzzylogic.classes import Domain
from fuzzylogic.functions import alpha, triangular
from fuzzylogic.hedges import plus, minus, very
numbers = Domain("numbers", 0, 20, res=0.1)
close_to_10 = alpha(floor=0.2, ceiling=0.8, func=triangular(0, 20))
close_to_5 = triangular(1, 10)
numbers.foo = minus(close_to_5)
numbers.bar = very(close_to_10)
numbers.bar.plot()
numbers.foo.plot()
numbers.baz = numbers.foo + numbers.bar
numbers.baz.plot()
numbers(8)
from fuzzylogic.classes import Domain
from fuzzylogic.functions import bounded_sigmoid
T = Domain("temperature", 0, 100, res=0.1)
T.cold = bounded_sigmoid(5,15, inverse=True)
T.cold.plot()
T.hot = bounded_sigmoid(20, 40)
T.hot.plot()
T.warm = ~T.hot & ~T.cold
T.warm.plot()
T(10)
from fuzzylogic.classes import Domain
from fuzzylogic.functions import alpha, trapezoid
N = Domain("numbers", 0, 6, res=0.01)
N.two_or_so = alpha(floor=0, ceiling=0.7, func=trapezoid(0, 1.9, 2.1, 4))
N.two_or_so.plot()
N.x = N.two_or_so.normalized()
N.x.plot()
from fuzzylogic.classes import Domain, Set, Rule
from fuzzylogic.hedges import very
from fuzzylogic.functions import R, S
temp = Domain("Temperature", -80, 80)
hum = Domain("Humidity", 0, 100)
motor = Domain("Speed", 0, 2000)
temp.cold = S(0,20)
temp.hot = R(15,30)
hum.dry = S(20,50)
hum.wet = R(40,70)
motor.fast = R(1000,1500)
motor.slow = ~motor.fast
R1 = Rule({(temp.hot, hum.dry): motor.fast})
R2 = Rule({(temp.cold, hum.dry): very(motor.slow)})
R3 = Rule({(temp.hot, hum.wet): very(motor.fast)})
R4 = Rule({(temp.cold, hum.wet): motor.slow})
rules = Rule({(temp.hot, hum.dry): motor.fast,
(temp.cold, hum.dry): very(motor.slow),
(temp.hot, hum.wet): very(motor.fast),
(temp.cold, hum.wet): motor.slow,
})
rules == R1 | R2 | R3 | R4 == sum([R1, R2, R3, R4])
values = {hum: 45, temp: 22}
print(R1(values), R2(values), R3(values), R4(values), "=>", rules(values))
table = """
            hum.dry             hum.wet
temp.cold   very(motor.slow)    motor.slow
temp.hot    motor.fast          very(motor.fast)
"""
from fuzzylogic.classes import rule_from_table
table_rules = rule_from_table(table, globals())
assert table_rules == rules
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: let's show off a few interesting functions ;)
Step2: Domains
Step3: Many times you end up with sets that never hit 1 like with sigmoids, triangular funcs that hit the border of the domain or after operations with other sets. Then it is often needed to normalize (define max(set) == 1). Note that Set.normalized() returns a set that (unlike other set ops) is already bound to the domain and given the name "normalized_{set.name}". This can't be circumvented because normalizing is only defined on a given domain.
Step4: Inference
Step6: There are a few things to note in this example. Firstly, make sure to pass in the values as a single dictionary at the end, not as parameters.
|
12,085
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# Gradient Descent
# Example:
#   f(x)  = x**4 - 3*x**3 + 2
#   f'(x) = 4*x**3 - 9*x**2
# We want f'(x) = 0, i.e. zero slope / a minimum.
# Initial guess: x = 4   ->  f'(4)  = 112
# Second guess:  x = -1  ->  f'(-1) = -13
# => f'(4) > 0 => decrease x | f'(-1) < 0 => increase x
# x_new = x_old - f'(x)? Not ideal, as the steps would be too steep.
# x_new = x_old - alpha * f'(x) => but what value should alpha take?
x_old = 0
x_new = 4
precision = 0.00001
def f_derivative(x):
return 4*x**3 - 9*x**2
while abs(x_new - x_old) > precision:
x_old = x_new
x_new = x_old - alpha * f_derivative(x_old)
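Since f'(x) = 4x³ − 9x² = x²(4x − 9), the non-trivial stationary point is x = 9/4 = 2.25, so the loop should converge there. A self-contained sketch of the same update rule with a convergence check (function names are illustrative):

```python
def f_derivative(x):
    return 4 * x**3 - 9 * x**2

def gradient_descent(x_start, alpha=0.01, precision=1e-5, max_iter=100000):
    # repeatedly step downhill until updates become negligibly small
    x_new = x_start
    for _ in range(max_iter):
        x_old = x_new
        x_new = x_old - alpha * f_derivative(x_old)
        if abs(x_new - x_old) < precision:
            break
    return x_new

x_min = gradient_descent(4.0)   # starting from the initial guess x = 4
```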
# sigmoid function
def sigmoid(x, deriv=False):
if (deriv==True):
return x*(1-x)
return 1/(1+np.exp(-x))
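Note that the deriv=True branch assumes its argument is already a sigmoid output, using the identity σ'(z) = σ(z)(1 − σ(z)). A quick standalone numerical check of that identity (illustrative sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = 0.3
s = sigmoid(z)
h = 1e-6
# central difference approximation of the derivative at z
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
```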
# input dataset
X = np.array([
[0, 0, 1],
[0, 1, 1],
[1, 0, 1],
[1, 1, 1]
])
# output dataset
y = np.array([[0, 0, 1, 1]]).T
# fix the seed
np.random.seed(1)
weights0 = 2*np.random.random((3,1)) - 1
weights0
for i in range(10000):
# forward propagation
l0 = X
l1 = sigmoid(np.dot(l0, weights0))
# how much did we miss?
l1_error = y - l1
# multiply how much we missed by the slope of the sigmoid at the values in l1
l1_delta = l1_error * sigmoid(l1, True)
# update weights
weights0 += np.dot(l0.T, l1_delta)
l1
def sigmoid(x):
return 1.0/(1.0 + np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x)*(1.0-sigmoid(x))
def tanh(x):
return np.tanh(x)
def tanh_prime(x):
return 1.0 - x**2
class NeuralNetwork(object):
# fix the seed
np.random.seed(1)
def __init__(self, layers, learning_rate, epochs):
self.activation = sigmoid
self.activation_prime = sigmoid_prime
self.weights = []
# input layers -> hidden layers
for i in range(1, len(layers) - 1):
# + 1 due to bias
r = 2 * np.random.random((layers[i-1] + 1, layers[i] + 1)) -1
self.weights.append(r)
# hidden layers -> output layer
r = 2 * np.random.random((layers[i] + 1, layers[i+1])) - 1
self.weights.append(r)
self.learning_rate = learning_rate
self.epochs = epochs
def train(self, X, y):
# Add bias to X
ones = np.atleast_2d(np.ones(X.shape[0]))
X = np.concatenate((ones.T, X), axis=1)
for epoch in range(self.epochs):
# forward propagation
i = np.random.randint(X.shape[0])
a = [X[i]] # mini-batch
for l in range(len(self.weights)):
z = np.dot(a[l], self.weights[l])
activation = self.activation(z)
a.append(activation) # (row, a_hidden_layer, a_output_layer)
# how much did we miss??
error = y[i] - a[-1]
deltas = [error * self.activation_prime(a[-1])]
for l in range(len(a) - 2, 0, -1):
deltas.append(deltas[-1].dot(self.weights[l].T) * self.activation_prime(a[l]))
# [level3(output)->level2(hidden)] => [level2(hidden)->level3(output)]
deltas.reverse()
# backpropagation => update weights
for i in range(len(self.weights)):
layer = np.atleast_2d(a[i])
delta = np.atleast_2d(deltas[i])
self.weights[i] += self.learning_rate * layer.T.dot(delta)
if epoch % 10000 == 0: print("epochs: {}".format(epoch))
def predict(self, x):
a = np.concatenate((np.ones(1).T, np.array(x)), axis=0)
for l in range(0, len(self.weights)):
a = self.activation(np.dot(a, self.weights[l]))
return a
# Convergence depends on the random initialization of the weights
learning_rate = 0.2
epochs = 100000
nn = NeuralNetwork([2, 10, 1], learning_rate, epochs)
# XOR Problem
X = np.array([[0, 0],
[1, 0],
[0, 1],
[1, 1]])
y = np.array([0, 1, 1, 0])
nn.train(X, y)
for i in X:
print(i, nn.predict(i))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Gradient Descent
Step3: Example for Backprop
Step5: Example for Backprop in a simple Neural Network
|
12,086
|
<ASSISTANT_TASK:>
Python Code:
#-- SET ME Tutorial should work with most binary black hole events
#-- Default is no event selection; you MUST select one to proceed.
eventname = ''
eventname = 'GW150914'
#eventname = 'GW151226'
#eventname = 'LVT151012'
#eventname = 'GW170104'
# want plots?
make_plots = 1
plottype = "png"
#plottype = "pdf"
# Standard python numerical analysis imports:
import numpy as np
from scipy import signal
from scipy.interpolate import interp1d
from scipy.signal import butter, filtfilt, iirdesign, zpk2tf, freqz
import h5py
import json
# the IPython magic below must be commented out in the .py file, since it doesn't work there.
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
# LIGO-specific readligo.py
import readligo as rl
# you might get a matplotlib warning here; you can ignore it.
# Read the event properties from a local json file
fnjson = "../data/BBH_events_v3.json"
try:
events = json.load(open(fnjson,"r"))
except IOError:
print("Cannot find resource file "+fnjson)
print("You can download it from https://losc.ligo.org/s/events/"+fnjson)
print("Quitting.")
quit()
# did the user select the eventname ?
try:
events[eventname]
except:
print('You must select an eventname that is in '+fnjson+'! Quitting.')
quit()
# Extract the parameters for the desired event:
event = events[eventname]
fn_H1 = "../data/" + event['fn_H1'] # File name for H1 data
fn_L1 = "../data/" + event['fn_L1'] # File name for L1 data
fn_template = "../data/" + event['fn_template'] # File name for template waveform
fs = event['fs'] # Set sampling rate
tevent = event['tevent'] # Set approximate event GPS time
fband = event['fband'] # frequency band for bandpassing signal
print("Reading in parameters for event " + event["name"])
print(event)
#----------------------------------------------------------------
# Load LIGO data from a single file.
# FIRST, define the filenames fn_H1 and fn_L1, above.
#----------------------------------------------------------------
try:
# read in data from H1 and L1, if available:
strain_H1, time_H1, chan_dict_H1 = rl.loaddata(fn_H1, 'H1')
strain_L1, time_L1, chan_dict_L1 = rl.loaddata(fn_L1, 'L1')
except:
print("Cannot find data files!")
print("You can download them from https://losc.ligo.org/s/events/"+eventname)
print("Quitting.")
quit()
# both H1 and L1 will have the same time vector, so:
time = time_H1
# the time sample interval (uniformly sampled!)
dt = time[1] - time[0]
# Let's look at the data and print out some stuff:
print('time_H1: len, min, mean, max = ', \
len(time_H1), time_H1.min(), time_H1.mean(), time_H1.max() )
print('strain_H1: len, min, mean, max = ', \
len(strain_H1), strain_H1.min(),strain_H1.mean(),strain_H1.max())
print( 'strain_L1: len, min, mean, max = ', \
len(strain_L1), strain_L1.min(),strain_L1.mean(),strain_L1.max())
#What's in chan_dict? (See also https://losc.ligo.org/tutorials/)
bits = chan_dict_H1['DATA']
print("For H1, {0} out of {1} seconds contain usable DATA".format(bits.sum(), len(bits)))
bits = chan_dict_L1['DATA']
print("For L1, {0} out of {1} seconds contain usable DATA".format(bits.sum(), len(bits)))
# plot +- deltat seconds around the event:
# index into the strain time series for this time interval:
deltat = 5
indxt = np.where((time >= tevent-deltat) & (time < tevent+deltat))
print(tevent)
if make_plots:
plt.figure()
plt.plot(time[indxt]-tevent,strain_H1[indxt],'r',label='H1 strain')
plt.plot(time[indxt]-tevent,strain_L1[indxt],'g',label='L1 strain')
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('strain')
plt.legend(loc='lower right')
plt.title('Advanced LIGO strain data near '+eventname)
make_psds = 1
if make_psds:
# number of sample for the fast fourier transform:
NFFT = 4*fs
Pxx_H1, freqs = mlab.psd(strain_H1, Fs = fs, NFFT = NFFT)
Pxx_L1, freqs = mlab.psd(strain_L1, Fs = fs, NFFT = NFFT)
# We will use interpolations of the ASDs computed above for whitening:
psd_H1 = interp1d(freqs, Pxx_H1)
psd_L1 = interp1d(freqs, Pxx_L1)
# Here is an approximate, smoothed PSD for H1 during O1, with no lines. We'll use it later.
Pxx = (1.e-22*(18./(0.1+freqs))**2)**2+0.7e-23**2+((freqs/2000.)*4.e-23)**2
psd_smooth = interp1d(freqs, Pxx)
if make_plots:
# plot the ASDs, with the template overlaid:
f_min = 20.
f_max = 2000.
plt.figure(figsize=(10,8))
plt.loglog(freqs, np.sqrt(Pxx_L1),'g',label='L1 strain')
plt.loglog(freqs, np.sqrt(Pxx_H1),'r',label='H1 strain')
plt.loglog(freqs, np.sqrt(Pxx),'k',label='H1 strain, O1 smooth model')
plt.axis([f_min, f_max, 1e-24, 1e-19])
plt.grid('on')
plt.ylabel('ASD (strain/rtHz)')
plt.xlabel('Freq (Hz)')
plt.legend(loc='upper center')
plt.title('Advanced LIGO strain data near '+eventname)
BNS_range = 1
if BNS_range:
#-- compute the binary neutron star (BNS) detectability range
#-- choose a detector noise power spectrum:
f = freqs.copy()
# get frequency step size
df = f[2]-f[1]
#-- constants
# speed of light:
clight = 2.99792458e8 # m/s
# Newton's gravitational constant
G = 6.67259e-11 # m^3/kg/s^2
# one parsec, popular unit of astronomical distance (around 3.26 light years)
parsec = 3.08568025e16 # m
# solar mass
MSol = 1.989e30 # kg
# solar mass in seconds (isn't relativity fun?):
tSol = MSol*G/np.power(clight,3) # s
# Single-detector SNR for detection above noise background:
SNRdet = 8.
# conversion from maximum range (horizon) to average range:
Favg = 2.2648
# mass of a typical neutron star, in solar masses:
mNS = 1.4
# Masses in solar masses
m1 = m2 = mNS
mtot = m1+m2 # the total mass
eta = (m1*m2)/mtot**2 # the symmetric mass ratio
mchirp = mtot*eta**(3./5.) # the chirp mass (FINDCHIRP, following Eqn 3.1b)
# distance to a fiducial BNS source:
dist = 1.0 # in Mpc
Dist = dist * 1.0e6 * parsec /clight # from Mpc to seconds
# We integrate the signal up to the frequency of the "Innermost stable circular orbit (ISCO)"
R_isco = 6. # Orbital separation at ISCO, in geometric units. 6M for PN ISCO; 2.8M for EOB
# frequency at ISCO (end the chirp here; the merger and ringdown follow)
f_isco = 1./(np.power(R_isco,1.5)*np.pi*tSol*mtot)
# minimum frequency (below which, detector noise is too high to register any signal):
f_min = 20. # Hz
# select the range of frequencies between f_min and fisco
fr = np.nonzero(np.logical_and(f > f_min , f < f_isco))
# get the frequency and spectrum in that range:
ffr = f[fr]
# In stationary phase approx, this is htilde(f):
# See FINDCHIRP Eqns 3.4, or 8.4-8.5
htilde = (2.*tSol/Dist)*np.power(mchirp,5./6.)*np.sqrt(5./96./np.pi)*(np.pi*tSol)
htilde *= np.power(np.pi*tSol*ffr,-7./6.)
htilda2 = htilde**2
# loop over the detectors
dets = ['H1', 'L1']
for det in dets:
        if det == 'L1': sspec = Pxx_L1.copy()
else: sspec = Pxx_H1.copy()
sspecfr = sspec[fr]
# compute "inspiral horizon distance" for optimally oriented binary; FINDCHIRP Eqn D2:
D_BNS = np.sqrt(4.*np.sum(htilda2/sspecfr)*df)/SNRdet
# and the "inspiral range", averaged over source direction and orientation:
R_BNS = D_BNS/Favg
print(det+' BNS inspiral horizon = {0:.1f} Mpc, BNS inspiral range = {1:.1f} Mpc'.format(D_BNS,R_BNS))
# function to whiten data
def whiten(strain, interp_psd, dt):
Nt = len(strain)
freqs = np.fft.rfftfreq(Nt, dt)
    freqs1 = np.linspace(0, 2048., Nt//2 + 1)
# whitening: transform to freq domain, divide by asd, then transform back,
# taking care to get normalization right.
hf = np.fft.rfft(strain)
norm = 1./np.sqrt(1./(dt*2))
white_hf = hf / np.sqrt(interp_psd(freqs)) * norm
white_ht = np.fft.irfft(white_hf, n=Nt)
return white_ht
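As a quick self-contained sanity check of this whitening step (my own sketch, not part of the original tutorial): dividing each frequency bin by the ASD should flatten noise that was synthesized with a known spectral shape.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T = 4096, 4
Nt = fs * T
freqs = np.fft.rfftfreq(Nt, 1.0 / fs)

# synthesize colored noise with a known 1/(1+f) amplitude spectrum (ASD)
asd = 1.0 / (1.0 + freqs)
colored = np.fft.irfft(asd * (rng.normal(size=freqs.size)
                              + 1j * rng.normal(size=freqs.size)), n=Nt)

# whiten exactly as in whiten() above: divide the rFFT by the ASD
white = np.fft.irfft(np.fft.rfft(colored) / asd, n=Nt)

# the whitened spectrum should be flat: band powers at low and high f agree
p = np.abs(np.fft.rfft(white)) ** 2
low, high = p[10:1000].mean(), p[5000:6000].mean()
print(low / high)  # ~1 when the spectrum has been flattened
```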
whiten_data = 1
if whiten_data:
# now whiten the data from H1 and L1, and the template (use H1 PSD):
strain_H1_whiten = whiten(strain_H1,psd_H1,dt)
strain_L1_whiten = whiten(strain_L1,psd_L1,dt)
# We need to suppress the high frequency noise (no signal!) with some bandpassing:
bb, ab = butter(4, [fband[0]*2./fs, fband[1]*2./fs], btype='band')
normalization = np.sqrt((fband[1]-fband[0])/(fs/2))
strain_H1_whitenbp = filtfilt(bb, ab, strain_H1_whiten) / normalization
strain_L1_whitenbp = filtfilt(bb, ab, strain_L1_whiten) / normalization
if make_plots:
# index into the strain time series for this time interval:
indxt = np.where((time >= tevent-deltat) & (time < tevent+deltat))
# pick a shorter FTT time interval, like 1/8 of a second:
NFFT = int(fs/8)
# and with a lot of overlap, to resolve short-time features:
NOVL = int(NFFT*15./16)
# and choose a window that minimizes "spectral leakage"
# (https://en.wikipedia.org/wiki/Spectral_leakage)
window = np.blackman(NFFT)
# the right colormap is all-important! See:
# http://matplotlib.org/examples/color/colormaps_reference.html
# viridis seems to be the best for our purposes, but it's new; if you don't have it, you can settle for ocean.
#spec_cmap='viridis'
spec_cmap='ocean'
# Plot the H1 spectrogram:
plt.figure(figsize=(10,6))
spec_H1, freqs, bins, im = plt.specgram(strain_H1[indxt], NFFT=NFFT, Fs=fs, window=window,
noverlap=NOVL, cmap=spec_cmap, xextent=[-deltat,deltat])
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('Frequency (Hz)')
plt.colorbar()
plt.axis([-deltat, deltat, 0, 2000])
plt.title('aLIGO H1 strain data near '+eventname)
# Plot the L1 spectrogram:
plt.figure(figsize=(10,6))
spec_H1, freqs, bins, im = plt.specgram(strain_L1[indxt], NFFT=NFFT, Fs=fs, window=window,
noverlap=NOVL, cmap=spec_cmap, xextent=[-deltat,deltat])
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('Frequency (Hz)')
plt.colorbar()
plt.axis([-deltat, deltat, 0, 2000])
plt.title('aLIGO L1 strain data near '+eventname)
if make_plots:
# plot the whitened data, zooming in on the signal region:
# pick a shorter FTT time interval, like 1/16 of a second:
NFFT = int(fs/16.0)
# and with a lot of overlap, to resolve short-time features:
NOVL = int(NFFT*15/16.0)
# choose a window that minimizes "spectral leakage"
# (https://en.wikipedia.org/wiki/Spectral_leakage)
window = np.blackman(NFFT)
# Plot the H1 whitened spectrogram around the signal
plt.figure(figsize=(10,6))
spec_H1, freqs, bins, im = plt.specgram(strain_H1_whiten[indxt], NFFT=NFFT, Fs=fs, window=window,
noverlap=NOVL, cmap=spec_cmap, xextent=[-deltat,deltat])
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('Frequency (Hz)')
plt.colorbar()
plt.axis([-0.5, 0.5, 0, 500])
plt.title('aLIGO H1 strain data near '+eventname)
# Plot the L1 whitened spectrogram around the signal
plt.figure(figsize=(10,6))
spec_H1, freqs, bins, im = plt.specgram(strain_L1_whiten[indxt], NFFT=NFFT, Fs=fs, window=window,
noverlap=NOVL, cmap=spec_cmap, xextent=[-deltat,deltat])
plt.xlabel('time (s) since '+str(tevent))
plt.ylabel('Frequency (Hz)')
plt.colorbar()
plt.axis([-0.5, 0.5, 0, 500])
plt.title('aLIGO L1 strain data near '+eventname)
# read in the template (plus and cross) and parameters for the theoretical waveform
try:
f_template = h5py.File(fn_template, "r")
except:
print("Cannot find template file!")
print("You can download it from https://losc.ligo.org/s/events/"+eventname+'/'+fn_template)
print("Quitting.")
quit()
# extract metadata from the template file:
template_p, template_c = f_template["template"][...]
t_m1 = f_template["/meta"].attrs['m1']
t_m2 = f_template["/meta"].attrs['m2']
t_a1 = f_template["/meta"].attrs['a1']
t_a2 = f_template["/meta"].attrs['a2']
t_approx = f_template["/meta"].attrs['approx']
f_template.close()
# the template extends to roughly 16s, zero-padded to the 32s data length. The merger will be roughly 16s in.
template_offset = 16.
# whiten the templates:
template_p_whiten = whiten(template_p,psd_H1,dt)
template_c_whiten = whiten(template_c,psd_H1,dt)
template_p_whitenbp = filtfilt(bb, ab, template_p_whiten) / normalization
template_c_whitenbp = filtfilt(bb, ab, template_c_whiten) / normalization
# Compute, print and plot some properties of the template:
# constants:
clight = 2.99792458e8 # m/s
G = 6.67259e-11 # m^3/kg/s^2
MSol = 1.989e30 # kg
# template parameters: masses in units of MSol:
t_mtot = t_m1+t_m2
# final BH mass is typically 95% of the total initial mass:
t_mfin = t_mtot*0.95
# Final BH radius, in km:
R_fin = 2*G*t_mfin*MSol/clight**2/1000.
# complex template:
template = (template_p + template_c*1.j)
ttime = time-time[0]-template_offset
# compute the instantaneous frequency of this chirp-like signal:
tphase = np.unwrap(np.angle(template))
fGW = np.gradient(tphase)*fs/(2.*np.pi)
# fix discontinuities at the very end:
# iffix = np.where(np.abs(np.gradient(fGW)) > 100.)[0]
iffix = np.where(np.abs(template) < np.abs(template).max()*0.001)[0]
fGW[iffix] = fGW[iffix[0]-1]
fGW[np.where(fGW < 1.)] = fGW[iffix[0]-1]
# compute v/c:
voverc = (G*t_mtot*MSol*np.pi*fGW/clight**3)**(1./3.)
# index where f_GW is in-band:
f_inband = fband[0]
iband = np.where(fGW > f_inband)[0][0]
# index at the peak of the waveform:
ipeak = np.argmax(np.abs(template))
# number of cycles between inband and peak:
Ncycles = (tphase[ipeak]-tphase[iband])/(2.*np.pi)
print('Properties of waveform template in {0}'.format(fn_template))
print("Waveform family = {0}".format(t_approx))
print("Masses = {0:.2f}, {1:.2f} Msun".format(t_m1,t_m2))
print('Mtot = {0:.2f} Msun, mfinal = {1:.2f} Msun '.format(t_mtot,t_mfin))
print("Spins = {0:.2f}, {1:.2f}".format(t_a1,t_a2))
print('Freq at inband, peak = {0:.2f}, {1:.2f} Hz'.format(fGW[iband],fGW[ipeak]))
print('Time at inband, peak = {0:.2f}, {1:.2f} s'.format(ttime[iband],ttime[ipeak]))
print('Duration (s) inband-peak = {0:.2f} s'.format(ttime[ipeak]-ttime[iband]))
print('N_cycles inband-peak = {0:.0f}'.format(Ncycles))
print('v/c at peak = {0:.2f}'.format(voverc[ipeak]))
print('Radius of final BH = {0:.0f} km'.format(R_fin))
if make_plots:
plt.figure(figsize=(10,16))
plt.subplot(4,1,1)
plt.plot(ttime,template_p)
plt.xlim([-template_offset,1.])
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('strain')
plt.title(eventname+' template at D_eff = 1 Mpc')
plt.subplot(4,1,2)
plt.plot(ttime,template_p)
plt.xlim([-1.1,0.1])
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('strain')
#plt.title(eventname+' template at D_eff = 1 Mpc')
plt.subplot(4,1,3)
plt.plot(ttime,fGW)
plt.xlim([-1.1,0.1])
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('f_GW')
#plt.title(eventname+' template f_GW')
plt.subplot(4,1,4)
plt.plot(ttime,voverc)
plt.xlim([-1.1,0.1])
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('v/c')
#plt.title(eventname+' template v/c')
# -- To calculate the PSD of the data, choose an overlap and a window (common to all detectors)
# that minimizes "spectral leakage" https://en.wikipedia.org/wiki/Spectral_leakage
NFFT = 4*fs
psd_window = np.blackman(NFFT)
# and a 50% overlap:
NOVL = int(NFFT/2)
# define the complex template, common to both detectors:
template = (template_p + template_c*1.j)
# We will record the time where the data match the END of the template.
etime = time+template_offset
# the length and sampling rate of the template MUST match that of the data.
datafreq = np.fft.fftfreq(template.size)*fs
df = np.abs(datafreq[1] - datafreq[0])
# to remove effects at the beginning and end of the data stretch, window the data
# https://en.wikipedia.org/wiki/Window_function#Tukey_window
try: dwindow = signal.tukey(template.size, alpha=1./8) # Tukey window preferred, but requires recent scipy version
except: dwindow = signal.blackman(template.size) # Blackman window OK if Tukey is not available
# prepare the template fft.
template_fft = np.fft.fft(template*dwindow) / fs
# loop over the detectors
dets = ['H1', 'L1']
for det in dets:
    if det == 'L1': data = strain_L1.copy()
else: data = strain_H1.copy()
# -- Calculate the PSD of the data. Also use an overlap, and window:
data_psd, freqs = mlab.psd(data, Fs = fs, NFFT = NFFT, window=psd_window, noverlap=NOVL)
# Take the Fourier Transform (FFT) of the data and the template (with dwindow)
data_fft = np.fft.fft(data*dwindow) / fs
# -- Interpolate to get the PSD values at the needed frequencies
power_vec = np.interp(np.abs(datafreq), freqs, data_psd)
# -- Calculate the matched filter output in the time domain:
# Multiply the Fourier Space template and data, and divide by the noise power in each frequency bin.
# Taking the Inverse Fourier Transform (IFFT) of the filter output puts it back in the time domain,
# so the result will be plotted as a function of time off-set between the template and the data:
optimal = data_fft * template_fft.conjugate() / power_vec
optimal_time = 2*np.fft.ifft(optimal)*fs
# -- Normalize the matched filter output:
# Normalize the matched filter output so that we expect a value of 1 at times of just noise.
# Then, the peak of the matched filter output will tell us the signal-to-noise ratio (SNR) of the signal.
sigmasq = 1*(template_fft * template_fft.conjugate() / power_vec).sum() * df
sigma = np.sqrt(np.abs(sigmasq))
SNR_complex = optimal_time/sigma
# shift the SNR vector by the template length so that the peak is at the END of the template
peaksample = int(data.size / 2) # location of peak in the template
SNR_complex = np.roll(SNR_complex,peaksample)
SNR = abs(SNR_complex)
# find the time and SNR value at maximum:
indmax = np.argmax(SNR)
timemax = time[indmax]
SNRmax = SNR[indmax]
# Calculate the "effective distance" (see FINDCHIRP paper for definition)
# d_eff = (8. / SNRmax)*D_thresh
d_eff = sigma / SNRmax
# -- Calculate optimal horizon distnace
horizon = sigma/8
# Extract time offset and phase at peak
phase = np.angle(SNR_complex[indmax])
offset = (indmax-peaksample)
# apply time offset, phase, and d_eff to template
template_phaseshifted = np.real(template*np.exp(1j*phase)) # phase shift the template
template_rolled = np.roll(template_phaseshifted,offset) / d_eff # Apply time offset and scale amplitude
# Whiten and band-pass the template for plotting
template_whitened = whiten(template_rolled,interp1d(freqs, data_psd),dt) # whiten the template
template_match = filtfilt(bb, ab, template_whitened) / normalization # Band-pass the template
print('For detector {0}, maximum at {1:.4f} with SNR = {2:.1f}, D_eff = {3:.2f}, horizon = {4:0.1f} Mpc'
.format(det,timemax,SNRmax,d_eff,horizon))
if make_plots:
# plotting changes for the detectors:
        if det == 'L1':
pcolor='g'
strain_whitenbp = strain_L1_whitenbp
template_L1 = template_match.copy()
else:
pcolor='r'
strain_whitenbp = strain_H1_whitenbp
template_H1 = template_match.copy()
# -- Plot the result
plt.figure(figsize=(10,8))
plt.subplot(2,1,1)
plt.plot(time-timemax, SNR, pcolor,label=det+' SNR(t)')
#plt.ylim([0,25.])
plt.grid('on')
plt.ylabel('SNR')
plt.xlabel('Time since {0:.4f}'.format(timemax))
plt.legend(loc='upper left')
plt.title(det+' matched filter SNR around event')
# zoom in
plt.subplot(2,1,2)
plt.plot(time-timemax, SNR, pcolor,label=det+' SNR(t)')
plt.grid('on')
plt.ylabel('SNR')
plt.xlim([-0.15,0.05])
#plt.xlim([-0.3,+0.3])
plt.grid('on')
plt.xlabel('Time since {0:.4f}'.format(timemax))
plt.legend(loc='upper left')
plt.figure(figsize=(10,8))
plt.subplot(2,1,1)
plt.plot(time-tevent,strain_whitenbp,pcolor,label=det+' whitened h(t)')
plt.plot(time-tevent,template_match,'k',label='Template(t)')
plt.ylim([-10,10])
plt.xlim([-0.15,0.05])
plt.grid('on')
plt.xlabel('Time since {0:.4f}'.format(timemax))
plt.ylabel('whitened strain (units of noise stdev)')
plt.legend(loc='upper left')
plt.title(det+' whitened data around event')
plt.subplot(2,1,2)
plt.plot(time-tevent,strain_whitenbp-template_match,pcolor,label=det+' resid')
plt.ylim([-10,10])
plt.xlim([-0.15,0.05])
plt.grid('on')
plt.xlabel('Time since {0:.4f}'.format(timemax))
plt.ylabel('whitened strain (units of noise stdev)')
plt.legend(loc='upper left')
plt.title(det+' Residual whitened data after subtracting template around event')
# -- Display PSD and template
# must multiply by sqrt(f) to plot template fft on top of ASD:
plt.figure(figsize=(10,6))
template_f = np.absolute(template_fft)*np.sqrt(np.abs(datafreq)) / d_eff
plt.loglog(datafreq, template_f, 'k', label='template(f)*sqrt(f)')
plt.loglog(freqs, np.sqrt(data_psd),pcolor, label=det+' ASD')
plt.xlim(20, fs/2)
plt.ylim(1e-24, 1e-20)
plt.grid()
plt.xlabel('frequency (Hz)')
plt.ylabel('strain noise ASD (strain/rtHz), template h(f)*rt(f)')
plt.legend(loc='upper left')
plt.title(det+' ASD and template around event')
# make wav (sound) files from the whitened data, +-2s around the event.
from scipy.io import wavfile
# function to keep the data within integer limits, and write to wavfile:
def write_wavfile(filename,fs,data):
d = np.int16(data/np.max(np.abs(data)) * 32767 * 0.9)
wavfile.write(filename,int(fs), d)
deltat_sound = 2. # seconds around the event
# index into the strain time series for this time interval:
indxd = np.where((time >= tevent-deltat_sound) & (time < tevent+deltat_sound))
# write the files:
write_wavfile("../"+eventname+"_H1_whitenbp.wav",int(fs), strain_H1_whitenbp[indxd])
write_wavfile("../"+eventname+"_L1_whitenbp.wav",int(fs), strain_L1_whitenbp[indxd])
# re-whiten the template using the smoothed PSD; it sounds better!
template_p_smooth = whiten(template_p,psd_smooth,dt)
# and the template, zooming in on +-2 seconds around the merger:
indxt = np.where((time >= (time[0]+template_offset-deltat_sound)) & (time < (time[0]+template_offset+deltat_sound)))
write_wavfile("../"+eventname+"_template_whiten.wav",int(fs), template_p_smooth[indxt])
from IPython.display import Audio
fna = "../"+eventname+"_template_whiten.wav"
print(fna)
Audio(fna)
fna = "../"+eventname+"_H1_whitenbp.wav"
print(fna)
Audio(fna)
# function that shifts frequency of a band-passed signal
def reqshift(data,fshift=100,sample_rate=4096):
    """Frequency shift the signal by a constant."""
x = np.fft.rfft(data)
T = len(data)/float(sample_rate)
df = 1.0/T
nbins = int(fshift/df)
# print T,df,nbins,x.real.shape
y = np.roll(x.real,nbins) + 1j*np.roll(x.imag,nbins)
y[0:nbins]=0.
z = np.fft.irfft(y)
return z
# parameters for frequency shift
fs = 4096
fshift = 400.
speedup = 1.
fss = int(float(fs)*float(speedup))
# shift frequency of the data
strain_H1_shifted = reqshift(strain_H1_whitenbp,fshift=fshift,sample_rate=fs)
strain_L1_shifted = reqshift(strain_L1_whitenbp,fshift=fshift,sample_rate=fs)
# write the files:
write_wavfile("../"+eventname+"_H1_shifted.wav",int(fs), strain_H1_shifted[indxd])
write_wavfile("../"+eventname+"_L1_shifted.wav",int(fs), strain_L1_shifted[indxd])
# and the template:
template_p_shifted = reqshift(template_p_smooth,fshift=fshift,sample_rate=fs)
write_wavfile("../"+eventname+"_template_shifted.wav",int(fs), template_p_shifted[indxt])
fna = "../"+eventname+"_template_shifted.wav"
print(fna)
Audio(fna)
fna = "../"+eventname+"_H1_shifted.wav"
print(fna)
Audio(fna)
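A quick check of the band-shifting idea (my own sketch, not from the tutorial): rolling the rFFT bins up by fshift, as reqshift does above, should move a pure tone's spectral peak by exactly that many Hz.

```python
import numpy as np

def freq_shift(data, fshift=100., sample_rate=4096):
    # same idea as reqshift above: roll the rFFT bins up by fshift
    x = np.fft.rfft(data)
    nbins = int(fshift / (sample_rate / float(len(data))))
    y = np.roll(x.real, nbins) + 1j * np.roll(x.imag, nbins)
    y[0:nbins] = 0.
    return np.fft.irfft(y)

fs = 4096
t = np.arange(fs) / float(fs)          # 1 s of data -> 1 Hz bin spacing
tone = np.sin(2 * np.pi * 100 * t)     # 100 Hz tone
shifted = freq_shift(tone, fshift=50., sample_rate=fs)
peak_hz = int(np.argmax(np.abs(np.fft.rfft(shifted))))
print(peak_hz)  # 150
```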
data_segments = 1
if data_segments:
# read in the data at 4096 Hz:
# fn = 'L-L1_LOSC_4_V1-1126259446-32.hdf5'
    strain, time, chan_dict = rl.loaddata(fn_L1, 'L1')
print("Contents of all the key, value pairs in chan_dict")
for keys,values in chan_dict.items():
print(keys)
print(values)
print('Total number of non-NaNs in these data = ',np.sum(~np.isnan(strain)))
print('GPS start, GPS stop and length of all data in this file = ',time[0], time[-1],len(strain))
# select the level of data quality; default is "DATA" but "CBC_CAT3" is a conservative choice:
DQflag = 'CBC_CAT3'
# readligo.py method for computing segments (start and stop times with continuous valid data):
segment_list = rl.dq_channel_to_seglist(chan_dict[DQflag])
print('Number of segments with DQflag',DQflag,' = ',len(segment_list))
# loop over seconds and print out start, stop and length:
iseg = 0
for segment in segment_list:
time_seg = time[segment]
seg_strain = strain[segment]
print('GPS start, GPS stop and length of segment',iseg, \
'in this file = ',time_seg[0], time_seg[-1], len(seg_strain))
iseg = iseg+1
# here is where you would insert code to analyze the data in this segment.
# now look at segments with no CBC hardware injections:
DQflag = 'NO_CBC_HW_INJ'
    segment_list = rl.dq_channel_to_seglist(chan_dict[DQflag])
print('Number of segments with DQflag',DQflag,' = ',len(segment_list))
iseg = 0
for segment in segment_list:
time_seg = time[segment]
seg_strain = strain[segment]
print('GPS start, GPS stop and length of segment',iseg, \
'in this file = ',time_seg[0], time_seg[-1], len(seg_strain))
iseg = iseg+1
# time vector around event
times = time-tevent
# zoom in on [-0.2,0.05] seconds around event
irange = np.nonzero((times >= -0.2) & (times < 0.05))
# construct a data structure for a csv file:
dat = [times[irange], strain_H1_whitenbp[irange],strain_L1_whitenbp[irange],
template_H1[irange],template_L1[irange] ]
datcsv = np.array(dat).transpose()
# make a csv filename, header, and format
fncsv = "../"+eventname+'_data.csv'
headcsv = eventname+' time-'+str(tevent)+ \
' (s),H1_data_whitened,L1_data_whitened,H1_template_whitened,L1_template_whitened'
fmtcsv = ",".join(["%10.6f"] * 5)
np.savetxt(fncsv, datcsv, fmt=fmtcsv, header=headcsv)
print("Wrote whitened data to file {0}".format(fncsv))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the event properties from a local json file (download in advance)
Step2: Read in the data
Step3: Data Gaps
Step4: The data are dominated by low frequency noise; there is no way to see a signal here without some signal processing.
Step5: NOTE that we only plot the data between f_min = 20 Hz and f_max = 2000 Hz.
Step6: BBH range is >> BNS range!
Step7: Spectrograms
Step8: In the above spectrograms, you may see lots of excess power below ~20 Hz, as well as strong spectral lines at 500, 1000, 1500 Hz (also evident in the ASDs above). The lines at multiples of 500 Hz are the harmonics of the "violin modes" of the fibers holding up the mirrors of the Advanced LIGO interferometers.
Step9: Loud (high SNR) signals may be visible in these spectrograms. Compact object mergers show a characteristic "chirp" as the signal rises in frequency. If you can't see anything, try
Step10: Matched filtering to find the signal
Step11: Notes on these results
Step12: Listen to the whitened template and data
Step14: Frequency shift the audio files
Step15: Listen to the frequency-shifted template and data
Step16: Data segments
Step17: Comments on sampling rate
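The matched-filtering step above (Step10) is, at heart, a frequency-domain cross-correlation of data with template, weighted by the noise PSD. A toy, fully synthetic sketch of the idea (white noise, so the PSD weight is flat; all signals and the 300-sample offset here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 4096
t = np.arange(0, 1.0, 1.0 / fs)

# chirp-like template, and data holding a scaled copy shifted by 300 samples
template = np.sin(2 * np.pi * (30 * t + 120 * t**2)) * np.exp(-(t - 0.5)**2 / 0.02)
data = 0.5 * np.roll(template, 300) + 0.1 * rng.normal(size=t.size)

# frequency-domain matched filter with a flat (white-noise) PSD weight
corr = np.fft.ifft(np.fft.fft(data) * np.fft.fft(template).conjugate())
offset = int(np.argmax(np.abs(corr)))
print(offset)  # at/near the injected 300-sample shift
```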
|
12,087
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt

map_coords = (45.1973288, 5.7103223) #( 45.166672, 5.71667 )
import pysolar.solar as solar
import datetime as dt
d = dt.datetime.now()
#d = dt.datetime(2017, 6, 20, 13, 30, 0, 130320)
solar.get_altitude( *map_coords, d)
solar.get_azimuth(*map_coords, d)
Alt = [ solar.get_altitude(*map_coords, dt.datetime(2017, 12, 21, h, 0, 0, 0)) for h in range(0, 24) ]
Az = [ solar.get_azimuth(*map_coords, dt.datetime(2017, 12, 21, h, 0, 0, 0)) for h in range(0, 24) ]
Az = np.array( Az )
Az[ Az < -180 ] = Az[ Az < -180 ]+360
plt.plot( Alt )
plt.plot( Az )
plt.plot([0, 24], [0, 0], ':'); plt.ylim([-120, 120]); plt.xlabel('hour of the day');
import pysolar.radiation as radiation
radiation.get_radiation_direct( d, 15 ) # W/m2
from numpy import genfromtxt
horizon_data = genfromtxt('horizon.csv', delimiter=',').T
horizon_data
def isUpperHorizon( azimuth, altitude_deg ):
h = np.interp(-azimuth, horizon_data[0, :], horizon_data[1, :])
if h > altitude_deg:
return 0
else:
return 1
isUpperHorizon( 20, 2 )
horizon_data[1, :].max()
import math
import pysolar.radiation as radiation
import pysolar.solar as solar
import datetime as dt
def get_radiation_direct(d, alt):
if alt>0:
return radiation.get_radiation_direct( d, alt ) # W/m2
else:
return 0
def get_flux_surface( coords, date, sigma, phi_C ):
# Surface orientation :
# sigma : deg, vertical angle of the surface, ref. to the horizontal
# phi_C : deg, azimuth, relative to south, with positive values in the southeast direction and negative values in
# the southwest
# Sun position
phi_S_deg = solar.get_azimuth( *coords, date ) # deg, azimuth of the sun,relative to south
beta_deg = solar.get_altitude( *coords, date ) # deg, altitude angle of the sun
    I0 = get_radiation_direct( date, beta_deg ) # W/m2
I0 = I0* isUpperHorizon( phi_S_deg, beta_deg )
beta = beta_deg*math.pi/180 # rad
phi_S = phi_S_deg*math.pi/180 #rad
sigma = sigma*math.pi/180
phi_C = phi_C*math.pi/180
cosTheta = math.cos(beta)*math.cos( phi_S - phi_C )*math.sin( sigma ) + math.cos( sigma )*math.sin( beta )
if cosTheta >0 :
        Isurf = I0*cosTheta # projected flux, W/m2
else:
        Isurf = 0 # but diffuse radiation remains...
return Isurf
def get_flux_total( coords, date ):
# Sun position
beta_deg = solar.get_altitude( *coords, date ) # deg, altitude angle of the sun
    I0 = get_radiation_direct( date, beta_deg ) # W/m2
return I0
get_radiation_direct( d, -4 )
d = dt.datetime(2017, 6, 22, 11, 0, 0, 0)
sigma = 37
phi_C = 50
F = get_flux_surface( map_coords, d, sigma, phi_C )
print( F )
d
import pandas as pd
start = dt.datetime(2017, 6, 22, 0, 0, 0, 0)
end = dt.datetime(2017, 6, 22, 23, 59, 0, 0)
d_range = pd.date_range( start=start, end=end, freq='5min' )
F_tot = [ get_flux_total(map_coords, d ) for d in d_range ]
F_est = [ get_flux_surface(map_coords, d, sigma, phi_C ) for d in d_range ]
F_ouest = [ get_flux_surface(map_coords, d, sigma, phi_C+180 ) for d in d_range ]
F_sud = [ get_flux_surface(map_coords, d, 90, phi_C-90 ) for d in d_range ]
x = d_range.hour + d_range.minute/60
plt.figure(figsize=(12, 5))
plt.plot( x, F_est )
plt.plot( x, F_ouest )
plt.plot( x, F_sud )
plt.plot( x, F_tot, 'k:' )
plt.xlabel('hour of the day');
plt.ylabel('projected solar flux');
d_range.hour + d_range.minute/60
# Sun position
phi_S = solar.get_azimuth( *map_coords, d ) # deg, azimuth of the sun,relative to south
beta = solar.get_altitude( *map_coords, d ) # deg, altitude angle of the sun
I0 = radiation.get_radiation_direct( d, 65 ) # W/m2
cosTheta = (math.cos(math.radians(beta)) * math.cos(math.radians(phi_S - phi_C)) * math.sin(math.radians(sigma))
            + math.cos(math.radians(sigma)) * math.sin(math.radians(beta)))
Isurf = I0*cosTheta # projected flux, W/m2
cosTheta
Azi = np.array( [ solar.get_azimuth( *map_coords, d ) for d in d_range ] )
Azi[ Azi < -180 ] = Azi[ Azi < -180 ]+360
Alt = [ solar.get_altitude( *map_coords, d ) for d in d_range ]
Hor = [ np.interp(-a, horizon_data[0, :], horizon_data[1, :]) for a in Azi ]
plt.plot( Azi, Hor )
plt.plot( Azi, Alt )
plt.ylim([0, 80]);
Azi
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Remark
Step2: Projection onto an inclined surface
Step3: Verification
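The projection formula used in get_flux_surface — cosθ = cosβ·cos(φ_S − φ_C)·sinσ + cosσ·sinβ — can be sanity-checked with a small standalone helper (function name and degree-based interface are mine; diffuse radiation is ignored):

```python
import math

def projected_flux(I0, beta_deg, phi_S_deg, sigma_deg, phi_C_deg):
    # I0: direct-beam flux (W/m2); beta: sun altitude; phi_S: sun azimuth
    # (from south); sigma: surface tilt from horizontal; phi_C: surface azimuth
    beta, phi_s, sigma, phi_c = (math.radians(a) for a in
                                 (beta_deg, phi_S_deg, sigma_deg, phi_C_deg))
    cos_theta = (math.cos(beta) * math.cos(phi_s - phi_c) * math.sin(sigma)
                 + math.cos(sigma) * math.sin(beta))
    return I0 * max(cos_theta, 0.0)   # clamp: sun behind the surface gives 0

print(projected_flux(1000, 30, 0, 0, 0))    # horizontal surface: ~1000*sin(30) = 500
print(projected_flux(1000, 30, 0, 60, 0))   # surface normal aimed at the sun: ~1000
print(projected_flux(1000, 10, 180, 90, 0)) # sun behind a south-facing wall: 0
```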
|
12,088
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.show_versions()
fname = '../data/MenyanthesTest.men'
meny = ps.read.MenyData(fname)
# Get distances from metadata
xo = meny.H["Obsevation well"]['xcoord']
yo = meny.H["Obsevation well"]['ycoord']
distances = []
extraction_names = ['Extraction 2', 'Extraction 3']
for extr in extraction_names:
xw = meny.IN[extr]["xcoord"]
yw = meny.IN[extr]["ycoord"]
distances.append(np.sqrt((xo-xw)**2 + (yo-yw)**2))
extraction_names = [name.replace(" ", "_") for name in extraction_names] # replace spaces in names for Pastas
df = pd.DataFrame(distances, index=extraction_names, columns=['Distance to observation well'])
df
# plot timeseries
f1, axarr = plt.subplots(len(meny.IN)+1, sharex=True, figsize=(10,8))
oseries = meny.H['Obsevation well']["values"]
oseries.plot(ax=axarr[0], color='k')
axarr[0].set_title(meny.H['Obsevation well']["Name"])
for i, (name, data) in enumerate(meny.IN.items(), start=1):
data["values"].plot(ax=axarr[i])
axarr[i].set_title(name)
plt.tight_layout(pad=0)
oseries = ps.TimeSeries(meny.H['Obsevation well']['values'].dropna(), name="heads", settings="oseries")
# create model
ml = ps.Model(oseries)
prec = meny.IN['Precipitation']['values']
prec.index = prec.index.round("D")
prec.name = "prec"
evap = meny.IN['Evaporation']['values']
evap.index = evap.index.round("D")
evap.name = "evap"
rm = ps.RechargeModel(prec, evap, ps.Exponential, 'Recharge')
ml.add_stressmodel(rm)
stresses = []
for name in extraction_names:
# get extraction timeseries
s = meny.IN[name.replace("_", " ")]['values']
# convert index to end-of-month timeseries
s.index = s.index.to_period("M").to_timestamp("M")
# resample to daily values
s_daily = ps.utils.timestep_weighted_resample_fast(s, "D")
# create pastas.TimeSeries object
stress = ps.TimeSeries(s_daily.dropna(), name=name, settings="well")
# append to stresses list
stresses.append(stress)
for stress in stresses:
sm = ps.StressModel(stress, ps.Hantush, stress.name, up=False)
ml.add_stressmodel(sm)
ml.solve(solver=ps.LmfitSolve)
ml.plots.decomposition();
for i in range(len(extraction_names)):
name = extraction_names[i]
sm = ml.stressmodels[name]
p = ml.get_parameters(name)
gain = sm.rfunc.gain(p) * 1e6 / 365.25
print(f"{name}: gain = {gain:.3f} m / Mm^3/year")
df.at[name, 'gain StressModel'] = gain
ml_wm = ps.Model(oseries, oseries.name + "_wm")
rm = ps.RechargeModel(prec, evap, ps.Gamma, 'Recharge')
ml_wm.add_stressmodel(rm)
w = ps.WellModel(stresses, ps.HantushWellModel, "Wells", distances, settings="well")
ml_wm.add_stressmodel(w)
ml_wm.solve(solver=ps.LmfitSolve)
ml_wm.plots.decomposition();
ml_wm.plots.stacked_results(figsize=(10, 8));
wm = ml_wm.stressmodels["Wells"]
for i in range(len(extraction_names)):
# get parameters
p = wm.get_parameters(model=ml_wm, istress=i)
# calculate gain
gain = wm.rfunc.gain(p) * 1e6 / 365.25
name = wm.stress[i].name
print(f"{name}: gain = {gain:.3f} m / Mm^3/year")
df.at[name, 'gain WellModel'] = gain
df.style.format("{:.4f}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data from a Menyanthes file
Step2: Get the distances of the extractions to the observation well. Extraction 1 is about two times as far from the observation well as extraction 2 and 3. We will use this information later in our WellModel.
Step3: Then plot the observations, together with the different stresses in the Menyanthes file.
Step4: Create a model with a separate StressModel for each extraction
Step5: Get the precipitation and evaporation timeseries and round the index to remove the hours from the timestamps.
Step6: Create a recharge stressmodel and add to the model.
Step7: Get the extraction timeseries.
Step8: Add each of the extractions as a separate StressModel.
Step9: Solve the model.
Step10: Visualize the results
Step11: We can calculate the gain of each extraction (quantified as the effect on the groundwater level of a continuous extraction of ~1 Mm$^3$/yr).
Step12: Create a model with a WellModel
Step13: We have all the information we need to create a WellModel
Step14: Solve the model.
Step15: Visualize the results
Step16: Plot the stacked influence of each of the individual extraction wells in the results plot
Step17: Get parameters for each well (including the distance) and calculate the gain. The WellModel reorders the stresses from closest to the observation well to furthest away, so we have to take this into account during the post-processing.
Step18: Compare individual StressModels and WellModel
|
12,089
|
<ASSISTANT_TASK:>
Python Code:
my_data = cellreader.CellpyData()
# only for my MacBook
filename = "/Users/jepe/scripting/cellpy/dev_data/out/20190204_FC_snx012_01_cc_01.h5"
assert os.path.isfile(filename)
my_data.load(filename)
%%timeit
my_data.make_summary()
%%timeit
my_data.make_step_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Timing
|
12,090
|
<ASSISTANT_TASK:>
Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""Returns relative error."""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
from itertools import product
import pickle
# Create some toy data to check your implementations
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
model = {}
model['W1'] = np.linspace(-0.2, 0.6, num=input_size*hidden_size).reshape(input_size, hidden_size)
model['b1'] = np.linspace(-0.3, 0.7, num=hidden_size)
model['W2'] = np.linspace(-0.4, 0.1, num=hidden_size*num_classes).reshape(hidden_size, num_classes)
model['b2'] = np.linspace(-0.5, 0.9, num=num_classes)
return model
def init_toy_data():
X = np.linspace(-0.2, 0.5, num=num_inputs*input_size).reshape(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
model = init_toy_model()
X, y = init_toy_data()
def two_layer_net(X, model, y=None, reg=0.0):
"""Compute the loss and gradients for a two layer fully connected NN.
The net has an input dimension of D, a hidden layer dimension of H,
and performs classification over C classes. We use a softmax loss function
and L2 regularization on the weight matrices. The two layer net should
use a ReLU nonlinearity after the first affine layer.
The two layer net has the following architecture:
input - fully connected layer - ReLU - fully connected layer - softmax
The outputs of the second fully-connected layer are the scores for each
class.
Inputs:
- X: Input data of shape (N, D). Each X[i] is a training sample.
- model: Dictionary mapping parameter names to arrays of parameter values.
It should contain the following:
- W1: First layer weights; has shape (D, H)
- b1: First layer biases; has shape (H,)
- W2: Second layer weights; has shape (H, C)
- b2: Second layer biases; has shape (C,)
- y: Vector of training labels. y[i] is the label for X[i], and each y[i]
is an integer in the range 0 <= y[i] < C. This parameter is optional;
if it is not passed then we only return scores, and if it is passed then
we instead return the loss and gradients.
- reg: Regularization strength.
Returns:
If y is not passed, return a matrix scores of shape (N, C) where
scores[i, c] is the score for class c on input X[i].
If y is passed, instead return a tuple of:
- loss: Loss (data loss and regularization loss) for this batch of training
samples.
- grads: Dictionary mapping parameter names to gradients of those
parameters with respect to the loss function. This should have the same
keys as model.
"""
# unpack variables from the model dictionary
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
N, D = X.shape
# compute the forward pass
scores = None # shape (N, C)
# Layer 1
# ReLU forward implementation
# Ref: http://cs231n.github.io/neural-networks-1/
s1 = X.dot(W1) + b1 # shape (N, H)
resp1 = np.where(s1 > 0, s1, 0) # shape (N, H)
# Layer 2
s2 = resp1.dot(W2) + b2 # shape (N, C)
scores = s2
# If the targets are not given then jump out, we're done
if y is None:
return scores
# compute the loss
loss = None
f = scores.T - np.max(scores, axis=1) # shape (C, N)
f = np.exp(f)
p = f / np.sum(f, axis=0) # shape (C, N)
# loss function
_sample_ix = np.arange(N)
loss = np.mean(-np.log(p[y, _sample_ix]))
loss += (0.5 * reg) * np.sum(W1 * W1)
loss += (0.5 * reg) * np.sum(W2 * W2)
# compute the gradients
grads = {}
df = p # (C, N)
df[y, _sample_ix] -= 1
# (H, C) = ((C, N) x (N, H)).T
dW2 = df.dot(resp1).T / N # (H, C)
dW2 += reg * W2
grads['W2'] = dW2
# C = (C, N)
db2 = np.mean(df, axis=1) # C
grads['b2'] = db2
# (N, H) = (H, C)
dresp1 = W2.dot(df).T / N
ds1 = np.where(s1 > 0, dresp1, 0) # (N, H)
dW1 = X.T.dot(ds1) # (D, H)
dW1 += reg * W1
grads['W1'] = dW1
db1 = np.sum(ds1, axis=0) # H
grads['b1'] = db1
return loss, grads
scores = two_layer_net(X, model)
print(scores)
correct_scores = [[-0.5328368, 0.20031504, 0.93346689],
[-0.59412164, 0.15498488, 0.9040914 ],
[-0.67658362, 0.08978957, 0.85616275],
[-0.77092643, 0.01339997, 0.79772637],
[-0.89110401, -0.08754544, 0.71601312]]
# the difference should be very small. We get 3e-8
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
reg = 0.1
loss, _ = two_layer_net(X, model, y, reg)
print(loss)
correct_loss = 1.38191946092
# should be very small, we get 5e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = two_layer_net(X, model, y, reg)
# these should all be less than 1e-8 or so
for param_name in grads:
param_grad_num = eval_numerical_gradient(
lambda W: two_layer_net(X, model, y, reg)[0],
model[param_name],
verbose=False
)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
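`eval_numerical_gradient` is a course helper whose source is not shown here; the underlying idea is plain central differences. A self-contained sketch (function name and structure are illustrative, not the course's actual implementation):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    # Central differences: perturb each entry of x by +/- h and take
    # the symmetric slope of the scalar function f.
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fxph = f(x)          # f(x + h) at this coordinate
        x[ix] = old - h
        fxmh = f(x)          # f(x - h) at this coordinate
        x[ix] = old          # restore
        grad[ix] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad

# Sanity check on f(x) = sum(x**2), whose analytic gradient is 2*x.
x = np.array([[1.0, -2.0], [3.0, 0.5]])
g = numerical_gradient(lambda z: np.sum(z ** 2), x)
print(np.allclose(g, 2 * x, atol=1e-4))  # True
```

For a quadratic function the central difference is exact up to rounding, which is why the check is so tight.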
from cs231n.classifier_trainer import ClassifierTrainer
model = init_toy_model()
trainer = ClassifierTrainer()
# call the trainer to optimize the loss
# Notice that we're using sample_batches=False, so we're performing Gradient Descent (no sampled batches of data)
best_model, loss_history, _, _ = trainer.train(X, y, X, y,
model, two_layer_net,
reg=0.001,
learning_rate=1e-1, momentum=0.0, learning_rate_decay=1,
update='sgd', sample_batches=False,
num_epochs=100,
verbose=False)
print('Final loss with vanilla SGD: %f' % (loss_history[-1], ))
model = init_toy_model()
trainer = ClassifierTrainer()
# call the trainer to optimize the loss
# Notice that we're using sample_batches=False, so we're performing Gradient Descent (no sampled batches of data)
best_model, loss_history, _, _ = trainer.train(X, y, X, y,
model, two_layer_net,
reg=0.001,
learning_rate=1e-1, momentum=0.9, learning_rate_decay=1,
update='momentum', sample_batches=False,
num_epochs=100,
verbose=False)
correct_loss = 0.494394
print('Final loss with momentum SGD: %f. We get: %f' % (loss_history[-1], correct_loss))
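The momentum rule itself lives inside `ClassifierTrainer.train`, which is not shown in this notebook. As a self-contained sketch (parameter names are illustrative, not the trainer's actual API), the classic update is:

```python
def momentum_update(w, dw, v, learning_rate=0.1, momentum=0.9):
    # Classic SGD with momentum: accumulate a decaying velocity from the
    # gradients, then step the weights along it. momentum=0 recovers
    # vanilla SGD.
    v = momentum * v - learning_rate * dw
    return w + v, v

# Toy usage: minimize f(w) = w**2, whose gradient is 2*w.
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_update(w, 2 * w, v)
print(round(w, 6))  # has converged close to the minimum at 0
```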
model = init_toy_model()
trainer = ClassifierTrainer()
# call the trainer to optimize the loss
# Notice that we're using sample_batches=False, so we're performing Gradient Descent (no sampled batches of data)
best_model, loss_history, _, _ = trainer.train(X, y, X, y,
model, two_layer_net,
reg=0.001,
learning_rate=1e-1, momentum=0.9, learning_rate_decay=1,
update='rmsprop', sample_batches=False,
num_epochs=100,
verbose=False)
correct_loss = 0.439368
print('Final loss with RMSProp: %f. We get: %f' % (loss_history[-1], correct_loss))
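Likewise, the RMSProp rule implemented inside the trainer is not visible here. A minimal sketch of the update (again with illustrative names and hyperparameters):

```python
import numpy as np

def rmsprop_update(w, dw, cache, learning_rate=1e-2, decay_rate=0.99, eps=1e-8):
    # RMSProp: keep a decaying average of squared gradients and divide each
    # step by its square root, giving a per-parameter adaptive step size.
    cache = decay_rate * cache + (1 - decay_rate) * dw ** 2
    w = w - learning_rate * dw / (np.sqrt(cache) + eps)
    return w, cache

# Toy usage: minimize f(w) = w**2.
w, cache = 5.0, 0.0
for _ in range(2000):
    w, cache = rmsprop_update(w, 2 * w, cache)
print(round(w, 3))  # settled in a small band around the minimum
```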
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function."""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
from IPython.html import widgets
from IPython.display import display as ipydisplay
from cs231n.vis_utils import ParametersInspectorWindow
def init_two_layer_model(input_size, hidden_size, output_size):
"""Initialize the weights and biases for a two-layer fully connected
neural network.
The net has an input dimension of D, a hidden layer dimension of H, and
performs classification over C classes. Weights are initialized to small
random values and biases are initialized to zero.
Inputs:
- input_size: The dimension D of the input data
- hidden_size: The number of neurons H in the hidden layer
- output_size: The number of classes C
Returns:
A dictionary mapping parameter names to arrays of parameter values.
It has the following keys:
- W1: First layer weights; has shape (D, H)
- b1: First layer biases; has shape (H,)
- W2: Second layer weights; has shape (H, C)
- b2: Second layer biases; has shape (C,)
"""
# initialize a model
model = {}
model['W1'] = 0.00001 * np.random.randn(input_size, hidden_size)
model['b1'] = np.zeros(hidden_size)
model['W2'] = 0.00001 * np.random.randn(hidden_size, output_size)
model['b2'] = np.zeros(output_size)
return model
w = widgets.IntProgress()
ipydisplay(w)
model = init_two_layer_model(32*32*3, 50, 10) # input size, hidden size, number of classes
trainer = ClassifierTrainer()
best_model, loss_history, train_acc, val_acc = trainer.train(
X_train, y_train, X_val, y_val,
model, two_layer_net,
# parameters to be tuned
num_epochs=7, reg=1,
momentum=0.9, learning_rate_decay = 0.95,
learning_rate=2.5e-5,
# end of parameters
progress_bar=w, verbose=True
)
train_acc, val_acc
# Plot the loss function and train / validation accuracies
def vis_history(loss_history, train_acc, val_acc):
fig = plt.figure()
plt.subplot(2, 1, 1)
plt.plot(loss_history)
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(train_acc)
plt.plot(val_acc)
plt.legend(['Training accuracy', 'Validation accuracy'], loc='lower right')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
return fig
fig = vis_history(loss_history, train_acc, val_acc)
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(model):
plt.imshow(visualize_grid(model['W1'].T.reshape(-1, 32, 32, 3), padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(best_model)
scores_test = two_layer_net(X_test, best_model)
print('Test accuracy: ', np.mean(np.argmax(scores_test, axis=1) == y_test))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
Step2: The neural network parameters will be stored in a dictionary (model below), where the keys are the parameter names and the values are numpy arrays. Below, we initialize toy data and a toy model that we will use to verify your implementations.
Step4: Forward pass
Step5: Forward pass
Step6: Backward pass
Step7: Train the network
Step8: Now fill in the momentum update in the first missing code block inside the train function, and run the same optimization as above but with the momentum update. You should see a much better result in the final obtained loss
Step9: Now also implement the RMSProp update rule inside the train function and rerun the optimization
Step11: Load the data
Step13: Train a network
Step14: Debug the training
Step15: Run on the test set
|
12,091
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
# Please adjust the random seed for new results
np.random.seed(11)
# See Part 2 for code comments
def core_hot_spot(ambient_temp, overload_ratio, t0=35, tc=30, N=1,
N0=0.5, Nc=0.8, L=1):
T = t0*((N*overload_ratio**2+1)**0.8) + tc*L*overload_ratio**2 + ambient_temp
return T
def calculate_dp(core_hot_spot, time, dp_initial):
A = 3.65*10**7
k = A*np.exp(-(117000/(8.314*(core_hot_spot+273.15))))
DPf=1/((k*24*7*time) + (1/dp_initial))
return DPf
def oil_contamination(dp):
CO=(-0.0028*dp + 6.28) + 0.13*np.random.randn()
CO2=(-0.0026*dp + 8.08) + 0.66*np.random.randn()
return CO, CO2
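To make the degradation chain above concrete, here is the same arithmetic restated inline for one interval. The input values are arbitrary demo numbers, the constants are copied from the functions above, and the random gas noise is left out:

```python
import numpy as np

ambient, ratio = 25.0, 1.0  # demo values: 25 degC ambient, rated load
# Core hot-spot temperature (degC), as in core_hot_spot() with default args.
chs = 35 * (1 * ratio**2 + 1) ** 0.8 + 30 * 1 * ratio**2 + ambient
# Arrhenius degradation of the paper's degree of polymerization over one
# time unit, as in calculate_dp().
k = 3.65e7 * np.exp(-117000 / (8.314 * (chs + 273.15)))
dp = 1 / (k * 24 * 7 * 1 + 1 / 1000)
print(round(chs, 1), round(dp, 1))  # hot spot ~116 degC; DP barely below 1000
```

At rated load the DP only creeps downward; raising `overload_ratio` pushes the hot-spot temperature up and the Arrhenius rate grows exponentially with it.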
def tinsul_sim(temps, start_month, num_transformers, overload_ratio = 1):
trans_co = []
trans_co2 = []
for ii in range(num_transformers):
month_current = start_month
failed = False
dp_current = 1000
co_accumulation = []
co2_accumulation = []
dp_failed = np.random.logistic(loc=200.0, scale=40.0)
while not failed:
ambient_low = [temps[int(month_current)-1][0], 6]
ambient_avg = [temps[int(month_current)-1][1], 12]
ambient_high = [temps[int(month_current)-1][2], 6]
for ambient, time in [ambient_low, ambient_avg, ambient_high]:
chs = core_hot_spot(ambient, overload_ratio)
dp_current = calculate_dp(chs, time, dp_current)
co, co2 = oil_contamination(dp_current)
co_accumulation.append(co)
co2_accumulation.append(co2)
if (dp_current<=dp_failed) | (dp_current<=150):
failed = True
month_current += 0.230769
if month_current >= 13.0:
month_current = 1
trans_co.append(co_accumulation)
trans_co2.append(co2_accumulation)
return trans_co, trans_co2
# Washington DC
dc_temps = [[-2, 1, 6], [-1, 3, 8], [3, 7, 13], [8, 13, 19], [14, 18, 24], [19, 24, 29], [22, 28, 31], [21, 27, 30], [17, 22, 26], [10, 15, 20], [5, 10, 14], [0, 4, 8]]
# Phoenix AZ
ph_temps = [[8, 14, 20], [9, 16, 22], [12, 18, 25], [16, 23, 30], [21, 27, 35], [25, 32, 40], [29, 34, 42], [28, 32, 40], [25, 31, 38], [18, 24, 32], [12, 18, 24], [7, 12, 18]]
# Denver CO
de_temps = [[-8, -1, 7], [-7, 0 ,7], [-3, 3, 12], [1, 7, 16], [7, 14, 21], [12, 20, 27], [15, 22, 31], [14, 20, 30], [9, 15, 25], [2, 10, 18], [-4, 4, 12], [-9, -1, 6]]
# Training data
dc_train = tinsul_sim(dc_temps, 1, 200)
ph_train = tinsul_sim(ph_temps, 1, 200)
de_train = tinsul_sim(de_temps, 1, 200)
# Testing data
dc_test = tinsul_sim(dc_temps, 1, 50)
ph_test = tinsul_sim(ph_temps, 1, 50)
de_test = tinsul_sim(de_temps, 1, 50)
# Failure times of the training data, for model construction
fails_dc_train = [len(dg) for dg in dc_train[0]]
fails_ph_train = [len(dg) for dg in ph_train[0]]
fails_de_train = [len(dg) for dg in de_train[0]]
# Failure times of the testing data, for model evaluation
fails_dc_test = [len(dg) for dg in dc_test[0]]
fails_ph_test = [len(dg) for dg in ph_test[0]]
fails_de_test = [len(dg) for dg in de_test[0]]
# For the Type 1 model, we don't consider each of the 3 areas separately,
# so we will lump all the data together in the same array
type1_fails_train = [*fails_dc_train, *fails_ph_train, *fails_de_train]
# Calculate metrics
fd = np.array(type1_fails_train)
mttf = fd.mean()
res = stats.weibull_min.fit(fd, floc=0)
# Plot the results from above
sns.distplot(fd, kde=False)
plt.title('Type 1 Failure Distribution, \nMTTF={}, k={}, l={}'.format(str(round(mttf,2)), str(round(res[0],2)), str(round(res[2],2))))
plt.xlabel('Failure Time (weeks)')
plt.ylabel('Occurrences');
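A quick check of what `stats.weibull_min.fit(..., floc=0)` returns — the shape parameter k comes first, then the (fixed) location, then the scale λ — using synthetic data with known parameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Draw a large sample from a Weibull with known shape k=1.5 and scale 300.
sample = stats.weibull_min.rvs(c=1.5, scale=300, size=5000, random_state=rng)
# fit() returns (shape, loc, scale); floc=0 pins the location at zero.
k_hat, loc, lam_hat = stats.weibull_min.fit(sample, floc=0)
print(round(k_hat, 2), loc, round(lam_hat, 1))  # k near 1.5, scale near 300
```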
# Remember that we divided the data into training and testing sets
# This section (model evaluation) needs to use the testing set
type1_fails_test = [*fails_dc_test, *fails_ph_test, *fails_de_test]
ae_arr = [np.abs(ii - mttf) for ii in type1_fails_test]
type1_mae = np.mean(ae_arr)
print('Type 1 MAE is {} weeks'.format(str(round(type1_mae,2))))
# Find the MTTF for each distribution
mttf_dc = np.array(fails_dc_train).mean()
mttf_ph = np.array(fails_ph_train).mean()
mttf_de = np.array(fails_de_train).mean()
# Plot the failure distribution of all three areas of the country
# on top of each other
sns.distplot(fails_dc_train, kde=False)
sns.distplot(fails_ph_train, kde=False)
sns.distplot(fails_de_train, kde=False)
plt.title('Type 2 Failure Distributions')
plt.xlabel('Failure Time (weeks)')
plt.legend(['DC: MTTF={}'.format(str(round(mttf_dc, 2))),
'AZ: MTTF={}'.format(str(round(mttf_ph, 2))),
'CO: MTTF={}'.format(str(round(mttf_de, 2)))])
plt.ylabel('Occurrences');
# Remember that we divided the data into training and testing sets
# This section (model evaluation) needs to use the testing set
ae_arr2 = []
for env_dist, env_mttf in zip([fails_dc_test, fails_ph_test, fails_de_test],
[mttf_dc, mttf_ph, mttf_de]):
ae = [np.abs(ii - env_mttf) for ii in env_dist]
ae_arr2.extend(ae)
type2_mae = np.mean(ae_arr2)
print('Type 2 MAE is {} weeks'.format(str(round(type2_mae,2))))
# Determine the CO + CO2 accumulation at failure by finding
# the last value in each array
co_co2_accum = []
for co, co2 in [dc_train, ph_train, de_train]:
for co_per_trans, co2_per_trans in zip(co, co2):
co_co2_accum.append(co_per_trans[-1] + co2_per_trans[-1])
type3_mean = np.array(co_co2_accum).mean()
# Plot the results
sns.distplot(co_co2_accum, kde=False)
plt.title('Type 3 CO+CO2 Distribution at Failure\nMean={} ppm'.format(
str(round(type3_mean,2))))
plt.xlabel('CO+CO2 (ppm)')
plt.ylabel('Occurrences');
# The param_1 and param_2 arrays will contain the
# functional fit for each individual transformer
param_1 = []
param_2 = []
for co, co2 in [dc_train, ph_train, de_train]:
for co_per_trans, co2_per_trans in zip(co, co2):
# t is in weeks; max t is the length of the array
t = np.linspace(0, 1, len(co_per_trans))
co_co2 = np.add(co_per_trans, co2_per_trans)
# fit using a log function
fit_t = np.polyfit(t, np.log(co_co2), 1)
# Save the two model parameters calculated above
# in the param_1 and param_2 arrays
param_1.append(fit_t[0])
param_2.append(fit_t[1])
# Find the "typical" log model parameters by taking the mean
# of each param array
type3_param1 = np.mean(param_1)
type3_param2 = np.mean(param_2)
print('Type 3 functional fit is [{}, {}]'.format(
str(round(type3_param1,2)), str(round(type3_param2,2))))
# ae_arr3 will contain the absolute error from each transformer
ae_arr3 = []
# Iterate over all transformers from the testing data set
for co, co2 in [dc_test, ph_test, de_test]:
for co_per_trans, co2_per_trans in zip(co, co2):
fail_time = len(co_per_trans)
# 1. Clip the array at a random time in the transformer lifetime
np.random.seed(11)
end_time = np.random.randint(int(fail_time/2), fail_time-1)
#end_time = np.random.randint(1, fail_time-1)
co_per_trans = co_per_trans[0:end_time]
co2_per_trans = co2_per_trans[0:end_time]
# 2. Find the functional fit of the testing data
# t is in weeks; max t is the length of the array
t = np.linspace(0, 1, end_time)
co_co2 = np.add(co_per_trans, co2_per_trans)
# fit using a log function
fit_t = np.polyfit(t, np.log(co_co2), 1)
# find the predicted co+co2 at the last data point
# why do this? to smooth out the noise in the "actual"
# last data point
cur_co_co2 = fit_t[0]*end_time + fit_t[1]
# 3. Find the predicted remaining useful life
# now, determine where the future co+co2 values will
# reach the failure threshold determined above
t = 0 # time until predicted failure
while cur_co_co2<= type3_mean:
t += 1
cur_co_co2 += type3_param1
# 4. Find the prediction error
actual_rul = fail_time - end_time
ae = np.abs(t-actual_rul)
ae_arr3.append(ae)
type3_mae = np.mean(ae_arr3)
print('Type 3 MAE is {} weeks'.format(str(round(type3_mae,2))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Type I Prognostics
Step2: The resulting failure distribution of all simulated transformers from the training set is shown below. The predicted mean-time-to-failure and Weibull parameters are given in the title of the figure.
Step3: Looking at the shape of the distribution, Karen wonders whether the accuracy of this Type 1 model is sufficient: the distribution is spread widely between 50 and 550. She calculates the mean absolute error of this approach as follows
Step4: Examining the results, the mean absolute error is too large to be useful. Karen crunches the numbers, and predicting a transformer failure with an accuracy of about two years is not good enough to reduce maintenance costs. On average, maintenance will be performed much earlier or later than required. She needs to find a more accurate model.
Step5: Notice, of course, that transformers in AZ will usually fail first (hot ambient temperature), while many transformers in CO (cold ambient temperature) may survive up to five times longer. Compare the MTTF of each of the three distributions (in the legend), which varies greatly.
Step6: Simply knowing which environment the transformer has been exposed to improves the model's accuracy significantly, by about half a year of mean absolute error.
Step7: The distribution of CO+CO2 accumulation at failure looks fairly Gaussian. Examine the transformer simulation from Part 2 of this notebook series to determine why this might be the case.
Step8: The first parameter above is the slope of the log model, and the second parameter is the intercept. Why did Karen find the average of the log model parameters from the training dataset instead of simply regressing the data from the testing transformers directly? There might be some unacceptable variation in the testing transformers, especially if very little data has been accumulated (such as when the transformer is in the early stages of its lifetime). For example, what if there is a negative slope during the initial data of a testing transformer? By projecting future DGA values using the "typical" fit established by the training data, the accuracy (and "safety") of the model is increased.
|
12,092
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
xv=[1,2,3,4]; yv=[5,1,4,0]
plt.plot(xv,yv);
plt.plot(xv,yv,'ro');
myplot=plt.plot(xv,yv,'k--');
plt.setp(myplot,linewidth=3.0,marker='+',markersize=30);
myplot=plt.plot(xv,yv,'k--');
plt.setp(myplot,'linewidth',3.0,'marker','+','markersize',30);
plt.setp(myplot)
plt.axis()
plt.axis([0.5,4.5,-0.5,5.5])
%matplotlib
plt.ioff() # deactivates 'interactive' mode
fig = plt.figure()
ti=plt.title('Very important data')
xl=plt.xlabel('time'); yl=plt.ylabel('value')
plt.setp(xl,fontweight='bold');
plt.plot(xv,yv,'k--')
plt.show()
fig.savefig('foo.ps', dpi=600, format='ps',orientation='landscape')
fig.savefig('foo.png', dpi=600, format='png',orientation='landscape')
fig.savefig('foo.pdf', dpi=600, format='pdf',orientation='landscape')
plt.close(fig)
plt.close('all')
fig = plt.figure()
plt.subplot(2,1,1)
plt.plot(xv,yv,'b-')
plt.subplot(2,1,2)
plt.plot(yv,xv,'ro')
plt.show()
plt.close(fig)
# switch back to inline mode
%matplotlib inline
fig2=plt.figure(figsize=(10,10))
plt.subplot(2,1,1)
plt.plot(xv,yv,'b-')
plt.subplot(2,1,2)
plt.plot(yv,xv,'ro');
xv=np.arange(-10,10.5,0.5); xv
plt.plot(xv,2*xv**3-5*xv**2+7*xv)
plt.plot(xv,2000*np.cos(xv),'r--')
plt.text(-10,-2800,'curve A')
plt.text(3,1500,'curve B');
xv_lin=np.arange(-3,3.01,0.02)
xv=10.**xv_lin
plt.semilogx(xv,np.exp(-xv/0.01)+0.5*np.exp(-xv/10)+0.2*np.exp(-xv/200));
plt.semilogx(xv,np.exp(-xv/0.01)+0.5*np.exp(-xv/10)+0.2*np.exp(-xv/200))
plt.grid(color='k')
plt.semilogy(xv,np.exp(-xv/0.01)+0.5*np.exp(-xv/10)+0.2*np.exp(-xv/200))
xv=[0.5,1.5,2.5,3.5]; yv=[2,5,1,6]
mybar=plt.bar(xv, yv, width=1, yerr=0.5, facecolor='b')
mybar=plt.bar(xv, yv, width=1, yerr=0.5);
plt.xticks(xv, ['A','B','C','D'])
plt.setp(mybar, facecolor='r', edgecolor='w');
plt.figure(figsize=(5,5))
handles=plt.pie([1,2,3,4], explode=[0.2,0,0,0], shadow=True, labels=['A','B','C','D']);
handles
plt.figure(figsize=(5,5))
handles=plt.pie([1,2,3,4], explode=[0.2,0,0,0], shadow=True, labels=['A','B','C','D'])
plt.setp(handles[0][0], color='y')
plt.setp(handles[1][0], text='Blubber');
n=8; X,Y=np.mgrid[0:n,0:n]
T=np.arctan2(Y-n/2.0,X-n/2.0)
R=10+np.sqrt((Y-n/2.0)**2+(X-n/2.0)**2)
U,V=R*np.cos(T),R*np.sin(T)
plt.axes([0.025,0.025,0.95,0.95])
plt.quiver(X,Y,U,V,R,alpha=.5)
plt.quiver(X,Y,U,V, edgecolor='k', facecolor= 'None', linewidth=.5);
ax=plt.axes([0.025,0.025,0.95,0.95],polar=True)
N=20; theta=np.arange(0.0,2*np.pi,2*np.pi/N)
radii=10*np.random.rand(N)
width=np.pi/4*np.random.rand(N)
bars=plt.bar(theta,radii,width=width,bottom=0.0)
for r,bar in zip(radii,bars):
bar.set_facecolor( plt.cm.jet(r/10.))
bar.set_alpha(0.5)
xv=np.linspace(-10,10,100); yv=xv
X,Y=np.meshgrid(xv,yv)
Z=np.exp(-((X-1)**2/2/0.5**2)-((Y+2)**2/2/3**2))
Z=Z+1.5*np.exp(-((X-5)**2/2/4**2)-((Y-6)**2/2/3**2))
plt.contourf(X,Y,Z,10,alpha=0.5,cmap=plt.cm.hot)
C=plt.contour(X,Y,Z,10,colors='black', linewidth=0.5)
plt.clabel(C,inline=1,fontsize=10);
plt.pcolormesh(X,Y,Z,alpha=0.5,cmap=plt.cm.hot)
plt.axis([-5,10,-8,10]);
from mpl_toolkits.mplot3d import Axes3D
%matplotlib
plt.ioff()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
plt.show()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
import random as rn
xv=[]; yv=[]; zv=[]
for c in range(100):
xv.append(rn.random()); yv.append(rn.random())
zv.append(rn.random())
ax.scatter(xv,yv,zv)
plt.show()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
xv=np.linspace(-10,10,100); yv=np.linspace(-10,10,100)
cx,cy=np.meshgrid(xv,yv)
cz=0.5*cx+np.exp(-cy**2)
tilt=ax.plot_surface(cx,cy,cz,linewidth=0, cmap=plt.cm.jet);
plt.show()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
xv=np.linspace(-10,10,100); yv=np.linspace(-10,10,100)
cx,cy=np.meshgrid(xv,yv)
cz=0*cx
def gauss2D(x0,y0,sigx=1,sigy=1,height=1):
z=height*np.exp(-((cx-x0)**2/2/sigx**2)-((cy-y0)**2/2/sigy**2))
return z
cz=cz+gauss2D(-2,3)
cz=cz+gauss2D(2,4,2,3)
ax.plot_surface(cx,cy,cz,linewidth=0,cstride=2, rstride=2,cmap=plt.cm.jet)
plt.show()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
ax.contour(cx,cy,cz,cstride=2,rstride=2, cmap=plt.cm.jet)
plt.show()
%matplotlib inline
plt.close('all')
plt.style.available
plt.style.use('ggplot')
x = np.linspace(0,10,100)
y = np.sin(x)
plt.plot(x,y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The commands above enable the pylab environment => direct access to numpy, scipy and matplotlib. The option 'inline' embeds plot output directly in the notebook. If this causes problems, remove the 'inline' option.
Step2: A simple plotting example. Try some variations
Step3: Alternatively, Matplotlib understands the MATLAB syntax, e.g., for the above command (does not work with 'inline' enabled)
Step4: Available settings for a plot can be found this way (does not work with 'inline' enabled)
Step5: Some more commands for plot formatting
Step6: Let's switch to the Qt backend for plotting
Step7: Figures can be saved in a number of output formats such as Postscript
Step8: Alternatively, you can also save figures in PNG bitmap and PDF vector formats. (Note that some export formats may not be supported on your platform.)
Step9: You can close an open figure using its handle
Step10: Or in MATLAB style, close all open figures
Step11: Let's do a figure with several subpanels
Step12: Especially for figures with multiple subpanels it may be advisable to increase the figure size somewhat. Do this by using function arguments in the figure() call
Step13: By using Numpy arrays, Matplotlib can conveniently be used as a function ploting program
Step14: Certainly, you can do plots with logarithmic scale
Step15: Let's add grid lines
Step16: Analogously, you can use semilogy() and loglog() for plots with log y-axis and loglog plots.
Step17: Anybody bar charts?
Step18: Let's pimp the plot a little
Step19: For horizontal bar charts, you would use barh().
Step20: As you will have seen, we retrieved handles to the individual pie slices. Let's do something with them
Step21: Matplotlib also offers quiver() plots which are illustrated in the following example (taken from http
Step22: Polar plots are also nicely illustrated on the very same homepage
Step23: Contour plots are well suited for visualization of three-dimensional data sets
Step24: A similar yet distinct representation is provided by pcolormesh().
Step25: Compare the two figures, spot the similarities and differences.
Step26: Let's switch to the Qt backend for 3D plotting
Step27: Then, we can play around.
Step28: Try moving and rotating the (so far empty) plot in three dimensions. After the first excitement has settled, let's populate the plot with some data
Step29: In addition to the above 3D scatter plot, other plot types are supported, such as 3D surface plots
Step30: Try some other colormaps such as cm.bone, cm.spring or cm.cool (once more, these are the MATLAB color schemes).
Step31: Let's display the same data in contour representation
Step32: Style sheets
Step33: There are several predefinded style sheets for matplotlib. You can show all availible styles by typing
Step34: to pick one of them, type e.g.
Step35: and your plots will look similar to those created with ggplot2 in R
|
12,093
|
<ASSISTANT_TASK:>
Python Code:
(1, 2, 3)
()
(1,)
t = 1, 2, 3
t
len(t)
t[1]
u = 4, 5
t + u
2 * u
x, y = 1, 2
x
y
x, y = y, x
x
y
s = {0, 1, 0, 5, 5, 1, 0}
s
1 in s, 2 in s, 1 not in s
l = [0, 1, 0, 5, 5, 1, 0]
set(l)
set('абба')
set()
{}
len(s)
for x in s:
print(x)
{i for i in range(5)}
s2 = s | {2, 5}
s2
s < s2, s > s2, s <= s2, s >= s2
s2 & {1, 2, 3}
s2 - {1,3,5}
s2 ^ {1,3,5}
s2.add(4)
s2
s2.remove(1)
s2
s2 |= {1, 2}
s2
x = set([1, 4, 2, 4, 2, 1, 3, 4])
print(x)
x.add(5) # добавление элемента
print(x)
x.pop() # удаление элемента
print(x)
print(x.intersection(set([2, 4, 6, 8]))) # Пересечение
print(x.difference(set([2, 4, 6, 8]))) # Разность
print(x.union(set([2, 4, 6, 8]))) # Объединение
print(x.symmetric_difference(set([2, 4, 6, 8]))) # Симметрическая разность
print(x.issubset(set([2, 4, 6, 8]))) # Является ли подмножеством
print(x.issubset(set(list(range(10)))))
print(x.issuperset(set([2, 4, 6, 8]))) # Является ли надмножеством
print(x.issuperset(set([2, 4])))
d = {'one': 1, 'two': 2, 'three': 3}
d
d['two']
d['four']
'one' in d, 'four' in d
d['one'] = -1
d
d['four'] = 4
d
len(d)
del d['two']
d
d.get('one'), d.get('five')
d.get('one', 0), d.get('five', 0)
d = {}
d
d['zero'] = 0
d
d['one'] = 1
d
d = {i: i ** 2 for i in range(5)}
d
d = {}
d[0, 0] = 1
d[0, 1] = 0
d[1, 0] = 0
d[1, 1] = -1
d
d[0, 0] + d[1, 1]
d = {'one': 1, 'two': 2, 'three': 3}
for x in d:
print(x, ' ', d[x])
d.keys(), d.values(), d.items()
for x in sorted(d.keys()):
print(x, ' ', d[x])
for x, y in d.items():
print(x, ' ', y)
del x, y
bool(False), bool(True)
bool(None)
bool(0), bool(123)
bool(''), bool(' ')
bool([]), bool([0])
bool(set()), bool({0})
bool({}), bool({0: 0})
def f():
pass
f
pass
type(f)
r = f()
print(r)
def f(x):
return x + 1
f(1), f(1.0)
f('abc')
def f(x, a=0, b='b'):
print(x, ' ', a, ' ', b)
f(1.0)
f(1.0, 1)
f(1.0, b='a')
f(1.0, b='a', a=2)
f(a=2, x=2.0)
a = 1
def f():
a = 2
return a
f()
a
def f():
global a
a = 2
return a
f()
a
def f(x, l):
l.append(x)
return l
l = [1, 2, 3]
f(0, l)
l
def f(x, l=[]):
l.append(x)
return l
f(0)
f(1)
f(2)
def f(x, l=None):
if l is None:
l = []
l.append(x)
return l
f(0)
f(1)
f(2, [0, 1])
def f(x, *l):
print(x, ' ', l)
f(0)
f(0, 1)
f(0, 1, 2)
f(0, 1, 2, 3)
l=[1, 2]
c=('a', 'b')
f(*l, 0, *c)
(*l, 0, *c)
[*l, 0, *c]
[*l, 3]
def f(x, y, **d):
print(x, ' ', y, ' ', d)
f(0, 1, foo=2, bar=3)
d={'foo': 2, 'bar': 3}
f(0, 1, **d)
d['x'] = 0
d['y'] = 1
f(**d)
def f(**d):
return d
f(x=0, y=1, z=2)
d={0: 'a', 1: 'b'}
{**d, 2: 'c'}
d1 = {0: 'a', 1: 'b'}
d2 = {2: 'c', 3: 'd'}
{**d1, **d2}
d2 = {1: 'B', 2: 'C'}
{**d1, 3: 'D', **d2, 3: 'd'}
def f(x, y, *l, **d):
print(x, ' ', y, ' ', l, ' ', d)
f(0, 1, 2, 3, foo=4, bar=5)
def f0(x):
return x + 2
def f1(x):
return 2 * x
l = [f0, f1]
l
x = 2.0
n = 1
l[n](x)
def fib(n):
    '''computes the n-th Fibonacci number'''
assert type(n) is int and n>0
if n <= 2:
return 1
x, y = 1, 1
for i in range(n - 2):
x, y = y, x + y
return y
fib.__doc__
help(fib)
fib?
[fib(n) for n in range(1, 10)]
fib(-1)
fib(2.0)
x = zip(range(5), range(0, 10, 2))
print(list(x))
x = map(lambda tmp: tmp ** 2, range(5))
print(list(x))
x = list(zip([7, 3, 4, 4, 5, 3, 9], ['a', 'n', 'n', 'a', 'k', 'n', 'a']))
# sort alphabetically by letter first, then by number in descending order
x = sorted(x, key=lambda element: (element[1], -element[0]))
print(list(x))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parentheses are optional when the tuple is the only thing on the right-hand side of an assignment.
Step2: Tuples can be used just like lists; the only thing you cannot do is modify them.
Step3: The left-hand side of an assignment may list several variables separated by commas, with a tuple on the right. This assigns values to several variables simultaneously.
Step4: First the tuple on the right is evaluated, using the old values of the variables (before this assignment). Then all the variables simultaneously receive their new values from this tuple. This is why it can be used to swap the values of two variables.
Step5: This is simpler than in other languages, where a third variable has to be used.
Step6: Does an element belong to a set?
Step7: A set can be built from a list, a string, or any object that can be used in a for loop (an iterable).
Step8: How do you write an empty set? Only like this.
Step9: The point is that curly braces in Python are also used for dictionaries (we will discuss them in the next section). When there is at least one element, a dictionary can be told apart from a set, but empty curly braces mean an empty dictionary.
Step10: Sets can be used much like lists.
Step11: This is a set comprehension.
Step12: Union of sets.
Step13: Checking whether one set is a subset of another.
Step14: Intersection.
Step15: Difference and symmetric difference.
Step16: Sets (like lists) are mutable objects. Adding an element to a set and removing one from it.
Step17: As with +=, a set operation can be combined with assignment.
Step18: The operations above can also be written in a different style
Step19: There are also immutable sets; this data type is called frozenset. Operations on such sets are like the usual ones, except that they cannot be modified (no adding or removing elements).
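Since the tutorial code above does not show frozenset itself, here is a brief illustration; the variable names are arbitrary:

```python
fs = frozenset([1, 2, 3])

# Non-mutating set operations work and return new objects
print(fs | {4})        # frozenset({1, 2, 3, 4})
print(fs & {2, 3, 5})  # frozenset({2, 3})

# Mutation is impossible: frozenset has no add/remove methods
try:
    fs.add(4)
except AttributeError as e:
    print(e)

# Being hashable, a frozenset can serve as a dict key or a set element
d = {frozenset([1, 2]): "a pair"}
print(d[frozenset([2, 1])])  # "a pair" (element order does not matter)
```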
Step20: You can look up the value corresponding to a given key. Dictionaries are implemented as hash tables, so lookup is very efficient even in large dictionaries. In low-level languages (e.g. C), building hash tables requires external libraries and a fair amount of code. In scripting languages (perl, python, php) they are built into the language and very easy to use.
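The efficiency claim can be checked experimentally. The absolute timings below are only illustrative, but the gap between a linear list scan and a hash-table lookup is reliably large:

```python
import timeit

n = 100_000
big_list = list(range(n))
big_dict = {i: str(i) for i in range(n)}

# Membership test for an element near the end of each structure
t_list = timeit.timeit(lambda: (n - 1) in big_list, number=200)
t_dict = timeit.timeit(lambda: (n - 1) in big_dict, number=200)
print(t_list, t_dict)  # the dict lookup is faster by orders of magnitude
```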
Step21: You can check whether a given key is present in the dictionary.
Step22: You can assign values to existing keys as well as to missing ones (which will be added to the dictionary).
Step23: The length is the number of keys in the dictionary.
Step24: A key can be deleted from the dictionary.
Step25: The get method, when called with a missing key, does not raise an error but returns the special object None. It is used whenever you need to indicate that an object is absent (to some extent it is analogous to null in C). If get is given a second argument, a default value, then that value is returned instead of None.
Step26: Dictionaries are usually built incrementally
Step27: And this is a dictionary comprehension.
Step28: Keys can be any immutable objects, for example integers, strings, or tuples.
Step29: Dictionaries, like lists, can be used in for loops. The loop runs over the keys present in the dictionary (in some unpredictable order).
Step30: The keys method returns the list of keys, the values method returns the list of corresponding values (in the same order), and the items method returns the list of (key, value) pairs. Strictly speaking, these are not lists but objects that can be used in for loops or turned into lists with the list function. If you want to loop over an ordered list of keys, use sorted(d.keys()).
Step31: What is truth? And what is falsehood? Let us approach this philosophical question experimentally.
Step32: Expressions in boolean positions (after if, elif and while) are implicitly passed through the bool function. Some objects are interpreted as False
Step33: This function is more useful
Step34: If a function has many parameters, it is tempting to call it more simply in the most common cases. For this, def can give default values to some parameters (they must come at the end of the parameter list). A call must supply all mandatory parameters (those without default values); the optional ones may be omitted. If arguments are passed in the form name=value, this can be done in any order, which is far more convenient than remembering whether a given parameter is the eighth or the ninth in a call to some complicated function. Note that no spaces are placed around the = sign in the name=value construct.
Step35: Variables used in a function are local. Assigning to them does not change global variables with the same names.
Step36: If a function needs to use some global variables, they must be declared as global.
Step37: A namespace maps variable names to the objects that are their values. There is the namespace of a function's local variables, the namespace of the program's global variables, and the namespace of Python's built-in functions. Namespaces are implemented using dictionaries.
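These dictionaries can be observed directly: globals() and locals() return the mappings implementing the global and local namespaces, and the builtins module holds the built-in one. A small sketch:

```python
a = 1

def f():
    b = 2
    print(locals())          # {'b': 2}: the function's local namespace
    print('a' in globals())  # True: 'a' lives in the global namespace

f()
print(type(globals()))       # <class 'dict'>

import builtins
print(builtins.len([1, 2]))  # built-in functions live in their own namespace
```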
Step38: If a mutable object is used as a parameter's default value, unexpected consequences can follow. In this case, executing the function definition creates two objects
Step39: To avoid such surprises, it is better to use only immutable objects as default values.
Step40: This function has one mandatory parameter plus an arbitrary number of optional ones. When it is called, all the extra arguments are collected into a tuple, which the function may use as it sees fit.
Step41: The star can also be used when calling a function. You can build a list (or tuple) of arguments in advance and then call the function with those arguments.
Step42: Such unpacking of lists and tuples can be used not only when calling a function but also when building a list or a tuple.
Step43: This function has two mandatory parameters plus an arbitrary number of optional keyword parameters. In a call they must be given in the form name=value. They are collected into a dictionary, which the function may use as it sees fit.
Step44: The double star can also be used when calling a function. You can build in advance a dictionary of arguments mapping parameter names to values, and then call the function with these keyword arguments.
Step45: Here is a curious way to build a dictionary with string keys.
Step46: The double star can be used not only when calling a function but also when building a dictionary.
Step47: Here is a simple way to merge two dictionaries.
Step48: If the same key occurs several times, each later value overwrites the previous one.
Step49: This is the most general form of a function's parameter list. First come the mandatory parameters (two in this case), then an arbitrary number of optional ones (collected into a tuple at call time), and then an arbitrary number of keyword parameters (collected into a dictionary at call time).
Step50: In Python, functions are first-class citizens. They may appear anywhere objects of other types are allowed: among list elements, as dictionary values, and so on.
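Besides the list example shown in the code, functions are often stored as dictionary values, giving a simple dispatch table. A sketch with made-up operation names:

```python
def add(x, y):
    return x + y

def mul(x, y):
    return x * y

# A dispatch table: operator symbol -> function implementing it
ops = {'+': add, '*': mul}

print(ops['+'](2, 3))  # 5
print(ops['*'](2, 3))  # 6
```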
Step51: If you write a function not just to call it once and forget it forever, you need documentation explaining what the function does. For this, a string is written immediately after the def line. It is called a docstring and is preserved when the Python source is translated to bytecode (unlike comments, which are discarded). Usually this string is enclosed in triple quotes and spans several lines. The docstring is available as the function's __doc__ attribute and is used by the help function. Here is an example of a well-written function that computes the $n$-th Fibonacci number.
Step52: In a jupyter notebook the documentation can be accessed in a more convenient way
Step53: Some useful functions
Step54: map applies a function to each element of a sequence
Step55: sorted performs sorting
|
12,094
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
from statsmodels.compat import lzip
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.influence_plot(prestige_model, ax=ax, criterion="cooks")
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.plot_partregress("prestige", "income", ["income", "education"], data=prestige, ax=ax)
fig, ax = plt.subplots(figsize=(12,14))
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige, ax=ax)
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols("prestige ~ income + education", data=prestige, subset=subset).fit()
print(prestige_model2.summary())
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(prestige_model, fig=fig)
fig, ax = plt.subplots(figsize=(12, 8))
fig = sm.graphics.plot_ccpr(prestige_model, "education", ax=ax)
fig = plt.figure(figsize=(12, 8))
fig = sm.graphics.plot_ccpr_grid(prestige_model, fig=fig)
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_regress_exog(prestige_model, "education", fig=fig)
fig, ax = plt.subplots(figsize=(12, 8))
fig = sm.graphics.plot_fit(prestige_model, "education", ax=ax)
#dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
#dta = dta.set_index("State").dropna()  # note: set_index(..., inplace=True) returns None, so chain without inplace
#dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
#crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(crime_model, fig=fig)
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.plot_partregress("murder", "hs_grad", ["urban", "poverty", "single"], ax=ax, data=dta)
fig, ax = plt.subplots(figsize=(8,6))
fig = sm.graphics.plot_leverage_resid2(crime_model, ax=ax)
fig, ax = plt.subplots(figsize=(8,6))
fig = sm.graphics.influence_plot(crime_model, ax=ax)
from statsmodels.formula.api import rlm
rob_crime_model = rlm("murder ~ urban + poverty + hs_grad + single", data=dta,
M=sm.robust.norms.TukeyBiweight(3)).fit(conv="weights")
print(rob_crime_model.summary())
#rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
#print(rob_crime_model.summary())
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww*(X*np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid**2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(resid2[idx], hat_matrix_diag, 'o')
ax = utils.annotate_axes(range(nobs), labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5,5)]*nobs,
size="large", ax=ax)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0,0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Duncan's Prestige Dataset
Step2: Influence plots
Step3: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual.
Step4: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
Step5: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the
Step6: Component-Component plus Residual (CCPR) Plots
Step7: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
Step8: Regression Plots
Step9: Fit Plot
Step10: Statewide Crime 2009 Dataset
Step11: Partial Regression Plots
Step12: Leverage-Resid<sup>2</sup> Plot
Step13: Influence Plot
Step14: Using robust regression to correct for outliers.
Step15: There isn't yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888)
|
12,095
|
<ASSISTANT_TASK:>
Python Code:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for AI Platform
TFVERSION = "1.14" # TF version for AI Platform to use
import os
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
mkdir taxifaremodel
touch taxifaremodel/__init__.py
%%writefile taxifaremodel/model.py
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
#1. Train and Evaluate Input Functions
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0],[40.0],[-74.0],[40.7]]
def read_dataset(csv_path):
def _parse_row(row):
# Decode the CSV row into list of TF tensors
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
# Pack the result into a dictionary
features = dict(zip(CSV_COLUMN_NAMES, fields))
# NEW: Add engineered features
features = add_engineered_features(features)
# Separate the label from the features
label = features.pop("fare_amount") # remove label from features and store
return features, label
# Create a dataset containing the text lines.
dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
dataset = dataset.flat_map(map_func = lambda filename:tf.data.TextLineDataset(filenames = filename).skip(count = 1))
# Parse each CSV row into correct (features,label) format for Estimator API
dataset = dataset.map(map_func = _parse_row)
return dataset
def train_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
return dataset
def eval_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
return dataset
#2. Feature Engineering
# One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)
# Cross features to get combination of day and hour
fc_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
# Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)
fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)
def add_engineered_features(features):
features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
features["latdiff"] = features["pickuplat"] - features["dropofflat"] # East/West
features["londiff"] = features["pickuplon"] - features["dropofflon"] # North/South
features["euclidean_dist"] = tf.sqrt(features["latdiff"]**2 + features["londiff"]**2)
return features
feature_cols = [
#1. Engineered using tf.feature_column module
tf.feature_column.indicator_column(categorical_column = fc_day_hr),
fc_bucketized_plat,
fc_bucketized_plon,
fc_bucketized_dlat,
fc_bucketized_dlon,
#2. Engineered in input functions
tf.feature_column.numeric_column(key = "latdiff"),
tf.feature_column.numeric_column(key = "londiff"),
tf.feature_column.numeric_column(key = "euclidean_dist")
]
#3. Serving Input Receiver Function
def serving_input_receiver_fn():
receiver_tensors = {
'dayofweek' : tf.placeholder(dtype = tf.int32, shape = [None]), # shape is vector to allow batch of requests
'hourofday' : tf.placeholder(dtype = tf.int32, shape = [None]),
'pickuplon' : tf.placeholder(dtype = tf.float32, shape = [None]),
'pickuplat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflon' : tf.placeholder(dtype = tf.float32, shape = [None]),
}
features = add_engineered_features(receiver_tensors) # 'features' is what is passed on to the model
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = receiver_tensors)
#4. Train and Evaluate
def train_and_evaluate(params):
OUTDIR = params["output_dir"]
model = tf.estimator.DNNRegressor(
hidden_units = params["hidden_units"].split(","), # NEW: paramaterize architecture
feature_columns = feature_cols,
model_dir = OUTDIR,
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = max(100, params["train_steps"] // 10) # checkpoint every N steps
)
)
# Add custom evaluation metric
def my_rmse(labels, predictions):
pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
model = tf.contrib.estimator.add_metrics(model, my_rmse)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn(params["train_data_path"]),
max_steps = params["train_steps"])
exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn(params["eval_data_path"]),
steps = None,
start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.estimator.train_and_evaluate(model, train_spec, eval_spec)
%%writefile taxifaremodel/task.py
import argparse
import json
import os
from . import model
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--hidden_units",
help = "Hidden layer sizes to use for DNN feature columns -- provide space-separated layers",
type = str,
default = "10,10"
)
parser.add_argument(
"--train_data_path",
help = "GCS or local path to training data",
required = True
)
parser.add_argument(
"--train_steps",
help = "Steps to run the training job for (default: 1000)",
type = int,
default = 1000
)
parser.add_argument(
"--eval_data_path",
help = "GCS or local path to evaluation data",
required = True
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--job-dir",
help="This is not used by our model, but it is required by gcloud",
)
args = parser.parse_args().__dict__
# Append trial_id to path so trials don"t overwrite each other
args["output_dir"] = os.path.join(
args["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
# Run the training job
model.train_and_evaluate(args)
%%writefile hyperparam.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
maxTrials: 10
maxParallelTrials: 10
hyperparameterMetricTag: rmse
enableTrialEarlyStopping: True
algorithm: GRID_SEARCH
params:
- parameterName: hidden_units
type: CATEGORICAL
categoricalValues:
- 10,10
- 64,32
- 128,64,32
- 32,64,128
- 128,128,128
- 32,32,32
- 256,128,64,32
- 256,256,256,32
- 256,256,256,256
- 512,256,128,64,32
OUTDIR="gs://{}/taxifare/trained_hp_tune".format(BUCKET)
!gsutil -m rm -rf {OUTDIR} # start fresh each time
!gcloud ai-platform jobs submit training taxifare_$(date -u +%y%m%d_%H%M%S) \
--package-path=taxifaremodel \
--module-name=taxifaremodel.task \
--config=hyperparam.yaml \
--job-dir=gs://{BUCKET}/taxifare \
--python-version=3.5 \
--runtime-version={TFVERSION} \
--region={REGION} \
-- \
--train_data_path=gs://{BUCKET}/taxifare/smallinput/taxi-train.csv \
--eval_data_path=gs://{BUCKET}/taxifare/smallinput/taxi-valid.csv \
--train_steps=5000 \
--output_dir={OUTDIR}
OUTDIR="gs://{}/taxifare/trained_large_tuned".format(BUCKET)
!gsutil -m rm -rf {OUTDIR} # start fresh each time
!gcloud ai-platform jobs submit training taxifare_large_$(date -u +%y%m%d_%H%M%S) \
--package-path=taxifaremodel \
--module-name=taxifaremodel.task \
--job-dir=gs://{BUCKET}/taxifare \
--python-version=3.5 \
--runtime-version={TFVERSION} \
--region={REGION} \
--scale-tier=STANDARD_1 \
-- \
--train_data_path=gs://cloud-training-demos/taxifare/large/taxi-train*.csv \
--eval_data_path=gs://cloud-training-demos/taxifare/small/taxi-valid.csv \
--train_steps=200000 \
--output_dir={OUTDIR} \
--hidden_units="128,64,32"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Move code into python package
Step2: Create model.py
Step3: Create task.py
Step4: Create hypertuning configuration
Step5: Run the training job
Step6: Results
|
12,096
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-3', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
12,097
|
<ASSISTANT_TASK:>
Python Code:
binary_command = {'NOT': '~', 'AND': '&', 'OR': '|', 'LSHIFT': '<<', 'RSHIFT': '>>'}
operators = binary_command.values()
import csv
import numpy as np
def translate(l):
return [binary_command[a] if a in binary_command else a for a in l]
def display(input_file):
"""produce a dict mapping variables to expressions"""
commands = []
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for line in csv_reader:
commands.append((line[-1], ' '.join(list(translate(line[:-2])))))
return dict(commands)
import re
def extract_variables(expr):
varbls = []
regex_pattern = r'\s|\)|\('
l = re.split(regex_pattern, expr)
for a in l:
if (a not in operators) and (not a.isnumeric()) and (a != ''):
varbls.append(a)
return set(varbls)
def create_instance(wire):
exec_python = commands[wire]
pending = extract_variables(commands[wire])
count = 0
while pending and (count < 200):
s = pending.pop()
expr = commands[s]
exec_python = re.sub(r'\b{0}\b'.format(s), '( {0} )'.format(expr), exec_python)
pending = pending.union(extract_variables(exec_python))
count += 1
return wire + ' = ' + exec_python
def evaluate(var):
instance = create_instance(var)
exec(instance)
return np.uint16(locals()[var])
commands = display('inputs/input7.test.txt')
def test():
assert(evaluate('d') == 72)
assert(evaluate('e') == 507)
assert(evaluate('f') == 492)
assert(evaluate('g') == 114)
assert(evaluate('h') == 65412)
assert(evaluate('i') == 65079)
assert(evaluate('x') == 123)
assert(evaluate('y') == 456)
test()
import numpy as np
def RSHIFT(a, b):
result = np.uint16(a) >> int(b)
return int(result)
def LSHIFT(a, b):
result = np.uint16(a) << int(b)
return int(result)
def OR(a, b):
result = np.uint16(a) | np.uint16(b)
return int(result)
def AND(a, b):
result = np.uint16(a) & np.uint16(b)
return int(result)
def NOT(a):
result = ~ np.uint16(a)
return int(result)
import csv
def display(input_file):
"""produce a dict mapping variables to expressions"""
commands = []
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for line in csv_reader:
commands.append((line[-1], line[:-2]))
return dict(commands)
def evaluate(wire):
known = {}
while wire not in known:
for k, v in commands.items():
if (len(v) == 1) and (v[0].isnumeric()) and (k not in known):
known[k] = int(v[0])
elif (len(v) == 1) and (v[0] in known) and (k not in known):
known[k] = known[v[0]]
elif ('AND' in v) and (v[0] in known) and (v[2] in known):
known[k] = AND(known[v[0]], known[v[2]])
elif ('AND' in v) and (v[0].isnumeric()) and (v[2] in known):
known[k] = AND(int(v[0]), known[v[2]])
elif ('AND' in v) and (v[0] in known) and (v[2].isnumeric()):
known[k] = AND(known[v[0]], int(v[2]))
elif ('OR' in v) and (v[0] in known) and (v[2] in known):
known[k] = OR(known[v[0]], known[v[2]])
elif ('OR' in v) and (v[0].isnumeric()) and (v[2] in known):
known[k] = OR(int(v[0]), known[v[2]])
elif ('OR' in v) and (v[0] in known) and (v[2].isnumeric()):
known[k] = OR(known[v[0]], int(v[2]))
elif ('LSHIFT' in v) and (v[0] in known):
known[k] = LSHIFT(known[v[0]], v[2])
elif ('RSHIFT' in v) and (v[0] in known):
known[k] = RSHIFT(known[v[0]], v[2])
elif ('NOT' in v) and (v[1] in known):
known[k] = NOT(known[v[1]])
return known[wire]
commands = display('inputs/input7.test1.txt')
commands
evaluate('a')
commands = display('inputs/input7.test2.txt')
commands
test()
commands = display('inputs/input7.txt')
evaluate('a')
import csv
import numpy as np
def display(input_file):
"""produce a dict mapping variables to expressions"""
commands = []
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for line in csv_reader:
commands.append((line[-1], line[:-2]))
return dict(commands)
class LazyVar(object):
def __init__(self, func):
self.func = func
self.value = None
def __call__(self):
if self.value is None:
self.value = self.func()
return self.value
binary_command = {'NOT': '~', 'AND': '&', 'OR': '|', 'LSHIFT': '<<', 'RSHIFT': '>>'}
def translate(l):
translated = []
for a in l:
if a in binary_command:
b = binary_command[a]
elif a.isnumeric():
b = 'np.uint16({})'.format(a)
else:
b = '{}.func()'.format('var_' + a)
translated.append(b)
return translated
commands = display('inputs/input7.test2.txt')
for k, v in commands.items():
command_str = '{0} = LazyVar(lambda: {1})'.format('var_' + k, ''.join(translate(v)))
print(command_str)
exec(command_str)
def test():
assert(var_d.func() == 72)
assert(var_e.func() == 507)
assert(var_f.func() == 492)
assert(var_g.func() == 114)
assert(var_h.func() == 65412)
assert(var_i.func() == 65079)
assert(var_x.func() == 123)
assert(var_y.func() == 456)
test()
def rscript_command(var, l):
vocab = {'AND' : 'bitwAnd',
'OR' : 'bitwOr',
'LSHIFT' : 'bitwShiftL',
'RSHIFT' : 'bitwShiftR'}
if len(l) == 3:
func = vocab[l[1]]
arg1 = l[0] if l[0].isdigit() else 'var_' + l[0] + '()'
arg2 = l[2] if l[2].isdigit() else 'var_' + l[2] + '()'
return 'var_{0} <- function(a={1}, b={2})'.format(var, arg1, arg2) + ' {' + '{0}(a,b)'.format(func) + '}'
elif len(l) == 2:
func = 'bitwNot'
arg1 = l[1] if l[1].isdigit() else 'var_' + l[1] + '()'
return 'var_{0} <- function(a={1})'.format(var, arg1) + ' {' + '{0}(a)'.format(func) + '}'
else:
arg1 = l[0] if l[0].isdigit() else 'var_' + l[0] + '()'
return 'var_{0} <- function(a={1})'.format(var, arg1) + ' {' + 'a' + '}'
def generate_rscript(commands, target):
with open('day7_commands.R', 'wt') as f:
for k, v in commands.items():
f.write(rscript_command(k, v)+'\n')
f.write('var_' + target + '()')
commands = display('inputs/input7.test2.txt')
generate_rscript(commands, 'd')
! cat day7_commands.R
!Rscript day7_commands.R
commands = display('inputs/input7.txt')
generate_rscript(commands, 'a')
! cat day7_commands.R
!Rscript day7_commands.R
commands = display('inputs/input7.txt')
commands['b'] = ['16076']
evaluate('a')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Day 7
Step2: Test
Step4: This approach seems correct, but it creates huge expressions along the way that become harder and harder to parse, so reaching a final expression that wraps up all the computations takes a very long time. Two ideas to carry on:
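One of those ideas — evaluating each wire once through a memo table instead of substituting text — can be sketched as follows. This is illustrative only: the `commands` dict below is a made-up circuit in the same wire → token-list shape that `display()` produces, not the real puzzle input.

```python
import numpy as np

# Hypothetical parsed circuit (values invented for illustration).
commands = {
    'x': ['123'],
    'y': ['456'],
    'd': ['x', 'AND', 'y'],
    'f': ['x', 'LSHIFT', '2'],
    'h': ['NOT', 'x'],
}

def evaluate(wire, commands, memo=None):
    """Evaluate a wire recursively, caching every intermediate result."""
    if memo is None:
        memo = {}
    if wire.isdigit():                      # bare numeric operand
        return np.uint16(int(wire))
    if wire in memo:
        return memo[wire]
    tokens = commands[wire]
    if len(tokens) == 1:                    # plain assignment, e.g. "123 -> x"
        value = evaluate(tokens[0], commands, memo)
    elif tokens[0] == 'NOT':
        value = ~evaluate(tokens[1], commands, memo)
    else:                                   # binary gate: AND/OR/LSHIFT/RSHIFT
        ops = {'AND': np.bitwise_and, 'OR': np.bitwise_or,
               'LSHIFT': np.left_shift, 'RSHIFT': np.right_shift}
        value = ops[tokens[1]](evaluate(tokens[0], commands, memo),
                               evaluate(tokens[2], commands, memo))
    memo[wire] = np.uint16(value)
    return memo[wire]

print(evaluate('d', commands))  # 72
print(evaluate('h', commands))  # 65412
```

Each wire is computed at most once, so the run time is linear in the number of gates rather than blowing up with the substitution depth.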
Step5: Test 0
Step6: Test 1
Step7: Solution
Step9: Approach 3
Step10: Test
Step11: Although the approach passes the test, it does not end in reasonable time for the full input.
Step12: Test
Step13: Solution
Step14: Although this approach is more natural than defining a LazyWrapper in Python, it takes quite a lot of time to execute, so this is not a very cool solution after all.
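As an aside, the lazy compute-once behaviour that both `LazyVar` and the generated R functions provide is already in the standard library: `functools.lru_cache` (or `functools.cache` on Python 3.9+) memoises a zero-argument function on its first call. A hand-wired sketch with made-up wire values — in practice these functions would be generated from the parsed command dict:

```python
from functools import lru_cache

import numpy as np

@lru_cache(maxsize=None)
def var_x():
    return np.uint16(123)

@lru_cache(maxsize=None)
def var_y():
    return np.uint16(456)

@lru_cache(maxsize=None)
def var_d():
    return var_x() & var_y()   # computed once, then served from the cache

@lru_cache(maxsize=None)
def var_h():
    return ~var_x()

print(var_d())  # 72
print(var_h())  # 65412
```

Generating such decorated functions from the command dict would give the same one-pass evaluation without a custom wrapper class.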
|
12,098
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
a = 1+1j
output = np.angle(a, deg=False)
print(output)
a = np.array([1+2j, 3+4j, 5+6j])
real = a.real
imag = a.imag
print("real part=", real)
print("imaginary part=", imag)
a = np.array([1+2j, 3+4j, 5+6j])
a.real = 9
a.imag = [5, 7, 9]
print(a)
a = 1+2j
output = np.conjugate(a)
print(output)
a = np.exp(2j * np.pi * np.arange(8))
output = np.fft.fft(a)
print(output)
print("a=", a)
inversed = np.fft.ifft(output)
print("inversed=", inversed)
a = [0, 1, 0, 0]
output = np.fft.rfft(a)
print(output)
assert output.size == (len(a)//2+1 if len(a)%2==0 else (len(a)+1)//2)
# cf.
output2 = np.fft.fft(a)
print(output2)
inversed = np.fft.irfft(output)
print("inversed=", inversed)
signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=np.float32)
fourier = np.fft.fft(signal)
n = signal.size
freq = np.fft.fftfreq(n, d=1)
print(freq)
fig = plt.figure(figsize=(19, 10))
# Hamming window
window = np.hamming(51)
plt.plot(np.bartlett(51), label="Bartlett window")
plt.plot(np.blackman(51), label="Blackman window")
plt.plot(np.hamming(51), label="Hamming window")
plt.plot(np.hanning(51), label="Hanning window")
plt.plot(np.kaiser(51, 14), label="Kaiser window")
plt.xlabel("sample")
plt.ylabel("amplitude")
plt.legend()
plt.grid()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q2. Return the real part and imaginary part of a.
Step2: Q3. Replace the real part of a with 9, the imaginary part with [5, 7, 9].
Step3: Q4. Return the complex conjugate of a.
Step4: Discrete Fourier Transform
Step5: Q6. Compute the one-dimensional inverse DFT of the output in the above question.
Step6: Q7. Compute the one-dimensional discrete Fourier Transform for real input a.
Step7: Q8. Compute the one-dimensional inverse DFT of the output in the above question.
Step8: Q9. Return the DFT sample frequencies of a.
Step9: Window Functions
|
12,099
|
<ASSISTANT_TASK:>
Python Code:
import requests
import json
from TM1py import TM1Service
STATION = 'GHCND:USW00014732'
FROM, TO = '2017-01-01', '2017-01-04'
HEADERS = {"token": 'yyqEBOAbHVbtXkfAmZuPNfnSXvdfyhgn'}
url = 'https://www.ncdc.noaa.gov/cdo-web/api/v2/data?' \
'datasetid=GHCND&' \
'startdate=' + FROM + '&' \
'enddate=' + TO + '&' \
'limit=1000&' \
'datatypeid=TMIN&' \
'datatypeid=TAVG&' \
'datatypeid=TMAX&' \
'stationid=' + STATION
print(url)
response = requests.get(url, headers=HEADERS).json()
results = response["results"]
print(json.dumps(results[0:3], indent=2))
cells = dict()
for record in results:
value = record['value'] / 10
coordinates = ("Actual", record['date'][0:10], "NYC", record['datatype'])
cells[coordinates] = value
for coordinate, value in cells.items():
print(coordinate, value)
import configparser
config = configparser.ConfigParser()
config.read(r'..\..\config.ini')
with TM1Service(**config['tm1srv01']) as tm1:
tm1.cubes.cells.write_values("Weather Data", cells)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Constants
Step2: Step 2
Step3: This is the URL we will get the data from.
Step 3
Step4: Step 4
Step5: Step 5
Step6: With TM1py we can send data to a cube with two lines of code, as long as our cellset matches the dimensions in our cube
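The cellset is just a dict mapping coordinate tuples (one element per cube dimension, in dimension order) to values. A sketch of building one from hypothetical NOAA-style records — runnable without a TM1 server; the dates and values below are invented, and NOAA encodes temperatures in tenths of a degree Celsius:

```python
# Hypothetical records in the shape the NOAA CDO API returns.
results = [
    {"date": "2017-01-01T00:00:00", "datatype": "TMAX", "value": 56},
    {"date": "2017-01-01T00:00:00", "datatype": "TMIN", "value": -11},
]

cells = {}
for record in results:
    value = record["value"] / 10          # tenths of a degree -> degrees
    # one coordinate per cube dimension, in dimension order
    coordinates = ("Actual", record["date"][0:10], "NYC", record["datatype"])
    cells[coordinates] = value

for coordinate, value in cells.items():
    print(coordinate, value)
```

Passing such a dict to `tm1.cubes.cells.write_values("Weather Data", cells)` then performs the whole write in a single call.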
|