Unnamed: 0 (int64, 0-15.9k) | cleaned_code (stringlengths 67-124k ⌀) | cleaned_prompt (stringlengths 168-30.3k ⌀)
|---|---|---|
1,500
|
<ASSISTANT_TASK:>
Python Code:
# This library provides the main functions we will use
from sympy import var, Matrix, Function, sin, cos, pi, trigsimp
# This library contains a function that will give our equations a "pretty" format
from sympy.physics.mechanics import mechanics_printing
mechanics_printing()
τ = 2*pi
var("t q1 q2 q3")
var("l1:4")
# Define the origin point and the rotation and translation to apply
p0 = Matrix([[2],
[3],
[0]])
R1 = Matrix([[cos(q1), -sin(q1), 0],
[sin(q1), cos(q1), 0],
[0, 0, 1]])
d1 = Matrix([[l1],
[0],
[0]])
p1 = R1*p0 + d1
p1
a = 1.2
d = 0
α = τ/4
θ = q1
A1 = Matrix([[cos(θ), -sin(θ)*cos(α), sin(θ)*sin(α), a*cos(θ)],
[sin(θ), cos(θ)*cos(α), -cos(θ)*sin(α), a*sin(θ)],
[0, sin(α), cos(α), d],
[0, 0, 0, 1]])
A1
a = 0.8
d = 0
α = 0
θ = q2
A2 = Matrix([[cos(θ), -sin(θ)*cos(α), sin(θ)*sin(α), a*cos(θ)],
[sin(θ), cos(θ)*cos(α), -cos(θ)*sin(α), a*sin(θ)],
[0, sin(α), cos(α), d],
[0, 0, 0, 1]])
A2
A1*A2
A1.inv()
trigsimp(A1.inv())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Once the functions from our library are imported, we can start by declaring variables (or constants) that are known to us.
Step2: Note that either of these two notations is valid for importing variables from sympy
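The two import styles mentioned here can be shown side by side; a minimal sketch (the symbol names are just illustrative):

```python
from sympy import var, symbols

# var() creates the symbols AND injects them into the calling namespace
var("q1 q2 q3")

# symbols() only returns the objects, so you bind the names explicitly;
# the "l1:4" range syntax expands to l1, l2, l3 in both functions
l1, l2, l3 = symbols("l1:4")
```

Either way, the resulting objects are ordinary sympy symbols that can be combined into expressions.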
Step3: The Denavit - Hartenberg convention
Step4: Giving other random values to the same matrix, we get
Step5: And just as in any algebra engine, if we want to multiply, we simply do it
Step6: or obtain the inverse matrix
Step7: which can sometimes be simplified
|
1,501
|
<ASSISTANT_TASK:>
Python Code:
current_number = 2
while current_number <= 16:
    twice_number = current_number + current_number
    print(f"{current_number} and {current_number} are {twice_number}")
    current_number = twice_number
def sum_positive_numbers(max_number):
    total = 0
    first_number = 1
    while first_number <= max_number:
        total = total + first_number
        first_number = first_number + 1
    return total
user_number = int(input("Please enter a number: "))
print(sum_positive_numbers(user_number))
def get_grades(number_of_grades):
    grades = []
    while len(grades) < number_of_grades:
        current_grade = int(input("Please enter a student grade: "))
        grades = grades + [current_grade]
    return grades
def get_highest_grade(grades):
    highest_grade = grades[0]
    current_grade_index = 1
    while current_grade_index < len(grades):
        if grades[current_grade_index] > highest_grade:
            highest_grade = grades[current_grade_index]
        current_grade_index = current_grade_index + 1
    return highest_grade
number_of_grades = int(input("How many students are there?: "))
grades = get_grades(number_of_grades)
highest_grade = get_highest_grade(grades)
print(f"The highest grade is {highest_grade}")
i = 1
j = 1
while i <= 10:
    line = ''
    while j <= 10:
        line = line + str(i * j) + '\t'
        j = j + 1
    print(line)
    j = 1
    i = i + 1
i = 1
while i < 10:
    print(i)
print("End of the program")
i = 8
while i <= 0:
    print(i)
print("End of the program")
numbers = [1, 2, 3, 4]
index = 0
total = 0
while index <= len(numbers):
    total = total + numbers[index]
    index = index + 1
print(total)
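The final loop above uses `index <= len(numbers)`, so its last iteration reads `numbers[4]` and raises an IndexError; assuming the intent is simply to sum the list, a corrected sketch:

```python
numbers = [1, 2, 3, 4]
index = 0
total = 0
while index < len(numbers):  # strict < keeps index within the valid range
    total = total + numbers[index]
    index = index + 1
print(total)  # -> 10
```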
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <p style="text-align
Step2: <p style="text-align
Step3: <p style="text-align
Step4: <p style="text-align
Step5: <p style="text-align
Step6: <p style="text-align
Step7: <div class="align-center" style="display
Step8: <div class="align-center" style="display
Step9: <div class="align-center" style="display
Step10: <div class="align-center" style="display
Step11: <div class="align-center" style="display
|
1,502
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import numpy as np
# Build the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu', input_shape=(None, 5)),
tf.keras.layers.Dense(3)
])
result = model(tf.constant(np.random.randn(10,5), dtype = tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.categorical_crossentropy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Title
Step2: Resources
Step3: Run the model on a single batch of data, and inspect the output
Step4: Compile the model for training
|
1,503
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%matplotlib notebook
from threeML import *
import os
gbm_dir = "gbm"
bat_dir = "bat"
bat = OGIPLike('BAT',
observation=os.path.join(bat_dir,'gbm_bat_joint_BAT.pha'),
response=os.path.join(bat_dir,'gbm_bat_joint_BAT.rsp'))
bat.set_active_measurements('15-150')
bat.view_count_spectrum()
nai6 = OGIPLike('n6',
os.path.join(gbm_dir,'gbm_bat_joint_NAI_06.pha'),
os.path.join(gbm_dir,'gbm_bat_joint_NAI_06.bak'),
os.path.join(gbm_dir,'gbm_bat_joint_NAI_06.rsp'),
spectrum_number=1)
nai6.set_active_measurements('8-900')
nai6.view_count_spectrum()
bgo0 = OGIPLike('b0',
os.path.join(gbm_dir,'gbm_bat_joint_BGO_00.pha'),
os.path.join(gbm_dir,'gbm_bat_joint_BGO_00.bak'),
os.path.join(gbm_dir,'gbm_bat_joint_BGO_00.rsp'),
spectrum_number=1)
bgo0.set_active_measurements('250-10000')
bgo0.view_count_spectrum()
band = Band()
model = Model(PointSource('joint_fit',0,0,spectral_shape=band))
data_list = DataList(bat,nai6,bgo0)
jl = JointLikelihood(model, data_list)
_=jl.fit()
no_eac_results = jl.results
_=display_spectrum_model_counts(jl,step=False)
# turn on the effective area correction and set its bounds
nai6.use_effective_area_correction(.2,1.8)
bgo0.use_effective_area_correction(.2,1.8)
# refit the data
_=jl.fit()
with_eac_res = jl.results
_=display_spectrum_model_counts(jl,step=False)
no_eac_results.display()
with_eac_res.display()
plot_point_source_spectra(no_eac_results,with_eac_res,flux_unit='erg2/(keV s cm2)',equal_tailed=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plugin setup
Step2: Model setup
Step3: Spectral fitting
Step4: It seems that the effective areas of GBM and BAT do not agree!
Step5: Let's add an effective area correction between the detectors to investigate the problem
Step6: Now we have a much better fit to all data sets
Step7: Examining the differences
Step8: Let's plot the fits in model space and see how different the resulting models are.
|
1,504
|
<ASSISTANT_TASK:>
Python Code:
%cd ../examples/superlists/
!python3 functional_tests.py
%%writefile functional_tests.py
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import unittest
class NewVisitorTest(unittest.TestCase):

    def setUp(self):
        self.browser = webdriver.Firefox()
        self.browser.implicitly_wait(3)

    def tearDown(self):
        self.browser.quit()

    def test_can_start_a_list_and_retrieve_it_later(self):
        # Edith has heard about a cool new online to-do app. She goes
        # to check out its homepage
        self.browser.get('http://localhost:8000')

        # She notices the page title and header mention to-do lists
        self.assertIn('To-Do', self.browser.title)
        header_text = self.browser.find_element_by_tag_name('h1').text
        self.assertIn('To-Do', header_text)

        # She is invited to enter a to-do item straight away
        inputbox = self.browser.find_element_by_id('id_new_item')
        self.assertEqual(
            inputbox.get_attribute('placeholder'),
            'Enter a to-do item'
        )

        # She types "Buy peacock feathers" into a text box (Edith's hobby
        # is tying fly-fishing lures)
        inputbox.send_keys('Buy peacock feathers')

        # When she hits enter, the page updates, and now the page lists
        # "1: Buy peacock feathers" as an item in a to-do list table
        inputbox.send_keys(Keys.ENTER)
        table = self.browser.find_element_by_id('id_list_table')
        rows = table.find_elements_by_tag_name('tr')
        self.assertTrue(
            any(row.text == '1: Buy peacock feathers' for row in rows)
        )

        # There is still a text box inviting her to add another item. She
        # enters "Use peacock feathers to make a fly" (Edith is very
        # methodical)
        self.fail('Finish the test!')

        # The page updates again, and now shows both items on her list

        # Edith wonders whether the site will remember her list. Then she sees
        # that the site has generated a unique URL for her -- there is some
        # explanatory text to that effect.

        # She visits that URL - her to-do list is still there.

        # Satisfied, she goes back to sleep

if __name__ == '__main__':
    unittest.main(warnings='ignore')
!python3 functional_tests.py
!python3 manage.py test
!mkdir lists/templates
%%writefile lists/templates/home.html
<html>
<title>To-Do lists</title>
</html>
%%writefile lists/views.py
from django.shortcuts import render
def home_page(request):
    return render(request, 'home.html')
!python3 manage.py test
...
# Application definition
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'lists',
)
...
!python3 manage.py test
%%writefile lists/tests.py
from django.core.urlresolvers import resolve
from django.test import TestCase
from django.http import HttpRequest
from lists.views import home_page
class HomePageTest(TestCase):

    def test_root_url_resolves_to_home_page_view(self):
        found = resolve('/')
        self.assertEqual(found.func, home_page)

    def test_home_page_returns_correct_html(self):
        request = HttpRequest()
        response = home_page(request)
        self.assertTrue(response.content.strip().startswith(b'<html>'))  # <-- strip() fixes the offending newline here
        self.assertIn(b'<title>To-Do lists</title>', response.content)
        self.assertTrue(response.content.strip().endswith(b'</html>'))  # <-- strip() fixes the offending newline here
!python3 manage.py test
# %load lists/tests.py
from django.core.urlresolvers import resolve
from django.test import TestCase
from django.http import HttpRequest
from django.template.loader import render_to_string
from lists.views import home_page
class HomePageTest(TestCase):

    def test_root_url_resolves_to_home_page_view(self):
        found = resolve('/')
        self.assertEqual(found.func, home_page)

    def test_home_page_returns_correct_html(self):
        request = HttpRequest()
        response = home_page(request)
        expected_html = render_to_string('home.html')
        self.assertEqual(response.content.decode(), expected_html)
%%writefile lists/templates/home.html
<html>
<head>
<title>To-Do lists</title>
</head>
<body>
<h1>Your To-Do list</h1>
</body>
</html>
!python3 functional_tests.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Did you try it, and get an error saying Problem loading page or Unable to connect? So did I. It’s because we forgot to spin up the dev server first using manage.py runserver. Do that, and you’ll get the failure message we’re after.
Step2: We’re using several of the methods that Selenium provides to examine web pages
Step3: Decoding that, the test is saying it can’t find an <h1> element on the page. Let’s see what we can do to add that to the HTML of our home page.
Step4: Great! We’ll start by taking our HTML string and putting it into its own file. Create a directory called lists/templates to keep templates in, and then open a file at lists/templates/home.html, to which we’ll transfer our HTML
Step5: Now to change our views
Step6: Instead of building our own HttpResponse, we now use the Django render function. It takes the request as its first parameter (for reasons we’ll go into later) and the name of the template to render. Django will automatically search folders called templates inside any of your apps' directories. Then it builds an HttpResponse for you, based on the content of the template.
Step7: Another chance to analyse a traceback
Step8: You can see there’s lots of apps already in there by default. We just need to add ours, lists, to the bottom of the list. Don’t forget the trailing comma—it may not be required, but one day you’ll be really annoyed when you forget it and Python concatenates two strings on different lines…
Step9: Darn, not quite.
Step10: Our refactor of the code is now complete, and the tests mean we’re happy that behaviour is preserved. Now we can change the tests so that they’re no longer testing constants; instead, they should just check that we’re rendering the right template. Another Django helper function called render_to_string is our friend here
Step11: We use .decode() to convert the response.content bytes into a Python unicode string, which allows us to compare strings with strings, instead of bytes with bytes as we did earlier.
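The bytes-versus-string distinction behind that `.decode()` call can be seen in isolation; a minimal sketch (the literal below just stands in for what `response.content` would hold):

```python
content = b"<html><title>To-Do lists</title></html>"  # like response.content
text = content.decode()  # UTF-8 by default -> a str

assert isinstance(content, bytes)
assert isinstance(text, str)
assert '<title>To-Do lists</title>' in text
```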
Step12: Let’s see if our functional test likes it a little better
|
1,505
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib.ticker import MultipleLocator
from matplotlib import rcParams
from matplotlib.legend import Legend
import seaborn as sns
rcParams["font.family"] = "sans-serif"
rcParams['font.sans-serif'] = ['DejaVu Sans']
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.io import fits
from astropy.table import Table
import glob
import gc
%matplotlib notebook
cat_files = glob.glob('update_final*fit')
def get_hist_counts(cf):
    score_tab = fits.getdata(cf)
    scores = np.array(score_tab.score)
    all_cnts, _ = np.histogram(scores, range=(0,1), bins=100)
    low_lat = np.where(np.abs(score_tab.b) < 5)
    plane_lat = np.where(np.abs(score_tab.b) < 15)
    high_lat = np.where(np.abs(score_tab.b) > 30)
    low_cnts, _ = np.histogram(scores[low_lat], range=(0,1), bins=100)
    plane_cnts, _ = np.histogram(scores[plane_lat], range=(0,1), bins=100)
    high_cnts, _ = np.histogram(scores[high_lat], range=(0,1), bins=100)
    return all_cnts, low_cnts, plane_cnts, high_cnts
all_cnts = np.zeros(100, dtype=np.int64)
low_cnts = np.zeros_like(all_cnts)
plane_cnts = np.zeros_like(all_cnts)
high_cnts = np.zeros_like(all_cnts)
for cf in cat_files:
    all_tmp, low_tmp, plane_tmp, high_tmp = get_hist_counts(cf)
    all_cnts += all_tmp
    low_cnts += low_tmp
    plane_cnts += plane_tmp
    high_cnts += high_tmp
col_dict = {'mustard': "#E29930",
'blue': "#217CA3",
'asphalt': "#32384D"
}
thresh = 0.83 #for nDet>2 sources
x_grid = np.repeat(np.linspace(0.0, 1, 101), 2)
ynorm = 1e6
fig, ax = plt.subplots(figsize=(7,5))
ax.plot(x_grid, np.concatenate(([1e-10], np.repeat(all_cnts, 2), [1e-10])),
color=col_dict['blue'])
ax.fill(x_grid, np.concatenate(([1e-10], np.repeat(all_cnts, 2), [1e-10])),
alpha=0.4, color=col_dict['blue'],
label='PS1 PSC "missing"')
ax.plot(x_grid, np.concatenate(([1e-10], np.repeat(low_cnts, 2), [1e-10])),
color=col_dict['mustard'])
ax.fill(x_grid, np.concatenate(([1e-10], np.repeat(low_cnts, 2), [1e-10])),
alpha=0.4, color=col_dict['mustard'],
label=r"$|b| < 5\degree$")
ax.plot(x_grid, np.concatenate(([0], np.repeat(high_cnts, 2), [0])),
color=col_dict['asphalt'])
ax.fill(x_grid, np.concatenate(([1e-10], np.repeat(high_cnts, 2), [1e-10])),
alpha=0.4, color=col_dict['asphalt'],
label=r"$|b| > 30\degree$")
ax.set_yscale("log")
ax.set_xlim(-0.01, 1.01)
ax.set_ylim(4e3, 3e6)
ax.tick_params(which="both", top=True, right=True, labelsize=15)
xminor = MultipleLocator(0.05)
ax.xaxis.set_major_locator(MultipleLocator(0.1))
ax.xaxis.set_minor_locator(MultipleLocator(0.025))
ax.set_xlabel("RF score", fontsize=15)
ax.set_ylabel("N", fontsize=15)
ax.vlines([thresh], 10, 1e9,
linestyles=":",
color='k', lw=2, zorder=11)
ax.text(thresh, .4e7, 'FoM threshold',
color='k',
rotation=90, ha="right", fontsize=14)
ax.text(thresh-0.001, 2e6, r'$\longleftarrow \mathrm{Extended}$ ',
color='k',
ha="right", fontsize=13)
ax.text(thresh, 2e6, r' $\mathrm{PS} \longrightarrow$',
color='k',
ha="left", fontsize=13)
ax.legend(loc=10, framealpha=1, bbox_to_anchor = (0.0,0.1,1,1),
bbox_transform=ax.transAxes, fontsize=13)
fig.subplots_adjust(left=0.105,right=0.98,top=0.98,bottom=0.11)
fig.savefig("../paperII/figures/PS1_PSC_update_hist.pdf")
print("There are {:d} sources classified in the ZTF-PS1 catalog".format(sum(all_cnts)))
print("There are {:d} stars using the FoM threshold".format(sum(all_cnts[np.where(np.linspace(0,0.99,100) >= thresh)])))
print("There are {:d} surefire stars (RF > 0.99)".format(all_cnts[-1]))
print("There are {:d} surefire galaxies (RF < 0.01)".format(all_cnts[0]))
print(" ")
print("At high (|b| > 30) latitudes:")
print("\t there are {:d} sources".format(sum(high_cnts)))
print("\t there are {:d} surefire galaxies (RF < 0.01)".format(high_cnts[0]))
print("\t there are {:d} stars using the FoM threshold".format(sum(high_cnts[np.where(np.linspace(0,0.99,100) >= thresh)])))
print("\t there are {:d} 50-50 stars (RF >= 0.5)".format(sum(high_cnts[np.where(np.linspace(0,0.99,100) >= 0.5)])))
print(" ")
print("At low (|b| < 5) latitudes:")
print("\t there are {:d} sources".format(sum(low_cnts)))
print("\t there are {:d} surefire stars (RF > 0.99)".format(low_cnts[-1]))
print("\t there are {:d} stars using the FoM threshold".format(sum(low_cnts[np.where(np.linspace(0,0.99,100) >= thresh)])))
print("\t there are {:d} 50-50 stars (RF >= 0.5)".format(sum(low_cnts[np.where(np.linspace(0,0.99,100) >= 0.5)])))
print("In the plane (|b| < 15):")
print("\t there are {:d} sources".format(sum(plane_cnts)))
print("\t there are {:d} surefire stars (RF > 0.99)".format(plane_cnts[-1]))
print("\t there are {:d} stars using the FoM threshold".format(sum(plane_cnts[np.where(np.linspace(0,0.99,100) >= thresh)])))
print("\t there are {:d} 50-50 stars (RF >= 0.5)".format(sum(plane_cnts[np.where(np.linspace(0,0.99,100) >= 0.5)])))
N_neg = 0 + 4331022 + 2643185 + 0 + 2583710 + 0 + 4118504 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 584621 + 1619083 + 4649772 + 1994380
print('There are {} stars below declination -30'.format(N_neg))
N_unique = 0
N_solo_not_det = 0
N_solo_force_det = 0
N_solo_pri0_force_det = 0
N_solo_pri0_no_force = 0
N_multi_pri2_force_det = 0
N_mutli_pri2_no_force = 0
N_multi_pri1_force_det = 0
N_mutli_pri1_no_force = 0
N_multi_pri0_force_det = 0
N_mutli_pri0_no_force = 0
nDet_hist = np.zeros(100)
missing_files = glob.glob('missing*fit')
for mf in missing_files:
    mf_tab = fits.getdata(mf)
    uni_counts = np.unique(mf_tab['objid'], return_index=True, return_inverse=True, return_counts=True)
    N_unique += len(uni_counts[0])
    print(N_unique)
    solo_idx = uni_counts[1][np.where(uni_counts[3] == 1)]
    multi_idx = uni_counts[1][np.where(uni_counts[3] > 1)]
    not_det = len(np.where((mf_tab['stackDet'][solo_idx] == 0) &
                           (mf_tab['forceDet'][solo_idx] == 0))[0])
    force_det = len(np.where((mf_tab['stackDet'][solo_idx] == 0) &
                             (mf_tab['forceDet'][solo_idx] == 1))[0])
    no_primary_det = len(np.where((mf_tab['stackDet'][solo_idx] == 1) &
                                  (mf_tab['forceDet'][solo_idx] == 1))[0])
    no_primary_no_force = len(np.where((mf_tab['stackDet'][solo_idx] == 1) &
                                       (mf_tab['forceDet'][solo_idx] == 0))[0])
    if not_det + force_det + no_primary_det + no_primary_no_force != len(solo_idx):
        print(mf, 'Ooops')
        break
    else:
        N_solo_not_det += not_det
        N_solo_force_det += force_det
        N_solo_pri0_force_det += no_primary_det
        N_solo_pri0_no_force += no_primary_no_force
    mf_df = pd.DataFrame(mf_tab.objid.byteswap().newbyteorder(), columns=['objid'])
    mf_df['nDetections'] = mf_tab.nDetections.astype(np.int32)
    mf_df['primaryDetection'] = mf_tab.primaryDetection
    mf_df['stackDet'] = mf_tab.stackDet
    mf_df['forceDet'] = mf_tab.forceDet
    tmp = mf_df.drop(mf_df.index[solo_idx])
    ha = tmp.groupby('objid')[['primaryDetection','forceDet']].sum()
    multi_pri2_force_det = len(ha.iloc[np.where((ha.primaryDetection >= 2) & (ha.forceDet > 1))].index.values.astype(int))
    mutli_pri2_no_force = len(ha.iloc[np.where((ha.primaryDetection >= 2) & (ha.forceDet < 1))].index.values.astype(int))
    multi_pri1_force_det = len(ha.iloc[np.where((ha.primaryDetection == 0) & (ha.forceDet > 1))].index.values.astype(int))
    mutli_pri1_no_force = len(ha.iloc[np.where((ha.primaryDetection == 0) & (ha.forceDet < 1))].index.values.astype(int))
    multi_pri0_force_det = len(ha.iloc[np.where((ha.primaryDetection == 1) & (ha.forceDet >= 1))].index.values.astype(int))
    mutli_pri0_no_force = len(ha.iloc[np.where((ha.primaryDetection == 1) & (ha.forceDet == 0))].index.values.astype(int))
    if multi_pri2_force_det + mutli_pri2_no_force + multi_pri1_force_det + mutli_pri1_no_force + multi_pri0_force_det + mutli_pri0_no_force != len(multi_idx):
        print(mf, 'multi Ooops')
        break
    else:
        N_multi_pri2_force_det += multi_pri2_force_det
        N_mutli_pri2_no_force += mutli_pri2_no_force
        N_multi_pri1_force_det += multi_pri1_force_det
        N_mutli_pri1_no_force += mutli_pri1_no_force
        N_multi_pri0_force_det += multi_pri0_force_det
        N_mutli_pri0_no_force += mutli_pri0_no_force
    check_fd = mf_df.groupby('objid')[['forceDet']].sum()
    if len(np.where(check_fd.forceDet > 0)[0]) != (force_det + no_primary_det +
                                                   multi_pri2_force_det + multi_pri1_force_det + multi_pri0_force_det):
        print(mf, 'fDet Ooops')
        break
    meh = mf_df.groupby('objid')[['forceDet','nDetections']].mean()
    hist, _ = np.histogram(meh.nDetections.iloc[np.where(meh.forceDet == 0)], range=(3,103), bins=100)
    nDet_hist += hist
    del(mf_tab)
    del(mf_df)
    del(tmp)
    del(ha)
    gc.collect()
print(N_solo_force_det + N_solo_pri0_force_det +
N_multi_pri2_force_det + N_multi_pri1_force_det + N_multi_pri0_force_det)
print(N_multi_pri2_force_det + N_mutli_pri2_no_force)
print('For the 281434529 still not classified sources:')
print('\t{:9.0f} have nDetections = 3'.format(nDet_hist[0]))
print('\t{:9.0f} have nDetections <= 5'.format(sum(nDet_hist[0:3])))
print('\t{:9.0f} have nDetections <=10'.format(sum(nDet_hist[0:8])))
10942237-10938781
mf_tab = fits.getdata(missing_files[1])
mf_df = pd.DataFrame(mf_tab.objid.byteswap().newbyteorder(), columns=['objid'])
mf_df
all_tab = fits.getdata('PS1_star_galaxy.xhall.stars_20180814_1_adamamiller.fit')
all_df = pd.DataFrame(all_tab.strid.byteswap().newbyteorder(), columns=['strid'])
all_df.set_index(all_tab.strid.byteswap().newbyteorder(), inplace=True)
all_df.head()
all_df[~all_df.index.isin(np.unique(mf_tab.objid))]
def get_hist_counts(df):
    rf_arr = np.array(df.rf_score)
    coords = SkyCoord(np.array(df.raStack)*u.deg, np.array(df.decStack)*u.deg)
    all_cnts, _ = np.histogram(rf_arr, range=(0,1), bins=100)
    low_lat = np.where(np.abs(coords.galactic.b) < 15*u.deg)
    high_lat = np.where(np.abs(coords.galactic.b) > 50*u.deg)
    low_cnts, _ = np.histogram(rf_arr[low_lat], range=(0,1), bins=100)
    high_cnts, _ = np.histogram(rf_arr[high_lat], range=(0,1), bins=100)
    return all_cnts, low_cnts, high_cnts
file_list = glob.glob("/Users/adamamiller/Desktop/PS1_fits/hdf5/*h5")
all_cnts = np.zeros(100, dtype=np.int64)
low_cnts = np.zeros_like(all_cnts)
high_cnts = np.zeros_like(all_cnts)
for file in file_list:
    df = pd.read_hdf(file)
    all_tmp, low_tmp, high_tmp = get_hist_counts(df)
    all_cnts += all_tmp
    low_cnts += low_tmp
    high_cnts += high_tmp
cat_df = pd.read_csv('class_table.ascii.gz', delim_whitespace=True)
for i in range(17):
    start = i*8000000
    end = (i+1)*8000000
    cat_df.iloc[start:end].to_csv('update_{}.csv'.format(i), index=False)
cat_df.iloc[17*8000000:].to_csv('update_17.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in Classifications
Step2: Count sources that were missed in PS1 PSC v1
Step3: Loop over reason some stars are missing
Step4: What's going on with the 3456 "missing" missing sources
Step5: Answer
Step6: Make files for CasJobs upload
|
1,506
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from geoscilabs.gpr.GPRlab1 import downloadRadargramImage, PipeWidget, WallWidget
from SimPEG.utils import download
URL = "http://github.com/geoscixyz/geosci-labs/raw/main/images/gpr/ubc_GPRdata.png"
radargramImage = downloadRadargramImage(URL)
PipeWidget(radargramImage)
URL = "http://github.com/geoscixyz/geosci-labs/raw/main/images/gpr/ubc_GPRdata.png"
radargramImage = downloadRadargramImage(URL)
WallWidget(radargramImage)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pipe Fitting App
Step2: Slab Fitting App
|
1,507
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get started with TensorFlow 2.0
Step2: Load and prepare the MNIST dataset. Convert the sample values from integers to floating-point numbers
Step3: Build the tf.keras.Sequential model by stacking layers. Choose an optimizer and loss function for training
Step4: Train and evaluate the model
|
1,508
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn')
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
y_pred = knn.predict(X)
print("{0} / {1} correct".format(np.sum(y == y_pred), len(y)))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print("{0} / {1} correct".format(np.sum(y_test == y_pred), len(y_test)))
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
knn.score(X_test, y_test)
for n_neighbors in [1, 5, 10, 20, 30]:
    knn = KNeighborsClassifier(n_neighbors)
    knn.fit(X_train, y_train)
    print(n_neighbors, knn.score(X_test, y_test))
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)
X1.shape, X2.shape
print(KNeighborsClassifier(1).fit(X2, y2).score(X1, y1))
print(KNeighborsClassifier(1).fit(X1, y1).score(X2, y2))
from sklearn.model_selection import cross_val_score
cv = cross_val_score(KNeighborsClassifier(1), X, y, cv=10)
cv.mean()
cross_val_score(KNeighborsClassifier(1), X, y, cv=10)
def test_func(x, err=0.5):
    y = 10 - 1. / (x + 0.1)
    if err > 0:
        y = np.random.normal(y, err)
    return y

def make_data(N=40, error=1.0, random_seed=1):
    # randomly sample the data (seed with the argument, not a hard-coded 1)
    np.random.seed(random_seed)
    X = np.random.random(N)[:, np.newaxis]
    y = test_func(X.ravel(), error)
    return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y);
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
    return make_pipeline(PolynomialFeatures(degree),
                         LinearRegression(**kwargs))
model = PolynomialRegression(2)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
model = PolynomialRegression(30)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)))
plt.ylim(-4, 14);
from ipywidgets import interact  # IPython.html.widgets was the old, now-removed location
def plot_fit(degree=1, Npts=50):
    X, y = make_data(Npts, error=1)
    X_test = np.linspace(-0.1, 1.1, 500)[:, None]
    model = PolynomialRegression(degree=degree)
    model.fit(X, y)
    y_test = model.predict(X_test)
    plt.scatter(X.ravel(), y)
    plt.plot(X_test.ravel(), y_test)
    plt.ylim(-4, 14)
    plt.title("mean squared error: {0:.2f}".format(mean_squared_error(model.predict(X), y)))
interact(plot_fit, degree=[1, 30], Npts=[2, 100]);
X, y = make_data(120, error=1.0)
plt.scatter(X, y);
from sklearn.model_selection import validation_curve
def rms_error(model, X, y):
    y_pred = model.predict(X)
    return np.sqrt(np.mean((y - y_pred) ** 2))
degree = np.arange(0, 18)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree', degree, cv=7,
scoring=rms_error)
def plot_with_err(x, data, **kwargs):
    mu, std = data.mean(1), data.std(1)
    lines = plt.plot(x, mu, '-', **kwargs)
    plt.fill_between(x, mu - std, mu + std, edgecolor='none',
                     facecolor=lines[0].get_color(), alpha=0.2)
plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend();
model = PolynomialRegression(4).fit(X, y)
plt.scatter(X, y)
plt.plot(X_test, model.predict(X_test));
from sklearn.model_selection import learning_curve
def plot_learning_curve(degree=3):
    train_sizes = np.linspace(0.05, 1, 120)
    N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),
                                                  X, y, train_sizes, cv=5,
                                                  scoring=rms_error)
    plot_with_err(N_train, val_train, label='training scores')
    plot_with_err(N_train, val_test, label='validation scores')
    plt.xlabel('Training Set Size'); plt.ylabel('rms error')
    plt.ylim(0, 3)
    plt.xlim(5, 80)
    plt.legend()
plot_learning_curve(1)
plot_learning_curve(3)
plot_learning_curve(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Validating Models
Step2: Let's fit a K-neighbors classifier
Step3: Now we'll use this classifier to predict labels for the data
Step4: Finally, we can check how well our prediction did
Step5: It seems we have a perfect classifier!
Step6: Now we train on the training data, and validate on the test data
Step7: This gives us a more reliable estimate of how our model is doing.
Step8: This can also be computed directly from the model.score method
Step9: Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors
Step10: We see that in this case, a small number of neighbors seems to be the best option.
Step11: Thus a two-fold cross-validation gives us two estimates of the score for that parameter.
Step12: K-fold Cross-Validation
Step13: This gives us an even better idea of how well our model is doing.
Step14: Now let's create a realization of this dataset
Step15: Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit
Step16: We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Step17: Now we'll use this to fit a quadratic curve to the data.
Step18: This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
Step19: When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.
Step20: Detecting Over-fitting with Validation Curves
Step21: Now let's plot the validation curves
Step22: Notice the trend here, which is common for this type of plot.
Step23: Detecting Data Sufficiency with Learning Curves
Step24: Let's see what the learning curves look like for a linear model
Step25: This shows a typical learning curve
Step26: Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
|
1,509
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tf-nightly
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scale the MNIST pixel values from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
NUM_WORKERS = 2
# Since `tf.data.Dataset.batch` expects the *global* batch size,
# the batch size here is scaled up by the number of workers.
# Previously we used 64; it now becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
# The dataset must be created after the
# MultiWorkerMirroredStrategy object has been instantiated.
train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
    # Model building/compiling must happen inside `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras `model.fit()` trains the model for a given number of epochs and steps per epoch.
# Note these numbers are for demonstration only and are not enough to produce a high-quality model.
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
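As a sanity check of the batch-size arithmetic described in the comments above, a tiny pure-Python sketch (variable names here are illustrative, not part of the TensorFlow API):

```python
# illustrative numbers mirroring the setup above
num_workers = 2
single_worker_batch = 64

# `tf.data.Dataset.batch` takes the *global* batch size
global_batch = single_worker_batch * num_workers

# each worker effectively processes global_batch / num_workers examples
per_worker_batch = global_batch // num_workers
print(global_batch, per_worker_batch)  # 128 64
```

So each worker keeps the same effective batch size as in the single-worker run.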
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
train_datasets_no_auto_shard = train_datasets.with_options(options)
# Replace the `filepath` argument with a path on a file system accessible to all workers.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets,
epochs=3,
steps_per_epoch=5,
callbacks=callbacks)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multi-worker training with Keras
Step2: Prepare the dataset
Step3: Build the Keras model
Step4: Let's first try training the model for a small number of epochs on a single worker and observe the results, to make sure everything works. As training progresses, you should see the loss decrease and the accuracy approach 1.0.
Step5: Multi-worker configuration
Step6: Note: TF_CONFIG is parsed and TensorFlow's GRPC server is started when MultiWorkerMirroredStrategy.__init__() is called, so the TF_CONFIG environment variable must be set before a tf.distribute.Strategy instance is created.
Step7: Dataset sharding and batch size
Step8: Another thing to note is the batch size of the datasets. In the snippet above we use GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS, which is NUM_WORKERS times the single-worker size, because each worker's effective batch size is the global batch size (the argument passed to tf.data.Dataset.batch()) divided by the number of workers; with this change, each worker's batch size stays the same as before.
|
1,510
|
<ASSISTANT_TASK:>
Python Code:
def DivResto(num, base):
    """Return the quotient and remainder of dividing num by base"""
return num//base, num%base
dec = 14
bin = "" # empty string
div, resto = DivResto(dec,2)
dec = div
bin = str(resto) + bin
print(bin)
# repeat once more, since we have another digit
div, resto = DivResto(dec,2)
dec = div
bin = str(resto) + bin
print(bin)
def Converte(dec,base):
num = ""
while dec > 0:
div, resto = DivResto(dec,base)
dec = div
num = str(resto) + num
return num
print(Converte(14,2))
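As a quick cross-check of the base-2 conversion above, a self-contained restatement compared against Python's built-in bin() (which adds a '0b' prefix):

```python
def DivResto(num, base):
    # quotient and remainder, restated so this snippet is self-contained
    return num // base, num % base

def Converte(dec, base):
    num = ""
    while dec > 0:
        dec, resto = DivResto(dec, base)
        num = str(resto) + num
    return num

print(Converte(14, 2), bin(14)[2:])  # 1110 1110
```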
def Converte(dec,base=2):
num = ''
while dec > 0:
div, resto = DivResto(dec,base)
dec = div
num = str(resto) + num
return num
print(Converte(14)) # will use base 2
print(Converte(14, 3)) # will use base 3
print(Converte(14, base=3)) # will use base 3
def Converte(dec,base):
num = ''
while dec > 0:
div, resto = DivResto(dec,base)
dec = div
num = ??? + num
return num
print(Converte(14,16), hex(14)) # hexadecimal
import string
print(dir(string))
print()
print(string.digits, string.ascii_uppercase)
digitos = string.digits+string.ascii_uppercase
print(digitos[0], digitos[10])
def Converte(dec,base):
digitos = string.digits+string.ascii_uppercase
num = ''
while dec > 0:
div, resto = DivResto(dec,base)
dec = div
num = digitos[resto] + num
return num
print(Converte(14,16), hex(14)) # hexadecimal
import random
def Adivinhe():
x = random.randint(0,100)
Adivinhe()
def Etiope(m, n):
    # Ethiopian multiplication: halve m and double n, summing n whenever m is odd
    total = 0
    while m >= 1:
        if m % 2 == 1:
            total = total + n
        m = m // 2
        n = n * 2
    return total
print(Etiope(12,13), 12*13)
print(Etiope(22,18), 22*18)
def Etiope(m, n):
    # same algorithm, rewritten with syntactic sugar (augmented and tuple assignment)
    total = 0
    while m >= 1:
        total += n * (m % 2)
        m, n = m // 2, n * 2
    return total
print(Etiope(12,13), 12*13)
print(Etiope(22,18), 22*18)
def SomaDigitos(num):
    soma = 0
    while num > 0:
        soma += num % 10   # add the last digit
        num //= 10         # drop the last digit
    return soma
print(SomaDigitos(12345))
def Primo(num):
    # it must not be divisible by any number from 2 to num-1
    if num < 2:
        return False
    for i in range(2, num):
        if num % i == 0:
            return False
    return True
print(Primo(10))
print(Primo(13))
print(Primo(131))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction to Programming in Python
Step2: To solve the first item, we use the while command, which repeats while a condition holds
Step3: Note that by replacing $2$ with any other number, we convert the number to other bases smaller than $10$.
Step4: How can we adapt our function so that we can also convert to bases larger than 10?
Step5: Write a program that draws a random number and asks the user to guess it.
Step6: Ethiopian multiplication is a multiplication method that uses only division and multiplication by $2$.
Step7: Python has some syntax known as syntactic sugar: ways of writing long sequences of instructions in a form that is simpler to read.
Step8: Create a function to compute the sum of the digits of a number.
Step9: Create a function to check whether a number is prime!
|
1,511
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
# ignore warnings to make notebook easier to see online
# COMMENT OUT THESE LINES FOR ACTUAL ANALYSIS
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
import matplotlib.pyplot as plt
try:
import seaborn as sns
sns.set_palette("colorblind")
except ImportError:
print("Install seaborn. It help you make prettier figures!")
import numpy as np
from astropy.modeling import models
g = models.Gaussian1D()
# Generate fake data
np.random.seed(0)
x = np.linspace(-5., 5., 200)
y = 3 * np.exp(-0.5 * (x - 1.3)**2 / 0.8**2)
y += np.random.normal(0., 0.2, x.shape)
yerr = 0.2
plt.figure(figsize=(8,5))
plt.errorbar(x, y, yerr=yerr, fmt='ko')
# define power law component
pl = models.PowerLaw1D()
# fix x_0 of power law component
pl.x_0.fixed = True
# define constant
c = models.Const1D()
# make compound model
plc = pl + c
# parameters for fake data.
alpha = 2.0
amplitude = 5.0
white_noise = 2.0
freq = np.linspace(0.01, 10.0, int(10.0/0.01))
from astropy.modeling.fitting import _fitter_to_model_params
_fitter_to_model_params(plc, [amplitude, alpha, white_noise])
psd_shape = plc(freq)
powers = psd_shape*np.random.chisquare(2, size=psd_shape.shape[0])/2.0
plt.figure(figsize=(12,7))
plt.loglog(freq, powers, ds="steps-mid", label="periodogram realization")
plt.loglog(freq, psd_shape, label="power spectrum")
plt.legend()
logmin = -1e16
class PSDLogLikelihood(object):
def __init__(self, freq, power, model, m=1):
"""
A Chi-square likelihood as appropriate for power spectral analysis.
Parameters
----------
freq : iterable
x-coordinate of the data
power : iterable
y-coordinate of the data
model: an Astropy Model instance
The model to use in the likelihood.
m : int
1/2 of the degrees of freedom, i.e. the number of powers
that were averaged to obtain the power spectrum input into
this routine.
"""
self.x = freq # the x-coordinate of the data (frequency array)
self.y = power # the y-coordinate of the data (powers)
self.model = model # an astropy.models instance
self.m = m
self.params = [k for k,l in self.model.fixed.items() if not l]
self.npar = len(self.params) # number of free parameters
def evaluate(self, pars, neg=False):
"""
Evaluate the log-likelihood.
Parameters
----------
pars : iterable
The list of parameters for which to evaluate the model.
neg : bool, default False
If True, compute the *negative* log-likelihood, otherwise
compute the *positive* log-likelihood.
Returns
-------
loglike : float
The log-likelihood of the model
"""
# raise an error if the length of the parameter array input into
# this method doesn't match the number of free parameters in the model
if np.size(pars) != self.npar:
raise Exception("Input parameters must" +
" match model parameters!")
# set parameters in self.model to the parameter set to be used for
# evaluation
_fitter_to_model_params(self.model, pars)
# compute the values of the model at the positions self.x
mean_model = self.model(self.x)
# if the power spectrum isn't averaged, compute simple exponential
# likelihood (chi-square likelihood for 2 degrees of freedom)
if self.m == 1:
loglike = -np.sum(np.log(mean_model)) - \
np.sum(self.y/mean_model)
# otherwise use chi-square distribution to compute likelihood
else:
loglike = -2.0*self.m*(np.sum(np.log(mean_model)) +
np.sum(self.y/mean_model) +
np.sum((2.0 / (2. * self.m) - 1.0) *
np.log(self.y)))
if not np.isfinite(loglike):
loglike = logmin
if neg:
return -loglike
else:
return loglike
def __call__(self, parameters, neg=False):
return self.evaluate(parameters, neg)
from stingray import Powerspectrum
ps = Powerspectrum()
ps.freq = freq
ps.power = powers
ps.df = ps.freq[1] - ps.freq[0]
ps.m = 1
loglike = PSDLogLikelihood(ps.freq, ps.power, plc, m=ps.m)
test_pars = [1, 5, 100]
loglike(test_pars)
test_pars = [4.0, 10, 2.5]
loglike(test_pars)
test_pars = [2.0, 5.0, 2.0]
loglike(test_pars)
from stingray.modeling import PSDLogLikelihood
loglike = PSDLogLikelihood(ps.freq, ps.power, plc, m=ps.m)
loglike(test_pars)
from stingray.modeling import PSDParEst
parest = PSDParEst(ps, fitmethod="L-BFGS-B", max_post=False)
loglike = PSDLogLikelihood(ps.freq, ps.power, plc, m=ps.m)
loglike.model.parameters
loglike.npar
starting_pars = [3.0, 1.0, 2.4]
res = parest.fit(loglike, starting_pars)
res.result
print(res.p_opt)
print(res.err)
print("AIC: " + str(res.aic))
print("BIC: " + str(res.bic))
plt.figure(figsize=(12,8))
plt.loglog(ps.freq, psd_shape, label="true power spectrum",lw=3)
plt.loglog(ps.freq, ps.power, label="simulated data")
plt.loglog(ps.freq, res.mfit, label="best fit", lw=3)
plt.legend()
res.print_summary(loglike)
# broken power law model
bpl = models.BrokenPowerLaw1D()
# add constant
bplc = bpl + c
bplc.param_names
# define starting parameters
bplc_start_pars = [2.0, 1.0, 3.0, 1.0, 2.5]
loglike_bplc = PSDLogLikelihood(ps.freq, ps.power, bplc, m=ps.m)
pval, plc_opt, bplc_opt = parest.compute_lrt(loglike, starting_pars, loglike_bplc, bplc_start_pars)
print("Likelihood Ratio: " + str(pval))
from stingray.modeling import PSDPosterior
lpost = PSDPosterior(ps.freq, ps.power, plc, m=ps.m)
import scipy.stats
# flat prior for the power law index
p_alpha = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
# flat prior for the power law amplitude
p_amplitude = lambda amplitude: ((0.01 <= amplitude) & (amplitude <= 10.0))
# normal prior for the white noise parameter
p_whitenoise = lambda white_noise: scipy.stats.norm(2.0, 0.1).pdf(white_noise)
priors = {}
priors["alpha_0"] = p_alpha
priors["amplitude_0"] = p_amplitude
priors["amplitude_1"] = p_whitenoise
from stingray.modeling import set_logprior
lpost.logprior = set_logprior(lpost, priors)
lpost = PSDPosterior(ps.freq, ps.power, plc, priors=priors, m=ps.m)
test_pars = [1.0, 2.0, 4.0]
print("log-prior: " + str(lpost.logprior(test_pars)))
print("log-likelihood: " + str(lpost.loglikelihood(test_pars)))
print("log-posterior: " + str(lpost(test_pars)))
test_pars = [6, 6, 3.0]
print("log-prior: " + str(lpost.logprior(test_pars)))
print("log-likelihood: " + str(lpost.loglikelihood(test_pars)))
print("log-posterior: " + str(lpost(test_pars)))
test_pars = [5.0, 2.0, 2.0]
print("log-prior: " + str(lpost.logprior(test_pars)))
print("log-likelihood: " + str(lpost.loglikelihood(test_pars)))
print("log-posterior: " + str(lpost(test_pars)))
parest = PSDParEst(ps, fitmethod='BFGS', max_post=True)
res = parest.fit(lpost, starting_pars)
print("best-fit parameters:")
for p,e in zip(res.p_opt, res.err):
print("%.4f +/- %.4f"%(p,e))
res.print_summary(lpost)
sample = parest.sample(lpost, res.p_opt, cov=res.cov, nwalkers=400,
niter=100, burnin=300, namestr="psd_modeling_test")
sample.acceptance
sample.mean
sample.ci
sample.print_results()
fig = sample.plot_results(nsamples=1000, fig=None, save_plot=True,
filename="modeling_tutorial_mcmc_corner.pdf")
import copy
def _generate_model(lpost, pars):
"""
Helper function that generates a fake PSD similar to the
one in the data, but with different parameters.
Parameters
----------
lpost : instance of a Posterior or LogLikelihood subclass
The object containing the relevant information about the
data and the model
pars : iterable
A list of parameters to be passed to lpost.model in order
to generate a model data set.
Returns:
--------
model_data : numpy.ndarray
An array of model values for each bin in lpost.x
"""
# get the model
m = lpost.model
# reset the parameters
_fitter_to_model_params(m, pars)
# make a model spectrum
model_data = lpost.model(lpost.x)
return model_data
def _generate_psd(ps, lpost, pars):
"""
Generate a fake power spectrum from a model.
Parameters:
----------
lpost : instance of a Posterior or LogLikelihood subclass
The object containing the relevant information about the
data and the model
pars : iterable
A list of parameters to be passed to lpost.model in order
to generate a model data set.
Returns:
--------
sim_ps : stingray.Powerspectrum object
The simulated Powerspectrum object
"""
model_spectrum = _generate_model(lpost, pars)
# use chi-square distribution to get fake data
model_powers = model_spectrum*np.random.chisquare(2*ps.m,
size=model_spectrum.shape[0])/(2.*ps.m)
sim_ps = copy.copy(ps)
sim_ps.powers = model_powers
return sim_ps
def _compute_pvalue(obs_val, sim):
"""
Compute the p-value given an observed value of a test statistic
and some simulations of that same test statistic.
Parameters
----------
obs_val : float
The observed value of the test statistic in question
sim: iterable
A list or array of simulated values for the test statistic
Returns
-------
pval : float [0, 1]
The p-value for the test statistic given the simulations.
"""
# cast the simulations as a numpy array
sim = np.array(sim)
# find all simulations that are larger than
# the observed value
ntail = sim[sim > obs_val].shape[0]
# divide by the total number of simulations
pval = ntail/sim.shape[0]
return pval
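The tail-fraction logic of _compute_pvalue can be illustrated with plain Python lists (`toy_pvalue` is a hypothetical name for this sketch, not part of stingray):

```python
def toy_pvalue(obs_val, sim):
    # fraction of simulated statistics that exceed the observed value
    ntail = sum(1 for s in sim if s > obs_val)
    return ntail / len(sim)

print(toy_pvalue(5.0, [1.0, 2.0, 6.0, 7.0]))  # 0.5
```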
def calibrate_lrt(ps, lpost1, t1, lpost2, t2, sample=None, neg=True, max_post=False,
nsim=1000, niter=200, nwalker=500, burnin=200, namestr="test"):
# set up the ParameterEstimation object
parest = PSDParEst(ps, fitmethod="L-BFGS-B", max_post=False)
# compute the observed likelihood ratio
lrt_obs, res1, res2 = parest.compute_lrt(lpost1, t1,
lpost2, t2,
neg=neg,
max_post=max_post)
# simulate parameter sets from the simpler model
if not max_post:
# using Maximum Likelihood, so I'm going to simulate parameters
# from a multivariate Gaussian
# set up the distribution
mvn = scipy.stats.multivariate_normal(mean=res1.p_opt, cov=res1.cov)
# sample parameters
s_all = mvn.rvs(size=nsim)
else:
if sample is None:
# sample the posterior using MCMC
sample = parest.sample(lpost, res1.p_opt, cov=res1.cov,
nwalkers=nwalker, niter=niter,
burnin=burnin, namestr=namestr)
# pick nsim samples out of the posterior sample
s_all = sample[np.random.choice(sample.shape[0], nsim, replace=False)]
lrt_sim = np.zeros(nsim)
# now I can loop over all simulated parameter sets to generate a PSD
for i,s in enumerate(s_all):
# generate fake PSD
sim_ps = _generate_psd(ps, lpost1, s)
# make LogLikelihood objects for both:
if not max_post:
sim_lpost1 = PSDLogLikelihood(sim_ps.freq, sim_ps.power,
model=lpost1.model, m=sim_ps.m)
sim_lpost2 = PSDLogLikelihood(sim_ps.freq, sim_ps.power,
model=lpost2.model, m=sim_ps.m)
else:
# make a Posterior object
sim_lpost1 = PSDPosterior(sim_ps.freq, sim_ps.power,
lpost1.model, m=sim_ps.m)
sim_lpost1.logprior = lpost1.logprior
sim_lpost2 = PSDPosterior(sim_ps.freq, sim_ps.power,
lpost2.model, m=sim_ps.m)
sim_lpost2.logprior = lpost2.logprior
parest_sim = PSDParEst(sim_ps, max_post=max_post)
lrt_sim[i], _, _ = parest_sim.compute_lrt(sim_lpost1, t1,
sim_lpost2, t2,
neg=neg,
max_post=max_post)
# now I can compute the p-value:
pval = _compute_pvalue(lrt_obs, lrt_sim)
return pval
pval = calibrate_lrt(ps, loglike, starting_pars,
loglike_bplc, bplc_start_pars,
max_post=False, nsim=100)
print("The p-value for rejecting the simpler model is: " + str(pval))
import scipy.stats
# flat prior for the power law indices
p_alpha1 = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
p_alpha2 = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
# flat prior for the break frequency
p_x_break = lambda xbreak: ((0.01 <= xbreak) & (10.0 >= xbreak))
# flat prior for the power law amplitude
p_amplitude = lambda amplitude: ((0.01 <= amplitude) & (amplitude <= 10.0))
# normal prior for the white noise parameter
p_whitenoise = lambda white_noise: scipy.stats.norm(2.0, 0.1).pdf(white_noise)
priors = {}
priors["alpha_1_0"] = p_alpha
priors["alpha_2_0"] = p_alpha
priors["amplitude_0"] = p_amplitude
priors["amplitude_1"] = p_whitenoise
priors["x_break_0"] = p_x_break
lpost_bplc = PSDPosterior(ps.freq, ps.power, bplc, priors=priors, m=ps.m)
lpost_bplc(bplc_start_pars)
pval = calibrate_lrt(ps, lpost, starting_pars,
lpost_bplc, bplc_start_pars,
sample=sample.samples,
max_post=True, nsim=100)
print("The posterior predictive p-value is: p = " + str(pval))
from stingray.modeling import PSDParEst
parest = PSDParEst(ps, fitmethod="BFGS")
pval = parest.calibrate_lrt(lpost, starting_pars, lpost_bplc, bplc_start_pars,
sample=sample.samples, nsim=100, max_post=True, seed=200)
print(pval)
# compute highest outlier in the data, and the frequency and index
# where that power occurs
max_power, max_freq, max_ind = parest._compute_highest_outlier(lpost, res)
max_power
pval = parest.calibrate_highest_outlier(lpost, starting_pars, sample=sample,
max_post=True,
nsim=100, niter=200, nwalkers=500,
burnin=200, namestr="test")
pval
from stingray import Powerspectrum
m = 1
nfreq = 100000
freq = np.linspace(1, 1000, nfreq)
np.random.seed(100) # set the seed for the random number generator
noise = np.random.exponential(size=nfreq)
model = models.PowerLaw1D() + models.Const1D()
model.x_0_0.fixed = True
alpha_0 = 2.0
amplitude_0 = 100.0
amplitude_1 = 2.0
model.alpha_0 = alpha_0
model.amplitude_0 = amplitude_0
model.amplitude_1 = amplitude_1
p = model(freq)
power = noise * p
ps = Powerspectrum()
ps.freq = freq
ps.power = power
ps.m = m
ps.df = freq[1] - freq[0]
ps.norm = "leahy"
plt.figure()
plt.loglog(ps.freq, ps.power, ds="steps-mid", lw=2, color="black")
from stingray.modeling import PSDLogLikelihood, PSDPosterior, PSDParEst
def fit_powerspectrum(ps, model, starting_pars, max_post=False, priors=None,
fitmethod="L-BFGS-B"):
if priors:
lpost = PSDPosterior(ps, model, priors=priors)
else:
lpost = PSDLogLikelihood(ps.freq, ps.power, model, m=ps.m)
parest = PSDParEst(ps, fitmethod=fitmethod, max_post=max_post)
res = parest.fit(lpost, starting_pars, neg=True)
return parest, res
model_to_test = models.PowerLaw1D() + models.Const1D()
model_to_test.x_0_0.fixed = True
t0 = [80, 1.5, 2.5]
parest, res = fit_powerspectrum(ps, model_to_test, t0)
res.p_opt
plt.figure()
plt.figure()
plt.loglog(ps.freq, ps.power, ds="steps-mid", lw=2, color="black")
plt.plot(ps.freq, res.mfit, lw=3, color="red")
from stingray.modeling.scripts import fit_powerspectrum
parest, res = fit_powerspectrum(ps, model_to_test, t0)
res.p_opt
l = models.Lorentz1D
l.param_names
def fit_lorentzians(ps, nlor, starting_pars, fit_whitenoise=True, max_post=False, priors=None,
fitmethod="L-BFGS-B"):
model = models.Lorentz1D()
if nlor > 1:
for i in range(nlor-1):
model += models.Lorentz1D()
if fit_whitenoise:
model += models.Const1D()
parest = PSDParEst(ps, fitmethod=fitmethod, max_post=max_post)
lpost = PSDPosterior(ps.freq, ps.power, model, priors=priors, m=ps.m)
res = parest.fit(lpost, starting_pars, neg=True)
return parest, res
np.random.seed(400)
nlor = 3
x_0_0 = 0.5
x_0_1 = 2.0
x_0_2 = 7.5
amplitude_0 = 150.0
amplitude_1 = 50.0
amplitude_2 = 15.0
fwhm_0 = 0.1
fwhm_1 = 1.0
fwhm_2 = 0.5
whitenoise = 2.0
model = models.Lorentz1D(amplitude_0, x_0_0, fwhm_0) + \
models.Lorentz1D(amplitude_1, x_0_1, fwhm_1) + \
models.Lorentz1D(amplitude_2, x_0_2, fwhm_2) + \
models.Const1D(whitenoise)
p = model(ps.freq)
noise = np.random.exponential(size=len(ps.freq))
power = p*noise
plt.figure()
plt.loglog(ps.freq, power, lw=1, ds="steps-mid", c="black")
plt.loglog(ps.freq, p, lw=3, color="red")
import copy
ps_new = copy.copy(ps)
ps_new.power = power
t0 = [150, 0.4, 0.2, 50, 2.3, 0.6, 20, 8.0, 0.4, 2.1]
parest, res = fit_lorentzians(ps_new, nlor, t0)
res.p_opt
parest.plotfits(res, save_plot=False, namestr="lorentzian_test")
from stingray.modeling import fit_lorentzians
parest, res = fit_lorentzians(ps_new, nlor, t0)
res.p_opt
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The models and the API of astropy.modeling.models are explained in more detail in the astropy documentation.
Step2: Likelihoods and Posteriors
Step3: We're going to pick some fairly standard parameters for our data
Step4: And now a frequency array
Step5: Now we can set the parameters in the model
Step6: As a last step, we need to add noise by picking from a chi-square distribution with 2 degrees of freedom
Step7: Let's plot the result
Step10: Maximum Likelihood Fitting
Step11: Let's make an object and see what it calculates if we put in different parameter sets. First, we have to make our sample PSD into an actual Powerspectrum object
Step12: Something close to the parameters we put in should yield the largest log-likelihood. Feel free to play around with the test parameters to verify that this is true.
Step13: To estimate the parameters, we can use an optimization routine, such as those implemented in scipy.optimize.minimize.
Step14: In order to fit a model, make an instance of the appropriate LogLikelihood or Posterior subclass, and simply call the fit method with that instance and the starting parameters you would like to fit.
Step15: The result is an OptimizationResults object, which computes various summaries and useful quantities.
Step16: Note
Step17: Note
Step18: Finally, it also produces the values of the mean function for the parameters at the maximum. Let's plot that and compare with the power spectrum we put in
Step19: That looks pretty good!
Step20: Likelihood Ratios
Step21: Bayesian Parameter Estimation
Step22: The priors are set as a dictionary of functions
Step23: There's a function set_logprior in stingray.modeling that sets the prior correctly
Step24: You can also set the priors when you instantiate the posterior object
Step25: Much like before with the log-likelihood, we can now also compute the log-posterior for various test parameter sets
Step26: When the prior is zero (so the log-prior is -infinity), it automatically gets set to a very small value in order to avoid problems when doing the optimization
Step27: We can do the same parameter estimation as above, except now it's called maximum-a-posteriori instead of maximum likelihood and includes the prior (notice we set max_post=True)
Step28: The same outputs exist as for the Maximum Likelihood case
Step29: Unlike in the maximum likelihood case, we can also sample from the posterior probability distribution. The method sample uses the emcee package to do MCMC.
Step30: The sampling method returns an object with various attributes that are useful for further analysis, for example the acceptance fraction
Step31: Or the mean and confidence intervals of the parameters
Step32: The method print_results prints the results
Step33: Similarly, the method plot_results produces a bunch of plots
Step37: Calibrating Likelihood Ratio Tests
Step38: As expected, the p-value for rejecting the powerlaw model is fairly large
Step39: Now we can set up the PSDPosterior object
Step40: And do the posterior predictive p-value. Since we've already sampled from the simple model, we can pass that sample to the calibrate_lrt function, in order to cut down on computation time (if the keyword sample is not given, it will automatically run MCMC).
Step41: Again, we find that the p-value does not suggest rejecting the powerlaw model.
Step42: Bayesian-ish QPO Searches
Step43: Convenience Functions
Step44: What does this data set look like?
Step45: In order to fit this, we'll write a convenience function that can take the power spectrum, a model, some starting parameters and just run with it
Step46: Let's see if it works. We've already defined our model above, but to be explicit, let's define it again
Step47: Now we just need some starting parameters
Step48: Looks like it worked! Let's plot the result, too
Step49: You can find the function in the scripts sub-module
Step50: Fitting Lorentzians
Step51: Let's make a dataset so we can test it!
Step52: Let's make this into a Powerspectrum object
Step53: So now we can fit this model with our new function, but first, we need to define the starting parameters for our fit. The starting parameters will be [amplitude, x_0, fwhm] for each component plus the white noise component at the end
Step54: Let's look at the output
Step55: Cool, that seems to work! For convenience PSDParEst also has a plotting function
Step56: The function exists in the library as well for ease of use
|
1,512
|
<ASSISTANT_TASK:>
Python Code:
from pybaseball import statcast, utils
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pybaseball.plotting import plot_bb_profile
# Grab 1 month per year
dfs = []
for year in range(2015, 2021):
print(f"Starting year {year}")
dfs.append(statcast(start_dt=f'{year}-08-01', end_dt=f'{year}-09-01',verbose=False))
threshold = 0.002
summary = None
for year,df in zip(range(2015, 2021),dfs):
for bb_type in dfs[0].bb_type.dropna().unique():
# Isolate each bb_type
i = df[df["bb_type"] == bb_type]
# Sort by pairs of launch angle and speed
i = i.groupby(["launch_angle", "launch_speed"]).size().reset_index(name="count").sort_values("count", ascending=False)
# Derive fraction of total
i["fraction"] = i["count"] / i["count"].sum()
# Flagging as possibly imputed criterion
i["flag"] = (i["fraction"] > threshold)
i["bb_type"] = bb_type
i["year"] = year
flagged = i[i["flag"] == True]
# Add to dataframe (or create on first iteration)
if summary is not None:
summary = summary.append(flagged[["launch_angle","launch_speed","count","bb_type","year"]])
else:
summary = flagged[["launch_angle","launch_speed","count","bb_type","year"]]
# Print out the results
summary.groupby(["launch_angle", "launch_speed","bb_type"]).size().reset_index(name="Years above threshold")
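The flagging criterion above reduces to a fraction-versus-threshold test per (launch_angle, launch_speed) pair; a pure-Python sketch on made-up counts (the observations and threshold here are illustrative only — the analysis above uses 0.002 on much larger samples):

```python
from collections import Counter

threshold = 0.05  # illustrative threshold for this tiny sample

# hypothetical (launch_angle, launch_speed) observations
observations = [(26.5, 89.2)] * 50 + [(12.0, 101.3)] * 2
counts = Counter(observations)
total = sum(counts.values())

# flag pairs whose share of all observations exceeds the threshold
flagged = {pair for pair, n in counts.items() if n / total > threshold}
print(flagged)  # only the over-represented pair: {(26.5, 89.2)}
```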
for i,year in enumerate(range(2015, 2021)):
plot_bb_profile(dfs[i])
plt.title(f'{year}')
plt.xlabel("Launch Angle")
plt.show()
for i,year in enumerate(range(2015, 2021)):
skimmed_df = dfs[i].merge(summary.drop_duplicates(), how="left", on=["launch_angle","launch_speed","bb_type"],indicator=True)
plot_bb_profile(skimmed_df[skimmed_df['_merge'] == 'left_only'])
plt.title(f'{year}')
plt.xlabel("Launch Angle")
plt.show()
for year,df in zip(range(2015, 2021),dfs):
for bb_type in dfs[0].bb_type.dropna().unique():
# Isolate each bb_type
i = df[df["bb_type"] == bb_type]
# Sort by pairs of launch angle and speed
i = i.groupby(["launch_angle", "launch_speed"]).size().reset_index(name="count").sort_values("count", ascending=False)
# Derive fraction of total
i["fraction"] = i["count"] / i["count"].sum()
print(f"bb_type: {bb_type}, year: {year}")
print(i.head(5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since there's a bit of variance year-to-year and especially difference in 2020 with Hawkeye, grab a month from each year
Step2: Calculate the fraction per year a given ev/launch angle makes up, then put those together into one DF.
Step3: Group over years to see the cases where the threshold is passed
Step4: Validate Results
Step5: Output distributions look clean, so this set looks like a good start.
|
1,513
|
<ASSISTANT_TASK:>
Python Code:
# create an example RDD
map_exp_rdd = sc.textFile('../../data/mtcars.csv')
map_exp_rdd.take(4)
# split auto model from other feature values
map_exp_rdd_1 = map_exp_rdd.map(lambda x: x.split(',')).map(lambda x: (x[0], x[1:]))
map_exp_rdd_1.take(4)
# remove the header row
header = map_exp_rdd_1.first()
# the filter method apply a function to each elemnts. The function output is a boolean value (TRUE or FALSE)
# elements that have output TRUE will be kept.
map_exp_rdd_2 = map_exp_rdd_1.filter(lambda x: x != header)
map_exp_rdd_2.take(4)
# convert string values to numeric values
map_exp_rdd_3 = map_exp_rdd_2.map(lambda x: (x[0], list(map(float, x[1]))))
map_exp_rdd_3.take(4)
mapValues_exp_rdd = map_exp_rdd_3
mapValues_exp_rdd.take(4)
import numpy as np
mapValues_exp_rdd_1 = mapValues_exp_rdd.mapValues(lambda x: np.mean(x))
mapValues_exp_rdd_1.take(4)
x = [('a', 'b', 'c'), ('a', 'a'), ('c', 'c', 'c', 'd')]
flatMap_exp_rdd = sc.parallelize(x)
flatMap_exp_rdd.collect()
flatMap_exp_rdd_1 = flatMap_exp_rdd.flatMap(lambda x: x)
flatMap_exp_rdd_1.collect()
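For intuition, flatMap maps a function over the elements and then flattens the results by one level; a pure-Python equivalent of the RDD example above (no Spark required):

```python
x = [('a', 'b', 'c'), ('a', 'a'), ('c', 'c', 'c', 'd')]

# map would keep the tuples; flatMap concatenates their elements
flat = [elem for tup in x for elem in tup]
print(flat)  # ['a', 'b', 'c', 'a', 'a', 'c', 'c', 'c', 'd']
```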
# example data
my_data = [
[1, (23, 28, 32)],
[2, (18, 29, 31)],
[3, (34, 21, 18)]
]
flatMapValues_exp_rdd = sc.parallelize(my_data)
flatMapValues_exp_rdd.collect()
# merge A, B, and C columns into one column and add the type column
flatMapValues_exp_rdd_1 = flatMapValues_exp_rdd.flatMapValues(lambda x: list(zip(list('ABC'), x)))
flatMapValues_exp_rdd_1.collect()
# unpack the element values
flatMapValues_exp_rdd_2 = flatMapValues_exp_rdd_1.map(lambda x: [x[0]] + list(x[1]) )
flatMapValues_exp_rdd_2.collect()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: mapValues
Step2: When using mapValues(), the x in the above lambda function refers to the element value, not including the element key.
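That key/value contract is easy to see without a SparkContext; a plain-Python sketch of the same semantics (the helper name `map_values` is illustrative, not part of the PySpark API):

```python
# Plain-Python sketch of the mapValues contract: the mapped function
# sees only each pair's value; the key passes through untouched.
def map_values(pairs, f):
    return [(k, f(v)) for k, v in pairs]

pairs = [("Mazda RX4", [21.0, 6.0, 160.0]),
         ("Datsun 710", [22.8, 4.0, 108.0])]

# Same shape as the mapValues(np.mean) cell above, but with a plain mean.
means = map_values(pairs, lambda vals: sum(vals) / len(vals))
```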
Step3: flatMapValues
|
1,514
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import geopandas as gpd
from pysal.explore import segregation
import pysal.lib
s_map = gpd.read_file(pysal.lib.examples.get_path("sacramentot2.shp"))
s_map.columns
gdf = s_map[['geometry', 'HISP_', 'TOT_POP']].copy()  # copy to avoid a SettingWithCopyWarning on the next assignment
gdf['composition'] = gdf['HISP_'] / gdf['TOT_POP']
gdf.plot(column = 'composition',
cmap = 'OrRd',
figsize=(20,10),
legend = True)
from pysal.explore.segregation.aspatial import Dissim
index = Dissim(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
from pysal.explore.segregation.aspatial import GiniSeg
index = GiniSeg(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
from pysal.explore.segregation.aspatial import Entropy
index = Entropy(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
from pysal.explore.segregation.aspatial import Atkinson
index = Atkinson(gdf, 'HISP_', 'TOT_POP', b = 0.5)
type(index)
index.statistic
from pysal.explore.segregation.aspatial import ConProf
index = ConProf(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
index.plot()
from pysal.explore.segregation.aspatial import Isolation
index = Isolation(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
from pysal.explore.segregation.aspatial import Exposure
index = Exposure(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
from pysal.explore.segregation.aspatial import CorrelationR
index = CorrelationR(gdf, 'HISP_', 'TOT_POP')
type(index)
index.statistic
from pysal.explore.segregation.aspatial import ModifiedDissim
index = ModifiedDissim(gdf, 'HISP_', 'TOT_POP', iterations = 500)
type(index)
index.statistic
from pysal.explore.segregation.aspatial import ModifiedGiniSeg
index = ModifiedGiniSeg(gdf, 'HISP_', 'TOT_POP', iterations = 500)
type(index)
index.statistic
from pysal.explore.segregation.aspatial import BiasCorrectedDissim
index = BiasCorrectedDissim(gdf, 'HISP_', 'TOT_POP', B = 500)
type(index)
index.statistic
from pysal.explore.segregation.aspatial import DensityCorrectedDissim
index = DensityCorrectedDissim(gdf, 'HISP_', 'TOT_POP', xtol = 1e-5)
type(index)
index.statistic
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then it's time to load some data to estimate segregation. We use 2000 Census tract data for the metropolitan area of Sacramento, CA, USA.
Step2: The data have several demographic variables. We are going to assess the segregation of the Hispanic Population (variable 'HISP_'). For this, we only extract some columns of the geopandas dataframe.
Step3: We can also plot the spatial distribution of the composition of the Hispanic population over the tracts of Sacramento
Step4: Notation
Step5: All the segregation classes have the statistic and the core_data attributes. We can access the point estimation of D for the data set with the statistic attribute
Step6: The interpretation of this value is that 32.18% of the Hispanic population would have to move to reach evenness in Sacramento.
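That "share who would have to move" reading follows directly from the definition D = 0.5 * Σᵢ |xᵢ/X − yᵢ/Y|; a hand computation on hypothetical tract counts (not the Sacramento data):

```python
# Dissimilarity index from raw per-tract counts (hypothetical numbers).
hisp = [120, 40, 10]                 # group counts per tract
total = [200, 400, 100]              # total population per tract
other = [t - h for t, h in zip(total, hisp)]

X, Y = sum(hisp), sum(other)
D = 0.5 * sum(abs(h / X - o / Y) for h, o in zip(hisp, other))
```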
Step7: Entropy
Step8: Atkinson
Step9: Concentration Profile
Step10: In addition, this index has a plotting method to see the profile estimated.
Step11: Isolation
Step12: The interpretation of this number is that if you randomly pick a Hispanic person from a given tract of Sacramento, there is a 23.19% probability that this person shares a unit with another Hispanic person.
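Behind that probability is the isolation (self-exposure) index xPx = Σᵢ (xᵢ/X)(xᵢ/tᵢ); a sketch on hypothetical counts (not the Sacramento data):

```python
# Isolation index: population-weighted average in-tract share of the
# group's own members (hypothetical counts).
x = [120, 40, 10]      # group counts per tract
t = [200, 400, 100]    # total population per tract
X = sum(x)
xPx = sum((xi / X) * (xi / ti) for xi, ti in zip(x, t))
```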
Step13: The interpretation of this number is that if you randomly pick a Hispanic person from a given tract of Sacramento, there is a 76.8% probability that this person shares a unit with a non-Hispanic person.
Step14: Modified Dissimilarity
Step15: Modified Gini
Step16: Bias-Corrected Dissimilarity
Step17: Density-Corrected Dissimilarity
|
1,515
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from random import random
import math
import numpy as np
import copy
from scipy import stats
import matplotlib.pyplot as plt
import pickle as pkl
from scipy.spatial import distance
import seaborn as sns
sns.set_style('darkgrid')
def loadMovieLens(path='./data/movielens'):
#Get movie titles
movies={}
rev_movies={}
for idx,line in enumerate(open(path+'/u.item')):
idx,title=line.split('|')[0:2]
movies[idx]=title
rev_movies[title]=idx
# Load data
prefs={}
for line in open(path+'/u.data'):
(user,movieid,rating,ts)=line.split('\t')
prefs.setdefault(user,{})
prefs[user][movies[movieid]]=float(rating)
return prefs,rev_movies
data,movies = loadMovieLens("data/ml-100k")
def getRawArray(data):
d = []
for u in data.keys():
for i in data[u].keys():
d.append([u,i,data[u][i]])
return np.array(d)
# splitting while avoiding to reduce the dataset too much
def split_train_test(data,percent_test):
test={}
train={}
movie={}
for u in data.keys():
test.setdefault(u,{})
train.setdefault(u,{})
for movie in data[u]:
#print(data[u][movie])
if (random()<percent_test):
test[u][movie]=data[u][movie]
else:
train[u][movie]=data[u][movie]
return train, test
def split_train_test_by_movies(data,percent_test):
test={}
train={}
movie={}
for u in data.keys():
for movie in data[u]:
if (random()<percent_test):
try:
test[movie][u]=data[u][movie]
except KeyError:
test.setdefault(movie,{})
test[movie][u]=data[u][movie]
else:
try:
train[movie][u]=data[u][movie]
except KeyError:
train.setdefault(movie,{})
train[movie][u]=data[u][movie]
return train, test
percent_test = 0.2
train,test = split_train_test(data,percent_test)
m_train,m_test = split_train_test_by_movies(data,percent_test)
def deleteUnseenInTest(train,test):
for k in test.keys():
try:
train[k]
except KeyError:
test.pop(k,None)
def deleteUnknowData(triplet_test, trainUsers, trainItems) :
to_Del = []
for i,t in enumerate(triplet_test):
if not t[0] in trainUsers:
to_Del.append(i)
elif not t[1] in trainItems:
to_Del.append(i)
return np.delete(triplet_test, to_Del, 0)
deleteUnseenInTest(train,test)
deleteUnseenInTest(m_train,m_test)
def getRawArray(data):
rawArray = []
for u in data.keys():
for i in data[u].keys():
rawArray.append([u,i,data[u][i]])
return rawArray
def getDataByUsers(rawArray) :
usr = {}
for t in rawArray:
if not t[0] in usr.keys():
usr[t[0]] = {}
usr[t[0]][t[1]] = float(t[2])
return usr
def getDataByItems(rawArray) :
itm = {}
for t in rawArray:
if not t[1] in itm.keys():
itm[t[1]] = {}
itm[t[1]][t[0]] = float(t[2])
return itm
# Split l'ensemble des rawArrays
def splitTrainTest(rawArray, testProp) :
perm = np.random.permutation(rawArray)
splitIndex = int(testProp * len(rawArray))
return perm[splitIndex:], perm[:splitIndex]
# supprime des données de test les données inconnus en train
def deleteUnknowData(rawArray_test, trainUsers, trainItems) :
to_Del = []
for i,t in enumerate(rawArray_test):
if not t[0] in trainUsers:
to_Del.append(i)
elif not t[1] in trainItems:
to_Del.append(i)
return np.delete(rawArray_test, to_Del, 0)
%%time
rawArray = getRawArray(data)
arrayTrain, arrayTest = splitTrainTest(rawArray , 0.2)
trainUsers = getDataByUsers(arrayTrain)
trainItems = getDataByItems(arrayTrain)
arrayTest = deleteUnknowData(arrayTest, trainUsers, trainItems)
testUsers = getDataByUsers(arrayTest)
testItems = getDataByItems(arrayTest)
arrayTest[:10,:10]
class baselineMeanUser:
def __init__(self):
self.users={}
def fit(self,train):
for user in train.keys():
note=0.0
for movie in train[user].keys():
note+=train[user][movie]
note=note/len(train[user])
self.users[user]=note
def predict(self,users):
return [self.users[u] for u in users]
baseline_mu= baselineMeanUser()
baseline_mu.fit(train)
pred = baseline_mu.predict(arrayTest[:,0])
print("Mean Error %0.6f" %(
(np.array(pred) - np.array(arrayTest[:,2], float)) ** 2).mean())
class baselineMeanMovie:
def __init__(self):
self.movies={}
def fit(self,train):
for movie in train.keys():
note=0.0
for user in train[movie].keys():
note+=train[movie][user]
note=note/len(train[movie])
self.movies[movie]=note
def predict(self,movies):
res=[]
for m in movies:
try:
res.append(self.movies[m])
except KeyError:
res.append(3)
return res
baseline_mm= baselineMeanMovie()
baseline_mm.fit(m_train)
pred = baseline_mm.predict(arrayTest[:,1])
print("Mean Error %0.6f" %(
(np.array(pred) - np.array(arrayTest[:,2], float)) ** 2).mean())
rawMatrix = np.zeros((943,1682))
for u in data:
for m in data[u]:
rawMatrix[int(u)-1][int(movies[m])-1] = data[u][m]
print(np.shape(rawMatrix))
rawMatrix[:5,:5]
rawMatrixTrain = np.zeros((len(data.keys()),1682))
for u in train:
for m in train[u]:
rawMatrixTrain[int(u)-1][int(movies[m])-1] = train[u][m]
rawMatrixTest = np.zeros((len(data.keys()),1682))
for u in test:
for m in test[u]:
rawMatrixTest[int(u)-1][int(movies[m])-1] = test[u][m]
def nmf(X, latent_features, max_iter=100, eps = 1e-5,printevery=100):
print("NMF with %d latent features, %d iterations." % (latent_features, max_iter))
# mask used to ignore null element (coded by zero)
mask = np.sign(X)
# randomly initialized matrix
rows, columns = X.shape
A = np.random.rand(rows, latent_features)
Y = np.random.rand(latent_features, columns)
# Not used as I couldn't get it to bring significant improvments
# Y = linalg.lstsq(A, X)[0] # initializing that way as recommanded in a blog post
# Y = np.maximum(Y, eps) # avoiding too low values
masked_X = mask * X
masktest = np.sign(rawMatrixTest) # used for prints
masktrain = np.sign(rawMatrixTrain) # used for prints
for i in range(1, max_iter + 1):
top = np.dot(masked_X, Y.T)
bottom = (np.dot((mask * np.dot(A, Y)), Y.T)) + eps
A *= top / bottom
top = np.dot(A.T, masked_X)
bottom = np.dot(A.T, mask * np.dot(A, Y)) + eps
Y *= top / bottom
# evaluation
if i % printevery == 0 or i == 1 or i == max_iter:
X_est = np.dot(A, Y)
q = masktest*X_est - rawMatrixTest
q_train = masktrain*X_est - rawMatrixTrain
print("Iteration %d, Err %.05f, Err train %.05f" % (i, (q*q).sum() / masktest.sum(), (q_train*q_train).sum() / masktrain.sum()))
return A, Y
%%time
A,Y = nmf(rawMatrixTrain,100,eps = 1e-5,max_iter=5,printevery=1)
resMatrix = A.dot(Y)
%%time
A,Y = nmf(rawMatrixTrain,50,eps = 1e-5,max_iter=500,printevery=100)
resMatrix = A.dot(Y)
%%time
A,Y = nmf(rawMatrixTrain,1,eps = 1e-5,max_iter=100,printevery=20)
resMatrix = A.dot(Y)
## This class is used to make predictions
class evalMF:
def __init__(self,resMatrix,dicU,dicI):
self.resMatrix=resMatrix
self.dicU = dicU
self.dicI = dicI
def fit(self):
pass
def predict(self,user,movie):
return self.resMatrix[int(user)-1][int(self.dicI[movie])-1]
mf = evalMF(resMatrix,data,movies)
# np.array([ (float(ra[2]) - mf.predict(ra[0],ra[1]))**2 for ra in evalArrayTest]).mean()
# faster evaluation
masqueTest=np.sign(rawMatrixTest)
q = masqueTest*resMatrix - rawMatrixTest
(q*q).sum()/ masqueTest.sum()
print(data["1"]["Akira (1988)"])
print(mf.predict("1","Akira (1988)"))
print(data["1"]["I.Q. (1994)"])
print(mf.predict("1","I.Q. (1994)"))
user="1"
summ=0
for i in data[user]:
summ+=(float(data[user][i]) - mf.predict(user,i))**2
summ/len(data[user])
user="3"
summ=0
for i in data[user]:
summ+=(float(data[user][i]) - mf.predict(user,i))**2
summ/len(data[user])
class mf():
def __init__(self, k, eps=1e-3, nIter=2000, lambd=0.5):
self.k = k
self.lambd = lambd
self.eps = eps
self.nIter = nIter
def fit(self, trainUsers, trainItems, rawArray):
print("MF with %d latent features, %d iterations." % (self.k, self.nIter))
printevery = np.round(self.nIter/10,decimals=-4)
self.p = {}
self.q = {}
self.bu = {}
self.bi = {}
self.mu = np.random.random() * 2 - 1
for j in range(len(rawArray)):
u = rawArray[j][0]
i = rawArray[j][1]
if not u in self.p:
self.p[u] = np.random.rand(1,self.k)
self.bu[u] = np.random.rand() * 2 - 1
if not i in self.q:
self.q[i] = np.random.rand(self.k,1)
self.bi[i] = np.random.rand() * 2 - 1
loss = []
for it in range(self.nIter):
ind = np.random.randint(len(rawArray))
u = rawArray[ind][0]
i = rawArray[ind][1]
# thx to A & A !
tmp = trainUsers[u][i] - (self.mu + self.bi[i] + self.bu[u] +self.p[u].dot(self.q[i])[0][0])
self.p[u] = (1 - self.lambd * self.eps) * self.p[u] + self.eps * 2 * tmp * self.q[i].transpose()
self.bu[u] = (1 - self.lambd * self.eps) * self.bu[u] + self.eps * 2 * tmp
self.q[i] = (1 - self.lambd * self.eps) * self.q[i] + self.eps * 2 * tmp * self.p[u].transpose()
self.bi[i] = (1 - self.lambd * self.eps) * self.bi[i] + self.eps * 2 * tmp
self.mu = (1 - self.lambd * self.eps) * self.mu + self.eps * 2 * tmp
loss.append(tmp*tmp)
if it % printevery == 0 or it == self.nIter-1:
print("Iteration %s, Err %.05f, True %.05f" % (str(it).ljust(8), np.mean(loss), model.score(arrayTest)))
return self.p, self.q
def predict(self, rawArray_test):
pred = np.zeros(len(rawArray_test))
for ind,t in enumerate(rawArray_test):
pred[ind] = self.mu + self.bu[t[0]] + self.bi[t[1]] + self.p[t[0]].dot(self.q[t[1]])[0][0]
return pred
def score(self, rawArray_test) :
return ((self.predict(rawArray_test) - np.array(rawArray_test[:,2], float)) ** 2).mean()
%%time
k = 10
eps = 8e-3
nIter = int(2e6)
lambd = 0.2
model = mf(k, eps=eps, nIter=nIter,lambd=lambd)
p,q = model.fit(trainUsers, trainItems, arrayTrain)
print("\nError on test: %.05f" % (model.score(arrayTest)))
class tSNE():
def __init__(self,perp, nIter, lr, moment, dim=2):
self.perp = perp # entre 5 et 50
self.nIter = nIter
self.lr = lr
self.moment = moment
self.dim = dim
def fit(self,data):
nEx = np.shape(data)[0]
# Matrix of squared pairwise distances ||xi - xj||² #
normx = np.sum(data**2,1)
normx = np.reshape(normx, (1, nEx))
distancex = normx + normx.T - 2 * data.dot(data.T)
# Computing the sigmas ---------------------------------------------------------------#
lperp = np.log2(self.perp)
# initialize the bounds for the binary search #
sup = np.ones((nEx,1)) * np.max(distancex)
inf = np.zeros((nEx,1))
self.sigma = (sup + inf) / 2.
# binary search #
stop = False
while not stop:
# Compute the matrix of p(i|j)
self.pcond = np.exp(-distancex / (2. * (self.sigma**2)))
self.pcond = self.pcond / np.sum(self.pcond - np.eye(nEx),1).reshape(nEx,1)
# Compute the entropy of p(i|j)
entropy = - np.sum(self.pcond * np.log2(self.pcond), 0)
# Update the bounds
# If sigma must be increased
up = entropy < lperp
inf[up,0] = self.sigma[up,0]
# If sigma must be decreased
down = entropy > lperp
sup[down,0] = self.sigma[down,0]
# Update sigma and check the stopping condition
old = self.sigma
self.sigma = ((sup + inf) / 2.)
if np.max(np.abs(old - self.sigma)) < 1e-5:
stop = True
#print np.exp(entropy)
#print self.sigma.T
#--------------------------------------------------------------------------#
# initialize y
self.embeddings = np.zeros((self.nIter+2, nEx, self.dim))
self.embeddings[1] = np.random.randn(nEx, self.dim) * 1e-4
#--------------------------------------------------------------------------#
# p(ij)
self.pij = (self.pcond + self.pcond.T) / (2.*nEx)
np.fill_diagonal(self.pij, 0)
# Gradient descent
for t in range(1, self.nIter+1):
# Distance matrix in the embedding space
normy = np.sum((self.embeddings[t]**2),1)
normy = np.reshape(normy, (1, nEx))
distancey = normy + normy.T - 2 * self.embeddings[t].dot(self.embeddings[t].T)
# q(ij)
# self.qij = (distancey.sum() + nEx*(nEx-1)) / (1 + distancey)
# np.fill_diagonal(self.qij, 0)
self.qij = 1 / (1 + distancey)
np.fill_diagonal(self.qij, 0)
self.qij = self.qij / self.qij.sum()
# Gradient descent step
yt = self.embeddings[t]
tmpgrad = 4 * ((self.pij - self.qij) / (1 + distancey)).reshape(nEx, nEx,1)
for i in range(nEx):
dy = (tmpgrad[i] * (yt[i]-yt)).sum(0)
self.embeddings[t+1][i] = yt[i] - self.lr * dy + self.moment * (yt[i] - self.embeddings[t-1,i])
from sklearn import datasets
from scipy import stats
digits = datasets.load_digits()
X_ini = np.vstack([digits.data[digits.target==i]
for i in range(10)])
cols = np.hstack([digits.target[digits.target==i]
for i in range(10)])
%%time
model = tSNE(10,500,1000,0)
model.fit(X_ini)
palette = np.array(sns.color_palette("hls", 10))
t = np.shape(model.embeddings)[0] -1
# We create a scatter plot.
f = plt.figure(figsize=(8, 8))
ax = plt.subplot(aspect='equal')
sc = ax.scatter(model.embeddings[t,:,0], model.embeddings[t,:,1], lw=0, s=40,
c=palette[cols.astype(np.int)])
plt.xlim(-25, 25)
plt.ylim(-25, 25)
ax.axis('off')
ax.axis('tight')
#plt.plot(mod.embedding_[12][0],mod.embedding_[12][1], 'bv')
plt.show()
from sklearn.manifold import TSNE
mod = TSNE(random_state=1337)
%%time
X = mod.fit_transform(X_ini)
palette = np.array(sns.color_palette("hls", 10))
# We create a scatter plot.
f = plt.figure(figsize=(8, 8))
ax = plt.subplot(aspect='equal')
sc = ax.scatter(X[:,0], X[:,1], lw=0, s=40,
c=palette[cols.astype(np.int)])
plt.xlim(-25, 25)
plt.ylim(-25, 25)
ax.axis('off')
ax.axis('tight')
#plt.plot(mod.embedding_[12][0],mod.embedding_[12][1], 'bv')
plt.show()
rawMatrix = np.zeros((943,1682))
for u in data:
for m in data[u]:
rawMatrix[int(u)-1][int(movies[m])-1] = data[u][m]
%%time
rawMatrix = np.zeros((943,1682))
for u in data:
for m in movies.keys():
try:
rawMatrix[int(u)-1][int(movies[m])-1] = model.predict([[u,m]])
except KeyError:
rawMatrix[int(u)-1][int(movies[m])-1] = 3
for u in data:
for m in data[u]:
rawMatrix[int(u)-1][int(movies[m])-1] = data[u][m]
rawMatrix[:5,:5]
model.predict([["1","101 Dalmatians (1996)"]])
rawMatrix = np.zeros((943,1682))
for u in data:
for m in data[u]:
rawMatrix[int(u)-1][int(movies[m])-1] = data[u][m]
mod = TSNE(random_state=1337)
%%time
X = mod.fit_transform(rawMatrix)
palette = np.array(sns.color_palette("hls", 4))
f = plt.figure(figsize=(8, 8))
ax = plt.subplot(aspect='equal')
sc = ax.scatter(X[:,0], X[:,1], lw=0, s=40)
plt.xlim(-25, 25)
plt.ylim(-25, 25)
ax.axis('off')
ax.axis('tight')
plt.show()
me = rawMatrix.mean(1)
X_ini = np.vstack([rawMatrix[np.where(np.logical_and(me>i, me<=j))]
for i,j in [(0,2),(2,3),(3,4),(4,5)]])
cols = np.hstack( [ np.round( me[ np.where(np.logical_and(me>i, me<=j)) ] )
for i,j in [(0,2),(2,3),(3,4),(4,5)] ] )
%%time
X = mod.fit_transform(X_ini)
palette = np.array(sns.color_palette("hls", 6))
f = plt.figure(figsize=(8, 8))
ax = plt.subplot(aspect='equal')
sc = ax.scatter(X[:,0], X[:,1], lw=0, s=40,
c=palette[cols.astype(np.int)])
plt.xlim(-25, 25)
plt.ylim(-25, 25)
ax.axis('off')
ax.axis('tight')
txts = []
for i in range(2,6):
xtext, ytext = np.median(X[cols == i, :], axis=0)
txt = ax.text(xtext, ytext, str(i), fontsize=24)
txts.append(txt)
#plt.plot(mod.embedding_[12][0],mod.embedding_[12][1], 'bv')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data
Step2: Splitting data between train/test
Step3: This split is used for convenience in the average-by-movie baseline
Step4: cleaning
Step5: Matrix used for fast evaluation
Step6: Content example
Step7: Baseline
Step8: Raw matrices are used for convenience and clarity.
Step9: Train and test dataset
Step10: Non-negative Matrix Factorization
Step11: We see that it quickly gets better than the baseline.
Step12: This is due to the high sparsity of the matrix.
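The sparsity claim is easy to quantify: MovieLens-100k holds 100,000 ratings for 943 users × 1,682 movies, so only about 6% of the cells of the matrix being factorized are observed:

```python
# Fill rate of the MovieLens-100k user-item matrix.
n_ratings = 100000
n_users, n_items = 943, 1682
density = n_ratings / (n_users * n_items)   # ~0.063
sparsity = 1.0 - density                    # ~94% of cells are empty
```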
Step13: Despite good results in a few seconds on this dataset, this can only get us so far.
Step14: Let's look at some predictions
Step15: We usually see important differences between users, so we need to take the bias into account.
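Concretely, the biased model in the mf class above scores a user-item pair as r̂ᵤᵢ = μ + bᵤ + bᵢ + pᵤ·qᵢ; a tiny sketch with made-up numbers (illustrative only, not learned values):

```python
# One biased matrix-factorization prediction: global mean plus user and
# item offsets plus a latent-factor dot product.
mu = 3.5                 # global mean rating
b_u, b_i = 0.3, -0.2     # generous user, below-average movie
p_u = [0.5, -1.0]        # user latent factors
q_i = [0.8, 0.1]         # item latent factors

pred = mu + b_u + b_i + sum(p * q for p, q in zip(p_u, q_i))
```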
Step16: We have not been very successful with incorporating the bias and L1 into that implementation...
Step17: thanks to Professor Chen
Step18: For reference, let's compare it with sklearn's TSNE
Step19: It produces similar results, albeit faster, as expected.
Step20: Predict unknown data and fill the others with the right number
Step21: 0.0164939277392222 percent of unseen data
Step22: Produced visualisation
Step23: But it's not very informative...
|
1,516
|
<ASSISTANT_TASK:>
Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
%%sql
-- Select name and age,
SELECT name, age
-- from the table 'criminals',
FROM criminals
-- where age is greater than,
WHERE age >
-- select age,
(SELECT age
-- from criminals
FROM criminals
-- where the name is 'James Smith'
WHERE name == 'James Smith')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: Select Based On The Result Of A Select
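The same scalar-subquery pattern can be reproduced with Python's standard-library sqlite3 module (a standalone sketch with a cut-down table, independent of the %%sql magic above):

```python
import sqlite3

# In-memory database with a reduced criminals table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE criminals (pid, name, age)")
con.executemany("INSERT INTO criminals VALUES (?, ?, ?)",
                [(412, "James Smith", 15),
                 (234, "Bill James", 22),
                 (632, "Stacy Miller", 23)])

# Everyone older than James Smith, via a scalar subquery.
rows = con.execute(
    "SELECT name, age FROM criminals "
    "WHERE age > (SELECT age FROM criminals WHERE name = 'James Smith') "
    "ORDER BY age"
).fetchall()
```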
|
1,517
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from kneed.data_generator import DataGenerator as dg
from kneed.knee_locator import KneeLocator
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
x = [3.07, 3.38, 3.55, 3.68, 3.78, 3.81, 3.85, 3.88, 3.9, 3.93]
y = [0.0, 0.3, 0.47, 0.6, 0.69, 0.78, 0.845, 0.903, 0.95, 1.0]
kl = KneeLocator(x, y, S=1.0, curve='convex', direction='increasing', interp_method='interp1d')
kl.x_normalized
np.diff(kl.x_normalized).mean()
np.diff(x).mean()
from scipy.signal import argrelextrema
argrelextrema(kl.y_difference, np.greater)
argrelextrema(kl.y_difference, np.less)
kl.y_difference_maxima
plt.plot(kl.x_normalized, kl.y_normalized);
plt.plot(kl.x_difference, kl.y_difference);
np.random.seed(23) # only for the walkthrough
x,y = dg.noisy_gaussian(N=1000)
x[:5],y[:5]
kneedle = KneeLocator(x, y, S=1.0, curve='concave', direction='increasing', interp_method='polynomial')
kneedle.plot_knee_normalized()
kneedle.plot_knee()
kneedle.knee
knees = []
for i in range(50):
x,y = dg.noisy_gaussian(N=1000)
kneedle = KneeLocator(x, y, direction='increasing', curve='concave', interp_method='polynomial')
knees.append(kneedle.knee)
np.mean(knees)
x = np.arange(0,10)
y_convex_inc = np.array([1,2,3,4,5,10,15,20,40,100])
y_convex_dec = y_convex_inc[::-1]
y_concave_dec = 100 - y_convex_inc
y_concave_inc = 100 - y_convex_dec
kn = KneeLocator(x, y_convex_inc, curve='convex')
knee_yconvinc = kn.knee
kn = KneeLocator(x, y_convex_dec, curve='convex', direction='decreasing')
knee_yconvdec = kn.knee
kn = KneeLocator(x, y_concave_inc, curve='concave')
knee_yconcinc = kn.knee
kn = KneeLocator(x, y_concave_dec, curve='concave', direction='decreasing')
knee_yconcdec = kn.knee
f, axes = plt.subplots(2, 2, figsize=(10,10));
yconvinc = axes[0][0]
yconvdec = axes[0][1]
yconcinc = axes[1][0]
yconcdec = axes[1][1]
sns.lineplot(x, y_convex_inc, ax=axes[0][0])
yconvinc.vlines(x=knee_yconvinc, ymin=0, ymax=100, linestyle='--')
yconvinc.set_title("curve='convex', direction='increasing'")
sns.lineplot(x, y_convex_dec, ax=axes[0][1])
yconvdec.vlines(x=knee_yconvdec, ymin=0, ymax=100, linestyle='--')
yconvdec.set_title("curve='convex', direction='decreasing'")
sns.lineplot(x, y_concave_inc, ax=axes[1][0])
yconcinc.vlines(x=knee_yconcinc, ymin=0, ymax=100, linestyle='--')
yconcinc.set_title("curve='concave', direction='increasing'")
sns.lineplot(x, y_concave_dec, ax=axes[1][1])
yconcdec.vlines(x=knee_yconcdec, ymin=0, ymax=100, linestyle='--')
yconcdec.set_title("curve='concave', direction='decreasing'");
x = list(range(90))
y = [
7304.99, 6978.98, 6666.61, 6463.2, 6326.53, 6048.79, 6032.79, 5762.01, 5742.77,
5398.22, 5256.84, 5226.98, 5001.72, 4941.98, 4854.24, 4734.61, 4558.75, 4491.1,
4411.61, 4333.01, 4234.63, 4139.1, 4056.8, 4022.49, 3867.96, 3808.27, 3745.27,
3692.34, 3645.55, 3618.28, 3574.26, 3504.31, 3452.44, 3401.2, 3382.37, 3340.67,
3301.08, 3247.59, 3190.27, 3179.99, 3154.24, 3089.54, 3045.62, 2988.99, 2993.61,
2941.35, 2875.6, 2866.33, 2834.12, 2785.15, 2759.65, 2763.2, 2720.14, 2660.14,
2690.22, 2635.71, 2632.92, 2574.63, 2555.97, 2545.72, 2513.38, 2491.57, 2496.05,
2466.45, 2442.72, 2420.53, 2381.54, 2388.09, 2340.61, 2335.03, 2318.93, 2319.05,
2308.23, 2262.23, 2235.78, 2259.27, 2221.05, 2202.69, 2184.29, 2170.07, 2160.05,
2127.68, 2134.73, 2101.96, 2101.44, 2066.4, 2074.25, 2063.68, 2048.12, 2031.87
]
kneedle = KneeLocator(x, y, S=1.0, curve='convex', direction='decreasing')
kneedle.plot_knee_normalized()
kneedle = KneeLocator(x, y, S=1.0, curve='convex', direction='decreasing', interp_method='polynomial')
kneedle.plot_knee_normalized()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The knee is located by passing x and y values to knee_locator.
Step2: There are plotting functions to visualize the knee point on the raw data and the normalized data.
Step3: Test all types of functions
Step4: Polynomial line fit
|
1,518
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Set some Pandas options
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 20)
pd.set_option('display.max_rows', 25)
from datetime import datetime
now = datetime.now()
now
now.day
now.weekday()
from datetime import date, time
time(3, 24)
date(1970, 9, 3)
my_age = now - datetime(1970, 9, 3)
my_age
my_age.days/365.
segments = pd.read_csv("data/AIS/transit_segments.csv")
segments.head()
segments.seg_length.hist(bins=500)
segments.seg_length.apply(np.log).hist(bins=500)
segments.st_time.dtype
datetime.strptime(segments.st_time.iloc[0], '%m/%d/%y %H:%M')
from dateutil.parser import parse
parse(segments.st_time.iloc[0])
segments.st_time.apply(lambda d: datetime.strptime(d, '%m/%d/%y %H:%M'))
pd.to_datetime(segments.st_time)
pd.to_datetime([None])
vessels = pd.read_csv("data/AIS/vessel_information.csv", index_col='mmsi')
vessels.head()
[v for v in vessels.type.unique() if v.find('/')==-1]
vessels.type.value_counts()
df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))
df2 = pd.DataFrame(dict(id=list(range(3)) + list(range(3)), score=np.random.random(size=6)))
df1, df2
pd.merge(df1, df2)
pd.merge(df1, df2, how='outer')
segments.head(1)
vessels.head(1)
segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')
segments_merged.head()
vessels.merge(segments, left_index=True, right_on='mmsi').head()
segments['type'] = 'foo'
pd.merge(vessels, segments, left_index=True, right_on='mmsi').head()
np.concatenate([np.random.random(5), np.random.random(5)])
np.r_[np.random.random(5), np.random.random(5)]
np.c_[np.random.random(5), np.random.random(5)]
mb1 = pd.read_excel('data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)
mb2 = pd.read_excel('data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)
mb1.shape, mb2.shape
mb1.head()
mb1.columns = mb2.columns = ['Count']
mb1.index.name = mb2.index.name = 'Taxon'
mb1.head()
mb1.index[:3]
mb1.index.is_unique
pd.concat([mb1, mb2], axis=0).shape
pd.concat([mb1, mb2], axis=0).index.is_unique
pd.concat([mb1, mb2], axis=1).shape
pd.concat([mb1, mb2], axis=1).head()
pd.concat([mb1, mb2], axis=1).values[:5]
pd.concat([mb1, mb2], axis=1, join='inner').head()
mb1.combine_first(mb2).head()
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).head()
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).index.is_unique
pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head()
# Write your answer here
cdystonia = pd.read_csv("data/cdystonia.csv", index_col=None)
cdystonia.head()
stacked = cdystonia.stack()
stacked
stacked.unstack().head()
cdystonia2 = cdystonia.set_index(['patient','obs'])
cdystonia2.head()
cdystonia2.index.is_unique
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()
cdystonia_long = cdystonia[['patient','site','id','treat','age','sex']].drop_duplicates().merge(
twstrs_wide, right_index=True, left_on='patient', how='inner').head()
cdystonia_long
cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs'].unstack('week').head()
pd.melt(cdystonia_long, id_vars=['patient','site','id','treat','age','sex'],
var_name='obs', value_name='twsters').head()
cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()
cdystonia.pivot('patient', 'obs')
cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs', aggfunc=max).head(20)
pd.crosstab(cdystonia.sex, cdystonia.site)
vessels.duplicated(subset='names')
vessels.drop_duplicates(['names'])
cdystonia.treat.value_counts()
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment
vals = pd.Series([float(i)**10 for i in range(10)])
vals
np.log(vals)
vals = vals.replace(0, 1e-6)
np.log(vals)
cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})
top5 = vessels.type.apply(lambda s: s in vessels.type.value_counts().index[:5])
vessels5 = vessels[top5]
pd.get_dummies(vessels5.type).head(10)
cdystonia.age.describe()
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]
pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','ancient'])[:30]
pd.qcut(cdystonia.age, 4)[:30]
quantiles = pd.qcut(segments.seg_length, [0, 0.01, 0.05, 0.95, 0.99, 1])
quantiles[:30]
pd.get_dummies(quantiles).head(10)
new_order = np.random.permutation(len(segments))
new_order[:30]
segments.take(new_order).head()
segments.head()
# Write your answer here
cdystonia_grouped = cdystonia.groupby(cdystonia.patient)
cdystonia_grouped
for patient, group in cdystonia_grouped:
print(patient)
print(group)
print()
cdystonia_grouped.agg(np.mean).head()
cdystonia_grouped.mean().head()
cdystonia_grouped.mean().add_suffix('_mean').head()
# The median of the `twstrs` variable
cdystonia_grouped['twstrs'].quantile(0.5)
cdystonia.groupby(['week','site']).mean().head()
normalize = lambda x: (x - x.mean())/x.std()
cdystonia_grouped.transform(normalize).head()
cdystonia_grouped['twstrs'].mean().head()
# This gives the same result as a DataFrame
cdystonia_grouped[['twstrs']].mean().head()
chunks = dict(list(cdystonia_grouped))
chunks[4]
dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))
cdystonia2.head(10)
cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()
def top(df, column, n=5):
return df.sort_values(by=column, ascending=False)[:n]
top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments
top3segments.head(20)
mb1.index[:3]
class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3]))
mb_class = mb1.copy()
mb_class.index = class_index
mb_class.head()
mb_class.groupby(level=0).sum().head(10)
from IPython.core.display import HTML
HTML(filename='data/titanic.html')
# Write your answer here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Date/Time data handling
Step2: In addition to datetime, there are simpler date and time objects that hold date-only and time-only information, respectively.
Step3: Having a custom data type for dates and times is convenient because we can perform operations on them easily. For example, we may want to calculate the difference between two times
Step4: In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed.
Step5: For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram
Step6: Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful
Step7: We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as datetime.
Step8: Our first order of business will be to convert these data to datetime. The strptime method parses a string representation of a date and/or time field, according to the expected format of this information.
Step9: The dateutil package includes a parser that attempts to detect the format of the date strings, and convert them automatically.
Step10: We can convert all the dates in a particular column by using the apply method.
Step11: As a convenience, Pandas has a to_datetime method that will parse and convert an entire Series of formatted strings into datetime objects.
Step12: Pandas also has a custom NA value for missing datetime objects, NaT.
Step13: Also, if to_datetime() has problems parsing any particular date/time format, you can pass the spec in using the format= argument.
Step14: The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.
Step15: Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will used any common column names as keys for merging the tables.
Step16: The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform right and left joins to include all rows of the right or left table (i.e. first or second argument to merge), but not necessarily the other.
Step17: we see that there is a mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other.
Step18: In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other.
Step19: Occasionally, there will be fields with the same in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append suffixes _x and _y to the columns to uniquely identify them.
Step20: This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right columns, respectively.
Step21: This operation is also called binding or stacking.
Step22: Let's give the index and columns meaningful labels
Step23: The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level.
Step24: If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated
Step25: However, the index is no longer unique, due to overlap between the two DataFrames.
Step26: Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames.
Step27: If we are only interested in taxa that are included in both DataFrames, we can specify a join=inner argument.
Step28: If we wanted to use the second table to fill values absent from the first table, we could use combine_first.
Step29: We can also create a hierarchical index based on keys identifying the original tables.
Step30: Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict.
Step31: If you want concat to work like numpy.concatanate, you may provide the ignore_index=True argument.
Step32: Reshaping DataFrame objects
Step33: This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways
Step34: To complement this, unstack pivots from rows back to columns.
Step35: For this dataset, it makes sense to create a hierarchical index based on the patient and observation
Step36: If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
Step37: A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking
Step38: To convert our "wide" format back to long, we can use the melt function, appropriately parameterized
Step39: This illustrates the two formats for longitudinal data
Step40: If we omit the values argument, we get a DataFrame with hierarchical columns, just as when we applied unstack to the hierarchically-indexed table
Step41: A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.
Step42: For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.
Step43: Data transformation
Step44: Value replacement
Step45: A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.
Step46: Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
Step47: In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with replace.
Step48: We can also perform the same replacement that we used map for with replace
Step49: Indicator variables
Step50: Discretization
Step51: Let's transform these data into decades, beginning with individuals in their 20's and ending with those in their 90's
Step52: The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False
Step53: Since the data are now ordinal, rather than numeric, we can give them labels
Step54: A related function qcut uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default
Step55: Alternatively, one can specify custom quantiles to act as cut points
Step56: Note that you can easily combine discretization with the generation of indicator variables shown above
Step57: Permutation and sampling
Step58: Using this sequence as an argument to the take method results in a reordered DataFrame
Step59: Compare this ordering with the original
Step60: Exercise
Step61: Data aggregation and GroupBy operations
Step62: This grouped dataset is hard to visualize
Step63: However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups
Step64: A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
Step65: Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to aggregate string variables numerically, these columns are simply ignored by the method.
Step66: The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation
Step67: If we wish, we can easily aggregate according to multiple keys
Step68: Alternately, we can transform the data, using a function of our choice with the transform method
Step69: It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns
Step70: If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed
Step71: By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by type this way
Step72: It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index
Step73: Apply
Step74: To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship
Step75: Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.
Step76: Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to class, which is the 3rd level of organization in each index.
Step77: Using the string methods split and join we can create an index that just uses the first three classifications
Step78: However, since there are multiple taxonomic units with the same class, our index is no longer unique
Step79: We can re-establish a unique index by summing all rows with the same class, using groupby
Step80: Exercise
Step81: Women and children first?
|
1,519
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def func2Mat(f,I):
'''
Convert a function acting on a vector into a matrix, given
the space defined by psuedoscalar I
'''
A = I.basis()
B = [f(a) for a in A]
M = [float(b | a) for a in A for b in B]
return np.array(M).reshape(len(B), len(B))
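For comparison, the same idea in plain numpy, offered as an illustration rather than part of the clifford API: a linear map is pinned down by its action on basis vectors, and in an orthonormal basis the matrix columns are the images f(e_j).

```python
import numpy as np

# Plain-numpy analogue of func2Mat: build a matrix from a linear
# function by evaluating it on the standard basis vectors.
def func_to_mat(f, n):
    return np.column_stack([f(e) for e in np.eye(n)])

# Example: a 90-degree rotation of the plane.
rot = lambda v: np.array([-v[1], v[0]])
M = func_to_mat(rot, 2)
```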
from clifford import Cl
from math import *
l,b = Cl(3) # returns (layout, blades); you can change the dimension here
I = l.pseudoScalar
B = l.randomIntMV()(2) # we use randomIntMV because it's easier to read
f = lambda x:x|B
func2Mat(f,I=I)
B
ls = range(1,len(I.basis())+1) # some dilation values (eigenvalues)
A = I.basis()
d = lambda x: sum([(x|a)/a*l for a,l in zip(A,ls)])
func2Mat(d,I=I)
B = l.randomMV()(2)
R = e**(B/2)
r = lambda x: R*x*~R
func2Mat(r,I=I)
rinv = lambda x: ~R*x*R # the inverse rotation
func2Mat(rinv,I=I)
a = l.randomIntMV()(1)
n = lambda x: -a*x/a
func2Mat(n,I=I)
a
from numpy.linalg import det
det(func2Mat(n,I=I)), det(func2Mat(r,I=I))
g = lambda x: r(d(rinv(x)))
func2Mat(g,I=I)
from numpy.linalg import eig
vals, vecs = eig(func2Mat(r,I=I))
np.round(vecs,3)
B/(abs(B))
B
vals
cos(abs(B)), sin(abs(B))
vals, vecs = eig(func2Mat(g,I=I))
np.round(vecs,5).T
[R*a*~R for a in I.basis()]
from pylab import linspace, plot,axis,legend
def plot_ps(ps,**kw):
x = [p[e1] for p in ps]
y = [p[e2] for p in ps]
plot(x,y, marker='o',ls='',**kw)
l,b = Cl(2)
locals().update(b)
I = l.pseudoScalar
## define function of interest
B = l.randomMV()(2)
R = e**(B/2)
f = lambda x: R*x*~R
## loop though cartesian grid and apply f,
ps,qs=[],[]
for x in linspace(-1,1,11):
for y in linspace(-1,1,11):
p = x*e1 + y*e2
q = f(p)
ps.append(p)
qs.append(q)
plot_ps(ps,label='before')
plot_ps(qs,label='after')
axis('equal')
legend()
func2Mat(f,I =I )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start with initializing a euclidean N-dimensional algebra and assign our pseudoscalar to $I$, pretty standard.
Step2: Anti-symmetric
Step3: What's the B? You can read its values straight off the matrix.
Step4: Diagonal ( Directional Scaling)
Step5: Orthgonal, Rotation
Step6: The inverse of this is ,
Step7: Orthogonal, Reflection
Step8: Notice the determinant for reflection is -1, and for rotation is +1.
Step9: Symmetric
Step10: Eigen stuffs
Step11: If you check out the real column and compare it to the bivector which generated this rotation (aka the generator), after it's been normalized
Step12: Symmetric
Step13: This is easily found by using the rotation part of the symmetric operator,
Step14: Primitive Visualization in 2D
|
1,520
|
<ASSISTANT_TASK:>
Python Code:
print(__doc__)
import numpy as np
np.random.seed(1234)
import matplotlib.pyplot as plt
from skopt.learning import ExtraTreesRegressor
from skopt import Optimizer
from skopt.plots import plot_gaussian_process
noise_level = 0.1
# Our 1D toy problem, this is the function we are trying to
# minimize
def objective(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) +\
np.random.randn() * noise_level
def objective_wo_noise(x):
return objective(x, noise_level=0)
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_optimizer="sampling")
plot_args = {"objective": objective_wo_noise,
"noise_level": noise_level, "show_legend": True,
"show_title": True, "show_next_point": False,
"show_acq_func": True}
for i in range(30):
next_x = opt.ask()
f_val = objective(next_x)
opt.tell(next_x, f_val)
# The same output could be created with opt.run(objective, n_iter=30)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
acq_func_kwargs = {"xi": 10000, "kappa": 10000}
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_optimizer="sampling",
acq_func_kwargs=acq_func_kwargs)
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_func="LCB", acq_optimizer="sampling",
acq_func_kwargs=acq_func_kwargs)
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_func="PI", acq_optimizer="sampling",
acq_func_kwargs=acq_func_kwargs)
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
acq_func_kwargs = {"xi": 0.000001, "kappa": 0.001}
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_func="LCB", acq_optimizer="sampling",
acq_func_kwargs=acq_func_kwargs)
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_func="EI", acq_optimizer="sampling",
acq_func_kwargs=acq_func_kwargs)
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_func="PI", acq_optimizer="sampling",
acq_func_kwargs=acq_func_kwargs)
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
acq_func_kwargs = {"xi": -1000000000000}
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_func="PI", acq_optimizer="sampling",
acq_func_kwargs=acq_func_kwargs)
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_func="EI", acq_optimizer="sampling",
acq_func_kwargs=acq_func_kwargs)
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
acq_func_kwargs = {"kappa": 0}
opt = Optimizer([(-2.0, 2.0)], "GP", n_initial_points=3,
acq_func="LCB", acq_optimizer="sampling",
acq_func_kwargs=acq_func_kwargs)
opt.acq_func_kwargs
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
acq_func_kwargs = {"kappa": 100000}
opt.acq_func_kwargs = acq_func_kwargs
opt.update_next()
opt.run(objective, n_iter=20)
_ = plot_gaussian_process(opt.get_result(), **plot_args)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Toy example
Step2: Plotting parameters
Step3: We run an optimization loop with standard settings
Step4: We see that a minimum is found and "exploited"
Step5: We see that the points are more random now.
Step6: And for xi when using acq_func="EI"
Step7: We can also favor exploitation
Step8: Note that negative values do not work with the "PI"-acquisition function
Step9: Changing kappa and xi on the go
|
1,521
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
Acc_nbc = np.loadtxt('Data/nbc_anneal.csv', delimiter=',', skiprows=1)
Acc_aode = np.loadtxt('Data/aode_anneal.csv', delimiter=',', skiprows=1)
names = ("AODE", "NBC")
x=np.zeros((len(Acc_nbc),2),'float')
x[:,0]=Acc_aode/100
x[:,1]=Acc_nbc/100
#we consider the difference of accuracy scaled in (0,1)
import bayesiantests as bt
rope=0.01
left, within, right = bt.correlated_ttest(x, rope=rope,runs=10,verbose=True,names=names)
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as snb
#generate samples from posterior (it is not necesssary because the posterior is a Student)
samples=bt.correlated_ttest_MC(x, rope=rope,runs=10,nsamples=50000)
#plot posterior
snb.kdeplot(samples, shade=True)
#plot rope region
plt.axvline(x=-rope,color='orange')
plt.axvline(x=rope,color='orange')
#add label
plt.xlabel('Nbc-Aode on Anneal dataset');
import numpy as np
Acc_nbc = np.loadtxt('Data/nbc_audiology.csv', delimiter=',', skiprows=1)
Acc_aode = np.loadtxt('Data/aode_audiology.csv', delimiter=',', skiprows=1)
names = ("AODE", "NBC")
diff=(Acc_nbc-Acc_aode)/100.0 #we consider the difference of accuracy scaled in (0,1)
import bayesiantests as bt
rope=0.01
left, within, right = bt.correlated_ttest(diff, rope=rope,runs=10,verbose=True,names=names)
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as snb
#generate samples from posterior (it is not necesssary because the posterior is a Student)
samples=bt.correlated_ttest_MC(diff, rope=rope,runs=10,nsamples=50000)
#plot posterior
snb.kdeplot(samples, shade=True)
#plot rope region
plt.axvline(x=-rope,color='orange')
plt.axvline(x=rope,color='orange')
#add label
plt.xlabel('Nbc-Aode on Audiology dataset');
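The three probabilities reported above can be recovered from posterior samples by counting the mass on each side of the rope. A sketch with synthetic Gaussian samples standing in for the output of bt.correlated_ttest_MC:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.02, scale=0.01, size=50_000)  # synthetic posterior
rope = 0.01

# Posterior mass to the left of, inside, and to the right of the rope.
p_left   = np.mean(samples < -rope)
p_within = np.mean((samples >= -rope) & (samples <= rope))
p_right  = np.mean(samples > rope)
```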
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions in the module accept the following arguments.
Step2: We can also plot the posterior distribution.
Step3: We will load the classification accuracies of NBC and AODE on the dataset Audiology from the file. The classifiers have been evaluated by 10-runs of 10-fold cross-validation.
|
1,522
|
<ASSISTANT_TASK:>
Python Code:
from ga4gh.client import client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
#Obtain dataSet id REF: -> `1kg_metadata_service`
dataset = c.search_datasets().next()
counter = 0
for individual in c.search_individuals(dataset_id=dataset.id):
if counter > 5:
break
counter += 1
    print("Individual: {}".format(individual.name))
    print(" id: {}".format(individual.id))
    print(" dataset_id: {}".format(individual.dataset_id))
    print(" description: {}".format(individual.description))
    print(" species.term: {}".format(individual.species.term))
    print(" species.id: {}".format(individual.species.id))
    print(" sex.term: {}".format(individual.sex.term))
    print(" sex.id: {}\n".format(individual.sex.id))
single_individual = c.get_individual(individual_id=individual.id)
print("Individual: {}".format(single_individual.name))
print(" info['Family ID']: {}".format(single_individual.info['Family ID'].values[0].string_value))
counter = 0
for biosample in c.search_biosamples(dataset_id=dataset.id):
if counter > 5:
break
counter += 1
    print("BioSample: {}".format(biosample.name))
    print(" id: {}".format(biosample.id))
    print(" dataset_id: {}\n".format(biosample.dataset_id))
single_biosample = c.get_biosample(biosample.id)
print("\nName: {}".format(single_biosample.name))
print(" Id: {},".format(single_biosample.id))
print(" Dataset Id: {},".format(single_biosample.dataset_id))
print(" Description: {},".format(single_biosample.description))
print(" Individual Id: {},".format(single_biosample.individual_id))
print(" Disease: {},".format(single_biosample.disease))
print(" Sample Created: {},".format(single_biosample.created))
print(" Sample Updated: {}".format(single_biosample.updated))
for info in single_biosample.info:
    print(" {}: \t{}".format(info, single_biosample.info[info].values[0].string_value))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Search individuals method
Step2: Note
Step3: In this case, the Family ID can be exchanged through the protocol, although the named field is not present in the Individual message itself.
Step4: Only five of the biosample responses are displayed for illustration purposes, but the request returns all of the samples hosted by the provided dataset. This is all of the data contained in each response, presented in an organized fashion.
|
1,523
|
<ASSISTANT_TASK:>
Python Code:
import py_entitymatching as em
import profiler
import pandas as pd
## Read input tables
A = em.read_csv_metadata('dblp_demo.csv', key='id')
B = em.read_csv_metadata('acm_demo.csv', key='id')
len(A), len(B), len(A) * len(B)
A.head(2)
B.head(2)
# If the tables are large we can downsample the tables like this
A1, B1 = em.down_sample(A, B, 500, 1, show_progress=False)
len(A1), len(B1)
# But for the demo, we will use the entire table A and B
profiler.profile_table(A, 'paper year')
profiler.profile_table(B, 'paper year')
B.replace({'paper year':{
20003:2003
}}, inplace=True)
### Blocking plan
### A, B -- AttrEquivalence blocker [year]--------------------------| Candidate set
# Create attribute equivalence blocker
ab = em.AttrEquivalenceBlocker()
# Block tables using 'year' attribute : same year include in candidate set
C1 = ab.block_tables(A, B, 'paper year', 'paper year',
l_output_attrs=['title', 'authors', 'paper year'],
r_output_attrs=['title', 'authors', 'paper year']
)
len(C1)
C1.head(2)
# check whether the current blocking method has dropped a lot of potential matches
dbg = em.debug_blocker(C1, A, B)
dbg.head()
# em.view_table(dbg)
# Revised blocking plan
# A, B -- AttrEquivalence blocker [year] --------------------|
# |---> candidate set
# A, B -- Overlap blocker [title]---------------------------|
profiler.profile_table(A, 'title', plot=False)
profiler.profile_table(B, 'title', plot=False)
# Initialize overlap blocker
ob = em.OverlapBlocker()
# Block over title attribute
C2 = ob.block_tables(A, B, 'title', 'title', show_progress=False, overlap_size=1)
len(C2)
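What the overlap blocker does, spelled out naively on made-up toy tables (the real OverlapBlocker uses an inverted index for efficiency; this brute-force version only illustrates the survival rule):

```python
import pandas as pd

# Toy tables; a pair survives blocking if the title token sets
# share at least `overlap_size` tokens.
A_toy = pd.DataFrame({'id': [0, 1], 'title': ['deep learning', 'query optimization']})
B_toy = pd.DataFrame({'id': [0, 1], 'title': ['learning to rank', 'spatial joins']})

overlap_size = 1
cand = [
    (a.id, b.id)
    for a in A_toy.itertuples() for b in B_toy.itertuples()
    if len(set(a.title.split()) & set(b.title.split())) >= overlap_size
]
```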
# Combine the outputs from attr. equivalence blocker and overlap blocker
C = em.combine_blocker_outputs_via_union([C1, C2])
len(C)
# Check again to see if we are dropping any potential matches
dbg = em.debug_blocker(C, A, B)
dbg.head()
# Sample candidate set
S = em.sample_table(C, 450)
# Label S
S = em.label_table(S, 'label')
# Load the pre-labeled data
S = em.read_csv_metadata('labeled_data_demo.csv',
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
len(S)
# Split S into I an J
IJ = em.split_train_test(S, train_proportion=0.5, random_state=0)
I = IJ['train']
J = IJ['test']
# Create a set of ML-matchers
dt = em.DTMatcher(name='DecisionTree', random_state=0)
svm = em.SVMMatcher(name='SVM', random_state=0)
rf = em.RFMatcher(name='RF', random_state=0)
lg = em.LogRegMatcher(name='LogReg', random_state=0)
ln = em.LinRegMatcher(name='LinReg')
# Generate a set of features
F = em.get_features_for_matching(A, B)
# List the feature names generated
F['feature_name']
# Convert the I into a set of feature vectors using F
H = em.extract_feature_vecs(I,
feature_table=F,
attrs_after='label',
show_progress=False)
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric='f1', random_state=0)
result['cv_stats']
# Split H into P and Q
PQ = em.split_train_test(H, train_proportion=0.5, random_state=0)
P = PQ['train']
Q = PQ['test']
# Debug RF matcher using GUI
em.vis_debug_rf(rf, P, Q,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label')
# Add a feature to do Jaccard on title + authors and add it to F
# Create a feature declaratively
sim = em.get_sim_funs_for_matching()
tok = em.get_tokenizers_for_matching()
feature_string = """jaccard(wspace((ltuple['title'] + ' ' + ltuple['authors']).lower()),
                            wspace((rtuple['title'] + ' ' + rtuple['authors']).lower()))"""
feature = em.get_feature_fn(feature_string, sim, tok)
# Add feature to F
em.add_feature(F, 'jac_ws_title_authors', feature)
# Print supported sim. functions
pd.DataFrame({'simfunctions':sorted(sim.keys())})
# Print supported tokenizers
pd.DataFrame({'tokenizers':sorted(tok.keys())})
F['feature_name']
# Convert I into feature vectors using updated F
H = em.extract_feature_vecs(I,
feature_table=F,
attrs_after='label',
show_progress=False)
# Check whether the updated F improves X (Random Forest)
result = em.select_matcher([rf], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric='f1', random_state=0)
result['cv_stats']
# Select the best matcher again using CV
result = em.select_matcher([dt, rf, svm, ln, lg], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric='f1', random_state=0)
result['cv_stats']
# Train using feature vectors from I
dt.fit(table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label')
# Convert J into a set of feature vectors using F
L = em.extract_feature_vecs(J, feature_table=F,
attrs_after='label', show_progress=False)
# Predict on L
predictions = dt.predict(table=L, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
append=True, target_attr='predicted', inplace=False)
# Evaluate the predictions
eval_result = em.eval_matches(predictions, 'label', 'predicted')
em.print_eval_summary(eval_result)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read input tables
Step2: Block tables to get candidate set
Step3: Debug blocking output
Step4: Match tuple pairs in candidate set
Step5: Selecting the best learning-based matcher using I
Step7: Debug X (Random Forest)
Step8: Compute accuracy of X (Decision Tree) on J
|
1,524
|
<ASSISTANT_TASK:>
Python Code:
from collections import defaultdict
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)

def splitArray(arr, N):
    mp = defaultdict(int)
    for i in range(N):
        mp[arr[i]] += 1
    G = 0
    for i in mp:
        G = gcd(G, mp[i])
    if G > 1:
        print("YES")
    else:
        print("NO")

if __name__ == "__main__":
    arr = [1, 2, 3, 4, 4, 3, 2, 1]
    n = len(arr)
    splitArray(arr, n)
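Why the gcd of the frequencies settles the question (an illustrative restatement, not part of the original code): the multiset splits into identical groups exactly when every frequency is divisible by some k >= 2, i.e. when the gcd of all frequencies exceeds 1.

```python
from collections import Counter
from functools import reduce
from math import gcd

def splittable(arr):
    # gcd of all element frequencies
    return reduce(gcd, Counter(arr).values()) > 1

yes = splittable([1, 2, 3, 4, 4, 3, 2, 1])  # True: two copies of {1, 2, 3, 4}
no  = splittable([1, 1, 2])                 # False: counts 2 and 1, gcd 1
```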
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,525
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
data = {'spike-2': [1,2,3], 'hey spke': [4,5,6], 'spiked-in': [7,8,9], 'no': [10,11,12]}
df = pd.DataFrame(data)
s = 'spike'
def g(df, s):
spike_cols = [col for col in df.columns if s in col and col != s]
return spike_cols
result = g(df.copy(),s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,526
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-3', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
1,527
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# use the seaborn-whitegrid style
plt.style.use('seaborn-whitegrid')
import numpy as np
fig = plt.figure()
ax = plt.axes()
fig = plt.figure()
ax = plt.axes()
x = np.linspace(0, 10, 1000)
ax.plot(x, np.sin(x));
plt.plot(x, np.sin(x));
plt.plot(x, np.sin(x))
plt.plot(x, np.cos(x));
plt.plot(x, np.sin(x - 0), color='blue') # specify color by name
plt.plot(x, np.sin(x - 1), color='g') # short color code (rgbcmyk)
plt.plot(x, np.sin(x - 2), color='0.75') # Grayscale between 0 and 1
plt.plot(x, np.sin(x - 3), color='#FFDD44') # Hex code (RRGGBB from 00 to FF)
plt.plot(x, np.sin(x - 4), color=(1.0,0.2,0.3)) # RGB tuple, values 0 to 1
plt.plot(x, np.sin(x - 5), color='chartreuse'); # all HTML color names supported
plt.plot(x, x + 0, linestyle='solid')
plt.plot(x, x + 1, linestyle='dashed')
plt.plot(x, x + 2, linestyle='dashdot')
plt.plot(x, x + 3, linestyle='dotted');
# symbols can also be used instead of the words
plt.plot(x, x + 4, linestyle='-') # solid
plt.plot(x, x + 5, linestyle='--') # dashed
plt.plot(x, x + 6, linestyle='-.') # dashdot
plt.plot(x, x + 7, linestyle=':'); # dotted
plt.plot(x, x + 0, '-g') # solid green
plt.plot(x, x + 1, '--c') # dashed cyan
plt.plot(x, x + 2, '-.k') # dashdot black
plt.plot(x, x + 3, ':r'); # dotted red
plt.plot(x, np.sin(x))
# set the x-axis range to (-1, 11) and the y-axis range to (-1.5, 1.5)
plt.xlim(-1, 11)
plt.ylim(-1.5, 1.5);
plt.plot(x, np.cos(x),'--y')
plt.axis([-1, 11, -1.5, 1.5])
plt.plot(x, np.sin(x))
plt.axis('tight');
plt.plot(x, np.sin(x))
plt.axis('equal');
plt.plot(x,np.sin(x),':r')
plt.title('Graph of the sin function')
plt.xlabel('x axis')
plt.ylabel('y axis');
plt.plot(x, np.sin(x), '-g', label='sin(x)')
plt.plot(x, np.cos(x), ':b', label='cos(x)')
plt.axis('equal')
plt.legend();
ax = plt.axes()
ax.plot(x, np.sin(x))
ax.set(xlim=(0, 10), ylim=(-2, 2),
xlabel='x', ylabel='sin(x)',
title='A Simple Plot');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For every Matplotlib plot, we start by creating a figure and an axes. In its simplest form, a figure and axes can be created as follows:
Step2: In Matplotlib, the figure (an instance of the class plt.Figure) can be thought of as a single container holding all the objects representing axes, graphics, text, and labels. The axes (an instance of the class plt.Axes) is what we saw above: a bounding box with ticks and labels that will eventually contain the plot elements making up our visualization. Throughout this book we typically use the variable name fig to refer to a figure instance, and ax to refer to an axes instance or a group of axes instances.
Step3: Alternatively, we can use the pylab interface and let the figure and axes be created for us in the background:
Step4: If you want to draw several curves in a single figure, you can simply call plot() multiple times:
Step5: That is all there is to plotting simple functions in Matplotlib! We will now dig into some more details about how to control the appearance of the axes and lines.
Step6: If no color is specified, Matplotlib will automatically cycle through a set of default colors for multiple lines.
Step7: If you would like to be more concise, these linestyle and color codes can be combined into a single non-keyword argument to the plt.plot() function:
Step8: These single-character color codes reflect the standard abbreviations in the RGB (Red/Green/Blue) and CMYK (Cyan/Magenta/Yellow/blacK) color systems, commonly used for digital color graphics.
Step9: A useful related method is plt.axis() (note here the potential confusion between axes with an e and axis with an i). The plt.axis() method allows you to set the x and y limits with a single call, by passing a list that specifies [xmin, xmax, ymin, ymax]:
Step10: The plt.axis() method goes even beyond this, allowing you to do things like automatically tighten the bounds around the current plot:
Step11: It allows even higher-level specifications, such as ensuring an equal aspect ratio so that on your screen one unit in x is equal to one unit in y:
Step12: For more information on axis limits and the other capabilities of the plt.axis method, refer to the plt.axis docstring.
Step13: The position, size, and style of these labels can be adjusted using optional arguments to the functions. For more information, see the Matplotlib documentation and the docstring of each function.
Step14: As you can see, the plt.legend() function keeps track of the line style and color, and matches these with the correct label. More information on specifying and formatting plot legends can be found in the plt.legend docstring; additionally, we will cover some more advanced legend options in "Customizing Plot Legends".
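As a compact recap, the pieces above — figure/axes creation, format strings, axis limits, labels, and the legend — can be combined into one short, self-contained sketch (using the non-interactive Agg backend so no display is needed):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend: render without a display
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 1000)

fig, ax = plt.subplots()
ax.plot(x, np.sin(x), "-g", label="sin(x)")    # solid green line
ax.plot(x, np.cos(x), ":b", label="cos(x)")    # dotted blue line
ax.set(xlim=(0, 10), ylim=(-2, 2),
       xlabel="x", ylabel="y", title="A Simple Plot")
ax.legend(loc="best")
fig.savefig("simple_plot.png")
```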
|
1,528
|
<ASSISTANT_TASK:>
Python Code:
import compliance_checker
print(compliance_checker.__version__)
# First import the compliance checker and test that it is installed properly.
from compliance_checker.runner import CheckSuite, ComplianceChecker
# Load all available checker classes.
check_suite = CheckSuite()
check_suite.load_all_available_checkers()
# Path to the Scripps Pier Data.
# See https://github.com/Unidata/netcdf-c/issues/1299
# for the reason we need to append `#fillmismatch` to the URL.
url = "http://data.ioos.us/thredds/dodsC/deployments/rutgers/ru29-20150623T1046/ru29-20150623T1046.nc3.nc#fillmismatch"
output_file = "buoy_testCC.txt"
return_value, errors = ComplianceChecker.run_checker(
ds_loc=url,
checker_names=["cf", "acdd"],
verbose=True,
criteria="normal",
skip_checks=None,
output_filename=output_file,
output_format="text",
)
with open(output_file, "r") as f:
print(f.read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Running Compliance Checker on the Scripps Pier shore station data
|
1,529
|
<ASSISTANT_TASK:>
Python Code:
import gmql as gl
dataset1 = gl.get_example_dataset("Example_Dataset_1")
dataset2 = gl.get_example_dataset("Example_Dataset_2")
dataset1.schema
dataset2.schema
filtered_dataset1 = dataset1.reg_select((dataset1.chr == 'chr3') & (dataset1.start >= 30000))
filtered_dataset_2 = dataset2[dataset2['antibody_target'] == 'CTCF']
dataset_join = dataset1.join(dataset2, [gl.DLE(0)])
query_result = dataset_join.materialize()
query_result.regs.head()
query_result.meta.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading datasets
Step2: The GMQLDataset
Step3: Filtering the dataset regions based on a predicate
Step4: From this operation we can learn several things about the GMQLDataset data structure. Each GMQLDataset has a set of methods and fields which can be used to build GMQL queries. For example, in the previous statement we have
Step5: Notice that the notation for selecting the samples using metadata is the same as the one for filtering Pandas DataFrames.
Step6: To understand the concept of genometric predicate please visit the documentation of the library.
Step7: The GDataframe
Step8: Metadata
|
1,530
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
# Now nexa modules
import sys
sys.path.append("../")
from visualization.data_clustering import visualize_data_cluster_text_to_image
# First we load the file
file_location = '../results_database/text_wall_street_big.hdf5'
run_name = '/low-resolution'
f = h5py.File(file_location, 'r')
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
# Load the particular example
Nspatial_clusters = 3
Ntime_clusters = 15
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name +parameters_string]
cluster = 2
data_centers = np.arange(1, 15, 1)
for data_center in data_centers:
fig = visualize_data_cluster_text_to_image(nexa, f, run_name,
cluster, data_center)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the files
Step2: First visualize the cluster
|
1,531
|
<ASSISTANT_TASK:>
Python Code:
from pynq import Overlay
Overlay("vbx.bit").download()
from pynq.drivers import HDMI
from pynq.drivers.video import VMODE_1920x1080,VMODE_1280x720
vmode=VMODE_1280x720
#vmode=VMODE_1920x1080
hdmi_out = HDMI('out',video_mode=vmode)
hdmi_in = HDMI('in', video_mode=vmode,frame_list=hdmi_out.frame_list)
hdmi_in.start()
hdmi_out.start()
from IPython.display import Image
frame = hdmi_in.frame()
orig_img_path = '/home/xilinx/jupyter_notebooks/examples/' + \
'data/mxp_filters.jpg'
frame.save_as_jpeg(orig_img_path)
Image(filename=orig_img_path)
hdmi_in.frame_index_next()
import time
import cv2
import numpy as np
num_frames = 20
start = time.time()
for i in range (num_frames):
np_frame= (np.frombuffer(hdmi_in.frame_raw(),
dtype=np.uint8)).reshape(1080,1920,3)
gray_frame = cv2.cvtColor(np_frame, cv2.COLOR_RGB2GRAY)
sobel_frame=cv2.Sobel(gray_frame,ddepth=-1,dx=1,dy=1)
sobel_rgb_frame=cv2.cvtColor(sobel_frame, cv2.COLOR_GRAY2RGB)
hdmi_out.frame_raw(bytearray(sobel_rgb_frame.tobytes()))
end = time.time()
print("Frames per second: " + str((num_frames) / (end - start)))
from IPython.display import Image
orig_img_path = '/home/xilinx/jupyter_notebooks/examples/' + \
'data/opencv_filters.jpg'
hdmi_out.frame().save_as_jpeg(orig_img_path)
Image(filename=orig_img_path)
from pynq.iop.mxp_filter import MXP_Filter
filter_text=
void filter(uint8_t* in_vid,uint8_t* out_vid,int rows, int cols,int pitch){
int rowlen=cols;
// save scratchpad allocator state
vbx_sp_push();
vbx_ubyte_t* line_in= vbx_sp_malloc(rowlen*3);
vbx_word_t* v_tmp0=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* v_tmp1=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* v_tmp2=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* luma_top=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* luma_mid=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* luma_bot=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* sobel_top=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* sobel_mid=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* sobel_bot=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* grad_x=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
vbx_word_t* grad_y=vbx_sp_malloc(rowlen*sizeof(vbx_word_t));
int row;
for(row=0;row<rows;row++){
//load a row into scratchpad
vbx_dma_to_vector(line_in,in_vid+row*pitch*3,rowlen*3);
        //extract each channel into separate vectors
        //multiply by a factor for combining into luma
vbx_set_vl(1);
vbx_set_2D(rowlen,sizeof(vbx_word_t),3,3);
vbx_2D(SVBWU,VMUL,(vbx_uword_t*)v_tmp0,66,line_in+0);
vbx_2D(SVBWU,VMUL,(vbx_uword_t*)v_tmp1,129,line_in+1);
vbx_2D(SVBWU,VMUL,(vbx_uword_t*)v_tmp2,25,line_in+2);
//convert to luma
vbx_set_vl(rowlen);
vbx(VVW,VADD,luma_bot,v_tmp0,v_tmp1);
vbx(VVW,VADD,luma_bot,luma_bot,v_tmp2);
vbx(SVW,VSHR,luma_bot,8,luma_bot);
/// Apply [1 2 1] to input luma row
vbx_set_vl(rowlen-1);
vbx(VVW,VADD,sobel_bot,luma_bot,luma_bot+1);
vbx_set_vl(rowlen-2);
vbx(VVW,VADD,sobel_bot,sobel_bot,sobel_bot+1);
if (row>1){
// |grad_y| = |sobel_top - sobel_bot|;
vbx(VVW,VABSDIFF,grad_y,sobel_top,sobel_bot);
//grad_x = | 1 0 -1 |
// | 2 0 -2 | * luma
        //         | 1 0 -1 |
vbx_set_vl(rowlen);
vbx(VVW,VADD,v_tmp0,luma_top,luma_mid);
vbx(VVW,VADD,v_tmp1,luma_bot,luma_mid);
vbx(VVW,VADD,v_tmp0,v_tmp0,v_tmp1);
vbx_set_vl(rowlen-2);
vbx(VVW,VABSDIFF,grad_x,v_tmp0,v_tmp0+2);
        //sum gradients and threshold at 255
vbx_word_t* row_out=grad_x;
vbx(VVW,VADD,row_out,grad_x,grad_y);
vbx(SVW,VSUB,v_tmp0,255,row_out);
vbx(SVW,VCMV_LTZ,row_out,255,v_tmp0);
//copy to other bytes of word
vbx(SVW,VMUL,row_out,0x010101,row_out);
//pack into RGB
vbx_set_vl(3);
vbx_set_2D(rowlen-2,3,sizeof(vbx_word_t),0);
vbx_2D(VVB,VMOV,(vbx_byte_t*)row_out,(vbx_byte_t*)row_out,0);
vbx_byte_t *out=out_vid+((row-1)*pitch + 1)*3;
vbx_dma_to_host(out,row_out,(rowlen-2)*3);
}
//rotate rows
{
vbx_word_t* aa;
aa=luma_top;
luma_top=luma_mid;
luma_mid=luma_bot;
luma_bot=aa;
aa=sobel_top;
sobel_top=sobel_mid;
sobel_mid=sobel_bot;
sobel_bot=aa;
}
}
//blackout outside pixels
vbx_set_vl(rowlen*3);
vbx(SVBU,VMOV,line_in,0,0);
//top row
vbx_dma_to_host(out_vid,line_in,rowlen*3);
//bottom row
vbx_dma_to_host(out_vid+(rows-1)*rowlen*3,line_in,rowlen*3);
//left column
vbx_dma_to_host_2D(out_vid,line_in,3,cols,pitch*3,0);
//right column
vbx_dma_to_host_2D(out_vid+(rowlen-1)*3,line_in,3,cols,rowlen*3,0);
//restore scratchpad allocator state
vbx_sp_pop();
//make sure all dma operations are complete before exiting
vbx_sync();
}
filter_bin=MXP_Filter(filter_text)
import time
filter_bin.start()
start = time.time()
num_frames=60
for i in range (num_frames):
in_video = hdmi_in.frame_phyaddr()
out_video= hdmi_out.frame_phyaddr()
filter_bin.call(in_video,out_video,720,1280,1920)
end = time.time()
filter_bin.stop()
print("Frames per second: " + str((num_frames) / (end - start)))
hdmi_out.stop()
hdmi_in.stop()
del hdmi_in, hdmi_out
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1
Step1: Step 2
Step2: Step 3
Step3: 2. Applying Sobel filters on HDMI input with Python
Step 1
Step4: Step 2
Step5: Step 3
Step7: 3. Applying Sobel filters on HDMI input with MXP
Step 1
Step8: Step 2. Call mxp filter and compare timing
Step9: Notice how much faster the MXP version is than the Python implementation. When you look at the HDMI output you can start to notice pretty bad tearing in the image; you probably want to add some more sophisticated frame buffer management if you want to fix this.
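For reference, the |grad_x| + |grad_y| Sobel magnitude that both kernels above compute can be sketched in pure NumPy (a hypothetical host-side check, not part of the PYNQ notebook; the one-pixel border is left black and values saturate at 255, as in the MXP code):

```python
import numpy as np

def sobel_mag(gray):
    # Accumulate the two 3x3 Sobel responses by shifting the image,
    # then combine as |gx| + |gy|, saturated at 255, border left black.
    kx = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=np.int64)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2), dtype=np.int64)
    gy = np.zeros((h - 2, w - 2), dtype=np.int64)
    for dy in range(3):
        for dx in range(3):
            win = gray[dy:dy + h - 2, dx:dx + w - 2].astype(np.int64)
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    out = np.zeros((h, w), dtype=np.uint8)
    out[1:-1, 1:-1] = np.minimum(np.abs(gx) + np.abs(gy), 255)
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[:, 3:] = 100              # vertical step edge between columns 2 and 3
edges = sobel_mag(img)
```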
|
1,532
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.svm import SVR
from sklearn import cross_validation
rng = np.random.RandomState(42)
n_samples = 200
kernels = ['linear', 'poly', 'rbf']
true_fun = lambda X: X ** 3
X = np.sort(5 * (rng.rand(n_samples) - .5))
y = true_fun(X) + .01 * rng.randn(n_samples)
plt.figure(figsize=(14, 5))
for i in range(len(kernels)):
ax = plt.subplot(1, len(kernels), i + 1)
plt.setp(ax, xticks=(), yticks=())
model = SVR(kernel=kernels[i], C=5)
model.fit(X[:, np.newaxis], y)
# Evaluate the models using crossvalidation
scores = cross_validation.cross_val_score(model,
X[:, np.newaxis], y, scoring="mean_squared_error", cv=10)
X_test = np.linspace(3 * -.5, 3 * .5, 100)
plt.plot(X_test, model.predict(X_test[:, np.newaxis]), label="Model")
plt.plot(X_test, true_fun(X_test), label="True function")
plt.scatter(X, y, label="Samples")
plt.xlabel("x")
plt.ylabel("y")
plt.xlim((-3 * .5, 3 * .5))
plt.ylim((-1, 1))
plt.legend(loc="best")
plt.title("Kernel {}\nMSE = {:.2e}(+/- {:.2e})".format(
kernels[i], -scores.mean(), scores.std()))
plt.show()
import numpy as np
import matplotlib.pyplot as plt
from sklearn import cross_validation
rng = np.random.RandomState(0)
n_samples = 200
true_fun = lambda X: X ** 3
X = np.sort(5 * (rng.rand(n_samples) - .5))
y = true_fun(X) + .01 * rng.randn(n_samples)
X = X[:, None]
y = y
f, axarr = plt.subplots(1, 3)
axarr[0].scatter(X[::20], y[::20])
axarr[0].set_xlim((-3 * .5, 3 * .5))
axarr[0].set_ylim((-1, 1))
axarr[1].scatter(X[::10], y[::10])
axarr[1].set_xlim((-3 * .5, 3 * .5))
axarr[1].set_ylim((-1, 1))
axarr[2].scatter(X, y)
axarr[2].set_xlim((-3 * .5, 3 * .5))
axarr[2].set_ylim((-1, 1))
plt.show()
from sklearn.learning_curve import learning_curve
from sklearn.svm import SVR
# This is actually negative MSE!
training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='linear'), X, y, cv=10,
scoring="mean_squared_error",
train_sizes=[.6, .7, .8, .9, 1.])
# Use the negative because we want to maximize score
print(train_scores.mean(axis=1))
plt.plot(training_sizes, train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, test_scores.mean(axis=1), label="test scores")
#plt.ylim((0, 50))
plt.legend(loc='best')
from sklearn.learning_curve import learning_curve
from sklearn.svm import SVR
training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='rbf'), X, y, cv=10,
scoring="mean_squared_error",
train_sizes=[.6, .7, .8, .9, 1.])
# Use the negative because we want to minimize squared error
plt.plot(training_sizes, train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, test_scores.mean(axis=1), label="test scores")
plt.legend(loc='best')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Learning Curves
Step2: They all come from the same underlying process. But if you were asked to make a prediction, you would be more likely to draw a straight line for the left-most one, as there are only very few datapoints, and no real rule is apparent. For the dataset in the middle, some structure is recognizable, though the exact shape of the true function is maybe not obvious. With even more data on the right hand side, you would probably be very comfortable with drawing a curved line with a lot of certainty.
Step3: You can see that for the model with kernel = linear, the validation score doesn't really improve as more data is given.
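The same plateau can be reproduced without sklearn — an illustrative sketch (names and data are hypothetical, not from the notebook) that fits an underparameterised straight line to cubic data at growing training sizes; because a degree-1 model cannot represent x**3, the held-out MSE stays stuck at the bias floor instead of shrinking toward the noise level:

```python
import numpy as np

rng = np.random.RandomState(42)
x = 5 * (rng.rand(400) - 0.5)
y = x ** 3 + 0.01 * rng.randn(400)
x_train, y_train = x[:300], y[:300]
x_test, y_test = x[300:], y[300:]

test_mse = []
for n in (30, 100, 300):
    coeffs = np.polyfit(x_train[:n], y_train[:n], deg=1)   # straight-line fit
    pred = np.polyval(coeffs, x_test)
    test_mse.append(np.mean((pred - y_test) ** 2))
```

Every entry in `test_mse` sits far above the 1e-4 noise variance, no matter how many points are used: the model's bias, not the amount of data, is the bottleneck.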
|
1,533
|
<ASSISTANT_TASK:>
Python Code:
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName("Python Spark SQL basic example") \
.master("spark://helk-spark-master:7077") \
.enableHiveSupport() \
.getOrCreate()
spark
first_df = spark.range(10).toDF("numbers")
first_df.show()
dog_data=[['Pedro','Doberman',3],['Clementine','Golden Retriever',8],['Norah','Great Dane',6]\
,['Mabel','Australian Shepherd',1],['Bear','Maltese',4],['Bill','Great Dane',10]]
dog_df=spark.createDataFrame(dog_data, ['name','breed','age'])
dog_df.show()
dog_df.schema
dog_df.printSchema()
dog_df.select("name").show()
dog_df.select(dog_df["name"]).show()
dog_df.filter(dog_df["age"] > 4).show()
dog_df.groupBy(dog_df["age"]).count().show()
dog_df.createOrReplaceTempView("dogs")
sql_dog_df = spark.sql("SELECT * FROM dogs")
sql_dog_df.show()
sql_dog_df = spark.sql("SELECT * FROM dogs WHERE name='Pedro'")
sql_dog_df.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is a SparkSession?
Step2: Check the SparkSession variable
Step3: What is a Dataframe?
Step4: Create another Dataframe
Step5: Check the Dataframe schema
Step6: printSchema()
Step7: Access Dataframe Columns
Step8: Access Dataframe's columns by indexing (df['name']).
Step9: Filter Dataframe
Step10: Group Dataframe
Step11: Run SQL queries on your Dataframe
|
1,534
|
<ASSISTANT_TASK:>
Python Code:
reactions = [
('k', {'A': 1}, {'B': 1, 'A': -1}),
]
names, params = 'A B'.split(), ['k']
D = [8e-9, 8e-9] # He diffusion constant in water at room temperature
import sympy as sym
x, h = sym.symbols('x h')
d2fdx2 = sym.Function('f')(x).diff(x, 2)
d2fdx2.as_finite_difference([x-h, x, x+h], x).factor()
# %load ../scipy2017codegen/odesys_diffusion.py
from itertools import chain
import numpy as np
import matplotlib.pyplot as plt
from scipy2017codegen.odesys import ODEsys
class MOLsys(ODEsys):
    """System of ODEs based on method of lines on the interval x = [0, x_end]"""
def __init__(self, *args, **kwargs):
self.x_end = kwargs.pop('x_end')
self.n_lines = kwargs.pop('n_lines')
self.D = kwargs.pop('D')
self.dx = self.x_end / self.n_lines
super(MOLsys, self).__init__(*args, **kwargs)
def f_eval(self, y, t, *params):
f_out = np.empty(self.ny*self.n_lines)
for i in range(self.n_lines):
slc = slice(i*self.ny, (i+1)*self.ny)
y_bis = self.second_derivatives_spatial(i, y, f_out[slc])
f_out[slc] *= self.D
f_out[slc] += self.lambdified_f(*chain(y[slc], params))
return f_out
def central_reference_bin(self, i):
return np.clip(i, 1, self.ny - 2)
def j_eval(self, y, t, *params):
j_out = np.zeros((self.ny*self.n_lines, self.ny*self.n_lines)) # dense matrix
for i in range(self.n_lines):
slc = slice(i*self.ny, (i+1)*self.ny)
j_out[slc, slc] = self.lambdified_j(*chain(y[slc], params))
k = self.central_reference_bin(i)
for j in range(self.ny):
j_out[i*self.ny + j, (k-1)*self.ny + j] += self.D[j]/self.dx**2
j_out[i*self.ny + j, (k )*self.ny + j] += -2*self.D[j]/self.dx**2
j_out[i*self.ny + j, (k+1)*self.ny + j] += self.D[j]/self.dx**2
return j_out
def second_derivatives_spatial(self, i, y, out):
k = self.central_reference_bin(i)
for j in range(self.ny):
left = y[(k-1)*self.ny + j]
cent = y[(k )*self.ny + j]
rght = y[(k+1)*self.ny + j]
out[j] = (left - 2*cent + rght)/self.dx**2
def integrate(self, tout, y0, params=(), **kwargs):
y0 = np.array(np.vstack(y0).T.flat)
yout, info = super(MOLsys, self).integrate(tout, y0, params, **kwargs)
return yout.reshape((tout.size, self.n_lines, self.ny)).transpose((0, 2, 1)), info
def x_centers(self):
return np.linspace(self.dx/2, self.x_end - self.dx/2, self.n_lines)
def plot_result(self, tout, yout, info=None, ax=None):
ax = ax or plt.subplot(1, 1, 1)
x_lines = self.x_centers()
for i, t in enumerate(tout):
for j in range(self.ny):
c = [0.0, 0.0, 0.0]
c[j] = t/tout[-1]
plt.plot(x_lines, yout[i, j, :], color=c)
self.print_info(info)
from scipy2017codegen.chem import mk_rsys
molsys = mk_rsys(MOLsys, reactions, names, params, x_end=0.01, n_lines=50, D=D)
xc = molsys.x_centers()
xm = molsys.x_end/2
A0 = np.exp(-1e6*(xc-xm)**2)
B0 = np.zeros_like(A0)
tout = np.linspace(0, 30, 40)
yout, info = molsys.integrate(tout, [A0, B0], [0.00123])
yout.shape
%matplotlib inline
molsys.plot_result(tout[::10], yout[::10, ...], info)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The diffusion follows Fick's law of diffusion
Step2: We will solve the partial differential equation (PDE) using method of lines. We discretize space into a series of bins (lines), in each of these bins we calculate the contribution of chemical reactions to the rate of change, and then add the diffusion contribution based on a finite difference estimate of the second derivative.
Step4: In this case, we are dealing with an equidistant grid and you may very well recognize this result from standard text books (it is actually quite easy to derive from the definition of the derivative).
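As a quick numerical sanity check of that stencil (illustrative, not part of the notebook): the centered difference (f(x-h) - 2 f(x) + f(x+h)) / h**2 converges to f''(x) with O(h**2) error, e.g. for f = sin, where f'' = -sin:

```python
import numpy as np

def second_diff(f, x, h):
    # Centered finite-difference estimate of f''(x) on an equidistant grid,
    # i.e. the same stencil used for the diffusion term above.
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2

x0 = 1.0
exact = -np.sin(x0)                        # d^2/dx^2 sin(x) = -sin(x)
err_coarse = abs(second_diff(np.sin, x0, 1e-2) - exact)
err_fine = abs(second_diff(np.sin, x0, 1e-3) - exact)
```

Shrinking h by 10x shrinks the error by roughly 100x, as expected for a second-order scheme.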
|
1,535
|
<ASSISTANT_TASK:>
Python Code:
# Import required libraries
import requests
import json
from __future__ import division
import math
import csv
import matplotlib.pyplot as plt
# set key
key="be8992a420bfd16cf65e8757f77a5403:8:44644296"
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key}
# make request
r = requests.get(base_url+response_format, params=search_params)
print(r.url)
# set date parameters here
# Uncomment to test
# r = requests.get(base_url+response_format, params=search_params)
# print(r.url)
# set page parameters here
# Uncomment to test
# r = requests.get(base_url+response_format, params=search_params)
# print(r.url)
# Inspect the content of the response, parsing the result as text
response_text= r.text
print(response_text[:1000])
# Convert JSON response to a dictionary
data = json.loads(response_text)
# data
print(data.keys())
# this is boring
data['status']
# so is this
data['copyright']
# this is what we want!
# data['response']
data['response'].keys()
data['response']['meta']['hits']
# data['response']['docs']
type(data['response']['docs'])
docs = data['response']['docs']
docs[0]
# set key
key="ef9055ba947dd842effe0ecf5e338af9:15:72340235"
# set base url
base_url="http://api.nytimes.com/svc/search/v2/articlesearch"
# set response format
response_format=".json"
# set search parameters
search_params = {"q":"Duke Ellington",
"api-key":key,
"begin_date":"20150101", # date must be in YYYYMMDD format
"end_date":"20151231"}
# make request
r = requests.get(base_url+response_format, params=search_params)
# convert to a dictionary
data=json.loads(r.text)
# get number of hits
hits = data['response']['meta']['hits']
print("number of hits: ", str(hits))
# get number of pages
pages = int(math.ceil(hits/10))
# make an empty list where we'll hold all of our docs for every page
all_docs = []
# now we're ready to loop through the pages
for i in range(pages):
print("collecting page", str(i))
# set the page parameter
search_params['page'] = i
# make request
r = requests.get(base_url+response_format, params=search_params)
# get text and convert to a dictionary
data=json.loads(r.text)
# get just the docs
docs = data['response']['docs']
# add those docs to the big list
all_docs = all_docs + docs
len(all_docs)
# DEFINE YOUR FUNCTION HERE
# uncomment to test
# get_api_data("Duke Ellington", 2014)
all_docs[0]
def format_articles(unformatted_docs):
'''
This function takes in a list of documents returned by the NYT api
and parses the documents into a list of dictionaries,
with 'id', 'header', and 'date' keys
'''
formatted = []
for i in unformatted_docs:
dic = {}
dic['id'] = i['_id']
dic['headline'] = i['headline']['main']
dic['date'] = i['pub_date'][0:10] # cutting time of day.
formatted.append(dic)
return(formatted)
all_formatted = format_articles(all_docs)
all_formatted[:5]
def format_articles(unformatted_docs):
'''
This function takes in a list of documents returned by the NYT api
and parses the documents into a list of dictionaries,
    with 'id', 'header', 'date', 'lead paragraph' and 'word count' keys
'''
formatted = []
for i in unformatted_docs:
dic = {}
dic['id'] = i['_id']
dic['headline'] = i['headline']['main']
dic['date'] = i['pub_date'][0:10] # cutting time of day.
# YOUR CODE HERE
formatted.append(dic)
return(formatted)
# uncomment to test
# all_formatted = format_articles(all_docs)
# all_formatted[:5]
keys = all_formatted[0].keys()
# writing the rest
with open('all-formated.csv', 'w') as output_file:
dict_writer = csv.DictWriter(output_file, keys)
dict_writer.writeheader()
dict_writer.writerows(all_formatted)
# YOUR CODE HERE
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Constructing API GET Request
Step2: You often want to send some sort of data in the URL’s query string. This data tells the API what information you want. In our case, we want articles about Duke Ellington. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. In addition to the search term q, we have to put in the api-key term.
Step3: Now we're ready to make the request. We use the .get method from the requests library to make an HTTP GET Request.
Step4: Now, we have a response object called r. We can get all the information we need from this object. For instance, we can see that the URL has been correctly encoded by printing the URL. Click on the link to see what happens.
Step5: Click on that link to see it returns!
Step6: Challenge 2
Step7: 2. Parsing the response text
Step8: What you see here is JSON text, encoded as unicode text. JSON stands for "Javascript object notation." It has a very similar structure to a python dictionary -- both are built on key/value pairs. This makes it easy to convert JSON response to a python dictionary.
Step9: That looks intimidating! But it's really just a big dictionary. Let's see what keys we got in there.
Step10: That looks what we want! Let's put that in it's own variable.
Step11: 3. Putting everything together to get all the articles.
Step12: Challenge 3
Step13: 4. Formatting
Step14: This is all great, but it's pretty messy. What we'd really like to have, eventually, is a CSV, with each row representing an article, and each column representing something about that article (header, date, etc.). As we saw before, the best way to do this is to make a list of dictionaries, with each dictionary representing an article and each key representing a field of metadata from that article (e.g. headline, date, etc.). We can do this with a custom function:
Step15: Challenge 4 Collect more fields
Step16: 5. Exporting
Step17: Capstone Challenge
|
1,536
|
<ASSISTANT_TASK:>
Python Code:
import keras.backend as K
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from kaggle_data import load_data, preprocess_data, preprocess_labels
X_train, labels = load_data('../data/kaggle_ottogroup/train.csv', train=True)
X_train, scaler = preprocess_data(X_train)
Y_train, encoder = preprocess_labels(labels)
X_test, ids = load_data('../data/kaggle_ottogroup/test.csv', train=False)
X_test, _ = preprocess_data(X_test, scaler)
nb_classes = Y_train.shape[1]
print(nb_classes, 'classes')
dims = X_train.shape[1]
print(dims, 'dims')
feats = dims
training_steps = 25
x = K.placeholder(dtype="float", shape=X_train.shape)
target = K.placeholder(dtype="float", shape=Y_train.shape)
# Set model weights
W = K.variable(np.random.rand(dims, nb_classes))
b = K.variable(np.random.rand(nb_classes))
# Define model and loss
y = K.dot(x, W) + b
loss = K.categorical_crossentropy(y, target)
activation = K.softmax(y) # Softmax
lr = K.constant(0.01)
grads = K.gradients(loss, [W,b])
updates = [(W, W-lr*grads[0]), (b, b-lr*grads[1])]
train = K.function(inputs=[x, target], outputs=[loss], updates=updates)
# Training
loss_history = []
for epoch in range(training_steps):
current_loss = train([X_train, Y_train])[0]
loss_history.append(current_loss)
if epoch % 20 == 0:
print("Loss: {}".format(current_loss))
loss_history = [np.mean(lh) for lh in loss_history]
# plotting
plt.plot(range(len(loss_history)), loss_history, 'o', label='Logistic Regression Training phase')
plt.ylabel('cost')
plt.xlabel('epoch')
plt.legend()
plt.show()
# Placeholders and variables
x = K.placeholder()
target = K.placeholder()
w = K.variable(np.random.rand())
b = K.variable(np.random.rand())
# Define model and loss
# %load ../solutions/sol_2311.py
# %load ../solutions/sol_2312.py
train = K.function(inputs=[x, target], outputs=[loss], updates=updates)
# Generate data
np_x = np.random.rand(1000)
np_target = 0.96*np_x + 0.24
# Training
loss_history = []
for epoch in range(200):
current_loss = train([np_x, np_target])[0]
loss_history.append(current_loss)
if epoch % 20 == 0:
print("Loss: %.03f, w, b: [%.02f, %.02f]" % (current_loss, K.eval(w), K.eval(b)))
# Plot loss history
# %load ../solutions/sol_2313.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Your Turn
Step2: Notes
Step3: Then, given the gradient of MSE with respect to w and b, we can define how we update the parameters via SGD
Step4: The whole model can be encapsulated in a function, which takes as input x and target, returns the current loss value and updates its parameter according to updates.
Step5: Training
Step6: We can also plot the loss history
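The SGD update described in Step 3 can be sketched in plain Python before wiring it into `K.function`; the data points, learning rate, and iteration count below are illustrative assumptions, with the target line y = 0.96x + 0.24 borrowed from the exercise.

```python
# Plain-Python sketch of full-batch gradient descent on MSE for a line fit.
xs = [0.0, 1.0, 2.0, 3.0]
targets = [0.96 * x + 0.24 for x in xs]  # same line as in the exercise

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    n = len(xs)
    # Gradients of mean((w*x + b - t)^2) with respect to w and b
    grad_w = sum(2 * (w * x + b - t) * x for x, t in zip(xs, targets)) / n
    grad_b = sum(2 * (w * x + b - t) for x, t in zip(xs, targets)) / n
    # The update rule: step against the gradient, scaled by the learning rate
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))
```

The pair of `-= lr * grad` lines is exactly what the `updates = [(W, W-lr*grads[0]), (b, b-lr*grads[1])]` list expresses symbolically in the Keras-backend version.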
|
1,537
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q sciunit
import sciunit, random
from sciunit import Test
from sciunit.capabilities import Runnable
from sciunit.scores import BooleanScore
from sciunit.models import RunnableModel
from sciunit.models.backends import register_backends, Backend
class RandomNumBackend(Backend):
'''generate a random integer between min and max'''
def set_run_params(self, **run_params):
# get min from run_params, if not exist, then 0.
self.min = run_params.get('min', 0)
# get max from run_params, if not exist, then self.min + 100.
self.max = run_params.get('max', self.min + 100)
def _backend_run(self):
# generate and return random integer between min and max.
return random.randint(self.min, self.max)
class RandomNumModel(RunnableModel):
    """A model that produces a random number via its backend."""
def run(self):
self.results = self._backend.backend_run()
class RangeTest(Test):
    """Tests whether the model's prediction falls within the observed range."""
# Default Runnable Capability for RunnableModel
required_capabilities = (Runnable,)
# This test's 'judge' method will return a BooleanScore.
score_type = BooleanScore
def generate_prediction(self, model):
model.run()
return model.results
def compute_score(self, observation, prediction):
score = BooleanScore(
observation['min'] <= prediction and observation['max'] >= prediction
)
return score
model = RandomNumModel("model 1")
register_backends({"Random Number": RandomNumBackend})
model.set_backend("Random Number")
model.set_run_params(min=1, max=10)
observation = {'min': 1, 'max': 10}
oneToTenTest = RangeTest(observation, "test 1")
score = oneToTenTest.judge(model)
print(score)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Besides the usual model in previous sections, let’s create a model that runs a Backend instance to simulate and obtain results.
Step4: Let’s define subclasses of SciUnit Backend, Test, and Model.
Step5: Let’s define the model instance named model 1.
Step6: We must register any backend instance in order to use it in model instances.
Step7: Next, create an observation that expects the generated random integer to fall between 1 and 10, and a test instance that uses the observation against the model
Step8: print the score, and we can see the result.
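The backend-registry pattern these steps describe can be sketched without sciunit at all; every class and registry name below is illustrative, not part of sciunit's API.

```python
import random

# A dependency-free sketch: backends are registered under a name, models
# look up and run a named backend, and a test scores the result.
BACKENDS = {}

def register_backend(name, cls):
    BACKENDS[name] = cls

class RandomBackend:
    def __init__(self, lo=1, hi=10):
        self.lo, self.hi = lo, hi
    def run(self):
        return random.randint(self.lo, self.hi)

register_backend("Random Number", RandomBackend)

class Model:
    def __init__(self, backend_name, **params):
        self.backend = BACKENDS[backend_name](**params)
    def run(self):
        return self.backend.run()

def range_test(model, observation):
    # Analogue of RangeTest.compute_score: is the prediction in range?
    prediction = model.run()
    return observation["min"] <= prediction <= observation["max"]

model = Model("Random Number", lo=1, hi=10)
print(range_test(model, {"min": 1, "max": 10}))
```

Separating the backend (how results are produced) from the test (how they are judged) is the same design choice sciunit makes, which is why swapping backends requires no change to the test.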
|
1,538
|
<ASSISTANT_TASK:>
Python Code:
workDir = "/home/nick/notebook/SIPSim/t/M.bark_M.ext/"
import os
import sys
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
!cd $workDir; \
seqDB_tools accession-GI2fasta < M.barkeri_refseq.txt > M.barkeri.fna
!cd $workDir; \
seqDB_tools accession-GI2fasta < M.extorquens_AM1_refseq.txt > M.extorquens_AM1.fna
# renaming genome sequences
!cd $workDir; \
find . -name "*_rn.fna" |\
xargs -I % rm -f %
!cd $workDir; \
find . -name "*.fna" |\
perl -pe 's/\.fna$//' | \
xargs -P 2 -I % bash -c \
"SIPSim renameGenomes %.fna > %_rn.fna"
# list of all genomes files and their associated names
!cd $workDir; \
find . -name "*_rn.fna" | \
perl -pe 's/.+\///' | \
perl -pe 's/(.+)(\.[^.]+)/\$1\t\$1\$2/' > genomes_all_list.txt
!cd $workDir; head genomes_all_list.txt
!cd $workDir; \
SIPSim indexGenomes genomes_all_list.txt \
--np 2 > index_log.txt
!cd $workDir; \
SIPSim gradientComms \
--n_comm 1 \
--abund_dist uniform \
--abund_dist_p low:1,high:1 \
genomes_all_list.txt > comm-n1-unif.txt
!cd $workDir; tail comm-n1-unif.txt
# making config file
config = """
[library 1]
# baseline: no incorp
[[intraPopDist 1]]
distribution = uniform
[[[start]]]
[[[[interPopDist 1]]]]
distribution = uniform
start = 0
end = 0
[[[end]]]
[[[[interPopDist 1]]]]
distribution = uniform
start = 0
end = 0
"""
outfile = os.path.join(workDir, 'incorp.config')
outf = open(outfile, 'w')
outf.write(config)
outf.close()
!cd $workDir; \
SIPSim isoIncorp \
comm-n1-unif.txt incorp.config \
> incorp-n1-unif.txt
!cd $workDir; head incorp-n1-unif.txt
%%bash -s "$workDir"
# adding incorp 100% library for N.extorquens
cd $1
cat incorp-n1-unif.txt \
<(tail -n +2 incorp-n1-unif.txt | \
perl -pe 's/^1/2/' | \
perl -pe 's/0\.0/100.0/ if /M.extorquens/') \
> incorp-n2-unif.txt
head incorp-n2-unif.txt
!cd $workDir; \
SIPSim gradientComms \
--n_comm 2 \
--abund_dist uniform \
--abund_dist_p low:1,high:1 \
genomes_all_list.txt > comm-n2-unif.txt
!cd $workDir; tail comm-n2-unif.txt
!cd $workDir; \
SIPSim fractions \
comm-n2-unif.txt \
> fracs-n2-unif.txt
!cd $workDir; head fracs-n2-unif.txt
!cd $workDir;\
SIPSim fragGC \
genomes_all_list.txt \
--flr 500,None \
--fld skewed-normal,9000,2500,-5 \
--nf 50x \
--np 24 \
2> shotFragGC_skewN90-25-n5-nS.log \
> shotFragGC_skewN90-25-n5-nS.pkl
!cd $workDir; head shotFragGC_skewN90-25-n5-nS.log
!cd $workDir; \
SIPSim OTU_sim \
shotFragGC_skewN90-25-n5-nS.pkl \
comm-n2-unif.txt \
incorp-n2-unif.txt \
fracs-n2-unif.txt \
--abs_abund 2e8 \
2> OTU-n2-unif_skewN90-25-n5-nS_A2e9.log \
> OTU-n2-unif_skewN90-25-n5-nS_A2e9.txt
!cd $workDir; head OTU-n2-unif_skewN90-25-n5-nS_A2e9.log
%%R -i workDir
# loading file
inFiles = c('OTU-n2-unif_skewN90-25-n5-nS_A2e9.txt')
inFiles = sapply(inFiles, function(x){
x = as.character(x)
paste(c(workDir, x), collapse='/')
})
tbls = list()
for (fileName in inFiles){
tbls[[fileName]] = read.csv(fileName, sep='\t')
}
tbl = do.call(rbind, tbls)
tbl$abs_abund = as.numeric(gsub('.+-nS_A|\\.txt\\.[0-9]+', '', rownames(tbl)))
tbl = tbl %>%
filter(!grepl('inf', fractions, ignore.case=T)) %>%
separate(fractions, into = c('BD_min','BD_max'), sep='-', convert=TRUE) %>%
filter(BD_min != 1.795)
%%R
## BD for G+C of 0 or 100
BD.GCp0 = 0 * 0.098 + 1.66
BD.GCp100 = 1 * 0.098 + 1.66
%%R -w 800 -h 400
# plotting absolute abundances
tbl.s = tbl %>%
mutate(BD_mean = (BD_min + BD_max) / 2) %>%
group_by(abs_abund, BD_mean, library, taxon) %>%
summarize(total_count = sum(count))
## plot
p = ggplot(tbl.s, aes(BD_mean, total_count, shape=taxon, color=taxon)) +
geom_point() +
geom_line() +
scale_x_continuous(limits=c(1.68,1.78), breaks=seq(1.68,1.78,0.02)) +
labs(x='Buoyant density') +
facet_grid(library ~ .) +
theme(
text = element_text(size=16)
)
p
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Downloading genomes
Step3: Indexing genomes
Step4: Simulating a gradient community
Step6: Simulating isotope incorporation
Step7: Resimulating community (n=2)
Step8: Simulating gradient fractions
Step9: Simulating shotgun fragments & calculating GC
Step10: Creating OTU table
Step11: Plotting OTU abundances
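A rough pandas equivalent of the R summary in the final step; the inline sample rows are made up, and a real run would read the generated OTU table file (`OTU-n2-unif_skewN90-25-n5-nS_A2e9.txt`) instead.

```python
import io
import pandas as pd

# Made-up sample rows with the same columns the R code uses:
# library, fractions (BD_min-BD_max), taxon, count.
sample = io.StringIO(
    "library\tfractions\ttaxon\tcount\n"
    "1\t1.70-1.71\tM.barkeri\t120\n"
    "1\t1.70-1.71\tM.extorquens_AM1\t80\n"
    "2\t1.74-1.75\tM.extorquens_AM1\t200\n"
)
tbl = pd.read_csv(sample, sep="\t")

# Split the fraction range into BD_min/BD_max and take the midpoint,
# mirroring the tidyr::separate + mutate steps in the R code.
bd = tbl["fractions"].str.split("-", expand=True).astype(float)
tbl["BD_mean"] = (bd[0] + bd[1]) / 2

# Total counts per library/taxon/BD midpoint, like the dplyr summarize.
summary = (tbl.groupby(["library", "taxon", "BD_mean"])["count"]
              .sum().reset_index(name="total_count"))
print(summary)
```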
|
1,539
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib notebook
# Load the csv with pandas
df = pd.read_csv('hipgalv.LSR.csv', index_col=0)
#print(df)
# Use matplotlib's built-in "Reds" colormap. More colormaps and information here:
# https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html
cmap = cm.get_cmap('Reds')
# Make the plot
ax = df.plot.scatter(x='z',
y='Mv',
c='colorb_v',
cmap=cmap,
title="Sample Visualization")
# Set the x-axis label
ax.set_xlabel("Altitude")
# Set the y-axis label
ax.set_ylabel("Intrinsic Luminosity")
# Set the colormap label
f = plt.gcf()
cax = f.get_axes()[1]
cax.set_ylabel('B-V Color')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here is a code sample showing how to read the data and draw a colored plot.
|
1,540
|
<ASSISTANT_TASK:>
Python Code:
print('hello, world.')
# Would show 9 if this were the last line, but it is not, so shows nothing
4 + 5
# I hope we see 11.
5 + 6
a = 5 + 6
a
import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_formats = {'svg',}
#%config InlineBackend.figure_formats = {'png', 'retina'}
x = np.linspace(0, 2 * np.pi, 200)
y = np.exp(np.sin(np.sin(x)))
plt.plot(x, y)
plt.xlim((0, 2 * np.pi))
plt.xlabel('$x$')
plt.ylabel('$\mathrm{e}^{\sin{x}}$')
plt.title('Example plot')
plt.show()
def lorenz_attractor(r, t, p):
    """Compute the right-hand side of the system of ODEs for the Lorenz attractor.

    Parameters
    ----------
    r : array_like, shape (3,)
        (x, y, z) position of trajectory.
    t : dummy_argument
        Dummy argument, necessary to pass function into
        scipy.integrate.odeint.
    p : array_like, shape (3,)
        Parameters (s, k, b) for the attractor.

    Returns
    -------
    output : ndarray, shape (3,)
        Time derivatives of Lorenz attractor.

    Notes
    -----
    .. Returns the right-hand side of the system of ODEs describing
       the Lorenz attractor:
           x' = s * (y - x)
           y' = x * (k - z) - y
           z' = x * y - b * z
    """
# Unpack variables and parameters
x, y, z = r
    s, k, b = p
    return np.array([s * (y - x),
                     x * (k - z) - y,
                     x * y - b * z])
# Parameters to use
p = np.array([10.0, 28.0, 8.0 / 3.0])
# Initial condition
r0 = np.array([0.1, 0.0, 0.0])
# Time points to sample
t = np.linspace(0.0, 80.0, 10000)
# Use scipy.integrate.odeint to integrate Lorentz attractor
r = scipy.integrate.odeint(lorenz_attractor, r0, t, args=(p,))
# Unpack results into x, y, z.
x, y, z = r.transpose()
# Plot the result
plt.plot(x, z, '-', linewidth=0.5)
plt.xlabel(r'$x(t)$', fontsize=18)
plt.ylabel(r'$z(t)$', fontsize=18)
plt.title(r'$x$-$z$ proj. of Lorenz attractor traj.')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you evaluate a Python expression that returns a value, that value is displayed as output of the code cell. This only happens, however, for the last line of the code cell.
Step2: Note, however, if the last line does not return a value, such as if we assigned a variable, there is no visible output from the code cell.
Step3: However, now if we type in the variable, its value will be displayed.
Step4: Import packages for scientific computing
Step5: Display of graphics
Step6: Example plot
Step7: These data can then be plotted as below with axes then being labeled.
Step9: An example function
Step10: With this function in hand, we just have to pick our initial conditions and time points, run the numerical integration, and then plot the result.
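As an aside, the same integration can be sketched with SciPy's newer `solve_ivp` interface (a hedged alternative, not what the notebook uses); note the swapped `(t, y)` argument order relative to `odeint`.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, r, s, k, b):
    # solve_ivp passes t first, then the state vector.
    x, y, z = r
    return [s * (y - x), x * (k - z) - y, x * y - b * z]

# Same initial condition, parameters, and sampling as the odeint version.
t_eval = np.linspace(0.0, 80.0, 10000)
sol = solve_ivp(lorenz, (0.0, 80.0), [0.1, 0.0, 0.0],
                args=(10.0, 28.0, 8.0 / 3.0), t_eval=t_eval,
                rtol=1e-8, atol=1e-8)
x, y, z = sol.y
print(sol.y.shape)
```

`solve_ivp` also lets you switch integrators (e.g. `method="LSODA"`) without changing the right-hand-side function.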
|
1,541
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.preprocessing import (create_eog_epochs, create_ecg_epochs,
compute_proj_ecg, compute_proj_eog)
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
system_projs = raw.info['projs']
raw.del_proj()
empty_room_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'ernoise_raw.fif')
empty_room_raw = mne.io.read_raw_fif(empty_room_file)
empty_room_raw.del_proj()
for average in (False, True):
empty_room_raw.plot_psd(average=average, dB=False, xscale='log')
empty_room_projs = mne.compute_proj_raw(empty_room_raw, n_grad=3, n_mag=3)
mne.viz.plot_projs_topomap(empty_room_projs, colorbar=True, vlim='joint',
info=empty_room_raw.info)
fig, axs = plt.subplots(2, 3)
for idx, _projs in enumerate([system_projs, empty_room_projs[3:]]):
mne.viz.plot_projs_topomap(_projs, axes=axs[idx], colorbar=True,
vlim='joint', info=empty_room_raw.info)
mags = mne.pick_types(raw.info, meg='mag')
for title, projs in [('system', system_projs),
('subject-specific', empty_room_projs[3:])]:
raw.add_proj(projs, remove_existing=True)
fig = raw.plot(proj=True, order=mags, duration=1, n_channels=2)
fig.subplots_adjust(top=0.9) # make room for title
fig.suptitle('{} projectors'.format(title), size='xx-large', weight='bold')
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {'auditory/left': 1}
# NOTE: appropriate rejection criteria are highly data-dependent
reject = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 µV
eog=250e-6) # 250 µV
# time range where we expect to see the auditory N100: 50-150 ms post-stimulus
times = np.linspace(0.05, 0.15, 5)
epochs = mne.Epochs(raw, events, event_id, proj='delayed', reject=reject)
fig = epochs.average().plot_topomap(times, proj='interactive')
# pick some channels that clearly show heartbeats and blinks
regexp = r'(MEG [12][45][123]1|EEG 00.)'
artifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp)
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
ecg_evoked = create_ecg_epochs(raw).average()
ecg_evoked.plot_joint()
ecg_evoked.apply_baseline((None, None))
ecg_evoked.plot_joint()
projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None)
ecg_projs = projs[3:]
print(ecg_projs)
mne.viz.plot_projs_topomap(ecg_projs, info=raw.info)
raw.del_proj()
for title, proj in [('Without', empty_room_projs), ('With', ecg_projs)]:
raw.add_proj(proj, remove_existing=False)
fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
fig.subplots_adjust(top=0.9) # make room for title
fig.suptitle('{} ECG projectors'.format(title), size='xx-large',
weight='bold')
eog_evoked = create_eog_epochs(raw).average()
eog_evoked.apply_baseline((None, None))
eog_evoked.plot_joint()
eog_projs, _ = compute_proj_eog(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None,
no_proj=True)
mne.viz.plot_projs_topomap(eog_projs, info=raw.info)
for title in ('Without', 'With'):
if title == 'With':
raw.add_proj(eog_projs)
fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
fig.subplots_adjust(top=0.9) # make room for title
fig.suptitle('{} EOG projectors'.format(title), size='xx-large',
weight='bold')
evoked = epochs.average()
# Apply the average ref first:
# It's how we typically view EEG data, and here we're really just interested
# in the effect of the EOG+ECG SSPs
evoked.del_proj().set_eeg_reference(projection=True).apply_proj()
evoked.add_proj(ecg_projs).add_proj(eog_projs)
fig, axes = plt.subplots(3, 3, figsize=(8, 6))
for ii in range(3):
axes[ii, 0].get_shared_y_axes().join(*axes[ii])
for pi, proj in enumerate((False, True, 'reconstruct')):
evoked.plot(proj=proj, axes=axes[:, pi], spatial_colors=True)
if pi == 0:
for ax in axes[:, pi]:
parts = ax.get_title().split('(')
ax.set(ylabel=f'{parts[0]} ({ax.get_ylabel()})\n'
f'{parts[1].replace(")", "")}')
axes[0, pi].set(title=f'proj={proj}')
axes[0, pi].texts = []
plt.setp(axes[1:, :].ravel(), title='')
plt.setp(axes[:, 1:].ravel(), ylabel='')
plt.setp(axes[:-1, :].ravel(), xlabel='')
mne.viz.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <div class="alert alert-info"><h4>Note</h4><p>Before applying SSP (or any artifact repair strategy), be sure to observe
Step2: The example data <sample-dataset> also includes an "empty room"
Step3: Notice that the empty room recording itself has the system-provided SSP
Step4: Visualizing the empty-room noise
Step5: Creating the empty-room projectors
Step6: Notice that the gradiometer-based projectors seem to reflect problems with
Step7: Visualizing how projectors affect the signal
Step8: The effect is sometimes easier to see on averaged data. Here we use an
Step9: Plotting the ERP/F using evoked.plot() or evoked.plot_joint() with
Step10: Repairing ECG artifacts with SSP
Step11: Looks like the EEG channels are pretty spread out; let's baseline-correct and
Step12: To compute SSP projectors for the heartbeat artifact, you can use
Step13: The first line of output tells us that
Step14: Just like with the empty-room projectors, we can visualize the scalp
Step15: Since no dedicated ECG sensor channel was detected in the
Step16: Finally, note that above we passed reject=None to the
Step17: Just like we did with the heartbeat artifact, we can compute SSP projectors
Step18: Just like with the empty-room and ECG projectors, we can visualize the scalp
Step19: Now we repeat the plot from above (with empty room and ECG projectors) and
Step20: Notice that the small peaks in the first two magnetometer channels (MEG
|
1,542
|
<ASSISTANT_TASK:>
Python Code:
!pyspark
sc
users = context.load("s3n://path/to/users.json", "json")
young = users.filter(users.age<21)
young = users[users.age<21]
young.select(young.name, young.age+1)
young.groupBy("gender").count()
young.join(logs, logs.userId == users.userId, "left_outer")
young.registerTempTable("young")
context.sql("SELECT count(*) FROM young")
pandas_df = young.toPandas()
spark_df = context.createDataFrame(pandas_df)
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.jsonFile("file:/path/file.json")
df.show()
df.printSchema()
df.select("column_name")
df.filter(df.column_name>10)
df.groupBy("column_name").count()
df = sqlContext.inferSchema(my_data)
df.registerTempTable("dataframe_name")
rdd_from_df = sqlContext.sql("SELECT * FROM dataframe_name")
my_data = sc.textFile("file:/path/*")
my_data.count()
my_data.collect()
my_data.take(10)
my_data.filter(lambda line: ".txt" in line)
sc.textFile("file:/path/file.txt") \
.filter(lambda line: ".txt" in line) \
.count()
first_words = my_data.map(lambda line: line.split()[0])
for word in first_words.take(10):
print word
first_words.saveAsTextFile("file:/path/file")
DATE_TIME = 0
USER_ID = 1
IP_ADDRESS = 2
ACTION = 3
log_data = sc.textFile("file:/path/*")
user_actions = log_data \
.map(lambda line: line.split()) \
.map(lambda words: (words[USER_ID], 1)) \
.reduceByKey(lambda count1, count2: count1 + count2)
user_actions.map(lambda pair: (pair[1], pair[0])).sortByKey(False).take(5)
user_ips = log_data \
.map(lambda line: line.split()) \
.map(lambda words: (words[IP_ADDRESS],words[USER_ID])) \
.groupByKey()
user_data = sc.textFile("file:/path/*")
user_profile = user_data \
.map(lambda line: line.split(',')) \
.map(lambda words: (words[0], words[1:]))
user_actions_with_profile = user_actions.join(user_profile)
for (user_id, (count, user_info)) in user_actions_with_profile.take(10):
    print user_id, count, user_info
!sudo service spark-master start
!sudo service spark-worker start
!sudo service spark-master stop
!sudo service spark-worker stop
!sudo service spark-master stop
!sudo service spark-worker stop
http://localhost:18080/
!MASTER=spark://localhost:7077 pyspark
sc.master
http://localhost:4040/
val dbConnection = ...
lines.map(... dbConnection.createStatement(...) ...)
dbConnection.close() // Wrong!
def count_txt(partIter):
    txt_count = 0
    for line in partIter:
        if ".txt" in line: txt_count += 1
    yield txt_count
my_data = sc.textFile("file:/path/*") \
.mapPartitions(count_txt) \
.collect()
# Show the partitioning
print "Data partitions: ", my_data.toDebugString()
# Cache RDD to memory
my_data.cache()
# Persist RDD to both memory and disk (if memory is not enough), with replication of 2
my_data.persist(MEMORY_AND_DISK_2)
# Unpersist RDD, removing it from memory and disk
my_data.unpersist()
# Change the persistence level after unpersist
my_data.persist(MEMORY_AND_DISK)
# Enable checkpointing by setting the checkpoint directory,
# which will contain all checkpoints for the given data:
sc.setCheckpointDir("checkpoints")
my_data = sc.parallelize([1,2,3,4,5])
# Long loop that may cause a stack overflow
for i in range(1000):
my_data = mydata.map(lambda myInt: myInt + 1)
if i % 10 == 0:
my_data.checkpoint()
my_data.count()
my_data.collect()
# Display the lineage
for rddstring in my_data.toDebugString().split('\n'):
print rddstring.strip()
import sys
from pyspark import SparkContext
if __name__ == "__main__":
if len(sys.argv) < 2:
print >> sys.stderr, "Usage: App Name <file>"
exit(-1)
count_text_files()
def count_text_files():
sc = SparkContext()
logfile = sys.argv[1]
    text_files_count = sc.textFile(logfile) \
                         .filter(lambda line: '.txt' in line)
text_files_count.cache()
print("Number of text files: ", text_files_count.count())
!spark-submit --properties-file dir/myspark.conf script.py data/*
!spark-submit --master spark://localhost:7077 --name 'App Name' script.py data/*
spark.app.name App Name
spark.ui.port 4141
spark.master spark://localhost:7077
!spark-submit --properties-file spark.conf script.py data/*
sconf = SparkConf() \
.setAppName("Word Count") \
.set("spark.ui.port","4141")
sc = SparkContext(conf=sconf)
$SPARK_HOME/conf/log4j.properties.template
!spark-shell --master local[2]
val ssc = new StreamingContext(new SparkConf(), Seconds(1))
val my_stream = ssc.socketTextStream(hostname, port)
val logs = ssc.socketTextStream(hostname, port)
val requests = my_stream
.map(line => (line.split(" ")(2), 1))
.reduceByKey((x, y) => x + y)
val sorted_requests = requests
.map(pair => pair.swap)
.transform(rdd => rdd.sortByKey(false))
sorted_requests.foreachRDD((rdd, time) => {
println("Top users @ " + time)
rdd.take(5).foreach(
pair => printf("User: %s (%s)\n", pair._2, pair._1))
})
requests.saveAsTextFiles("/dir/requests")
ssc.start()
ssc.awaitTermination()
ssc.checkpoint("dir")
def updateCount = (newCounts: Seq[Int], state: Option[Int]) => {
val newCount = newCounts.foldLeft(0)(_ + _)
val previousCount = state.getOrElse(0)
Some(newCount + previousCount)
}
val totalUserreqs = userreqs.updateStateByKey(updateCount)
val reqcountsByWindow = logs.map(line => (line.split(' ')(2), 1))
.reduceByKeyAndWindow((x: Int, y: Int) => x + y, Minutes(5), Seconds(30))
// define listener
class MyListener extends StreamingListener {
override def onReceiverStopped(...) {
streamingContext.stop()
}
}
// attach listener
streamingContext.addStreamingListener(new MyListener())
broadcast_file = "broadcast.txt"
broadcast_list = list(map(lambda l: l.strip(), open(broadcast_file)))
broadcast_list_sc = sc.broadcast(broadcast_list)
log_file = "hdfs://localhost/user/logs/*"
filtered_data = sc.textFile(log_file)\
.filter(lambda line: any(item in line for item in broadcast_list_sc.value))
filtered_data.take(10)
txt_count = sc.accumulator(0)
my_data = sc.textFile(filePath)
my_data.foreach(lambda line: txt_count.add(1) if '.txt' in line else None)
jpg_count = sc.accumulator(0)
html_count = sc.accumulator(0)
css_count = sc.accumulator(0)
def countFileType(s):
if '.jpg' in s: jpg_count.add(1)
elif '.html' in s: html_count.add(1)
elif '.css' in s: css_count.add(1)
filename="hdfs://logs/*"
logs = sc.textFile(filename)
logs.foreach(lambda line: countFileType(line))
print 'File Type Totals:'
print '.css files: ', css_count.value
print '.html files: ', html_count.value
print '.jpg files: ', jpg_count.value
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: View the spark context, the main entry point to the Spark API
Step2: DataFrames
Step3: Create a new DataFrame that contains “young users” only
Step4: Alternatively, using Pandas-like syntax
Step5: Increment everybody’s age by 1
Step6: Count the number of young users by gender
Step7: Join young users with another DataFrame called logs
Step8: Count the number of users in the young DataFrame
Step9: Convert Spark DataFrame to Pandas
Step10: Create a Spark DataFrame from Pandas
Step11: Given the Spark Context, create a SQLContext
Step12: Create a DataFrame based on the content of a file
Step13: Display the content of the DataFrame
Step14: Print the schema
Step15: Select a column
Step16: Create a DataFrame with rows matching a given filter
Step17: Aggregate the results and count
Step18: Convert a RDD to a DataFrame (by inferring the schema)
Step19: Register the DataFrame as a table
Step20: Run a SQL Query on a DataFrame registered as a table
Step21: RDDs
Step22: Count the number of lines in the data
Step23: Return all the elements of the dataset as an array--this is usually more useful after a filter or other operation that returns a sufficiently small subset of the data
Step24: Return the first 10 lines in the data
Step25: Create an RDD with lines matching the given filter
Step26: Chain a series of commands
Step27: Create a new RDD mapping each line to an array of words, taking only the first word of each array
Step28: Output each word in first_words
Step29: Save the first words to a text file
Step30: Pair RDDs
Step31: Show the top 5 users by count, sorted in descending order
Step32: Group IP addresses by user id
Step33: Given a user table with the following csv format
Step34: Inner join the user_actions and user_profile RDDs
Step35: Show the joined table
Step36: Running Spark on a Cluster
Step37: Stop the standalone cluster's Master and Worker daemons
Step38: Restart the standalone cluster's Master and Worker daemons
Step39: View the Spark standalone cluster UI
Step40: Start the Spark shell and connect to the cluster
Step41: Confirm you are connected to the correct master
Step42: Viewing the Spark Application UI
Step43: Working with Partitions
Step44: However, this fails for several reasons
Step45: Caching RDDs
Step46: Checkpointing RDDs
Step47: Writing and Running a Spark Application
Step48: Submit the script to Spark for processing
Step49: Configuring Spark Applications
Step50: Configure spark.conf
Step51: Run a Spark app and set the configuration options through spark.conf
Step52: Set the config options programmatically
Step53: Set logging levels located in the following file, or place a copy in your pwd
Step54: Streaming
Step55: Create a StreamingContext (similar to SparkContext in core Spark) with a batch duration of 1 second
Step56: Get a DStream from a streaming data source (text from a socket)
Step57: DStreams support regular transformations such as map, flatMap, and filter, and pair transformations such as reduceByKey, groupByKey, and joinByKey.
Step58: The transform(function) method creates a new DStream by executing the input function on the RDDs.
Step59: foreachRDD(function) performs a function on each RDD in the DStream (map is like a shortcut not requiring you to get the RDD first before doing an operation)
Step60: Save the DStream result part files with the given folder prefix, the actual folder will be /dir/requests-timestamp0/
Step61: Start the execution of all DStreams
Step62: Wait for all background threads to complete before ending the main thread
Step63: Streaming with States
Step64: Compute a DStream based on the previous states plus the current state
Step65: Compute a DStream based Sliding window, every 30 seconds, count requests by user over the last 5 minutes
Step66: Collect statistics with the StreamingListener API
Step67: Broadcast Variables
Step68: Broadcast the target list to all workers
Step69: Filter based on the broadcast list
Step70: Accumulators
Step71: Count the number of txt files in the RDD
Step72: Count the number of file types encountered
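For intuition, the map → reduceByKey pattern that runs through the cheat sheet can be mimicked in plain Python; the log lines below are made up, with the user id in field 1 as in the examples above.

```python
from collections import defaultdict

# Made-up log lines: date, user_id, ip, action.
log_lines = [
    "2017-01-01 alice 10.0.0.1 GET",
    "2017-01-01 bob 10.0.0.2 GET",
    "2017-01-02 alice 10.0.0.1 POST",
]

# map: each line becomes a (user_id, 1) pair.
pairs = [(line.split()[1], 1) for line in log_lines]

# reduceByKey: sum the values for each key.
counts = defaultdict(int)
for user, n in pairs:
    counts[user] += n

# Equivalent of swapping the pair and sorting descending to rank users.
top = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
print(top)
```

Spark does the same thing, but partitions the pairs across workers and combines per-partition sums before the final reduce.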
|
1,543
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
# GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
    """Computes the content cost.

    Arguments:
    a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G

    Returns:
    J_content -- scalar that you compute using equation 1 above.
    """
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape a_C and a_G (≈2 lines)
a_C_unrolled = tf.transpose(tf.reshape(a_C,(n_H*n_W,n_C)))
a_G_unrolled = tf.transpose(tf.reshape(a_G,(n_H*n_W,n_C)))
# compute the cost with tensorflow (≈1 line)
sub = tf.subtract(a_C_unrolled, a_G_unrolled)
sub_squ = tf.square(sub)
sum_sub_squ = tf.reduce_sum(sub_squ)
J_content = (1/(4*n_H*n_W*n_C))*sum_sub_squ
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval()))
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)
# GRADED FUNCTION: gram_matrix
def gram_matrix(A):
    """Argument:
    A -- matrix of shape (n_C, n_H*n_W)

    Returns:
    GA -- Gram matrix of A, of shape (n_C, n_C)
    """
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A,tf.transpose(A))
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = " + str(GA.eval()))
# GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
    """Arguments:
    a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G

    Returns:
    J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
    """
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.transpose(tf.reshape(a_S,(n_H*n_W,n_C)))
a_G = tf.transpose(tf.reshape(a_G,(n_H*n_W,n_C)))
# Computing gram_matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Computing the loss (≈1 line)
sub = tf.subtract(GS, GG)
sub_squ = tf.square(sub)
sum_sub_squ = tf.reduce_sum(sub_squ)
J_style_layer = sum_sub_squ/(4*(n_H*n_W*n_C)**2)
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval()))
STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)]
def compute_style_cost(model, STYLE_LAYERS):
Computes the overall style cost from several chosen layers
Arguments:
model -- our tensorflow model
STYLE_LAYERS -- A python list containing:
- the names of the layers we would like to extract style from
- a coefficient for each of them
Returns:
J_style -- tensor representing a scalar value, style cost defined above by equation (2)
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style
# GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
Computes the total cost function
Arguments:
J_content -- content cost coded above
J_style -- style cost coded above
alpha -- hyperparameter weighting the importance of the content cost
beta -- hyperparameter weighting the importance of the style cost
Returns:
J -- total cost as defined by the formula above.
### START CODE HERE ### (≈1 line)
J = alpha*J_content + beta*J_style
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J))
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, 10,40)
### END CODE HERE ###
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image) through the model. Use assign().
### START CODE HERE ### (1 line)
sess.run(model['input'].assign(input_image))
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iteration.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image
model_nn(sess, generated_image)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Problem Statement
Step2: The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the tf.assign function. In particular, you will use the assign function like this: sess.run(model["input"].assign(image))
Step4: The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.
Step5: Expected Output
Step7: This painting was painted in the style of impressionism.
Step9: Expected Output
Step10: Expected Output
Step12: You can combine the style costs for different layers as follows
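A framework-free illustration of that weighted combination (the layer names and coefficients match STYLE_LAYERS above, but the per-layer cost values here are made-up stand-ins, not real activations):

```python
# Weighted combination of per-layer style costs, as in compute_style_cost.
STYLE_LAYERS = [
    ('conv1_1', 0.2),
    ('conv2_1', 0.2),
    ('conv3_1', 0.2),
    ('conv4_1', 0.2),
    ('conv5_1', 0.2)]

# Illustrative per-layer costs (hypothetical values).
layer_costs = {'conv1_1': 1.0, 'conv2_1': 2.0, 'conv3_1': 3.0,
               'conv4_1': 4.0, 'conv5_1': 5.0}

J_style = 0
for layer_name, coeff in STYLE_LAYERS:
    J_style += coeff * layer_costs[layer_name]

print(J_style)  # 3.0
```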
Step14: Note
Step15: Expected Output
Step16: Let's load, reshape, and normalize our "content" image (the Louvre museum picture)
Step17: Let's load, reshape and normalize our "style" image (Claude Monet's painting)
Step18: Now, we initialize the "generated" image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, this will help the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in nst_utils.py to see the details of generate_noise_image(...); to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)
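A minimal numpy sketch of what generate_noise_image could look like — the uniform-noise range and the noise_ratio value are assumptions for illustration; see nst_utils.py for the actual helper:

```python
import numpy as np

def generate_noise_image(content_image, noise_ratio=0.6):
    # Interpolate uniform noise with the content image so the result is
    # mostly noise but still slightly correlated with the content.
    noise = np.random.uniform(-20, 20, content_image.shape).astype('float32')
    return noise * noise_ratio + content_image * (1 - noise_ratio)

content = np.zeros((1, 300, 400, 3), dtype='float32')
generated = generate_noise_image(content)
print(generated.shape)  # (1, 300, 400, 3)
```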
Step19: Next, as explained in part (2), let's load the VGG16 model.
Step20: To get the program to compute the content cost, we will now assign a_C and a_G to be the appropriate hidden layer activations. We will use layer conv4_2 to compute the content cost. The code below does the following
Step21: Note
Step22: Exercise
Step23: You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0. See reference
Step24: Exercise
Step25: Run the following cell to generate an artistic image. It should take about 3min on CPU for every 20 iterations but you start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
|
1,544
|
<ASSISTANT_TASK:>
Python Code:
from paralleldomain.utilities.any_path import AnyPath
absolute_path = "/home/nisseknudsen/Data/testset_dgp"
absolute_anypath = AnyPath(path=absolute_path)
relative_path = "testset_dgp"
relative_anypath = AnyPath(path=relative_path)
s3_path = "s3://pd-sdk-c6b4d2ea-0301-46c9-8b63-ef20c0d014e9/testset_dgp/"
s3_anypath = AnyPath(path=s3_path)
print(absolute_anypath)
print(relative_anypath)
print(s3_anypath)
content_absolute = []
for fp_abs in absolute_anypath.iterdir():
content_absolute.append(fp_abs)
content_s3 = []
for fp_s3 in s3_anypath.iterdir():
content_s3.append(fp_s3)
print("Content Absolute Path:")
print(*content_absolute, sep="\n")
print("\nContent S3 Path:")
print(*content_s3, sep="\n")
assert isinstance(content_absolute[0], AnyPath)
print(f"Type: {type(content_absolute[0])}")
import os
print(os.getcwd())
try:
for fp_rel in relative_anypath.iterdir():
print(fp_rel)
except FileNotFoundError:
print(f"Nice try!\nUnfortunately, {os.getcwd()}/{relative_anypath} does not exist.")
parent_path = "/home/nisseknudsen/Data"
parent_anypath = AnyPath(parent_path)
absolute_concatenated_path = parent_anypath / relative_anypath
content_relative = []
for fp_abs in absolute_concatenated_path.iterdir():
content_relative.append(fp_abs)
# cast each `AnyPath` to `str` and generated `list` for comparison.
list(map(str, content_absolute)) == list(map(str, content_relative))
os.chdir(parent_path)
print(list(relative_anypath.iterdir()))
scene_file = absolute_anypath / "scene_dataset.json"
assert scene_file.exists()
print(f"{scene_file} found!")
import json
with scene_file.open(mode="r") as fp:
scene_dict = json.load(fp)
print(scene_dict.keys())
import tempfile
_, out_file = tempfile.mkstemp()
out_file = AnyPath(out_file)
print(out_file)
scene_dict["metadata"]["foo"] = "bar"
with out_file.open("w") as fp:
json.dump(scene_dict, fp, indent=2)
with open(str(out_file), "r") as fp:
print(fp.read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Instantiate AnyPath for different addresses
Step2: Next, let's see what files are present in each location.
Step3: We capture all contents for each AnyPath object in an array and can now print it out.
Step4: As it turns out, both directories have the same content, just once on local filesystem and once on an s3 bucket.
Step5: Relative Paths
Step6: As we can see, the cwd is currently not set to the expected (/home/nisseknudsen/Data) parent directory. In fact, calling .iterdir() now will throw a FileNotFoundError, because no sub-directory with that name can be found.
Step7: As a solution, we could either provide an absolute path as described above, or we can change the cwd of our Python environment appropriately.
Step8: As you can see, the __truediv__ operator works with AnyPath the same way as with pathlib.Path objects.
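For comparison, the same "/" concatenation with the standard library's pathlib, which AnyPath mirrors (plain Python, no paralleldomain dependency):

```python
from pathlib import Path

parent = Path('/home/nisseknudsen/Data')   # parent directory from above
joined = parent / 'testset_dgp'            # __truediv__ does the joining
print(joined)  # /home/nisseknudsen/Data/testset_dgp
```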
Step9: Next, let's change the cwd of the Python environment.
Step10: The simple print statement now shows the expected files just by using the relative_anypath object, without having to concatenate any absolute path information beforehand.
Step11: File buffers are accessed through the instance method .open(). The API Reference docs provide more detail on all available parameters, but most importantly it accepts the mode for read/write and text/bytes.
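The same read/write pattern shown with pathlib.Path.open(), which AnyPath mirrors — the temp file here is created on the fly and is not part of the tutorial dataset:

```python
import json
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkstemp(suffix='.json')[1])

# mode="w" for text writing, mode="r" for text reading
with tmp.open(mode='w') as fp:
    json.dump({'metadata': {'foo': 'bar'}}, fp)

with tmp.open(mode='r') as fp:
    data = json.load(fp)

print(data['metadata']['foo'])  # bar
```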
Step12: The printed keys are correct. Let's add another key and save the contents to a new file.
Step13: Last but not least, let's check if the file was written correctly by doing a simple print and using Python's std IO library.
|
1,545
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
from random import random
from IPython.display import FileLink, FileLinks
def σ(z):
return 1/(1 + np.e**(-z))
def σ_prime(z):
return np.e**(z) / (np.e**z + 1)**2
def Plot(fn, *args, **kwargs):
argLength = len(args);
if argLength == 1:
start = args[0][0]
end = args[0][1]
points = None
try:
points = args[0][2]
except:
pass
if not points: points = 30
xs = linspace(start, end, points);
plot(xs, list(map(fn, xs)), **kwargs);
Plot(σ, [-2, 2])
y = lambda neuron, input: neuron[0] * input + neuron[1]
α = lambda neuron, input: σ(y(neuron, input))
partial_w = lambda neuron, input: \
σ_prime(y(neuron, input)) * input
partial_y = lambda neuron, input: \
σ_prime(y(neuron, input))
class Neuron():
def __init__(self, neuron):
self.neuron = neuron
def output(self, input):
return α(self.neuron, input)
def set_η(self, η):
self.η = η
def train(self, input, target, η=None):
result = self.output(input);
δ = result - target
p_w = partial_w(self.neuron, input)
p_y = partial_y(self.neuron, input)
gradient = np.array([p_w, p_y])#/np.sqrt(p_w**2 + p_y**2)
if η is None:
η = self.η
self.neuron = - η * δ * gradient + self.neuron;
return result
class Network():
def __init__(self, shape, parameters=None):
self.shape = shape;
self.zs = {};
self.αs = {};
self.weights = {};
self.biases = {};
self.δs = {};
self.partial_ws = {};
if parameters is not None:
weights, biases = parameters;
self.weights = weights;
self.biases = biases;
else:
for i in range(1, len(shape)):
self.create_network(i, shape[i])
def create_network(self, ind, size):
if ind == 0: return;
self.weights[ind] = np.random.random(self.shape[ind-1:ind+1][::-1]) - 0.5
self.biases[ind] = np.random.random(self.shape[ind]) - 0.5
def get_partials_placeholder(self):
partial_ws = {};
δs = {};
for ind in range(1, len(self.shape)):
partial_ws[ind] = np.zeros(self.shape[ind-1:ind+1][::-1])
δs[ind] = np.zeros(self.shape[ind])
return partial_ws, δs;
def output(self, input=None):
if input is not None:
self.forward_pass(input);
return self.αs[len(self.shape) - 1]
def set_η(self, η=None):
if η is None: return
self.η = η
def train(self, input, target, η=None):
if η is None:
η = self.η
self.forward_pass(input)
self.back_propagation(target)
self.gradient_descent(η)
# done: generate a mini batch of training data,
## take an average of the gradient from the mini-batch
def train_batch(self, inputs, targets, η=None):
inputs_len = np.shape(inputs)[0]
targets_len = np.shape(targets)[0]
assert inputs_len == targets_len, \
"input and target need to have the same first dimension"
N = inputs_len
partial_ws, δs = self.get_partials_placeholder()
# print(partial_ws, δs)
for input, target in zip(inputs, targets):
# print(input, target)
self.forward_pass(input)
self.back_propagation(target)
for ind in range(1, len(self.shape)):
partial_ws[ind] += self.partial_ws[ind] / float(N)
δs[ind] += self.δs[ind] / float(N)
self.partial_ws = partial_ws
self.δs = δs
self.gradient_descent(η)
def forward_pass(self, input):
# forward passing
self.αs[0] = input;
for i in range(1, len(self.shape)):
self.forward_pass_layer(i);
def back_propagation(self, target):
# back-propagation
ind_last = len(self.shape) - 1
self.δs[ind_last] = σ_prime(self.zs[ind_last]) * \
(self.αs[ind_last] - target);
for i in list(range(1, len(self.shape)))[::-1]:
self.back_propagation_layer(i)
def gradient_descent(self, η):
# gradient descent
for i in range(1, len(self.shape)):
self.gradient_descent_layer(i, η)
def forward_pass_layer(self, ind):
ind is the index of the current network
self.zs[ind] = self.biases[ind] + \
np.tensordot(self.weights[ind], self.αs[ind - 1], axes=1)
self.αs[ind] = σ(self.zs[ind])
def back_propagation_layer(self, ind):
ind \in [len(self.shape) - 1, 1]
if ind > 1:
self.δs[ind - 1] = σ_prime(self.zs[ind-1]) * \
np.tensordot(self.δs[ind], self.weights[ind], axes=1)
self.partial_ws[ind] = np.tensordot(self.δs[ind], self.αs[ind - 1], axes=0)
def gradient_descent_layer(self, ind, η):
ind \in [1, ...len(shape) - 1]
self.weights[ind] = self.weights[ind] - η * self.partial_ws[ind]
self.biases[ind] = self.biases[ind] - η * self.δs[ind]
# train as a simple neuron
target_func = lambda x: 1 if x < 0.5 else 0
nw = Network([1, 4, 1])
figure(figsize=(16, 4))
subplot(131)
# todo: generate a mini batch of training data,
# take an average of the gradient from the mini-batch
inputs = [[x] for x in np.linspace(0, 1, 100)]
targets = [[target_func(x)] for x in np.linspace(0, 1, 100)]
for ind in range(10000):
x = np.random.random()
nw.train([x], [target_func(x)], 10)
scatter(x, target_func(x))
Plot(lambda x: nw.output([x])[0], [0, 1], label="neural net")
Plot(lambda x: target_func(x), [0, 1], color='r', linewidth=4, alpha=0.3, label="target function")
xlim(-0.25, 1.25)
ylim(-0.25, 1.25)
legend(loc=3, frameon=False)
subplot(132)
imshow(nw.weights[1], interpolation='none', aspect=1);colorbar();
subplot(133)
imshow(nw.weights[2], interpolation='none', aspect=1);colorbar()
# subplot(144)
# imshow(nw.weights[3], interpolation='none', aspect=1);colorbar()
# train as a simple neuron
target_func = lambda x: 1 if x < 0.5 else 0
nw = Network([1, 4, 1])
figure(figsize=(4, 4))
#subplot(141)
batch_size = 10
inputs = [[x] for x in np.linspace(0, 1, batch_size)]
targets = [[target_func(x)] for x in np.linspace(0, 1, batch_size)]
n = 0
for i in range(3):
for ind in range(40):
n += 1;
nw.train_batch(inputs, targets, 10)
Plot(lambda x: nw.output([x])[0], [0, 1], label="NN {} batches".format(n))
plot([i[0] for i in inputs], [t[0] for t in targets], 'r.', label="training data")
xlim(-0.25, 1.25)
ylim(-0.25, 1.25)
_title = "Training Progress Through\nMini-batches (4 hidden neurons)"
title(_title, fontsize=15)
legend(loc=(1.2, 0.25), frameon=False)
fn = "004 batch training " + _title.replace('\n', ' ') + ".png"
savefig(fn, dpi=300,
bbox_inches='tight',
transparent=True,
pad_inches=0)
FileLink(fn)
# train as a simple neuron
target_func = lambda x: np.cos(x)**2
nw = Network([1, 10, 1])
figure(figsize=(16, 4))
#subplot(141)
batch_size = 100
grid = np.linspace(0, 10, batch_size)
inputs = [[x] for x in grid]
targets = [[target_func(x)] for x in grid]
n = 0
for i in range(4):
for ind in range(500):
n += 1;
nw.train_batch(inputs, targets, 40)
Plot(lambda x: nw.output([x])[0], [0, 10], label="NN {} batches".format(n))
plot([i[0] for i in inputs], [t[0] for t in targets], 'r.', label="training data")
_title = "Training Progress Through Mini-batches (10 hidden neurons)"
title(_title)
xlim(-0.25, 10.25)
ylim(-0.25, 1.25)
legend(loc=4, frameon=False)
fn = "004 batch training " + _title + ".png"
savefig(fn, dpi=300,
bbox_inches='tight',
transparent=True,
pad_inches=0)
FileLink(fn)
# train as a simple neuron
target_func = lambda x: np.abs(x - 0.5)
nw = Network([1, 20, 1])
figure(figsize=(6, 6))
batch_size = 40
grid = np.linspace(0, 0.5, batch_size)
inputs = [[x] for x in grid]
targets = [[target_func(x)] for x in grid]
n = 0
for i in range(4):
for ind in range(1000):
n += 1;
nw.train_batch(inputs, targets, 23)
Plot(lambda x: nw.output([x])[0], [0, 1.0], label="NN {} batches".format(n))
plot([i[0] for i in inputs], [t[0] for t in targets], 'r.', label="training data")
_title = "Emulate Half of An Absolute Value Function"
title(_title)
xlim(-0.25, 1.25)
ylim(-0.25, 1.25)
legend(loc=1, frameon=False)
fn = "004 batch training " + _title.replace('\n', ' ') + ".png"
savefig(fn,
dpi=300,
bbox_inches='tight',
transparent=True,
pad_inches=0)
FileLink(fn)
# train as a simple neuron
target_func = lambda x: np.abs(x - 0.5)
nw = Network([1, 40, 1])
figure(figsize=(6, 6))
batch_size = 80
grid = np.linspace(0, 1, batch_size)
inputs = [[x] for x in grid]
targets = [[target_func(x)] for x in grid]
n = 0
for i in range(4):
for ind in range(4000):
n += 1;
nw.train_batch(inputs, targets, 10)
Plot(lambda x: nw.output([x])[0], [0, 1.0], label="NN {} batches".format(n))
plot([i[0] for i in inputs], [t[0] for t in targets], 'r.', label="training data")
_title = "Emulate An Absolute\nFunction (2 times of hidden neurons)"
title(_title)
xlim(-0.25, 1.25)
ylim(-0.25, 1.25)
legend(loc=1, frameon=False)
fn = "004 batch training " + _title.replace('\n', ' ') + ".png"
savefig(fn,
dpi=300,
bbox_inches='tight',
transparent=True,
pad_inches=0)
FileLink(fn)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Why do we need to improve the training method?
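The answer, implemented by Network.train_batch above, is to average the per-example gradients over a mini-batch before taking a single descent step. A toy sketch of that averaging (the gradient values here are made up):

```python
import numpy as np

def average_gradients(per_example_grads):
    # Accumulate each example's gradient weighted by 1/N, as train_batch does.
    N = float(len(per_example_grads))
    total = np.zeros_like(per_example_grads[0])
    for g in per_example_grads:
        total += g / N
    return total

grads = [np.array([1.0, 3.0]), np.array([3.0, 5.0])]
avg = average_gradients(grads)
print(avg)  # [2. 4.]
```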
Step4: Now let's train on the data set the same way as before, to validate our new class.
Step5: Now a sine function
Step6: Now an absolute function?
Step7: Now equipped with this set of hyper-parameters, I thought
|
1,546
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import matplotlib.pyplot as plt
df = pd.read_csv('imdb.csv', delimiter='\t')
df.head()
df.head(2)
df['Year'].head(3)
df[['Year','Rating']].head(3)
df[:10]
df[['Year','Rating']][:10]
df[:10][['Year','Rating']]
print( min(df['Year']), df['Year'].min(), max(df['Year']), df['Year'].max() )
year_nummovies = df["Year"].value_counts()
year_nummovies.head()
print( np.mean(df['Rating']), np.mean(df['Votes']) )
print( df['Rating'].mean() )
geq = df['Year'] >= 1990
leq = df['Year'] <= 1999
movie_nineties = df[geq & leq]
movie_nineties.head()
print( movie_nineties['Rating'].median(), movie_nineties['Votes'].median() )
sorted_by_rating = movie_nineties.sort_values('Rating', ascending=False)
sorted_by_rating[:10]
# implement here
df[(df['Year']==1994)]['Rating'].describe([.1,0.9])
df['Rating'].median()
df['Year'].hist()
# implement here
plt.hist(df[(df['Year']>2000) & (df['Year']<2014)]['Rating'],bins = 10)
plt.hist(df['Rating'], bins=10)
# implement here
plt.hist(df[(df['Year']>2000) & (df['Year']<2014)]['Rating'],bins = 20,facecolor='g')
plt.xlabel('bins')
plt.ylabel('# Ratings')
plt.title('Histogram of Rating Distribution for years 2000-2014')
plt.grid(True)
import seaborn as sns
plt.hist(df['Rating'], bins=10)
sns.distplot(df['Rating'])
# implement here
sns.distplot(df['Rating'],bins = 10,kde=False)
plt.xlabel('bins')
plt.ylabel('# Ratings')
plt.title('Histogram of Rating Distribution for years 2000-2014')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matplotlib magic
Step2: There are many ways to import matplotlib, but the most common way is
Step3: Q1
Step4: Let's look at the first few rows to get some sense of the data.
Step5: You can see more, or less lines of course
Step6: You can extract one column by using dictionary-like expression
Step7: or select multiple columns
Step8: To get the first 10 rows
Step9: We can also select both rows and columns. For example, to select the first 10 rows of the 'Year' and 'Rating' columns
Step10: You can swap the order of rows and columns.
Step11: It is very easy to answer the question of the number of movies per year. The value_counts() function counts how many times each data value (year) appears.
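A tiny illustration of how value_counts() tallies occurrences, using made-up years rather than the IMDB data:

```python
import pandas as pd

years = pd.Series([1994, 1994, 1999, 2000, 1994])
counts = years.value_counts()  # tallies how often each year appears
print(counts[1994])  # 3
```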
Step12: To calculate average ratings and votes
Step13: or you can even do
Step14: To get the median ratings of movies in 1990s, we first select only movies in that decade
Step15: Then, we can do the calculation
Step16: Finally, if we want to know the top 10 movies in the 1990s, we can use the sort_values() function
Step17: Exercise
Step18: Q2
Step19: Exercise
Step20: Q3
Step21: Exercise
Step22: Q4
Step23: Let's do nothing and just run the histgram again
Step24: We can use the distplot() function to plot the histogram.
Step25: Exercise
|
1,547
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import datetime
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
df = pd.read_csv('data/pm25.csv')
print(df.shape)
df.head()
df.isnull().sum()*100/df.shape[0]
df.dropna(subset=['pm2.5'], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(
lambda row: datetime.datetime(year=row['year'],
month=row['month'], day=row['day'],hour=row['hour']), axis=1)
df.sort_values('datetime', ascending=True, inplace=True)
df.head()
df['year'].value_counts()
plt.figure(figsize=(5.5, 5.5))
g = sns.lineplot(data=df['pm2.5'], color='g')
g.set_title('pm2.5 between 2010 and 2014')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
scaler = MinMaxScaler(feature_range=(0, 1))
df['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1))
df.head()
plt.figure(figsize=(5.5, 5.5))
g = sns.lineplot(data=df['scaled_pm2.5'], color='purple')
g.set_title('Scaled pm2.5 between 2010 and 2014')
g.set_xlabel('Index')
g.set_ylabel('scaled_pm2.5 readings')
# 2014 data as validation data, before 2014 as training data
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0)
df_train = df.loc[df['datetime']<split_date]
df_val = df.loc[df['datetime']>=split_date]
print('Shape of train:', df_train.shape)
print('Shape of test:', df_val.shape)
df_val.reset_index(drop=True, inplace=True)
df_val.head()
# The way this works is to take each window of nb_timesteps observations as X and the next observation as the target,
## collecting the data with a 1-stride rolling window.
def makeXy(ts, nb_timesteps):
Input:
ts: original time series
nb_timesteps: number of time steps in the regressors
Output:
X: 2-D array of regressors
y: 1-D array of target
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
X_train, y_train = makeXy(df_train['scaled_pm2.5'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
print(X_train[0], y_train[0])
print(X_train[1], y_train[1])
X_val, y_val = makeXy(df_val['scaled_pm2.5'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
print(X_val[0], y_val[0])
print(X_val[1], y_val[1])
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print('Shape of arrays after reshaping:', X_train.shape, X_val.shape)
from keras.models import Sequential
from keras.layers import SimpleRNN
from keras.layers import Dense, Dropout, Input
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
from sklearn.metrics import mean_absolute_error
model = Sequential()
model.add(SimpleRNN(32, input_shape=(X_train.shape[1:])))
model.add(Dropout(0.2))
model.add(Dense(1, activation='linear'))
model.compile(optimizer='rmsprop', loss='mean_absolute_error', metrics=['mae'])
model.summary()
save_weights_at = 'basic_rnn_model'
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
period=1)
history = model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
# load the best model
best_model = load_model('basic_rnn_model')
# Compare the prediction with y_true
preds = best_model.predict(X_val)
pred_pm25 = scaler.inverse_transform(preds)
pred_pm25 = np.squeeze(pred_pm25)
# Measure MAE of y_pred and y_true
mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25)
print('MAE for the validation set:', round(mae, 4))
mae = mean_absolute_error(df_val['scaled_pm2.5'].loc[7:], preds)
print('MAE for the scaled validation set:', round(mae, 4))
# Check the metrics and loss of each apoch
mae = history.history['mae']
val_mae = history.history['val_mae']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(mae))
plt.plot(epochs, mae, 'bo', label='Training MAE')
plt.plot(epochs, val_mae, 'b', label='Validation MAE')
plt.title('Training and Validation MAE')
plt.legend()
plt.figure()
# Here I was using MAE as loss too, that's why they looked almost the same...
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and Validation loss')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Note
Step3: Note
|
1,548
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pydftools as df
from pydftools.plotting import mfplot
import numpy as np
from urllib.request import Request, urlopen # For getting the data online
from IPython.display import display, Math, Latex, Markdown, TextDisplayObject
req = Request('http://quantumholism.com/dftools/westmeier2017.txt', headers={'User-Agent': 'Mozilla/5.0'})
data = urlopen(req)
data = np.genfromtxt(data, skip_header=1)
x = np.log10(data[:,0])
x_err = data[:,1]/data[:,0]/np.log(10)
veff_values = data[:,2]
data = df.dffit.Data(x = x, x_err=x_err)
selection = df.selection.SelectionVeffPoints(veff=veff_values, xval = x, xmin = 5, xmax = 13)
survey = df.DFFit(data = data, selection=selection, grid_dx = 0.01)
mfplot(survey, xlim=(10**6.63, 5e10), ylim=(1e-3, 1), show_bias_correction=False);
display(Markdown(survey.fit_summary(format_for_notebook=True)))
def veff_extrap(x):
veff_max = np.max(veff_values)
return np.clip((x-6.53)*75, 0,veff_max)
selection = df.selection.SelectionVeffPoints(veff=veff_values, xval = x, veff_extrap=veff_extrap, xmin = 5, xmax = 13)
survey = df.DFFit(data = data, selection=selection, grid_dx = 0.01)
display(Markdown(survey.fit_summary(format_for_notebook=True)))
survey.resample(n_bootstrap = 1000)
fig, ax = mfplot(survey,xlim=(2e6,5e10),ylim=(1e-3,1),uncertainty_type=3,
col_fit='red',col_data='black',show_posterior_data=False,
ls_veff='none', nbins=6,bin_xmin=6.5,bin_xmax=9.5,
show_bias_correction=False,
xpower10=True)
x = survey.grid.x[0]
ax[0].plot(10**x, survey.model.gdf(x,[np.log10(6.0e-3),9.80,-1.37]), ls='--',lw=1.5, color='C0', label="HIPASS")
ax[0].plot(10**x, survey.model.gdf(x,[np.log10(4.8e-3),9.96,-1.33]), ls='--',lw=1.5, color='C1', label="ALFALFA")
ax[0].legend()
display(Markdown(survey.fit_summary(format_for_notebook=True)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download the HI-mass data of Westmeier et al. 2017
Step2: There are 31 galaxies in this sample, hence the array has 31 rows. This data can be recast into the log-masses $x$, normally used by pydftools. We assume the mass uncertainties to be normal in $x$ and determine their amplitude using linear error propagation. We also define the vector of effective volumes
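That propagation is the x_err line in the code: for x = log10(M), linear error propagation gives sigma_x = sigma_M / (M ln 10). A quick numeric check with toy masses (not the Westmeier values):

```python
import numpy as np

M = np.array([1.0e8, 2.0e9])        # HI masses (toy values)
sigma_M = np.array([1.0e7, 1.0e8])  # absolute mass uncertainties
x_err = sigma_M / M / np.log(10)    # uncertainty of x = log10(M)
print(x_err)
```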
Step3: Now fit these data. We first must create a Data and Selection object
Step4: and show the fitted parameters
Step5: The dashed line in the bottom panel shows the effective volume as a function of mass, recovered from the 31 values of veff. By default, an effective volume of 0 is assumed for masses smaller than the smallest observed mass, and a volume identical to the maximum volume for masses larger than the largest observed mass. If a better model is available from survey-specific considerations, then this information can be exploited to improve the fit. In this example, we replace the assumption of veff = 0 for x < xmin with veff = max(0, (x - 6.53) * 75)
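That replacement model is the veff_extrap function defined in the code above; a standalone sketch (veff_max here is an illustrative cap — the notebook uses np.max(veff_values)):

```python
import numpy as np

veff_max = 1.0e4  # illustrative cap; the notebook uses np.max(veff_values)

def veff_extrap(x):
    # Linear ramp above x = 6.53, floored at 0 and capped at veff_max.
    return np.clip((x - 6.53) * 75, 0, veff_max)

print(veff_extrap(6.0), veff_extrap(7.0))
```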
Step6: Now fit again
Step7: and see the best fit solution
Step8: As can be seen, the parameters have changed very slightly due to the modified effective volume at the lowest masses. The printed parameters have symmetric Gaussian uncertainties, determined in the Laplace approximation (i.e. by inverting the Hessian matrix of the modified likelihood function). To allow for non-Gaussian parameter posteriors, we now refit the data while bootstrapping it 10^3 times
Step9: Finally, let’s produce the plot with 68% and 95% confidence regions around the best fit. Also change fit color to red, change data color to black, remove posterior data, remove effective volume line, and adjust binning of input data. Then, add HIPASS and ALFALFA lines.
Step10: and write the best-fitting parameters
|
1,549
|
<ASSISTANT_TASK:>
Python Code:
# TODO: Add ModelFromDirectory for log densities
# TODO: Change defaults for saliency map convertor (at least in LSUN subclass)
# TODO: Write fit functions optimize_for_information_gain(model, stimuli, fixations)
my_model = pysaliency.SaliencyMapModelFromDirectory(stimuli_salicon_train, "my_model/saliency_maps/SALICON_TRAIN")
my_model = pysaliency.SaliencyMapModelFromFile(stimuli_salicon_train, "my_model/salicon_train.mat")
my_model.AUC(stimuli_salicon_train, fixations_salicon_train, nonfixations='uniform')
my_model.AUC(stimuli_salicon_train, fixations_salicon_train, nonfixations='shuffled')
my_probabilistic_model = pysaliency.SaliencyMapConvertor(my_model, ...)
fit_stimuli, fit_fixations = pysaliency.create_subset(stimuli_isun_train, fixations_isun_train, range(0, 500))
my_probabilistic_model = pysaliency.optimize_for_information_gain(
my_model, fit_stimuli, fit_fixations,
num_nonlinearity=20,
num_centerbias=12,
optimize=[
'nonlinearity',
'centerbias',
'alpha',
#'blurradius', # we do not optimize the bluring.
])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import your saliency model into pysaliency
Step2: If you have an LSUN submission file prepared, you can load it with pysaliency.SaliencyMapModelFromDirectory
Step3: Evaluate your model
Step4: Optimize your model for information gain
|
1,550
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tensorflow_graphics
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow_graphics.image import matting
from tqdm import tqdm
# Download dataset from alphamatting.com
!rm -rf input_training_lowres
!rm -rf trimap_training_lowres
!rm -rf gt_training_lowres
!wget -q http://www.alphamatting.com/datasets/zip/input_training_lowres.zip
!wget -q http://www.alphamatting.com/datasets/zip/trimap_training_lowres.zip
!wget -q http://www.alphamatting.com/datasets/zip/gt_training_lowres.zip
!unzip -q input_training_lowres.zip -d input_training_lowres
!unzip -q trimap_training_lowres.zip -d trimap_training_lowres
!unzip -q gt_training_lowres.zip -d gt_training_lowres
# Read and decode images
source = tf.io.read_file('input_training_lowres/GT07.png')
source = tf.cast(tf.io.decode_png(source), tf.float64) / 255.0
source = tf.expand_dims(source, axis=0)
trimap = tf.io.read_file('trimap_training_lowres/Trimap1/GT07.png')
trimap = tf.cast(tf.io.decode_png(trimap), tf.float64) / 255.0
trimap = tf.reduce_mean(trimap, axis=-1, keepdims=True)
trimap = tf.expand_dims(trimap, axis=0)
gt_matte = tf.io.read_file('gt_training_lowres/GT07.png')
gt_matte = tf.cast(tf.io.decode_png(gt_matte), tf.float64) / 255.0
gt_matte = tf.reduce_mean(gt_matte, axis=-1, keepdims=True)
gt_matte = tf.expand_dims(gt_matte, axis=0)
# Resize images to improve performance
source = tf.image.resize(
source,
tf.shape(source)[1:3] // 2,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
trimap = tf.image.resize(
trimap,
tf.shape(trimap)[1:3] // 2,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
gt_matte = tf.image.resize(
gt_matte,
tf.shape(gt_matte)[1:3] // 2,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Show images
figure = plt.figure(figsize=(22, 18))
axes = figure.add_subplot(1, 3, 1)
axes.grid(False)
axes.set_title('Input image', fontsize=14)
_= plt.imshow(source[0, ...].numpy())
axes = figure.add_subplot(1, 3, 2)
axes.grid(False)
axes.set_title('Input trimap', fontsize=14)
_= plt.imshow(trimap[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)
axes = figure.add_subplot(1, 3, 3)
axes.grid(False)
axes.set_title('GT matte', fontsize=14)
_= plt.imshow(gt_matte[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)
# Extract the foreground and background constraints from the trimap image
foreground = tf.cast(tf.equal(trimap, 1.0), tf.float64)
background = tf.cast(tf.equal(trimap, 0.0), tf.float64)
# Show foreground and background constraints
figure = plt.figure(figsize=(22, 18))
axes = figure.add_subplot(1, 2, 1)
axes.grid(False)
axes.set_title('Foreground constraints', fontsize=14)
_= plt.imshow(foreground[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)
axes = figure.add_subplot(1, 2, 2)
axes.grid(False)
axes.set_title('Background constraints', fontsize=14)
_= plt.imshow(background[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)
# Initialize the matte with random values
matte_shape = tf.concat((tf.shape(source)[:-1], (1,)), axis=-1)
matte = tf.Variable(
tf.random.uniform(
shape=matte_shape, minval=0.0, maxval=1.0, dtype=tf.float64))
# Create the closed form matting Laplacian
laplacian, _ = matting.build_matrices(source)
# Function computing the loss and applying the gradient
@tf.function
def optimize(optimizer):
with tf.GradientTape() as tape:
tape.watch(matte)
# Compute a loss enforcing the trimap constraints
constraints = tf.reduce_mean((foreground + background) *
tf.math.squared_difference(matte, foreground))
# Compute the matting loss
smoothness = matting.loss(matte, laplacian)
# Sum up the constraint and matting losses
total_loss = 100 * constraints + smoothness
# Compute and apply the gradient to the matte
gradient = tape.gradient(total_loss, [matte])
optimizer.apply_gradients(zip(gradient, (matte,)))
# Run the Adam optimizer for 400 iterations
optimizer = tf.optimizers.Adam(learning_rate=1.0)
nb_iterations = 400
for it in tqdm(range(nb_iterations)):
optimize(optimizer)
# Clip the matte value between 0 and 1
matte = tf.clip_by_value(matte, 0.0, 1.0)
# Display the results
figure = plt.figure(figsize=(22, 18))
axes = figure.add_subplot(1, 3, 1)
axes.grid(False)
axes.set_title('Input image', fontsize=14)
plt.imshow(source[0, ...].numpy())
axes = figure.add_subplot(1, 3, 2)
axes.grid(False)
axes.set_title('Input trimap', fontsize=14)
_= plt.imshow(trimap[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)
axes = figure.add_subplot(1, 3, 3)
axes.grid(False)
axes.set_title('Matte', fontsize=14)
_= plt.imshow(matte[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)
!wget -q https://p2.piqsels.com/preview/861/934/460/concrete-texture-background-backdrop.jpg
background = tf.io.read_file('concrete-texture-background-backdrop.jpg')
background = tf.cast(tf.io.decode_jpeg(background), tf.float64) / 255.0
background = tf.expand_dims(background, axis=0)
# Resize images to improve performance
background = tf.image.resize(
background,
tf.shape(source)[1:3],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Inpaint the foreground over a new background
inpainted_black = matte * source
inpainted_concrete = matte * source + (1.0 - matte) * background
# Display the results
figure = plt.figure(figsize=(22, 18))
axes = figure.add_subplot(1, 2, 1)
axes.grid(False)
axes.set_title('Inpainted black', fontsize=14)
_= plt.imshow(inpainted_black[0, ...].numpy())
axes = figure.add_subplot(1, 2, 2)
axes.grid(False)
axes.set_title('Inpainted concrete', fontsize=14)
_= plt.imshow(inpainted_concrete[0, ...].numpy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Closed Form Matting Energy
Step2: Now that TensorFlow Graphics is installed, let's import everything needed to run the demos contained in this notebook.
Step3: Import the image and trimap
Step4: Extract the foreground and background constraints from the trimap image
Step5: Setup & run the optimization
Step6: Compositing
|
1,551
|
<ASSISTANT_TASK:>
Python Code:
from external_plugins.spystats import tools
%run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py
from external_plugins.spystats import tools
hx = np.linspace(0,800000,100)
new_data.residuals[:10]
gvg.plot(refresh=False,legend=False,percentage_trunked=20)
plt.title("Semivariogram of residuals $log(Biomass) ~ log(SppR)$")
## HERE we can cast a model (Whittle) and fit it inside the global variogram
whittle_model = tools.WhittleVariogram(sill=0.345,range_a=100000,nugget=0.33,alpha=1.0)
tt = gvg.fitVariogramModel(whittle_model)
plt.plot(hx,gvg.model.f(hx),'--',lw=4,c='black')
print(whittle_model)
## This section is an example for calculating GLS. Using a small section because of computing intensity
minx = -85
maxx = -80
miny = 30
maxy = 35
section = tools._subselectDataFrameByCoordinates(new_data,'LON','LAT',minx,maxx,miny,maxy)
secvg = tools.Variogram(section,'logBiomass',model=whittle_model)
MMdist = secvg.distance_coordinates.flatten()
CovMat = secvg.model.corr_f(MMdist).reshape(len(section),len(section))
plt.imshow(CovMat)
import statsmodels.regression.linear_model as lm
import statsmodels.api as sm
model1 = lm.GLS.from_formula(formula='logBiomass ~ logSppN', data=section, sigma=CovMat)
results = model1.fit()
results.summary()
## Without spatial structure
Id = np.identity(len(section))
model2 = lm.GLS.from_formula(formula='logBiomass ~ logSppN', data=section, sigma=Id)
results = model2.fit()
results.summary()
matm = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=0.5)
expmm = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33)
gausms = tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33)
sphmm = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33)
wm = tools.WhittleVariogram(sill=0.34,range_a=100000,nugget=0.33,alpha=1)
list(map(lambda l: l.fit(gvg), [matm, expmm, gausms, sphmm, wm]))  # list() forces evaluation under Python 3
print(matm)
print(expmm)
print(gausms)
print(sphmm)
print(wm)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The object new_data has been reprojected to Albers and a linear model has been fitted, with residuals stored as residuals
Step2: The empirical variogram
Step3: Bonus!
|
1,552
|
<ASSISTANT_TASK:>
Python Code:
!pip show systemml
from systemml import MLContext, dml, dmlFromResource
ml = MLContext(sc)
print ("Spark Version:" + sc.version)
print ("SystemML Version:" + ml.version())
print ("SystemML Built-Time:"+ ml.buildTime())
ml.execute(dml(s = 'Hello World!').output("s")).get("s")
import sys, os, glob, subprocess
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
plt.switch_backend('agg')
def printLastLogLines(n):
fname = max(glob.iglob(os.sep.join([os.environ["HOME"],'/logs/notebook/kernel-pyspark-*.log'])), key=os.path.getctime)
print(subprocess.check_output(['tail', '-' + str(n), fname]))
script = """
X = rand(rows=$nr, cols=1000, sparsity=0.5)
A = t(X) %*% X
s = sum(A)
"""
prog = dml(script).input('$nr', 1e5).output('s')
s = ml.execute(prog).get('s')
print (s)
ml = MLContext(sc)
ml = ml.setStatistics(True)
# re-execute ML program
# printLastLogLines(22)
prog = dml(script).input('$nr', 1e6).output('s')
out = ml.execute(prog).get('s')
print (out)
ml = MLContext(sc)
ml = ml.setStatistics(False)
%matplotlib inline
diabetes = datasets.load_diabetes()
diabetes_X = diabetes.data[:, np.newaxis, 2]
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
diabetes_y_train = diabetes.target[:-20].reshape(-1,1)
diabetes_y_test = diabetes.target[-20:].reshape(-1,1)
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
diabetes.data.shape
script = """
# add constant feature to X to model intercept
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
A = t(X) %*% X
b = t(X) %*% y
w = solve(A, b)
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
w, bias = ml.execute(prog).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='blue', linestyle ='dotted')
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
max_iter = 100
w = matrix(0, rows=ncol(X), cols=1)
for(i in 1:max_iter){
XtX = t(X) %*% X
dw = XtX %*% w - t(X) %*% y
alpha = -(t(dw) %*% dw) / (t(dw) %*% XtX %*% dw)
w = w + dw*alpha
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w').output('bias')
w, bias = ml.execute(prog).get('w', 'bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
m = ncol(X); i = 1;
max_iter = 20;
w = matrix (0, rows = m, cols = 1); # initialize weights to 0
dw = - t(X) %*% y; p = - dw; # dw = (X'X)w - (X'y)
norm_r2 = sum (dw ^ 2);
for(i in 1:max_iter) {
q = t(X) %*% (X %*% p)
alpha = norm_r2 / sum (p * q); # Minimizes f(w - alpha*r)
w = w + alpha * p; # update weights
dw = dw + alpha * q;
old_norm_r2 = norm_r2; norm_r2 = sum (dw ^ 2);
p = -dw + (norm_r2 / old_norm_r2) * p; # next direction - conjugacy to previous direction
i = i + 1;
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w').output('bias')
w, bias = ml.execute(prog).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
import os
from subprocess import call
dirName = os.path.dirname(os.path.realpath("~")) + "/scripts"
call(["mkdir", "-p", dirName])
call(["wget", "-N", "-q", "-P", dirName, "https://raw.githubusercontent.com/apache/systemml/master/scripts/algorithms/LinearRegDS.dml"])
scriptName = dirName + "/LinearRegDS.dml"
dml_script = dmlFromResource(scriptName)
prog = dml_script.input(X=diabetes_X_train, y=diabetes_y_train).input('$icpt',1.0).output('beta_out')
w = ml.execute(prog).get('beta_out')
w = w.toNumPy()
bias=w[1]
print (bias)
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w[0]*diabetes_X_test)+bias, color='red', linestyle ='dashed')
from pyspark.sql import SQLContext
from systemml.mllearn import LinearRegression
sqlCtx = SQLContext(sc)
regr = LinearRegression(sqlCtx)
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
predictions = regr.predict(diabetes_X_test)
# Use the trained model to perform prediction
%matplotlib inline
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, predictions, color='black')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Import SystemML API
Step3: Import numpy, sklearn, and define some helper functions
Step5: Example 1
Step6: Examine execution plans, and increase data size to obverve changed execution plans
Step7: Load diabetes dataset from scikit-learn
Step9: Example 2
Step11: Algorithm 2
Step13: Algorithm 3
Step14: Example 3
Step15: Example 4
|
1,553
|
<ASSISTANT_TASK:>
Python Code:
# Import files
import os
import sys
import numpy as np
import matplotlib as plt
import tensorflow as tf
import time
import random
import math
import pandas as pd
import sklearn
from scipy import misc
import glob
import pickle
%matplotlib inline
plt.pyplot.style.use('ggplot')
# RELU GLOROT INITIALIZATION
def fully_connected_layer_relu(inputs, input_dim, output_dim, nonlinearity=tf.nn.relu):
weights = tf.Variable(
tf.truncated_normal(
[input_dim, output_dim], stddev=2. / (input_dim + output_dim)**0.5),
'weights')
biases = tf.Variable(tf.zeros([output_dim]), 'biases')
outputs = nonlinearity(tf.matmul(inputs, weights) + biases)
return outputs
# Exponential Linear Units (ELUs) for activation to test the performace difference
def fully_connected_layer_elu(inputs, input_dim, output_dim, nonlinearity=tf.nn.elu):
weights = tf.Variable(
tf.truncated_normal(
[input_dim, output_dim], stddev=2. / (input_dim + output_dim)**0.5),
'weights')
biases = tf.Variable(tf.zeros([output_dim]), 'biases')
outputs = nonlinearity(tf.matmul(inputs, weights) + biases)
return outputs
# Dropout Layer
# https://www.tensorflow.org/api_docs/python/tf/nn/dropout
# The default dropout rate is kept at '0.5'
def dropout(input, drop_rate = 0.5):
# Uses tensorflow's dropout method
return tf.nn.dropout(input, drop_rate)
# ksize = Kernel Size
# strides = Strides for Pooling Operation
# padding = Padding Style for Convolution
# NOTE: You can modify these according to your project requirements
# Max Pooling Layer
# https://www.tensorflow.org/api_docs/python/tf/nn/max_pool
def max_pool(inputs, ksize=[1, 3, 3, 1],strides=[1, 2, 2, 1],padding='VALID'):
return tf.nn.max_pool(inputs, ksize, strides, padding, name='max_pool')
# Average Pooling Layer
# https://www.tensorflow.org/api_docs/python/tf/nn/avg_pool
def avg_pool(inputs, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],padding='VALID'):
return tf.nn.avg_pool(inputs, ksize, strides, padding, name='avg_pool')
# Check out : https://www.tensorflow.org/versions/r0.12/api_docs/python/nn/normalization
# For the documentation of the input arguments for the normalization
# The depth radius in some implementations is 2 for other AlexNet architectures
def local_response_normalization(inputs,depth_radius=4, bias=1.0, alpha=0.001/9.0, beta=0.75):
return tf.nn.lrn(inputs, depth_radius, bias, alpha, beta, name='local_response_normalization')
# https://www.tensorflow.org/api_docs/python/tf/nn/conv2d
def conv(inputs,weights, strides = [1,1,1,1], padding = 'SAME'):
return tf.nn.conv2d(inputs, weights, strides, padding)
# NOTE: The initial weights and biases are provided with a shape and size
# Hence, they can be modified according to the requirements of your project; the initialization uses a truncated normal distribution
# For details on truncated normal : https://www.tensorflow.org/api_docs/python/tf/truncated_normal
# Initializing Weights with truncated normal distribution
def initial_weights_for_layers(shape, stdev=5e-2):
return tf.Variable(tf.truncated_normal(shape, stddev=stdev), name='initial_weights')
# Initializing Biases
def initial_biases(size, value):
return tf.Variable(tf.constant(value, shape=[size]), name='initial_biases')
# RELU ACTIVATION
def relu_activation(input):
input = tf.nn.relu(input, name = 'RELU')
return input
# We assume the images to be stored in a numpy's array 'inputs'
# NOTE: Here we have not loaded any images currently as inputs
# For implementation of this architecture for testing on different datasets such as MNIST, SVHN, etc.
# Check out my github's 'Machine Learning' repository for the updates and implementation
# The paper takes the input size to be = 224 x 224 x 3
# We name the input images as 'inputs' and outputs as 'outputs'
# Although for this notebook, they are just like empty placeholders as this is the description of the structure
# For adding the biases, we use: https://www.tensorflow.org/api_docs/python/tf/nn/bias_add
# First Convolutional Layer
with tf.name_scope('Convolution - 1'):
# Initializing weights and biases for this layer
weights1 = initial_weights_for_layers(shape=[11, 11, 3, 96])
biases1 = initial_biases(size = 96, value = 0.0)
# Convolution-1
# First layer filters the input with '96' kernels of size '11 x 11 x 3'
# conv_o is just like a dummy to hold the initial convolution
conv_o= conv(inputs, weights1, [1, 4, 4, 1], padding='SAME')
# Adding the biases
conv_o = tf.nn.bias_add(conv_o, biases1)
# ReLU Activation
conv1 = relu_activation(conv_o)
# Normalized, Takes the conv1 as input
norm1 = local_response_normalization(conv1)
# Pooling, Takes norm1 as input
pool1 = max_pool(norm1)
# The initial biases here have been changed to '0.1' rather than '0.0' to test some difference,
# Although in other implementations you will find this initial value to '0.0' which can be changed accordingly
# Second Convolutional Layer
with tf.name_scope('Convolution - 2'):
# Initializing weights and biases for this layer
weights2 = initial_weights_for_layers(shape=[5, 5, 48, 256])
biases2 = initial_biases(size = 256, value = 0.1)
# Convolution-2
# Second layer filters the input with '256' kernels of size '5 x 5 x 48'
# conv_o is just like a dummy to hold the initial convolution
conv_o = conv(pool1, weights2, [1, 1, 1, 1], padding='SAME')
# Adding the biases
conv_o = tf.nn.bias_add(conv_o, biases2)
# ReLU Activation
conv2 = relu_activation(conv_o)
# Normalized, Takes the conv2 as input
norm2 = local_response_normalization(conv2)
# Pooling, Takes norm2 as input
pool2 = max_pool(norm2)
###
# NOTE: There is no normalization and pooling in 3rd, 4th and 5th convolution layer
###
# Third Convolutional Layer
with tf.name_scope('Convolution - 3'):
# Initializing weights and biases for this layer
weights3 = initial_weights_for_layers(shape=[3, 3, 256, 384])
biases3 = initial_biases(size = 384, value = 0.0)
# Convolution-3
# Third layer filters the input with '384' kernels of size '3 x 3 x 256'
# conv_o is just like a dummy to hold the initial convolution
conv_o = conv(pool2, weights3, [1, 1, 1, 1], padding='SAME')
# Adding the biases
conv_o = tf.nn.bias_add(conv_o, biases3)
# ReLU Activation
conv3 = relu_activation(conv_o)
# Fourth Convolutional Layer
with tf.name_scope('Convolution - 4'):
# Initializing weights and biases for this layer
weights4 = initial_weights_for_layers(shape=[3, 3, 192, 384])
biases4 = initial_biases(size = 384, value = 0.0)
# Convolution-4
# Fourth layer filters the input with '384' kernels of size '3 x 3 x 192'
# conv_o is just like a dummy to hold the initial convolution
conv_o = conv(conv3, weights4, [1, 1, 1, 1], padding='SAME')
# Adding the biases
conv_o = tf.nn.bias_add(conv_o, biases4)
# ReLU Activation
conv4 = relu_activation(conv_o)
# Fifth Convolutional Layer
with tf.name_scope('Convolution - 5'):
# Initializing weights and biases for this layer
weights5 = initial_weights_for_layers(shape=[3, 3, 192, 256])
biases5 = initial_biases(size = 256, value = 0.0)
# Convolution-5
# Fifth layer filters the input with '256' kernels of size '3 x 3 x 192'
# conv_o is just like a dummy to hold the initial convolution
conv_o = conv(conv4, weights5, [1, 1, 1, 1], padding='SAME')
# Adding the biases
conv_o = tf.nn.bias_add(conv_o, biases5)
# ReLU Activation
conv5 = relu_activation(conv_o)
# Reshaping the inputs to be passed to the fully connected layers, as the when passing the output of previous layers
# We want to separate out the dimensions as (batch_size, dimensions), where the batch_size can be dynamically calculated
# REFER TO : https://www.tensorflow.org/tutorials/layers
# To understand the formation of a neural network and hence the requirement of this operation
# NOTE: HERE INSTEAD OF THE ORIGINAL DIMENSIONS OF THE ALEXNET
# I am including for ease the variables which you can change to suit the requirements of your project
# This would help you to understand how the dimensions vary according to different datasets
a = 8 # this is just an example
b = 8 # this is just an example
c = 64 # this is just an example
# This can be understood as the dimensions of the input image
reshaped_inputs = tf.reshape(conv5, [-1,a*b*c])
# NOTE: The a,b,c parameters may vary from original implementation
###
# NOTE: Fully Connected layers have '4096' units each
# NOTE: Dropout is used in the first 2 fully connected layers
###
# First Fully Connected Layer
with tf.name_scope('Fully Connected - 1'):
fc=fully_connected_layer_relu(reshaped_inputs, a*b*c, 4096)
fc1 = dropout(fc)
# Second Fully Connected Layer
with tf.name_scope('Fully Connected - 2'):
fc=fully_connected_layer_relu(fc1, 4096, 4096)
fc2 = dropout(fc)
# Third Fully Connected Layer
with tf.name_scope('Fully Connected - 3'):
fc3=fully_connected_layer_relu(fc2, 4096, 1000)
# Output of third fully connected layer is passed to a 1000-way softmax classifier
# https://www.tensorflow.org/api_docs/python/tf/nn/softmax
with tf.name_scope('Softmax'):
fc4=tf.nn.softmax(fc3, name='softmax')
with tf.name_scope('Outputs'):
outputs =fc4
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions for convolution, pooling, droput, etc.
Step2: 2> Dropout Layer
Step3: 3> Pooling Layer
Step4: 4> Normalization Layer
Step5: 5> Convolution Layer
Step6: 6> Initial Weights, Bias and RELU Activation for Layers
Step7: Getting your images as inputs
Step8: Alexnet Implementation
|
1,554
|
<ASSISTANT_TASK:>
Python Code:
!pip install --user --upgrade kaggle
import IPython
IPython.Application.instance().kernel.do_shutdown(True) #automatically restarts kernel
!ls ./kaggle.json
import os
current_dir=!pwd
current_dir=current_dir[0]
os.environ['KAGGLE_CONFIG_DIR']=current_dir
!${HOME}/.local/bin/kaggle datasets download mistag/arthropod-taxonomy-orders-object-detection-dataset
!unzip -q *dataset.zip
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create API Key on Kaggle
|
1,555
|
<ASSISTANT_TASK:>
Python Code:
from prox_elasticnet import ElasticNet, ElasticNetCV
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
np.random.seed(319159)
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
X = diabetes.data
y = diabetes.target
prop_train = 0.8
n_pts = len(y)
n_train = np.floor(n_pts * prop_train).astype(int)
n_test = n_pts - n_train
ix = np.arange(n_pts)
np.random.shuffle(ix)
train_ix = ix[0:n_train]
test_ix = ix[n_train:n_pts]
X_train = X[train_ix,:]
y_train = y[train_ix]
X_test = X[test_ix,:]
y_test = y[test_ix]
model = ElasticNet().fit(X_train, y_train)
y_pred = model.predict(X_test)
print("The coefficient of determination for this model is: {}".format(model.score(X_test,y_test)))
model.coef_
model_cv = ElasticNetCV(l1_ratio = np.arange(0.1,0.9,step=0.1)).fit(X_train, y_train)
y_pred_cv = model_cv.predict(X_test)
print("The coefficient of determination for this model is: {}".format(model_cv.score(X_test,y_test)))
model_cv.alpha_
model_cv.l1_ratio_
plt.title("Cross-validation for l1_ratio = 0.8")
plt.plot(model_cv.alphas_[7],model_cv.mse_path_.mean(axis=2)[7])
plt.xlabel("alpha")
plt.ylabel("MSE")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Diabetes dataset
Step2: Our goal is to fit a linear model using Elastic Net regularisation, which predicts the disease progression for a given patient's physiological variables.
Step3: First we run the basic ElasticNet model with the default parameters.
Step4: The model coefficients are accessed as follows
Step5: The package also provides ElasticNetCV which chooses the regularisation parameters (alpha and l1_ratio) which yield the best mean-squared error.
Step6: We can see that there been a significant increase in the coefficient of determination ($R^2$) on the test set when compared to the previous model (although it is still rather poor). The alpha and l1_ratio values that have been selected through cross-validation are accessed as follows
Step7: The mean-squared error is in fact available for all combinations of alpha, l1_ratio and each fold of cross-validation. As an example, we plot the mean-squared error for the optimal l1_ratio = 0.8 as a function of alpha. We average over the three folds of cross validation.
|
1,556
|
<ASSISTANT_TASK:>
Python Code:
df = df[df.BIN_PATTERN_INDEX == 'LINEAR linear_0006']
# now can drop that column
df = df.drop('BIN_PATTERN_INDEX', axis=1)
bin_tables = df.BIN_TBL.value_counts()
bin_tables
for ind in bin_tables.index:
print(ind)
print(df[df.BIN_TBL==ind].orbit_segment.value_counts())
df = df[df.BIN_TBL=='LINEAR 7,8 linear_0006']
df = df.drop('BIN_TBL', axis=1)
df.orbit_segment.value_counts()
df.index
df.columns
df.CHANNEL.value_counts()
df.INT_TIME.value_counts()
df.BINNING_SET.value_counts()
df.NAXIS1.value_counts()
df.NAXIS2.value_counts()
to_drop = []
for col in df.columns:
length = len(df[col].value_counts())
if length == 1:
to_drop.append(col)
df = df.drop(to_drop, axis=1)
df.columns
from iuvs import calib
df.DET_TEMP = df.DET_TEMP.map(calib.convert_det_temp_to_C) +273.15
df.CASE_TEMP = df.CASE_TEMP.map(calib.convert_case_temp_to_C) + 273.15
%matplotlib nbagg
import seaborn as sns
sns.set_context('talk')
from sklearn.preprocessing import normalize
df.index
df = df.reset_index()
df.set_index('TIME_OF_INT', inplace=True)
df['normalized_mean'] = normalize(df['mean']).T
df[['mean']].plot(style='*')
df.plot(kind='scatter', x='CASE_TEMP', y='mean')
df.plot(kind='scatter',x='DET_TEMP', y='CASE_TEMP')
df.plot(kind='scatter', x='SOLAR_LONGITUDE',y='mean')
df.plot(kind='scatter', x='SOLAR_LONGITUDE', y='DET_TEMP')
from sklearn import linear_model, decomposition, datasets
pca = decomposition.RandomizedPCA()
df.columns
Xcols = 'case_temp det_temp fov_deg lya_centroid mirror_deg mirror_dn mir_deg solar_longitude'.upper().split()
Xcols += ['mean']
Xcols
pca.fit(df[Xcols].values)
plt.close('all')
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.axes([.2, .2, .7, .7])
plt.semilogy(pca.explained_variance_, linewidth=2)
plt.axis('tight')
plt.xlabel('n_components')
plt.ylabel('explained_variance_')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Focusing on one of the periapse tables for now
|
1,557
|
<ASSISTANT_TASK:>
Python Code:
pipe = Pipeline(linkname = 'dSphs')
configfile = 'config/master_dSphs.yaml'
pipe.preconfigure(configfile)
pipe.update_args(dict(config=configfile))
pipe.linknames
pipe['data']
pipe['data'].linknames
pipe.print_status()
pipe.print_status(recurse=True)
pipe['data']['analyze-roi']
pipe['data']['analyze-roi']._options
pipe['data']['analyze-roi'].args
pipe['data']['analyze-roi'].jobs
pipe['data']['analyze-roi'].scatter_link
pipe['data']['analyze-roi'].scatter_link.command_template()
pipe['data']['analyze-roi'].scatter_link.jobs
pipe['data']['analyze-roi'].scatter_link.jobs['draco@dSphs.data.analyze-roi']
pipe['data']['analyze-roi'].scatter_link.jobs['draco@dSphs.data.analyze-roi'].job_config
pipe['spec-table'].run_with_log()
pipe.print_status()
pipe['data'].run(resubmit_failed=True)
pipe.print_status(recurse=True)
pipe['data']['convert-castro'].scatter_link.jobs['draco:ack2016_point:lgauss@dSphs.data.convert-castro'].job_config
pipe.run(resubmit_failed=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We have to 'preconfigure' the pipeline, because we need to build up the list of targets so that we correctly set up the later stages of the pipeline.
Step2: We then tell the pipeline to update the arguments for all of the steps that comprise it.
Step3: Now we look around a bit, we can drill down into the links that make up the pipeline.
Step4: We can print the status for the various links.
Step5: We can access a particular Link in the pipeline.
Step6: We can ask what jobs get run by this link. In this particular case the link
Step7: Here we are talking to the link that represents any one of the dispatched jobs.
Step8: Here we ask what jobs will be dispatched. Note that there are two jobs with slightly different names.
Step9: Here we are asking for information about the first of those two jobs. In particular, the specific options used for this instance of the job. You can merge the job_config with the command_template to get the exact syntax for the instance of this command.
Step10: Here we run a single link.
Step11: Here we run the 'data' sub-pipeline.
|
1,558
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.arange(12).reshape(3, 4)
a = np.delete(a, 2, axis = 0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,559
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import math
import numpy as np
import tensorflow as tf
import time
from datasets import dataset_utils
# Main slim library
from tensorflow.contrib import slim
def regression_model(inputs, is_training=True, scope="deep_regression"):
"""Creates the regression model.
Args:
inputs: A node that yields a `Tensor` of size [batch_size, dimensions].
is_training: Whether or not we're currently training the model.
scope: An optional variable_op scope for the model.
Returns:
predictions: 1-D `Tensor` of shape [batch_size] of responses.
end_points: A dict of end points representing the hidden layers.
"""
with tf.variable_scope(scope, 'deep_regression', [inputs]):
end_points = {}
# Set the default weight regularizer and activation for each fully_connected layer.
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_regularizer=slim.l2_regularizer(0.01)):
# Creates a fully connected layer from the inputs with 32 hidden units.
net = slim.fully_connected(inputs, 32, scope='fc1')
end_points['fc1'] = net
# Adds a dropout layer to prevent over-fitting.
net = slim.dropout(net, 0.8, is_training=is_training)
# Adds another fully connected layer with 16 hidden units.
net = slim.fully_connected(net, 16, scope='fc2')
end_points['fc2'] = net
# Creates a fully-connected layer with a single hidden unit. Note that the
# layer is made linear by setting activation_fn=None.
predictions = slim.fully_connected(net, 1, activation_fn=None, scope='prediction')
end_points['out'] = predictions
return predictions, end_points
with tf.Graph().as_default():
# Dummy placeholders for arbitrary number of 1d inputs and outputs
inputs = tf.placeholder(tf.float32, shape=(None, 1))
outputs = tf.placeholder(tf.float32, shape=(None, 1))
# Build model
predictions, end_points = regression_model(inputs)
# Print name and shape of each tensor.
print("Layers")
for k, v in end_points.items():
print('name = {}, shape = {}'.format(v.name, v.get_shape()))
# Print name and shape of parameter nodes (values not yet initialized)
print("\n")
print("Parameters")
for v in slim.get_model_variables():
print('name = {}, shape = {}'.format(v.name, v.get_shape()))
def produce_batch(batch_size, noise=0.3):
xs = np.random.random(size=[batch_size, 1]) * 10
ys = np.sin(xs) + 5 + np.random.normal(size=[batch_size, 1], scale=noise)
return [xs.astype(np.float32), ys.astype(np.float32)]
x_train, y_train = produce_batch(200)
x_test, y_test = produce_batch(200)
plt.scatter(x_train, y_train)
def convert_data_to_tensors(x, y):
inputs = tf.constant(x)
inputs.set_shape([None, 1])
outputs = tf.constant(y)
outputs.set_shape([None, 1])
return inputs, outputs
# The following snippet trains the regression model using a mean_squared_error loss.
ckpt_dir = '/tmp/regression_model/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
inputs, targets = convert_data_to_tensors(x_train, y_train)
# Make the model.
predictions, nodes = regression_model(inputs, is_training=True)
# Add the loss function to the graph.
loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions)
# The total loss is the user's loss plus any regularization losses.
total_loss = slim.losses.get_total_loss()
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.005)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training inside a session.
final_loss = slim.learning.train(
train_op,
logdir=ckpt_dir,
number_of_steps=5000,
save_summaries_secs=5,
log_every_n_steps=500)
print("Finished training. Last batch loss:", final_loss)
print("Checkpoint saved in %s" % ckpt_dir)
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_train, y_train)
predictions, end_points = regression_model(inputs, is_training=True)
# Add multiple loss nodes.
mean_squared_error_loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions)
absolute_difference_loss = slim.losses.absolute_difference(predictions, targets)
# The following two ways to compute the total loss are equivalent
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = mean_squared_error_loss + absolute_difference_loss + regularization_loss
# Regularization Loss is included in the total loss by default.
# This is good for training, but not for testing.
total_loss2 = slim.losses.get_total_loss(add_regularization_losses=True)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init_op) # Will initialize the parameters with random weights.
total_loss1, total_loss2 = sess.run([total_loss1, total_loss2])
print('Total Loss1: %f' % total_loss1)
print('Total Loss2: %f' % total_loss2)
print('Regularization Losses:')
for loss in slim.losses.get_regularization_losses():
print(loss)
print('Loss Functions:')
for loss in slim.losses.get_losses():
print(loss)
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
# Create the model structure. (Parameters will be loaded below.)
predictions, end_points = regression_model(inputs, is_training=False)
# Make a session which restores the old parameters from a checkpoint.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
inputs, predictions, targets = sess.run([inputs, predictions, targets])
plt.scatter(inputs, targets, c='r');
plt.scatter(inputs, predictions, c='b');
plt.title('red=true, blue=predicted')
with tf.Graph().as_default():
inputs, targets = convert_data_to_tensors(x_test, y_test)
predictions, end_points = regression_model(inputs, is_training=False)
# Specify metrics to evaluate:
names_to_value_nodes, names_to_update_nodes = slim.metrics.aggregate_metric_map({
'Mean Squared Error': slim.metrics.streaming_mean_squared_error(predictions, targets),
'Mean Absolute Error': slim.metrics.streaming_mean_absolute_error(predictions, targets)
})
# Make a session which restores the old graph parameters, and then run eval.
sv = tf.train.Supervisor(logdir=ckpt_dir)
with sv.managed_session() as sess:
metric_values = slim.evaluation.evaluation(
sess,
num_evals=1, # Single pass over data
eval_op=names_to_update_nodes.values(),
final_op=names_to_value_nodes.values())
names_to_values = dict(zip(names_to_value_nodes.keys(), metric_values))
for key, value in names_to_values.items():
print('%s: %f' % (key, value))
import tensorflow as tf
from datasets import dataset_utils
url = "http://download.tensorflow.org/data/flowers.tar.gz"
flowers_data_dir = '/tmp/flowers'
if not tf.gfile.Exists(flowers_data_dir):
tf.gfile.MakeDirs(flowers_data_dir)
dataset_utils.download_and_uncompress_tarball(url, flowers_data_dir)
from datasets import flowers
import tensorflow as tf
from tensorflow.contrib import slim
with tf.Graph().as_default():
dataset = flowers.get_split('train', flowers_data_dir)
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32, common_queue_min=1)
image, label = data_provider.get(['image', 'label'])
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
for i in range(4):
np_image, np_label = sess.run([image, label])
height, width, _ = np_image.shape
class_name = name = dataset.labels_to_names[np_label]
plt.figure()
plt.imshow(np_image)
plt.title('%s, %d x %d' % (name, height, width))
plt.axis('off')
plt.show()
def my_cnn(images, num_classes, is_training): # is_training is not used...
with slim.arg_scope([slim.max_pool2d], kernel_size=[3, 3], stride=2):
net = slim.conv2d(images, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.conv2d(net, 64, [5, 5])
net = slim.max_pool2d(net)
net = slim.flatten(net)
net = slim.fully_connected(net, 192)
net = slim.fully_connected(net, num_classes, activation_fn=None)
return net
import tensorflow as tf
with tf.Graph().as_default():
# The model can handle any input size because the first layer is convolutional.
# The size of the model is determined when image_node is first passed into the my_cnn function.
# Once the variables are initialized, the size of all the weight matrices is fixed.
# Because of the fully connected layers, this means that all subsequent images must have the same
# input size as the first image.
batch_size, height, width, channels = 3, 28, 28, 3
images = tf.random_uniform([batch_size, height, width, channels], maxval=1)
# Create the model.
num_classes = 10
logits = my_cnn(images, num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
# Initialize all the variables (including parameters) randomly.
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
# Run the init_op, evaluate the model outputs and print the results:
sess.run(init_op)
probabilities = sess.run(probabilities)
print('Probabilities Shape:')
print(probabilities.shape) # batch_size x num_classes
print('\nProbabilities:')
print(probabilities)
print('\nSumming across all classes (Should equal 1):')
print(np.sum(probabilities, 1)) # Each row sums to 1
from preprocessing import inception_preprocessing
import tensorflow as tf
from tensorflow.contrib import slim
def load_batch(dataset, batch_size=32, height=299, width=299, is_training=False):
"""Loads a single batch of data.
Args:
dataset: The dataset to load.
batch_size: The number of images in the batch.
height: The size of each image after preprocessing.
width: The size of each image after preprocessing.
is_training: Whether or not we're currently training or evaluating.
Returns:
images: A Tensor of size [batch_size, height, width, 3], image samples that have been preprocessed.
images_raw: A Tensor of size [batch_size, height, width, 3], image samples that can be used for visualization.
labels: A Tensor of size [batch_size], whose values range between 0 and dataset.num_classes.
"""
data_provider = slim.dataset_data_provider.DatasetDataProvider(
dataset, common_queue_capacity=32,
common_queue_min=8)
image_raw, label = data_provider.get(['image', 'label'])
# Preprocess image for usage by Inception.
image = inception_preprocessing.preprocess_image(image_raw, height, width, is_training=is_training)
# Preprocess the image for display purposes.
image_raw = tf.expand_dims(image_raw, 0)
image_raw = tf.image.resize_images(image_raw, [height, width])
image_raw = tf.squeeze(image_raw)
# Batch it up.
images, images_raw, labels = tf.train.batch(
[image, image_raw, label],
batch_size=batch_size,
num_threads=1,
capacity=2 * batch_size)
return images, images_raw, labels
from datasets import flowers
# This might take a few minutes.
train_dir = '/tmp/tfslim_model/'
print('Will save model to %s' % train_dir)
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
# Create the model:
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.summary.scalar('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
number_of_steps=1, # For speed, we just do 1 epoch
save_summaries_secs=1)
print('Finished training. Final batch loss %d' % final_loss)
from datasets import flowers
# This might take a few minutes.
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.DEBUG)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset)
logits = my_cnn(images, num_classes=dataset.num_classes, is_training=False)
predictions = tf.argmax(logits, 1)
# Define the metrics:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels),
'eval/Recall@5': slim.metrics.streaming_recall_at_k(logits, labels, 5),
})
print('Running evaluation Loop...')
checkpoint_path = tf.train.latest_checkpoint(train_dir)
metric_values = slim.evaluation.evaluate_once(
master='',
checkpoint_path=checkpoint_path,
logdir=train_dir,
eval_op=names_to_updates.values(),
final_op=names_to_values.values())
names_to_values = dict(zip(names_to_values.keys(), metric_values))
for name in names_to_values:
print('%s: %f' % (name, names_to_values[name]))
from datasets import dataset_utils
url = "http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz"
checkpoints_dir = '/tmp/checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
import numpy as np
import os
import tensorflow as tf
try:
import urllib2
except ImportError:
import urllib.request as urllib
from datasets import imagenet
from nets import inception
from preprocessing import inception_preprocessing
from tensorflow.contrib import slim
image_size = inception.inception_v1.default_image_size
with tf.Graph().as_default():
url = 'https://upload.wikimedia.org/wikipedia/commons/7/70/EnglishCockerSpaniel_simon.jpg'
image_string = urllib.urlopen(url).read()
image = tf.image.decode_jpeg(image_string, channels=3)
processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False)
probabilities = tf.nn.softmax(logits)
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
slim.get_model_variables('InceptionV1'))
with tf.Session() as sess:
init_fn(sess)
np_image, probabilities = sess.run([image, probabilities])
probabilities = probabilities[0, 0:]
sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])]
plt.figure()
plt.imshow(np_image.astype(np.uint8))
plt.axis('off')
plt.show()
names = imagenet.create_readable_names_for_imagenet_labels()
for i in range(5):
index = sorted_inds[i]
print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index]))
from datasets import dataset_utils
import tensorflow as tf
url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz"
checkpoints_dir = '/tmp/checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
import numpy as np
import os
import tensorflow as tf
try:
import urllib2
except ImportError:
import urllib.request as urllib
from datasets import imagenet
from nets import vgg
from preprocessing import vgg_preprocessing
from tensorflow.contrib import slim
image_size = vgg.vgg_16.default_image_size
with tf.Graph().as_default():
url = 'https://upload.wikimedia.org/wikipedia/commons/d/d9/First_Student_IC_school_bus_202076.jpg'
image_string = urllib.urlopen(url).read()
image = tf.image.decode_jpeg(image_string, channels=3)
processed_image = vgg_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(vgg.vgg_arg_scope()):
# 1000 classes instead of 1001.
logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False)
probabilities = tf.nn.softmax(logits)
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'vgg_16.ckpt'),
slim.get_model_variables('vgg_16'))
with tf.Session() as sess:
init_fn(sess)
np_image, probabilities = sess.run([image, probabilities])
probabilities = probabilities[0, 0:]
sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])]
plt.figure()
plt.imshow(np_image.astype(np.uint8))
plt.axis('off')
plt.show()
names = imagenet.create_readable_names_for_imagenet_labels()
for i in range(5):
index = sorted_inds[i]
# Shift the index of a class name by one.
print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index+1]))
# Note that this may take several minutes.
import os
from datasets import flowers
from nets import inception
from preprocessing import inception_preprocessing
from tensorflow.contrib import slim
image_size = inception.inception_v1.default_image_size
def get_init_fn():
"""Returns a function run by the chief worker to warm-start the training."""
checkpoint_exclude_scopes=["InceptionV1/Logits", "InceptionV1/AuxLogits"]
exclusions = [scope.strip() for scope in checkpoint_exclude_scopes]
variables_to_restore = []
for var in slim.get_model_variables():
excluded = False
for exclusion in exclusions:
if var.op.name.startswith(exclusion):
excluded = True
break
if not excluded:
variables_to_restore.append(var)
return slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
variables_to_restore)
train_dir = '/tmp/inception_finetuned/'
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, _, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
# Specify the loss function:
one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes)
slim.losses.softmax_cross_entropy(logits, one_hot_labels)
total_loss = slim.losses.get_total_loss()
# Create some summaries to visualize the training process:
tf.summary.scalar('losses/Total Loss', total_loss)
# Specify the optimizer and create the train op:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)
# Run the training:
final_loss = slim.learning.train(
train_op,
logdir=train_dir,
init_fn=get_init_fn(),
number_of_steps=2)
print('Finished training. Last batch loss %f' % final_loss)
import numpy as np
import tensorflow as tf
from datasets import flowers
from nets import inception
from tensorflow.contrib import slim
image_size = inception.inception_v1.default_image_size
batch_size = 3
with tf.Graph().as_default():
tf.logging.set_verbosity(tf.logging.INFO)
dataset = flowers.get_split('train', flowers_data_dir)
images, images_raw, labels = load_batch(dataset, height=image_size, width=image_size)
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(inception.inception_v1_arg_scope()):
logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True)
probabilities = tf.nn.softmax(logits)
checkpoint_path = tf.train.latest_checkpoint(train_dir)
init_fn = slim.assign_from_checkpoint_fn(
checkpoint_path,
slim.get_variables_to_restore())
with tf.Session() as sess:
with slim.queues.QueueRunners(sess):
sess.run(tf.initialize_local_variables())
init_fn(sess)
np_probabilities, np_images_raw, np_labels = sess.run([probabilities, images_raw, labels])
for i in range(batch_size):
image = np_images_raw[i, :, :, :]
true_label = np_labels[i]
predicted_label = np.argmax(np_probabilities[i, :])
predicted_name = dataset.labels_to_names[predicted_label]
true_name = dataset.labels_to_names[true_label]
plt.figure()
plt.imshow(image.astype(np.uint8))
plt.title('Ground Truth: [%s], Prediction [%s]' % (true_name, predicted_name))
plt.axis('off')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Creating your first neural network with TF-Slim
Step3: Let's create the model and examine its structure.
Step4: Let's create some 1d regression data .
Step5: Let's fit the model to the data
Step6: Training with multiple loss functions.
Step7: Let's load the saved model and use it for prediction.
Step8: Let's compute various evaluation metrics on the test set.
Step9: Reading Data with TF-Slim
Step10: Display some of the data.
Step11: Convolutional neural nets (CNNs).
Step12: Apply the model to some randomly generated images.
Step14: Train the model on the Flowers dataset.
Step15: Evaluate some metrics.
Step16: Using pre-trained models
Step17: Apply Pre-trained Inception V1 model to Images.
Step18: Download the VGG-16 checkpoint
Step19: Apply Pre-trained VGG-16 model to Images.
Step21: Fine-tune the model on a different set of labels.
Step22: Apply fine tuned model to some images.
|
1,560
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib as mpl
mpl.use('TkAgg')
import matplotlib.pyplot as plt
import sklearn.datasets as sk
from matplotlib import animation
from matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs.
%matplotlib inline
data_selection = 1 # Choose your data set.
if data_selection==1:
m = 4 # 4 classes for QPSK demodulation
N = m*100 # must be divisible by 4
phase_shift = -np.pi/8 # phase shift
# index to symbol mapper
number2symbol = {
0 : np.array([1.0*np.cos(phase_shift), 1.0*np.sin(phase_shift)]),
1 : np.array([1.0*np.sin(phase_shift), -1.0*np.cos(phase_shift)]),
2 : np.array([-1.0*np.cos(phase_shift), -1.0*np.sin(phase_shift)]),
3 : np.array([-1.0*np.sin(phase_shift), 1.0*np.cos(phase_shift)])
}
training_set = np.array([])
for i in range(0, m): # Assign N/4 constellation points to each QPSK symbol.
constellation_points = np.add(np.random.randn(N//m,2)/5, number2symbol[i])
training_set = np.concatenate((training_set, constellation_points)) \
if training_set.size else constellation_points
elif data_selection==2:
m = 3 # You can change m arbitrarily.
N = 800
random_state = 7
transformation = [[1.0, 0.1], [-0.5, 0.3]]
training_set = np.dot(sk.make_blobs(n_samples=N, centers=m, n_features=2, random_state=random_state)[0],transformation)
elif data_selection==3:
m = 3 # You can change m arbitrarily.
N = 200
random_state = 170
transformation = [[0.60834549, -0.63667341], [-0.40887718, 0.85253229]]
training_set = np.dot(sk.make_blobs(n_samples=N, centers=m, n_features=2, random_state=random_state)[0],transformation)
elif data_selection==4:
m = 6 # You can change m arbitrarily.
N = m*100
training_set = np.random.rand(N,2)
elif data_selection==5:
m = 2
N = 200
training_set = sk.make_circles(n_samples=N, noise=0.05, random_state=None, factor=0.4)[0]
elif data_selection==6:
m = 2
N = 200
training_set = sk.make_moons(n_samples=N, noise=.05)[0]
# Plot data set.
plt.figure(num=None, figsize=(9, 8))
plt.scatter(training_set[:,0], training_set[:,1], marker='o', c='royalblue', s=50, alpha=0.5)
plt.title('Dataset on which we want to apply clustering.', fontsize=14)
plt.tick_params(axis='both', labelsize=14)
plt.show()
max_iter = 16 # Set the maximum number of iterations.
update_empty_centers = True # If True, a center point with no assigned data points gets a new random position.
# Set random init center positions in range of training_data.
def get_rand_centers(num):
centers_x1 = (np.max(training_set[:,0])-np.min(training_set[:,0]))*np.random.rand(num)+np.min(training_set[:,0])
centers_x2 = (np.max(training_set[:,1])-np.min(training_set[:,1]))*np.random.rand(num)+np.min(training_set[:,1])
return np.stack((centers_x1, centers_x2), axis=1)
centers = get_rand_centers(m)
new_centers = np.empty([m,2])
center_history = np.array([centers])
argmin_history = np.empty((0,N))
# Prepare plot.
fig = plt.figure(1, figsize=(16,4*(-(-max_iter//4))))
if m > 6:
cmap=plt.cm.Paired
else:
cmap = mpl.colors.ListedColormap(['royalblue', 'red', 'green', 'm', 'darkorange', 'gray'][:m])
boundaries = np.arange(0,m,1.0)
norm = mpl.colors.BoundaryNorm(np.arange(0,m+1,1), cmap.N)
# Start iteration.
for n in range(0,max_iter):
# Calculate the Euclidean distance from each data point to each center point.
distances = np.sqrt(np.subtract(training_set[:,0,None],np.repeat(np.array([centers[:,0]]), repeats=N, axis=0))**2 + np.subtract(training_set[:,1,None],np.repeat(np.array([centers[:,1]]), repeats=N, axis=0))**2)
# Assignment step. Identify the closest center point to each data point.
argmin = np.argmin(distances, axis=1)
argmin_history = np.append(argmin_history, [argmin], axis=0)
# Plot current center positions and center assignments of the data points.
ax = fig.add_subplot(-(-max_iter//4), 4, n+1)
plt.scatter(training_set[:,0], training_set[:,1], marker='o', s=30, c=argmin, cmap=cmap, norm=norm, alpha=0.5)
plt.scatter(centers[:,0], centers[:,1], marker='X', c=np.arange(0,m,1), cmap=cmap, norm=norm, edgecolors='black', s=200)
plt.tick_params(axis='both', labelsize=12)
plt.title('Iteration %d' % n, fontsize=12)
# Update step.
for i in range(0,m):
new_centers[i] = np.sum(training_set[argmin==i], axis=0) / len(argmin[argmin==i]) if len(argmin[argmin==i])>0 else get_rand_centers(1) if update_empty_centers else centers[i]
# Calc the movement of all center points as a stopping criterion.
center_movement = np.sum(np.sqrt((new_centers[:,0]-centers[:,0])**2 + (new_centers[:,1]-centers[:,1])**2))
if center_movement < 0.0001:
print("Finished early after %d iterations." % n)
break
centers = np.array(new_centers, copy=True)
center_history = np.append(center_history, [centers], axis=0)
fig.subplots_adjust(hspace=0.3)
plt.show()
# Plot trajectory of center positions.
plt.figure(num=None, figsize=(10, 9))
plt.scatter(training_set[:,0], training_set[:,1], marker='o', s=100, c=argmin, cmap=cmap, norm=norm, alpha=0.5)
plt.plot(center_history[:,:,0], center_history[:,:,1],marker='.',color='k',linewidth=2)
plt.scatter(centers[:,0], centers[:,1], marker='X', c=np.arange(0,m,1), cmap=cmap, norm=norm, edgecolors='black', s=400)
plt.tick_params(axis='both', labelsize=14)
plt.savefig('k_means_gauss_trajectory.pdf', bbox_inches='tight')
plt.show()
%matplotlib notebook
%matplotlib notebook
# First set up the figure, the axis, and the plot element we want to animate.
fig = plt.figure(num=None, figsize=(12, 10))
ax = plt.axes()
plt.tick_params(axis='both', labelsize=23)
scat_train = ax.scatter(training_set[:,0], training_set[:,1], marker='o', s=100, c=argmin, cmap=cmap, norm=norm, alpha=0.5)
scat_center = ax.scatter(centers[:,0], centers[:,1], marker='X', c=np.arange(0,m,1), cmap=cmap, norm=norm, edgecolors='black', s=400)
lines = []
for n in range(0,m):
line, = ax.plot([], [], marker='.',color='k',linewidth=2)
lines.append(line)
# Initialization function.
def init():
return scat_center,
# Animation function. This is called sequentially.
def animate(i):
scat_center.set_offsets(center_history[i,:,:])
scat_train.set_array(argmin_history[i])
for n in range(0,m):
lines[n].set_data(center_history[0:i+1,n,0], center_history[0:i+1,n,1])
ax.set_title("Iteration {}".format(i), fontsize=20)
return scat_center,
# Call the animator.
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=len(argmin_history), interval=2000, blit=True)
fig.show()
# If you want to save the animation, uncomment the following line.
#anim.save('k_means.gif', writer=PillowWriter(fps=.5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The k-means algorithm is one of the most popular clustering algorithms and is very simple to implement. Clustering has the goal to categorize a set of
Step2: Now we apply the $k$-means algorithm. Therefore we
Step3: Visualize the trajectory of the center positions.
Step4: Visualize with matplotlib animation.
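The assignment and update steps described above can be sketched compactly in NumPy. This is a minimal illustration on hypothetical data; unlike the full loop above, it assumes every cluster stays non-empty and omits the stopping criterion:

```python
import numpy as np

def kmeans_step(points, centers):
    # Assignment step: index of the nearest center for every point.
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each center moves to the mean of its assigned points
    # (assumes every cluster is non-empty).
    new_centers = np.array([points[labels == k].mean(axis=0)
                            for k in range(len(centers))])
    return labels, new_centers

pts = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
labels, centers = kmeans_step(pts, np.array([[0., 0.], [10., 10.]]))
# labels -> [0, 0, 1, 1]; centers -> [[0, 0.5], [10, 10.5]]
```

Iterating these two steps until the centers stop moving is exactly what the loop above does.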
|
1,561
|
<ASSISTANT_TASK:>
Python Code:
# the function definition
def mandelbrot(cR, cI, maxIterations):
zR = cR
zI = cI
i = 1
# Iterative formula for Mandelbrot set: z => z^2 + c
# Escape point: |z|^2 >= 4. Note: z and c are complex numbers
while (zR*zR + zI*zI < 4.0 and i < maxIterations):
newzR = zR*zR - zI*zI +cR
newzI = 2*zR*zI + cI
zR = newzR
zI = newzI
i += 1
return i
# registers the function mandelbrot as a UDF for Spark
spark.udf.register("mandelbrot", mandelbrot)
spark.sql("""
with
x as (select id, -2.0 + 0.027*cast(id as Float) cR from range(0,95)),
y as (select id, -1.1 + 0.045*cast(id as Float) cI from range(0,50))
select translate(cast(collect_list(substring(' .:::-----++++%%%%@@@@#### ',
mandelbrot(x.cR, y.cI, 27), 1)) as string), ',', '') as Mandelbrot_Set
from y cross join x
group by y.id
order by y.id desc""").show(200, False)
result = spark.sql("""
with
x as (select id, -2.0 + 0.027*cast(id as Float) cR from range(0,95)),
y as (select id, -1.1 + 0.045*cast(id as Float) cI from range(0,50))
select translate(cast(collect_list(color) as String), ',' , '') as value
from y cross join x cross join values
(0, concat('\u001B','[48;5;0m ','\u001B','[0m')), -- Black
(1, concat('\u001B','[48;5;15m ','\u001B','[0m')), -- White
(2, concat('\u001B','[48;5;51m ','\u001B','[0m')), -- Light blue
(3, concat('\u001B','[48;5;45m ','\u001B','[0m')),
(4, concat('\u001B','[48;5;39m ','\u001B','[0m')),
(5, concat('\u001B','[48;5;33m ','\u001B','[0m')),
(6, concat('\u001B','[48;5;27m ','\u001B','[0m')),
(7, concat('\u001B','[48;5;21m ','\u001B','[0m')) -- Dark blue
as palette(id, color)
where cast(substring('012223333344445555666677770', mandelbrot(x.cR, y.cI, 27), 1) as Int) = palette.id
group by y.id
order by y.id desc""").collect()
# print out the result set of the query line by line
for line in result:
print(line[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Compute and display the Mandelbrot set with Spark SQL, using plain old-fashioned ASCII graphics for the output
Step4: Mandelbrot in SQL, displayed with color
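Before registering the function as a UDF, the escape-time iteration can be sanity-checked in plain Python on points whose behavior is known. This is a standalone copy of the same z -> z^2 + c iteration, independent of Spark:

```python
def mandelbrot(cR, cI, maxIterations):
    # Same escape-time iteration as the UDF: z -> z^2 + c,
    # stopping when |z|^2 >= 4 or the iteration cap is reached.
    zR, zI = cR, cI
    i = 1
    while zR * zR + zI * zI < 4.0 and i < maxIterations:
        zR, zI = zR * zR - zI * zI + cR, 2 * zR * zI + cI
        i += 1
    return i

print(mandelbrot(0.0, 0.0, 27))   # 27 -- the origin never escapes
print(mandelbrot(2.0, 0.0, 27))   # 1  -- |z|^2 = 4 immediately, escapes at once
```

Points inside the set return the iteration cap (here 27) and index into the last character of the palette string; points that escape quickly index into the early characters.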
|
1,562
|
<ASSISTANT_TASK:>
Python Code:
def h2stats(h):
import numpy as np
#import ia898.src as ia
hn = 1.0*h/h.sum() # compute the normalized image histogram
v = np.zeros(6) # number of statistics
# compute statistics
n = len(h) # number of gray values
v[0] = np.sum((np.arange(n)*hn)) # mean
v[1] = np.sum(np.power((np.arange(n)-v[0]),2)*hn) # variance
v[2] = np.sum(np.power((np.arange(n)-v[0]),3)*hn)/(np.power(v[1],1.5))# skewness
v[3] = np.sum(np.power((np.arange(n)-v[0]),4)*hn)/(np.power(v[1],2))-3# kurtosis
v[4] = -(hn[hn>0]*np.log(hn[hn>0])).sum() # entropy
v[5] = np.argmax(h) # mode
return v
import numpy as np
f = np.array([1,1,1,0,1,2,2,2,1])
h, bins = np.histogram(f)
print('statistics =', h2stats(h))
print(np.mean(f))
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
%matplotlib inline
f = mpimg.imread('../figures/cookies.pgm')
plt.imshow(f,cmap='gray')
f = mpimg.imread('../figures/fusca.png')
plt.imshow(f,cmap='gray')
from PIL import Image
ang = Image.open('../figures/bc_angulos.png')
ang
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's run a test, computing the histogram using the np.histogram function and then computing the image statistics
Step2: Do the computed values look correct? Let's check the mean, for example, using the np.mean function
Step3: Why are the mean values computed by h2stats (5.7777778) and by np.mean (1.2222222222) different? Most likely the problem is the difference between the ia.histogram and np.histogram functions.
Step4: Exercise 4
Step5: Exercise 5
|
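On the mean discrepancy discussed in the row above: np.histogram defaults to 10 bins over the value range, so the bin index no longer equals the gray value that h2stats assumes. Passing one bin per integer gray value makes the two means agree; a minimal sketch (the bins= choice is an assumed fix, not taken from the original notebook):

```python
import numpy as np

f = np.array([1, 1, 1, 0, 1, 2, 2, 2, 1])

# One bin per integer gray value: edges 0, 1, 2, 3
h, _ = np.histogram(f, bins=np.arange(f.max() + 2))

# Mean computed from the normalized histogram, as h2stats does
hn = h / h.sum()
mean_from_hist = np.sum(np.arange(len(h)) * hn)

print(mean_from_hist)  # now matches np.mean(f)
print(np.mean(f))
```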
1,563
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.arange(1,11)
accmap = np.array([0,1,0,0,0,-1,-1,2,2,1])
add = np.max(accmap)
mask = accmap < 0
accmap[mask] += add+1
result = np.bincount(accmap, weights = a)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
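The row above emulates MATLAB's accumarray with np.bincount; note that shifting the negative labels by max+1 maps -1 onto bin 2, which collides with the existing group 2. A collision-free variant using np.unique (a sketch; the remapping strategy is an assumption about the intended behavior):

```python
import numpy as np

a = np.arange(1, 11)                      # values 1..10
accmap = np.array([0, 1, 0, 0, 0, -1, -1, 2, 2, 1])

# Remap each distinct label (including negatives) to its own bin
labels, idx = np.unique(accmap, return_inverse=True)
sums = np.bincount(idx, weights=a)

print(labels)  # [-1  0  1  2]
print(sums)    # per-group sums, aligned with labels
```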
1,564
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Martin Billinger <martin.billinger@tugraz.at>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import ShuffleSplit, cross_val_score
from mne import Epochs, pick_types, events_from_annotations
from mne.channels import read_layout
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
print(__doc__)
# #############################################################################
# # Set parameters and read data
# avoid classification of evoked responses by using epochs that start 1s after
# cue onset.
tmin, tmax = -1., 4.
event_id = dict(hands=2, feet=3)
subject = 1
runs = [6, 10, 14] # motor imagery: hands vs feet
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
# strip channel names of "." characters
raw.rename_channels(lambda x: x.strip('.'))
# Apply band-pass filter
raw.filter(7., 30., fir_design='firwin', skip_by_annotation='edge')
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
# Read epochs (train will be done only between 1 and 2s)
# Testing will be done with a running classifier
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=None, preload=True)
epochs_train = epochs.copy().crop(tmin=1., tmax=2.)
labels = epochs.events[:, -1] - 2
# Define a monte-carlo cross-validation generator (reduce variance):
scores = []
epochs_data = epochs.get_data()
epochs_data_train = epochs_train.get_data()
cv = ShuffleSplit(10, test_size=0.2, random_state=42)
cv_split = cv.split(epochs_data_train)
# Assemble a classifier
lda = LinearDiscriminantAnalysis()
csp = CSP(n_components=4, reg=None, log=True, norm_trace=False)
# Use scikit-learn Pipeline with cross_val_score function
clf = Pipeline([('CSP', csp), ('LDA', lda)])
scores = cross_val_score(clf, epochs_data_train, labels, cv=cv, n_jobs=1)
# Printing the results
class_balance = np.mean(labels == labels[0])
class_balance = max(class_balance, 1. - class_balance)
print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores),
class_balance))
# plot CSP patterns estimated on full data for visualization
csp.fit_transform(epochs_data, labels)
layout = read_layout('EEG1005')
csp.plot_patterns(epochs.info, layout=layout, ch_type='eeg',
units='Patterns (AU)', size=1.5)
sfreq = raw.info['sfreq']
w_length = int(sfreq * 0.5) # running classifier: window length
w_step = int(sfreq * 0.1) # running classifier: window step size
w_start = np.arange(0, epochs_data.shape[2] - w_length, w_step)
scores_windows = []
for train_idx, test_idx in cv_split:
y_train, y_test = labels[train_idx], labels[test_idx]
X_train = csp.fit_transform(epochs_data_train[train_idx], y_train)
X_test = csp.transform(epochs_data_train[test_idx])
# fit classifier
lda.fit(X_train, y_train)
# running classifier: test classifier on sliding window
score_this_window = []
for n in w_start:
X_test = csp.transform(epochs_data[test_idx][:, :, n:(n + w_length)])
score_this_window.append(lda.score(X_test, y_test))
scores_windows.append(score_this_window)
# Plot scores over time
w_times = (w_start + w_length / 2.) / sfreq + epochs.tmin
plt.figure()
plt.plot(w_times, np.mean(scores_windows, 0), label='Score')
plt.axvline(0, linestyle='--', color='k', label='Onset')
plt.axhline(0.5, linestyle='-', color='k', label='Chance')
plt.xlabel('time (s)')
plt.ylabel('classification accuracy')
plt.title('Classification score over time')
plt.legend(loc='lower right')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Classification with linear discrimant analysis
Step2: Look at performance over time
|
1,565
|
<ASSISTANT_TASK:>
Python Code:
from networkit import *
%matplotlib inline
cd ~/workspace/NetworKit
G = readGraph("input/PGPgiantcompo.graph", Format.METIS)
# 2-2) and 2-3) Decide whether graph is Eulerian or not
# Load/generate 3 graphs of different types
mit8 = readGraph("input/MIT8.edgelist", Format.EdgeListTabZero)
airf1 = readGraph("input/airfoil1.graph", Format.METIS)
gen = generators.ErdosRenyiGenerator(1000, 0.01)
er1000 = gen.generate()
from IPython.core.display import Image
Image('input/airfoil1-10p.png')
# Code for Q&A Session #3
# 3-2) extract largest connected component
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Eulerian Cycles
Step2: Differences between network types
Step3: Some context on these networks is given below, first for MIT8. It stems from a larger collection of Facebook networks from the early days of the online social network. MI8 models
Step4: The third network is a random graph generated according to the Erdös-Renyi $G(n, p)$ model. This model has been analyzed theoretically over the last 50 years or so. As we will see, however, it deviates dramatically from real networks in important aspects.
|
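Step1 in the row above asks whether a graph is Eulerian; by Euler's theorem an undirected graph has an Eulerian cycle iff it is connected (ignoring isolated vertices) and every vertex has even degree. A minimal NetworKit-independent sketch (the adjacency-dict representation and function name are illustrative):

```python
from collections import deque

def is_eulerian(adj):
    # adj: dict mapping vertex -> list of neighbours (undirected)
    # Every vertex with edges must have even degree
    if any(len(nbrs) % 2 for nbrs in adj.values()):
        return False
    # All vertices with edges must lie in one connected component
    nodes = [v for v, nbrs in adj.items() if nbrs]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return all(v in seen for v in nodes)

# A triangle has an Eulerian cycle; a two-edge path does not
print(is_eulerian({0: [1, 2], 1: [0, 2], 2: [0, 1]}))  # True
print(is_eulerian({0: [1], 1: [0, 2], 2: [1]}))        # False
```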
1,566
|
<ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
%watermark -a 'Ethen' -d -t -v -p torch,numpy,matplotlib
# make up some trainig data and specify the type to be float, i.e. np.float32
# We DO not recommend double, i.e. np.float64, especially on the GPU. GPUs have bad
# double precision performance since they are optimized for float32
X_train = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1], dtype = np.float32)
X_train = X_train.reshape(-1, 1)
y_train = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53,
1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3], dtype = np.float32)
y_train = y_train.reshape(-1, 1)
# Convert numpy array to Pytorch Tensors
X = torch.FloatTensor(X_train)
y = torch.FloatTensor(y_train)
# with linear regression, we apply a linear transformation
# to the incoming data, i.e. y = Xw + b, here we only have a 1
# dimensional data, thus the feature size will be 1
model = nn.Linear(in_features=1, out_features=1)
# although we can write our own loss function, the nn module
# also contains definitions of popular loss functions; here
# we use the MSELoss, a.k.a the L2 loss, and size_average parameter
# simply divides it with the number of examples
criterion = nn.MSELoss(size_average=True)
# Then we use the optim module to define an Optimizer that will update the weights of
# the model for us. Here we will use SGD; but it contains many other
# optimization algorithms. The first argument to the SGD constructor tells the
# optimizer the parameters that it should update
learning_rate = 0.01
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
# start the optimization process
n_epochs = 100
for _ in range(n_epochs):
# torch accumulates the gradients, thus before running new things
# use the optimizer object to zero all of the gradients for the
# variables it will update (which are the learnable weights of the model),
# think in terms of refreshing the gradients before doing the another round of update
optimizer.zero_grad()
# forward pass: compute predicted y by passing X to the model
output = model(X)
# compute the loss function
loss = criterion(output, y)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# call the step function on an Optimizer makes an update to its parameters
optimizer.step()
# plot the data and the fitted line to confirm the result
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 14
# convert a torch FloatTensor back to a numpy ndarray
# here, we also call .detach to detach the result from the computation history,
# to prevent future computations on it from being tracked
y_pred = model(X).detach().numpy()
plt.plot(X_train, y_train, 'ro', label='Original data')
plt.plot(X_train, y_pred, label='Fitted line')
plt.legend()
plt.show()
# to get the parameters, i.e. weight and bias from the model,
# we can use the state_dict() attribute from the model that
# we've defined
model.state_dict()
# or we could get it from the model's parameter
# which by itself is a generator
list(model.parameters())
class LinearRegression(nn.Module):
def __init__(self, in_features, out_features):
super().__init__() # boilerplate call
self.in_features = in_features
self.out_features = out_features
self.linear = nn.Linear(in_features, out_features)
def forward(self, x):
out = self.linear(x)
return out
# same optimization process
n_epochs = 100
learning_rate = 0.01
criterion = nn.MSELoss(size_average=True)
model = LinearRegression(in_features=1, out_features=1)
# when we defined our LinearRegression class, we've assigned
# a neural network's component/layer to a class variable in the
# __init__ function, and now notice that we can directly call
# .parameters() on the class we've defined due to some Python magic
# from the Pytorch devs
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(n_epochs):
# forward + backward + optimize
optimizer.zero_grad()
output = model(X)
loss = criterion(output, y)
loss.backward()
optimizer.step()
# print the loss per 20 epoch this time
if (epoch + 1) % 20 == 0:
# starting from pytorch 0.4.0, we use .item to get a python number from a
# torch scalar, before loss.item() looks something like loss.data[0]
print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, n_epochs, loss.item()))
checkpoint_path = 'model.pkl'
torch.save(model.state_dict(), checkpoint_path)
model.load_state_dict(torch.load(checkpoint_path))
y_pred = model(X).detach().numpy()
plt.plot(X_train, y_train, 'ro', label='Original data')
plt.plot(X_train, y_pred, label='Fitted line')
plt.legend()
plt.show()
# define some toy dataset
train_data = [
('me gusta comer en la cafeteria'.split(), 'SPANISH'),
('Give it to me'.split(), 'ENGLISH'),
('No creo que sea una buena idea'.split(), 'SPANISH'),
('No it is not a good idea to get lost at sea'.split(), 'ENGLISH')
]
test_data = [
('Yo creo que si'.split(), 'SPANISH'),
('it is lost on me'.split(), 'ENGLISH')
]
idx_to_label = ['SPANISH', 'ENGLISH']
label_to_idx = {"SPANISH": 0, "ENGLISH": 1}
word_to_idx = {}
for sent, _ in train_data + test_data:
for word in sent:
if word not in word_to_idx:
word_to_idx[word] = len(word_to_idx)
print(word_to_idx)
VOCAB_SIZE = len(word_to_idx)
NUM_LABELS = len(label_to_idx)
class BoWClassifier(nn.Module):
def __init__(self, vocab_size, num_labels):
super().__init__()
self.linear = nn.Linear(vocab_size, num_labels)
def forward(self, bow_vector):
"""
When we're performing a classification, after passing
through the linear layer, also known as the affine layer,
we also need to pass it through the softmax layer to convert a
vector of real numbers into a probability distribution; here we
use log softmax for numerical stability reasons.
"""
return F.log_softmax(self.linear(bow_vector), dim = 1)
def make_bow_vector(sentence, word_to_idx):
vector = torch.zeros(len(word_to_idx))
for word in sentence:
vector[word_to_idx[word]] += 1
return vector.view(1, -1)
def make_target(label, label_to_idx):
return torch.LongTensor([label_to_idx[label]])
model = BoWClassifier(VOCAB_SIZE, NUM_LABELS)
# note that instead of using NLLLoss (negative log likelihood),
# we could have used CrossEntropyLoss and remove the log_softmax
# function call in our forward method. The CrossEntropyLoss docstring
# explicitly states that this criterion combines `LogSoftMax` and
# `NLLLoss` in one single class.
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
n_epochs = 100
for epoch in range(n_epochs):
for instance, label in train_data:
bow_vector = make_bow_vector(instance, word_to_idx)
target = make_target(label, label_to_idx)
# standard step to perform the forward and backward step
model.zero_grad()
log_probs = model(bow_vector)
loss = criterion(log_probs, target)
loss.backward()
optimizer.step()
# we can also wrap the code block in with torch.no_grad(): to
# prevent history tracking, this is often used in model inferencing,
# or when evaluating the model as we won't be needing the gradient during
# this stage
with torch.no_grad():
# predict on the test data to check if the model actually learned anything
for instance, label in test_data:
bow_vec = make_bow_vector(instance, word_to_idx)
log_probs = model(bow_vec)
y_pred = np.argmax(log_probs[0].numpy())
label_pred = idx_to_label[y_pred]
print('true label: ', label, ' predicted label: ', label_pred)
torch.manual_seed(777)
# suppose we have a
# one hot encoding for each char in 'hello'
# and the sequence length for the word 'hello' is 5
seq_len = 5
h = [1, 0, 0, 0]
e = [0, 1, 0, 0]
l = [0, 0, 1, 0]
o = [0, 0, 0, 1]
# here we specify a single RNN cell with the property of
# input_dim (4) -> output_dim (2)
# batch_first explained in the following
rnn_cell = nn.RNN(input_size=4, hidden_size=2, batch_first=True)
# our input shape should be of shape
# (batch, seq_len, input_size) when batch_first=True;
# the input size basically refers to the number of features
# (seq_len, batch_size, input_size) when batch_first=False (default)
# thus we reshape our input to the appropriate size, torch.view is
# equivalent to numpy.reshape
inputs = torch.Tensor([h, e, l, l, o])
inputs = inputs.view(1, 5, -1)
# our hidden is the state that gets passed along the cells;
# here we initialize it with zeros, with shape
# (num_layers * num_directions, batch, hidden_size); note that
# batch_first only affects the input/output tensors, not the hidden state
hidden = torch.zeros(1, 1, 2)
out, hidden = rnn_cell(inputs, hidden)
print('sequence input size', inputs.size())
print('out size', out.size())
print('sequence size', hidden.size())
# the first value returned by the rnn cell is all
# of the hidden state throughout the sequence, while
# the second value is the most recent hidden state;
# hence we can compare the last slice of the the first
# value with the second value to confirm that they are
# the same
print('\ncomparing rnn cell output:')
print(out[:, -1, :])
hidden[0]
# create an index to character mapping
idx2char = ['h', 'i', 'e', 'l', 'o']
# Teach hihell -> ihello
x_data = [[0, 1, 0, 2, 3, 3]] # hihell
x_one_hot = [[[1, 0, 0, 0, 0], # h 0
[0, 1, 0, 0, 0], # i 1
[1, 0, 0, 0, 0], # h 0
[0, 0, 1, 0, 0], # e 2
[0, 0, 0, 1, 0], # l 3
[0, 0, 0, 1, 0]]] # l 3
x_one_hot = np.array(x_one_hot)
y_data = np.array([1, 0, 2, 3, 3, 4]) # ihello
# As we have one batch of samples, we will change them to variables only once
inputs = torch.Tensor(x_one_hot)
labels = torch.LongTensor(y_data)
# hyperparameters
seq_len = 6 # |hihell| == 6, equivalent to time step
input_size = 5 # one-hot size
batch_size = 1 # one sentence per batch
num_layers = 1 # one-layer rnn
num_classes = 5 # predicting 5 distinct character
hidden_size = 4 # output from the RNN
class RNN(nn.Module):
"""The RNN model will be an RNN followed by a linear layer,
i.e. a fully-connected layer."""
def __init__(self, seq_len, num_classes, input_size, hidden_size, num_layers):
super().__init__()
self.seq_len = seq_len
self.num_layers = num_layers
self.input_size = input_size
self.num_classes = num_classes
self.hidden_size = hidden_size
self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
self.linear = nn.Linear(hidden_size, num_classes)
def forward(self, x):
# assuming batch_first = True for RNN cells
batch_size = x.size(0)
hidden = self._init_hidden(batch_size)
x = x.view(batch_size, self.seq_len, self.input_size)
# apart from the output, rnn also gives us the hidden
# cell, this gives us the opportunity to pass it to
# the next cell if needed; we won't be needing it here
# because the nn.RNN already computed all the time steps
# for us. rnn_out will of size [batch_size, seq_len, hidden_size]
rnn_out, _ = self.rnn(x, hidden)
linear_out = self.linear(rnn_out.view(-1, hidden_size))
return linear_out
def _init_hidden(self, batch_size):
"""Initialize the hidden state; note that nn.RNN expects it with
shape (num_layers, batch_size, hidden_size) even when batch_first=True."""
return torch.zeros(self.num_layers, batch_size, self.hidden_size)
# Set loss, optimizer and the RNN model
torch.manual_seed(777)
rnn = RNN(seq_len, num_classes, input_size, hidden_size, num_layers)
print('network architecture:\n', rnn)
# train the model
num_epochs = 15
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(rnn.parameters(), lr=0.1)
for epoch in range(1, num_epochs + 1):
optimizer.zero_grad()
outputs = rnn(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# check the current predicted string
# max gives the maximum value and its
# corresponding index, we will only
# be needing the index
_, idx = outputs.max(dim = 1)
idx = idx.detach().numpy()
result_str = [idx2char[c] for c in idx]
print('epoch: {}, loss: {:1.3f}'.format(epoch, loss.item()))
print('Predicted string: ', ''.join(result_str))
# These will usually be more like 32 or 64 dimensional.
# We will keep them small for this toy example
EMBEDDING_SIZE = 6
HIDDEN_SIZE = 6
training_data = [
("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
idx_to_tag = ['DET', 'NN', 'V']
tag_to_idx = {'DET': 0, 'NN': 1, 'V': 2}
word_to_idx = {}
for sent, tags in training_data:
for word in sent:
if word not in word_to_idx:
word_to_idx[word] = len(word_to_idx)
word_to_idx
def prepare_sequence(seq, to_idx):
"""Convert a sentence/sequence to a torch LongTensor of indices."""
idxs = [to_idx[w] for w in seq]
return torch.LongTensor(idxs)
seq = training_data[0][0]
inputs = prepare_sequence(seq, word_to_idx)
inputs
class LSTMTagger(nn.Module):
def __init__(self, embedding_size, hidden_size, vocab_size, tagset_size):
super().__init__()
self.embedding_size = embedding_size
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.tagset_size = tagset_size
self.embedding = nn.Embedding(vocab_size, embedding_size)
self.lstm = nn.LSTM(embedding_size, hidden_size)
self.hidden2tag = nn.Linear(hidden_size, tagset_size)
def forward(self, x):
embed = self.embedding(x)
hidden = self._init_hidden()
# the second dimension refers to the batch size, which we've hard-coded
# it as 1 throughout the example
lstm_out, lstm_hidden = self.lstm(embed.view(len(x), 1, -1), hidden)
output = self.hidden2tag(lstm_out.view(len(x), -1))
return output
def _init_hidden(self):
# the dimension semantics are [num_layers, batch_size, hidden_size]
return (torch.rand(1, 1, self.hidden_size),
torch.rand(1, 1, self.hidden_size))
model = LSTMTagger(EMBEDDING_SIZE, HIDDEN_SIZE, len(word_to_idx), len(tag_to_idx))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
epochs = 300
for epoch in range(epochs):
for sentence, tags in training_data:
model.zero_grad()
sentence = prepare_sequence(sentence, word_to_idx)
target = prepare_sequence(tags, tag_to_idx)
output = model(sentence)
loss = criterion(output, target)
loss.backward()
optimizer.step()
inputs = prepare_sequence(training_data[0][0], word_to_idx)
tag_scores = model(inputs)
# validating that the sentence "the dog ate the apple".
# the correct tag should be DET NOUN VERB DET NOUN
print('expected target: ', training_data[0][1])
tag_scores = tag_scores.detach().numpy()
tag = [idx_to_tag[idx] for idx in np.argmax(tag_scores, axis = 1)]
print('generated target: ', tag)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pytorch Introduction
Step2: Here we start defining the linear regression model, recall that in linear regression, we are optimizing for the squared loss.
Step3: Linear Regression Version 2
Step4: After training our model, we can also save the model's parameter and load it back into the model in the future
Step5: Logistic Regression
Step6: The next code chunk create words to index mappings. To build our bag of words (BoW) representation, we need to assign each word in our vocabulary an unique index. Let's say our entire corpus only consists of two words "hello" and "world", with "hello" corresponding to index 0 and "world" to index 1. Then the BoW vector for the sentence "hello world hello world" will be [2, 2], i.e. the count for the word "hello" will be at position 0 of the array and so on.
Step8: Next we define our model using the inherenting from nn.Module approach and also two helper functions to convert our data to torch Tensors so we can use to during training.
Step9: We are now ready to train this!
Step10: Recurrent Neural Network (RNN)
Step13: In the next section, we'll teach our RNN to produce "ihello" from "hihell".
Step15: LSTM
|
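Step6 of the row above explains the bag-of-words encoding with the example that "hello world hello world" maps to [2, 2]. A NumPy sketch of just that counting step (the row's torch version differs only in tensor type):

```python
import numpy as np

word_to_idx = {'hello': 0, 'world': 1}

def make_bow_vector(sentence, word_to_idx):
    # one slot per vocabulary word, incremented on every occurrence
    vector = np.zeros(len(word_to_idx))
    for word in sentence:
        vector[word_to_idx[word]] += 1
    return vector.reshape(1, -1)

bow = make_bow_vector('hello world hello world'.split(), word_to_idx)
print(bow)  # [[2. 2.]]
```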
1,567
|
<ASSISTANT_TASK:>
Python Code:
# read the data into a Pandas DataFrame
import pandas as pd
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data'
col_names = ['pregnant', 'glucose', 'bp', 'skin', 'insulin', 'bmi', 'pedigree', 'age', 'label']
pima = pd.read_csv(url, header=None, names=col_names)
# print the first 5 rows of data
pima.head()
# define X and y
feature_cols = ['pregnant', 'insulin', 'bmi', 'age']
X = pima[feature_cols]
y = pima.label
# split X and y into training and testing sets
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# train a logistic regression model on the training set
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
# make class predictions for the testing set
y_pred_class = logreg.predict(X_test)
# calculate accuracy
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class))
# examine the class distribution of the testing set (using a Pandas Series method)
y_test.value_counts()
# calculate the percentage of ones
y_test.mean()
# calculate the percentage of zeros
1 - y_test.mean()
# calculate null accuracy (for binary classification problems coded as 0/1)
max(y_test.mean(), 1 - y_test.mean())
# calculate null accuracy (for multi-class classification problems)
y_test.value_counts().head(1) / len(y_test)
# print the first 25 true and predicted responses
from __future__ import print_function
print('True:', y_test.values[0:25])
print('Pred:', y_pred_class[0:25])
# IMPORTANT: first argument is true values, second argument is predicted values
print(metrics.confusion_matrix(y_test, y_pred_class))
# print the first 25 true and predicted responses
print('True:', y_test.values[0:25])
print('Pred:', y_pred_class[0:25])
# save confusion matrix and slice into four pieces
confusion = metrics.confusion_matrix(y_test, y_pred_class)
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
print((TP + TN) / float(TP + TN + FP + FN))
print(metrics.accuracy_score(y_test, y_pred_class))
print((FP + FN) / float(TP + TN + FP + FN))
print(1 - metrics.accuracy_score(y_test, y_pred_class))
print(TP / float(TP + FN))
print(metrics.recall_score(y_test, y_pred_class))
print(TN / float(TN + FP))
print(FP / float(TN + FP))
print(TP / float(TP + FP))
print(metrics.precision_score(y_test, y_pred_class))
# print the first 10 predicted responses
logreg.predict(X_test)[0:10]
# print the first 10 predicted probabilities of class membership
logreg.predict_proba(X_test)[0:10, :]
# print the first 10 predicted probabilities for class 1
logreg.predict_proba(X_test)[0:10, 1]
# store the predicted probabilities for class 1
y_pred_prob = logreg.predict_proba(X_test)[:, 1]
# allow plots to appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 14
# histogram of predicted probabilities
plt.hist(y_pred_prob, bins=8)
plt.xlim(0, 1)
plt.title('Histogram of predicted probabilities')
plt.xlabel('Predicted probability of diabetes')
plt.ylabel('Frequency')
# predict diabetes if the predicted probability is greater than 0.3
from sklearn.preprocessing import binarize
y_pred_class = binarize([y_pred_prob], 0.3)[0]
# print the first 10 predicted probabilities
y_pred_prob[0:10]
# print the first 10 predicted classes with the lower threshold
y_pred_class[0:10]
# previous confusion matrix (default threshold of 0.5)
print(confusion)
# new confusion matrix (threshold of 0.3)
print(metrics.confusion_matrix(y_test, y_pred_class))
# sensitivity has increased (used to be 0.24)
print(46 / float(46 + 16))
# specificity has decreased (used to be 0.91)
print(80 / float(80 + 50))
# IMPORTANT: first argument is true values, second argument is predicted probabilities
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for diabetes classifier')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
# define a function that accepts a threshold and prints sensitivity and specificity
def evaluate_threshold(threshold):
print('Sensitivity:', tpr[thresholds > threshold][-1])
print('Specificity:', 1 - fpr[thresholds > threshold][-1])
evaluate_threshold(0.5)
evaluate_threshold(0.3)
# IMPORTANT: first argument is true values, second argument is predicted probabilities
print(metrics.roc_auc_score(y_test, y_pred_prob))
# calculate cross-validated AUC
from sklearn.cross_validation import cross_val_score
cross_val_score(logreg, X, y, cv=10, scoring='roc_auc').mean()
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question
Step2: Classification accuracy
Step3: Null accuracy
Step4: Comparing the true and predicted response values
Step5: Conclusion
Step6: Every observation in the testing set is represented in exactly one box
Step7: Metrics computed from a confusion matrix
Step8: Classification Error
Step9: Sensitivity
Step10: Specificity
Step11: False Positive Rate
Step12: Precision
Step13: Many other metrics can be computed
Step14: Decrease the threshold for predicting diabetes in order to increase the sensitivity of the classifier
Step15: Conclusion
Step16: ROC curve can help you to choose a threshold that balances sensitivity and specificity in a way that makes sense for your particular context
Step17: AUC is the percentage of the ROC plot that is underneath the curve
Step18: AUC is useful as a single number summary of classifier performance.
Step19: Confusion matrix advantages
|
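The metrics walked through in the row above can be reproduced from the four confusion-matrix cells alone; a minimal sketch using the same TP/TN/FP/FN naming (the cell counts are illustrative, chosen to be consistent with the 0.24 sensitivity and 0.91 specificity quoted in the row):

```python
# Four cells of a binary confusion matrix
TP, TN, FP, FN = 15, 118, 12, 47

accuracy    = (TP + TN) / (TP + TN + FP + FN)
sensitivity = TP / (TP + FN)          # a.k.a. recall, true positive rate
specificity = TN / (TN + FP)
precision   = TP / (TP + FP)
fpr         = FP / (TN + FP)          # false positive rate = 1 - specificity

print(accuracy, sensitivity, specificity, precision, fpr)
```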
1,568
|
<ASSISTANT_TASK:>
Python Code:
# Import Node and Function module
from nipype import Node, Function
# Create a small example function
def add_two(x_input):
return x_input + 2
# Create Node
addtwo = Node(Function(input_names=["x_input"],
output_names=["val_output"],
function=add_two),
name='add_node')
from nipype import Node, Function
# Create the Function object
def get_random_array(array_shape):
# Import random function
from numpy.random import random
return random(array_shape)
# Create Function Node that executes get_random_array
rndArray = Node(Function(input_names=["array_shape"],
output_names=["random_array"],
function=get_random_array),
name='rndArray_node')
# Specify the array_shape of the random array
rndArray.inputs.array_shape = (3, 3)
# Run node
rndArray.run()
# Print output
print rndArray.result.outputs
from nipype import Node, Function
# Import random function
from numpy.random import random
# Create the Function object
def get_random_array(array_shape):
return random(array_shape)
# Create Function Node that executes get_random_array
rndArray = Node(Function(input_names=["array_shape"],
output_names=["random_array"],
function=get_random_array),
name='rndArray_node')
# Specify the array_shape of the random array
rndArray.inputs.array_shape = (3, 3)
# Run node
rndArray.run()
# Print output
print rndArray.result.outputs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Trap 1
Step2: Now, let's see what happens if we move the import of random outside the scope of get_random_array
|
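The trap demonstrated in the row above exists because nipype's Function interface stores only the function's source text and re-executes it in a fresh namespace, so module-level imports are lost. A nipype-free sketch of that mechanism (run_as_node is an illustrative stand-in, not a nipype API):

```python
# Source text of two candidate node functions, as a Function node would store it
BROKEN_SRC = """
def get_random_array(array_shape):
    return random(array_shape)          # relies on an outer-scope import
"""

SAFE_SRC = """
def get_random_array(array_shape):
    from numpy.random import random     # the import travels with the source
    return random(array_shape)
"""

def run_as_node(src, name, *args):
    # Mimic a Function node: re-execute the stored source in a clean namespace
    ns = {}
    exec(src, ns)
    return ns[name](*args)

shape = run_as_node(SAFE_SRC, 'get_random_array', (2, 2)).shape
print(shape)  # (2, 2)

failed = False
try:
    run_as_node(BROKEN_SRC, 'get_random_array', (2, 2))
except NameError:
    failed = True
print('broken version raised NameError:', failed)
```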
1,569
|
<ASSISTANT_TASK:>
Python Code:
%%file hellompi.py
Parallel Hello World
from mpi4py import MPI
import sys
size = MPI.COMM_WORLD.Get_size()
rank = MPI.COMM_WORLD.Get_rank()
name = MPI.Get_processor_name()
sys.stdout.write(
"Hello, World! I am process %d of %d on %s.\n"
% (rank, size, name))
!mpiexec -n 4 python2.7 hellompi.py
%%file mpipt2pt.py
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
if rank == 0:
data = range(10)
more = range(0,20,2)
print 'rank %i sends data:' % rank, data
comm.send(data, dest=1, tag=1337)
print 'rank %i sends data:' % rank, more
comm.send(more, dest=2 ,tag=1456)
elif rank == 1:
data = comm.recv(source=0, tag=1337)
print 'rank %i got data:' % rank, data
elif rank == 2:
more = comm.recv(source=0, tag=1456)
print 'rank %i got data:' % rank, more
!mpiexec -n 4 python2.7 mpipt2pt.py
%%file mpipt2pt2.py
'''nonblocking communication
'''
from mpi4py import MPI
import numpy as np
import time
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
pair = {0:1, 1:0} # rank 0 sends to 1 and vice versa
sendbuf = np.zeros(5) + rank
recvbuf = np.empty_like(sendbuf)
print 'rank %i sends data:' % rank, sendbuf
sreq = comm.Isend(sendbuf, dest=pair[rank], tag=1337)
rreq = comm.Irecv(recvbuf, source=pair[rank], tag=1337)
# rreq.Wait(); sreq.Wait()
MPI.Request.Waitall([rreq, sreq])
if rank == 1:
time.sleep(0.001) # delay slightly for better printing
print 'rank %i got data:' % rank, recvbuf
!mpiexec -n 2 python2.7 mpipt2pt2.py
%%file mpiscattered.py
'''mpi scatter
'''
from mpi4py import MPI
import numpy as np
import time
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
if rank == 0:
data = np.arange(10)
print 'rank %i has data' % rank, data
data_split_list = np.array_split(data, size)
else:
data_split_list = None
data_split = comm.scatter(data_split_list, root=0)
# some delays for printing purposes
if rank == 1:
time.sleep(0.001)
elif rank == 2:
time.sleep(0.002)
print 'rank %i got data' % rank, data_split
!mpiexec -n 3 python2.7 mpiscattered.py
%%file mpibroadcasted.py
'''mpi broadcast
'''
from mpi4py import MPI
import numpy as np
import time
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
N = 10.
data = np.arange(N) if rank == 0 else np.zeros(N)
if rank == 1:
time.sleep(0.001)
elif rank == 2:
time.sleep(0.002)
print 'rank %i has data' % rank, data
comm.Bcast(data, root=0)
if rank == 1:
time.sleep(0.001)
elif rank == 2:
time.sleep(0.002)
print 'rank %i got data' % rank, data
!mpiexec -n 3 python2.7 mpibroadcasted.py
from mpi4py import MPI
try:
import dill
MPI._p_pickle.dumps = dill.dumps
MPI._p_pickle.loads = dill.loads
except (ImportError, AttributeError):
pass
%%file jobscript.sh
#!/bin/sh
#PBS -l nodes=1:ppn=4
#PBS -l walltime=00:03:00
cd ${PBS_O_WORKDIR} || exit 2
mpiexec -np 4 python hellompi.py
%%file pyinapool.py
def test_pool(obj):
from pyina.launchers import Mpi
x = range(6)
p = Mpi(8)
# worker pool strategy + dill
p.scatter = False
print p.map(obj, x)
# worker pool strategy + dill.source
p.source = True
print p.map(obj, x)
# scatter-gather strategy + dill.source
p.scatter = True
print p.map(obj, x)
# scatter-gather strategy + dill
p.source = False
print p.map(obj, x)
if __name__ == '__main__':
from math import sin
f = lambda x:x+1
def g(x):
return x+2
for func in [g, f, abs, sin]:
test_pool(func)
!python2.7 pyinapool.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MPI and cluster computing
Step2: Executes with mpiexec
Step3: Coding for multiple "personalities" (nodes, actually)
Step4: Collective communication
Step5: Not covered
Step6: Working with cluster schedulers, the JOB file
Step7: Beyond mpi4py
|
1,570
|
<ASSISTANT_TASK:>
Python Code:
import json
series_types = ["Don't Know", "Other nonmetal", "Alkali metal",
"Alkaline earth metal", "Nobel gas", "Metalloid",
"Halogen", "Transition metal", "Post-transition metal",
"Lanthanoid", "Actinoid"]
class Element:
fields = "protons symbol long_name mass series"
repstr = ("Atom(protons={protons}, symbol='{symbol}', "
"long_name='{long_name}', "
"mass={mass}, series='{series}')")
def __init__(self, protons: int, symbol: str,
long_name: str, mass: float, series: str):
# build self.__dict__
self.protons = protons
self.symbol = symbol
self.long_name = long_name
self.__dict__['mass'] = mass # same idea
self.series = series
def __getitem__(self, idx): # simulates collections.namedtuple behavior
return self.__dict__[self.fields.split()[idx]]
def __repr__(self):
return self.repstr.format(**self.__dict__)
Atom = Element # synonyms
lithium = Atom(3, "Li", "Lithium", 6.941, "Alkali metal")
print(lithium) # __str__, then __repr__
print(lithium.__dict__)
print(lithium.protons) # print(lithium.__getattr__('protons'))
import unittest
class Test_Element(unittest.TestCase):
def test_instance(self):
lithium = Atom(3, "Li", "Lithium", 6.941, "Alkali metal")
self.assertEqual(lithium.protons, 3, "Houston, we have a problem")
a = Test_Element() # the test suite
suite = unittest.TestLoader().loadTestsFromModule(a) # fancy boilerplate
unittest.TextTestRunner().run(suite) # run the test suite
class ElementEncoder(json.JSONEncoder):
'''See: https://docs.python.org/3.5/library/json.html'''
def default(self, obj):
if isinstance(obj, Element): # how to encode an Element
return [obj.protons, obj.symbol, obj.long_name, obj.mass, obj.series]
return json.JSONEncoder.default(self, obj) # just do your usual
# Element = namedtuple("Atom", "protons abbrev long_name mass")
def load_elements():
global all_elements # <--- will be visible to entire module
try:
the_file = "periodic_table.json"
f = open(the_file, "r") # <--- open the_file instead
except IOError:
print("Sorry, no such file!")
else:
the_dict = json.load(f)
f.close()
all_elements = {}
for symbol, data in the_dict.items():
all_elements[symbol] = Atom(*data) # "explode" data into 5 inputs
print("File:", the_file, 'loaded.')
load_elements() # actually do it
def print_periodic_table(sortby=1):
'''sort all_elements by number of protons, ordered_elements local only
What about series?
Sort Order:
1. protons
2. symbol
3. series'''
print("Selected:", sortby)
if sortby == 1:
ordered_elements = sorted(all_elements.values(), key = lambda k: k.protons)
elif sortby == 2:
ordered_elements = sorted(all_elements.values(), key = lambda k: k.symbol)
elif sortby == 3:
ordered_elements = sorted(all_elements.values(), key = lambda k: k.series)
print("PERIODIC TABLE OF THE ELEMENTS")
print("-" * 70)
print("Symbol |Long Name |Protons |Mass |Series " )
print("-" * 70)
for the_atom in ordered_elements:
print("{:6} | {:20} | {:6} | {:5.2f} | {:15}".format(the_atom.symbol,
the_atom.long_name,
the_atom.protons,
the_atom.mass,
the_atom.series))
print_periodic_table() # do it for real
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python for STEM Teachers<br/>Oregon Curriculum Network
Step3: <div align="center">graphic by Kenneth Snelson</div>
|
1,571
|
<ASSISTANT_TASK:>
Python Code:
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
from numpy import linalg
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
# seed the random number generator
rstate = np.random.default_rng(819)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
ndim = 3 # number of dimensions
C = np.identity(ndim) # set covariance to identity matrix
Cinv = linalg.inv(C) # precision matrix
lnorm = -0.5 * (np.log(2 * np.pi) * ndim + np.log(linalg.det(C))) # ln(normalization)
# 3-D correlated multivariate normal log-likelihood
def loglikelihood(x):
'''Multivariate normal log-likelihood.'''
return -0.5 * np.dot(x, np.dot(Cinv, x)) + lnorm
# prior transform
def prior_transform(u):
'''Transforms our unit cube samples `u` to a flat prior between -10. and 10. in each variable.'''
return 10. * (2. * u - 1.)
# initialize our nested sampler
dsampler = dynesty.DynamicNestedSampler(loglikelihood, prior_transform, ndim=3,
bound='single', sample='unif', rstate=rstate)
dsampler.run_nested(maxiter=20000, use_stop=False)
dres = dsampler.results
noise = 1.
# 3-D correlated multivariate normal log-likelihood
def loglikelihood2(x):
'''Multivariate normal log-likelihood.'''
xp = rstate.normal(x, noise)
logl = -0.5 * np.dot(xp, np.dot(Cinv, xp)) + lnorm
scale = - 0.5 * noise**2 # location and scale
bias_corr = scale * ndim # ***bias correction term***
return logl - bias_corr
# compute estimator
x = np.zeros(ndim)
logls = np.array([loglikelihood2(x) for i in range(10000)])
print('True log-likelihood:', loglikelihood(x))
print('Estimated:', np.mean(logls), '+/-', np.std(logls))
dsampler2 = dynesty.DynamicNestedSampler(loglikelihood2, prior_transform, ndim=3,
bound='single', sample='unif',
update_interval=50.,
rstate=rstate)
dsampler2.run_nested(maxiter=20000, use_stop=False)
dres2 = dsampler2.results
# plot results
from dynesty import plotting as dyplot
lnz_truth = ndim * -np.log(2 * 10.) # analytic evidence solution
fig, axes = dyplot.runplot(dres, color='blue') # noiseless
fig, axes = dyplot.runplot(dres2, color='red', # noisy
lnz_truth=lnz_truth, truth_color='black',
fig=(fig, axes))
fig.tight_layout()
# initialize figure
fig, axes = plt.subplots(3, 7, figsize=(35, 15))
axes = axes.reshape((3, 7))
[a.set_frame_on(False) for a in axes[:, 3]]
[a.set_xticks([]) for a in axes[:, 3]]
[a.set_yticks([]) for a in axes[:, 3]]
# plot noiseless run (left)
fg, ax = dyplot.cornerplot(dres, color='blue', truths=[0., 0., 0.], truth_color='black',
show_titles=True, max_n_ticks=3, title_kwargs={'y': 1.05},
quantiles=None, fig=(fig, axes[:, :3]))
# plot noisy run (right)
fg, ax = dyplot.cornerplot(dres2, color='red', truths=[0., 0., 0.], truth_color='black',
show_titles=True, title_kwargs={'y': 1.05},
quantiles=None, max_n_ticks=3, fig=(fig, axes[:, 4:]))
# importance reweighting
logl = np.array([loglikelihood(s) for s in dres2.samples])
dres2_rwt = dynesty.utils.reweight_run(dres2, logl)
# initialize figure
fig, axes = plt.subplots(3, 7, figsize=(35, 15))
axes = axes.reshape((3, 7))
[a.set_frame_on(False) for a in axes[:, 3]]
[a.set_xticks([]) for a in axes[:, 3]]
[a.set_yticks([]) for a in axes[:, 3]]
# plot noiseless run (left)
fg, ax = dyplot.cornerplot(dres, color='blue', truths=[0., 0., 0.], truth_color='black',
show_titles=True, max_n_ticks=3, title_kwargs={'y': 1.05},
quantiles=None, fig=(fig, axes[:, :3]))
# plot reweighted noisy run (right)
fg, ax = dyplot.cornerplot(dres2_rwt, color='red', truths=[0., 0., 0.], truth_color='black',
show_titles=True, title_kwargs={'y': 1.05},
quantiles=None, max_n_ticks=3, fig=(fig, axes[:, 4:]))
Nmc = 50
# compute realizations of covariances (noiseless)
covs = []
for i in range(Nmc):
if i % 5 == 0: sys.stderr.write(str(i)+' ')
dres_t = dynesty.utils.resample_run(dres)
x, w = dres_t.samples, np.exp(dres_t.logwt - dres_t.logz[-1])
covs.append(dynesty.utils.mean_and_cov(x, w)[1].flatten())
# noisy case (ignoring reweighting)
covs2 = []
for i in range(Nmc):
if i % 5 == 0: sys.stderr.write(str(i)+' ')
dres2_t = dynesty.utils.resample_run(dres2)
x, w = dres2_t.samples, np.exp(dres2_t.logwt - dres2_t.logz[-1])
covs2.append(dynesty.utils.mean_and_cov(x, w)[1].flatten())
# noisy case (w/ naive reweighting)
covs3 = []
for i in range(Nmc):
if i % 5 == 0: sys.stderr.write(str(i)+' ')
dres2_t = dynesty.utils.resample_run(dres2)
logl_t = np.array([loglikelihood2(s) for s in dres2_t.samples])
dres2_t = dynesty.utils.reweight_run(dres2_t, logp_new=logl_t)
x, w = dres2_t.samples, np.exp(dres2_t.logwt - dres2_t.logz[-1])
covs3.append(dynesty.utils.mean_and_cov(x, w)[1].flatten())
# compute errors
cov_mean, cov_std = np.mean(covs, axis=0), np.std(covs, axis=0)
cov2_mean, cov2_std = np.mean(covs2, axis=0), np.std(covs2, axis=0)
cov3_mean, cov3_std = np.mean(covs3, axis=0), np.std(covs3, axis=0)
# print results
print('Noiseless Likelihood Std:\n', cov_mean[[0, 4, 8]],
'+/-', cov_std[[0, 4, 8]])
print('Noisy Likelihood Std:\n', cov2_mean[[0, 4, 8]],
'+/-', cov2_std[[0, 4, 8]])
print('Noisy Likelihood (Naive Reweight) Std:\n', cov3_mean[[0, 4, 8]],
'+/-', cov3_std[[0, 4, 8]])
# compute sample mean and std(sample mean)
logls = np.array([[loglikelihood2(s) for s in dres2.samples] for i in range(Nmc)])
logls_est = logls.mean(axis=0) # sample mean
logls_bt = []
for i in range(Nmc * 10):
idx = rstate.choice(Nmc, size=Nmc)
logls_bt.append(logls[idx].mean(axis=0)) # bootstrapped mean
logls_std = np.std(logls_bt, axis=0) # bootstrapped std(mean)
# noisy case (w/ mean reweighting)
covs4 = []
for i in range(Nmc):
if i % 5 == 0: sys.stderr.write(str(i)+' ')
dres2_t, idx = dynesty.utils.resample_run(dres2, return_idx=True)
logl_t = rstate.normal(logls_est[idx], logls_std[idx])
dres2_t = dynesty.utils.reweight_run(dres2_t, logp_new=logl_t)
x, w = dres2_t.samples, np.exp(dres2_t.logwt - dres2_t.logz[-1])
covs4.append(dynesty.utils.mean_and_cov(x, w)[1].flatten())
# print results
cov4_mean, cov4_std = np.mean(covs4, axis=0), np.std(covs4, axis=0)
print('Noiseless Likelihood Std:\n', cov_mean[[0, 4, 8]],
'+/-', cov_std[[0, 4, 8]])
print('Noisy Likelihood Std:\n', cov2_mean[[0, 4, 8]],
'+/-', cov2_std[[0, 4, 8]])
print('Noisy Likelihood (Naive Reweight) Std:\n', cov3_mean[[0, 4, 8]],
'+/-', cov3_std[[0, 4, 8]])
print('Noisy Likelihood (Mean+Bootstrap Reweight) Std:\n', cov4_mean[[0, 4, 8]],
'+/-', cov4_std[[0, 4, 8]])
# initialize figure
fig, axes = plt.subplots(3, 7, figsize=(35, 15))
axes = axes.reshape((3, 7))
[a.set_frame_on(False) for a in axes[:, 3]]
[a.set_xticks([]) for a in axes[:, 3]]
[a.set_yticks([]) for a in axes[:, 3]]
# plot noiseless run (left)
fg, ax = dyplot.cornerplot(dres, color='blue', truths=[0., 0., 0.], truth_color='black',
show_titles=True, max_n_ticks=3, title_kwargs={'y': 1.05},
quantiles=None, fig=(fig, axes[:, :3]))
# plot realization of reweighted run (right)
logl_t = rstate.normal(logls_est, logls_std)
dres2_rwt2 = dynesty.utils.reweight_run(dres2, logp_new=logl_t)
fg, ax = dyplot.cornerplot(dres2_rwt2, color='red', truths=[0., 0., 0.], truth_color='black',
show_titles=True, title_kwargs={'y': 1.05},
quantiles=None, max_n_ticks=3, fig=(fig, axes[:, 4:]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Noisy Likelihoods
Step4: We'll again define our prior (via prior_transform) to be uniform in each dimension from -10 to 10 and 0 everywhere else.
Step5: Noiseless Case
Step7: Noisy Case
Step8: Note the additional bias correction term we have now included in the log-likelihood. This ensures that our noisy likelihood is unbiased relative to the true likelihood.
Step9: Let's now sample from our noisy distribution.
Step10: As expected, sampling is substantially more inefficient in the noisy case since more likelihood calls are required to get a noisy realization that is "better" than the previous noisy realization.
Step11: This effect also propagates through to our posteriors, broadening them relative to the underlying distribution.
Step12: Importance Reweighting
Step13: Full Analysis
Step14: While including the noise from our intrinsic likelihoods appears to substantially increase our error budget, it didn't actually shift our mean prediction closer to the truth. What gives? The issue is that we aren't accounting for the fact that we are able to get an estimate of the true (expected) log-likelihood from our many repeated realizations (via the mean). We can estimate this and our possible uncertainties around the mean using bootstrapping.
Step15: We see that after reweighting using our mean likelihoods (with bootstrapped errors) now properly shifts the mean while leaving us with uncertainties that are slightly larger than the noiseless case. This is what we'd expect given that we only have a noisy estimate of the true log-likelihood at a given position.
|
1,572
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-2', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
1,573
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
np.random.seed(123)
birds = np.random.choice(['African Swallow', 'Dead Parrot', 'Exploding Penguin'], size=int(5e4))
someTuple = np.unique(birds, return_counts=True)
def g(someTuple):
return pd.DataFrame(np.column_stack(someTuple),columns=['birdType','birdCount'])
result = g(someTuple)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,574
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
from sklearn import linear_model
x = np.array([[0, 0], [1, 1], [2, 2]])
y = np.array([0, 1, 2])
print(x,y)
clf = linear_model.LinearRegression()
clf.fit(x, y)
print(clf.coef_)
x_missing = np.array([[0, 0], [1, np.nan], [2, 2]])
print(x_missing, y)
clf = linear_model.LinearRegression()
clf.fit(x_missing, y)
print(clf.coef_)
import pandas as pd
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
[4,1,7,9,0,2,np.nan]], ).T
x.columns = ['A', 'B', 'C', 'D', 'E']
y = pd.Series([29.0,
31.2,
63.25,
57.27,
66.3,
26.21,
48.24])
print(x, y)
x.dropna()
x.fillna(value={'A':1000,'B':2000,'C':3000,'D':4000,'E':5000})
x.fillna(value=x.mean())
x_filled = x.fillna(value=x.mean())
print(x_filled)
x_norm = (x_filled - x_filled.min()) / (x_filled.max() - x_filled.min())
print(x_norm)
from sklearn import preprocessing
scaling = preprocessing.MinMaxScaler().fit(x_filled)
scaling.transform(x_filled)
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
['Green','Red','Blue','Blue','Green','Red','Green']], ).T
x.columns = ['A', 'B', 'C', 'D', 'E']
print(x)
x_cat = x.copy()
for val in x['E'].unique():
x_cat['E_{0}'.format(val)] = x_cat['E'] == val
x_cat
# http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#example-color-exposure-plot-equalize-py
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from skimage import data, img_as_float
from skimage import exposure
matplotlib.rcParams['font.size'] = 8
def plot_img_and_hist(img, axes, bins=256):
    """Plot an image along with its histogram and cumulative histogram."""
img = img_as_float(img)
ax_img, ax_hist = axes
ax_cdf = ax_hist.twinx()
# Display image
ax_img.imshow(img, cmap=plt.cm.gray)
ax_img.set_axis_off()
    ax_img.set_adjustable('box')  # 'box-forced' was removed in newer matplotlib
# Display histogram
ax_hist.hist(img.ravel(), bins=bins, histtype='step', color='black')
ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0))
ax_hist.set_xlabel('Pixel intensity')
ax_hist.set_xlim(0, 1)
ax_hist.set_yticks([])
# Display cumulative distribution
img_cdf, bins = exposure.cumulative_distribution(img, bins)
ax_cdf.plot(bins, img_cdf, 'r')
ax_cdf.set_yticks([])
return ax_img, ax_hist, ax_cdf
# Load an example image
img = data.moon()
# Contrast stretching
p2, p98 = np.percentile(img, (2, 98))
img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
# Equalization
img_eq = exposure.equalize_hist(img)
# Adaptive Equalization
img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
# Display results
fig = plt.figure(figsize=(8, 5))
axes = np.zeros((2,4), dtype=object)  # np.object was removed in newer numpy
axes[0,0] = fig.add_subplot(2, 4, 1)
for i in range(1,4):
axes[0,i] = fig.add_subplot(2, 4, 1+i, sharex=axes[0,0], sharey=axes[0,0])
for i in range(0,4):
axes[1,i] = fig.add_subplot(2, 4, 5+i)
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0])
ax_img.set_title('Low contrast image')
y_min, y_max = ax_hist.get_ylim()
ax_hist.set_ylabel('Number of pixels')
ax_hist.set_yticks(np.linspace(0, y_max, 5))
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1])
ax_img.set_title('Contrast stretching')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2])
ax_img.set_title('Histogram equalization')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_adapteq, axes[:, 3])
ax_img.set_title('Adaptive equalization')
ax_cdf.set_ylabel('Fraction of total intensity')
ax_cdf.set_yticks(np.linspace(0, 1, 5))
# prevent overlap of y-axis labels
fig.tight_layout()
plt.show()
from sklearn.feature_extraction import image
img = data.page()
fig, ax = plt.subplots(1,1)
ax.imshow(img, cmap=plt.cm.gray)
ax.set_axis_off()
plt.show()
print(img.shape)
patches = image.extract_patches_2d(img, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
from sklearn import datasets
digits = datasets.load_digits()
#print(digits.DESCR)
fig, ax = plt.subplots(1,1, figsize=(1,1))
ax.imshow(digits.data[0].reshape((8,8)), cmap=plt.cm.gray, interpolation='nearest')
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train',
categories=['comp.graphics', 'sci.med'], shuffle=True, random_state=0)
print(twenty_train.target_names)
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(X_train_counts.shape)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
print(X_train_tfidf.shape, X_train_tfidf[:5,:15].toarray())
print(twenty_train.data[0])
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data[0:1])
print(X_train_counts[0].toarray())
print(count_vect.vocabulary_.keys())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tabular data
Step2: Normalization
Step3: Categorical data
Step5: Exercises
Step6: Text
|
1,575
|
<ASSISTANT_TASK:>
Python Code:
import pint
from math import exp, sqrt
u = pint.UnitRegistry()
Q_ = u.Quantity
T = Q_(300, u.degC)
print('T = {}'.format(T))
T.ito('degK')
print('T = {}'.format(T))
T = T.magnitude * u.kelvin
print(T)
Qn = 37600 * u.J/u.mol
R = 8.31 * u.J/(u.mol*u.kelvin)
PN1 = 0.10
PN2 = 5.0
CN1 = (4.9e-3)*sqrt(PN1)*exp(-Qn/(R*T))
print(CN1)
CN2 = (4.9e-3)*sqrt(PN2)*exp(-Qn/(R*T))
print(CN2)
p=7.874*u.g/u.cm**3
p.ito(u.kg/u.m**3)
mFe = 1*u.kg
vFe = mFe/p
mN1 = mFe*CN1*0.01
CN1 = mN1/vFe
print(CN1)
mN2 = mFe*CN2*0.01
CN2 = mN2/vFe
print(CN2)
dC = CN2-CN1
dx = 1.5 *u.mm
dx.ito(u.m)
D0 = 5e-7 * u.m**2/u.s
Qd = 77000 * u.J/u.mol
D = D0 * exp(-Qd/(R*T))
print(D)
J = -D*(dC/dx)
J
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start with the temperature, T = 300 °C.
Step2: Next we'll create variables for $Q_n = 37,600 \frac{J}{mol}$ and the universal gas constant $R=8.31 \frac{J}{mol-K}$
Step3: Our first nitrogen pressure is 0.10 MPa and our second is 5.0 MPa; we'll make variables for both
Step4: Now we can calculate the two nitrogen concentrations in wt% using the equation
Step5: These values CN1 and CN2 are in units of wt% N in an iron-nitrogen "alloy" where almost all of the alloy is iron with only a small amount of nitrogen. To use Fick's First Law of Diffusion
Step6: Now we'll determine how many kg of nitrogen there are in 1 kg of iron given our concentrations CN1 and CN2 in wt%. Note that we have to multiply CN1 and CN2 by 0.01 because CN1 and CN2 are in units of %.
Step7: Back to Fick's Fist Law of Diffusion
Step8: Next we need to find the diffusion coefficient $D$. To do this, we need the pre-exponential term $D_0$ and the activating envery of diffusion $Q_d$.
Step9: To calculate diffusion constant $D$, we use the equation which relates diffusion coefficient, $D$ to temperature, $T$ according to
Step10: Now that we have $D$, $dC$ and $dx$, we can finally calculate diffusion flux, $J$ through the plate using Fick's First Law of Diffusion
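The relations used above can be written out explicitly (reconstructed from the variables in the code; the $4.9\times10^{-3}$ prefactor is simply the value used there):

```latex
C_N = 4.9\times10^{-3}\,\sqrt{P_{N_2}}\,\exp\!\left(-\frac{Q_n}{RT}\right),
\qquad
D = D_0\,\exp\!\left(-\frac{Q_d}{RT}\right),
\qquad
J = -D\,\frac{\Delta C}{\Delta x}
```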
|
1,576
|
<ASSISTANT_TASK:>
Python Code:
import larch, numpy, pandas, os
from larch import P, X
larch.__version__
hh, pp, tour, skims = larch.example(200, ['hh', 'pp', 'tour', 'skims'])
exampville_mode_choice_file = larch.example(201, output_file='exampville_mode_choice.html')
m = larch.read_metadata(exampville_mode_choice_file)
raw = tour.merge(hh, on='HHID').merge(pp, on=('HHID', 'PERSONID'))
raw["HOMETAZi"] = raw["HOMETAZ"] - 1
raw["DTAZi"] = raw["DTAZ"] - 1
raw = raw[raw.TOURPURP == 1]
raw = raw.join(
skims.get_rc_dataframe(
raw.HOMETAZi, raw.DTAZi,
)
)
# For clarity, we can define numbers as names for modes
DA = 1
SR = 2
Walk = 3
Bike = 4
Transit = 5
dfs = larch.DataFrames(
co=raw,
alt_codes=[DA,SR,Walk,Bike,Transit],
alt_names=['DA','SR','Walk','Bike','Transit'],
ch_name='TOURMODE',
)
m.dataservice = dfs
logsums = pandas.DataFrame(
data=0.0,
index=raw.index,
columns=skims.TAZ_ID
)
for destination_i, dtaz in enumerate(logsums.columns):
# Get the LOS data for this destination
new_data = skims.get_rc_dataframe(
raw.HOMETAZi, destination_i,
)
# Write this data into the model's dataservice.
dfs.data_co[new_data.columns] = new_data
# Loading this data runs the pre-processing on
# the dataservice, to create the arrays needed
# for computation.
m.load_data()
# Lastly, compute the logsums and save them
# to the new DataFrame.
logsums[dtaz] = m.logsums()
logsums.head()
logsums.to_pickle('logsums.pkl.gz')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this example notebook, we will walk through the creation of logsums from
Step2: We'll also load the saved model from the mode choice estimation.
Step3: We'll replicate the pre-processing used in the mode choice estimation,
Step4: Then we bundle the raw data into the larch.DataFrames structure,
Step5: We'll also initialize a DataFrame to hold the computed logsums.
Step6: The logsums from a Model can be computed using the Model.logsums method.
Step7: Then we can persist the logsums dataframe to disk, for use in the next
|
1,577
|
<ASSISTANT_TASK:>
Python Code:
# Import packages
import neurokit as nk
import pandas as pd
import numpy as np
import matplotlib
import seaborn as sns
# Plotting preferences
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = [14.0, 10.0] # Bigger figures
sns.set_style("whitegrid") # White background
sns.set_palette(sns.color_palette("colorblind")) # Better colours
# Download resting-state data
df = pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit.py/master/examples/Bio/data/bio_rest.csv", index_col=0)
# Plot it
df.plot()
# We want to find events on the Photosensor channel, when it goes down (hence, cut is set to lower).
events = nk.find_events(df["Photosensor"], cut="lower")
print(events)
df = nk.create_epochs(df, events["onsets"], duration=events["durations"], onset=0)
df = df[0] # Select the first (0th) element of that list.
bio = nk.bio_process(ecg=df["ECG"], rsp=df["RSP"], eda=df["EDA"], add=df["Photosensor"])
# Plot the processed dataframe
bio["df"].plot()
bio["ECG"]["HRV"]
bio["ECG"]["Heart_Beats"]
pd.DataFrame(bio["ECG"]["Heart_Beats"]).T.plot(legend=False) # Plot all the heart beats
# Print all the HRV indices
bio["ECG_Features"]["ECG_HRV"]
condition_list = ["Negative", "Negative", "Neutral", "Neutral", "Neutral", "Negative", "Negative", "Neutral"]
events = nk.find_events(df["Photosensor"], treshold = 3, cut="lower")
events
epochs = nk.create_epochs(bio["Bio"], events["onsets"], duration=5000, onset=0)
evoked = {} # Initialize an empty dict
for epoch in epochs:
evoked[epoch] = {} # Initialize an empty dict for the current epoch
evoked[epoch]["Heart_Rate"] = epochs[epoch]["Heart_Rate"].mean() # Heart Rate mean
evoked[epoch]["RSP_Rate"] = epochs[epoch]["RSP_Rate"].mean() # Respiration Rate mean
evoked[epoch]["EDA_Filtered"] = epochs[epoch]["EDA_Filtered"].mean() # EDA mean
evoked[epoch]["EDA_Max"] = max(epochs[epoch]["EDA_Filtered"]) # Max EDA value
# SRC_Peaks are scored np.nan (NaN values) in the absence of peak. We want to change it to 0
if np.isnan(epochs[epoch]["SCR_Peaks"].mean()):
evoked[epoch]["SCR_Peaks"] = 0
else:
evoked[epoch]["SCR_Peaks"] = epochs[epoch]["SCR_Peaks"].mean()
evoked = pd.DataFrame.from_dict(evoked, orient="index") # Convert to a dataframe
evoked["Condition"] = condition_list # Add the conditions
evoked # Print
sns.boxplot(x="Condition", y="Heart_Rate", data=evoked)
sns.boxplot(x="Condition", y="RSP_Rate", data=evoked)
sns.boxplot(x="Condition", y="EDA_Filtered", data=evoked)
sns.boxplot(x="Condition", y="EDA_Max", data=evoked)
sns.boxplot(x="Condition", y="SCR_Peaks", data=evoked)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Block Paradigms
Step2: df contains about 5 minutes of data recorded at 1000Hz. There are 4 channels, EDA, ECG, RSP and the Photosensor used to localize events. In the present case, there is only one event, one sequence of 5 min during which the participant was instructed to do nothing.
Step3: find_events returns a dict containing onsets and durations of each event. Here, it correctly detected only one event. Then, we're gonna crop our data according to that event. The create_epochs function returns a list containing epochs of data corresponding to each event. As we have only one event, we're gonna select the 0th element of that list.
Step4: Processing
Step5: Bio Features Extraction
Step6: Respiratory Sinus Arrythmia (RSA)
Step7: Heart Rate Variability (HRV)
Step8: Event-Related Analysis
Step9: Find Events
Step10: Create Epochs
Step11: Create Evoked-Data
Step12: Plot Results
|
1,578
|
<ASSISTANT_TASK:>
Python Code:
import re
import nltk
import pandas as pd
from nltk.stem.snowball import SnowballStemmer

df = pd.read_csv('../data/wiki/wiki.csv.gz', encoding='utf8', index_col=None)
df['text'] = df.text.str[:3000]

# The two tokenizers used below were missing from the snippet; these
# definitions follow the usual NLTK snowball-stemmer pattern.
stemmer = SnowballStemmer("english")

def tokenize_and_stem(text):
    # tokenize by sentence then by word, keep tokens containing letters, and stem
    tokens = [w for s in nltk.sent_tokenize(text) for w in nltk.word_tokenize(s)]
    return [stemmer.stem(t) for t in tokens if re.search('[a-zA-Z]', t)]

def tokenize_only(text):
    tokens = [w.lower() for s in nltk.sent_tokenize(text) for w in nltk.word_tokenize(s)]
    return [t for t in tokens if re.search('[a-zA-Z]', t)]

totalvocab_stemmed = []
totalvocab_tokenized = []
for doc_text in df.text:
    allwords_stemmed = tokenize_and_stem(doc_text)  # tokenize and stem each document's text
totalvocab_stemmed.extend(allwords_stemmed) #extend the 'totalvocab_stemmed' list
allwords_tokenized = tokenize_only(doc_text)
totalvocab_tokenized.extend(allwords_tokenized)
vocab_frame = pd.DataFrame({'words': totalvocab_tokenized}, index = totalvocab_stemmed)
vocab_frame.head(10)
len(vocab_frame)
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_df=12, min_df=3,
stop_words='english',
tokenizer=tokenize_and_stem, ngram_range=(1,3))
%time tfidf_matrix = tfidf_vectorizer.fit_transform(df.text)
print(tfidf_matrix.shape)
terms = tfidf_vectorizer.get_feature_names()
len(terms)
len(vocab_frame)
terms = tfidf_vectorizer.get_feature_names()
idx = 2742
terms[idx]
terms[800]
vocab_frame.loc[terms[idx]].head(5)  # .ix was removed from pandas; use .loc
from sklearn.cluster import KMeans
num_clusters = 10
km = KMeans(n_clusters=num_clusters)
%time km.fit(tfidf_matrix)
clusters = km.labels_.tolist()
len(clusters)
clustered = df.join(pd.DataFrame({'cluster': clusters}))
clustered.head()
len(km.cluster_centers_[1])
order_centroids = km.cluster_centers_.argsort()
order_centroids
term_words = vocab_frame.loc[terms[idx]]
term_words['words'].value_counts().keys()[0]
cluster_names = []
for cluster_centeroids in order_centroids:
words = []
for ind in cluster_centeroids[-6:]:
        term_words = vocab_frame.loc[terms[ind].split(' ')]
best_word = term_words['words'].value_counts().keys()[0]
words.append(best_word)
cluster_names.append(', '.join(words))
cluster_names
clustered['cluster_name'] =clustered.cluster.map(lambda cluster: cluster_names[cluster])
clustered.head(10)
from sklearn.metrics.pairwise import cosine_similarity
dist = 1 - cosine_similarity(tfidf_matrix)
import os # for os.path.basename
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn.manifold import MDS
MDS()
# two components as we're plotting points in a two-dimensional plane
# "precomputed" because we provide a distance matrix
# we will also specify `random_state` so the plot is reproducible.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1)
pos = mds.fit_transform(dist) # shape (n_components, n_samples)
xs, ys = pos[:, 0], pos[:, 1]
import seaborn as sns
#set up one color per cluster
cluster_colors = sns.color_palette(n_colors=num_clusters)
%matplotlib inline
#create data frame that has the result of the MDS plus the cluster numbers and titles
df_print = pd.DataFrame(dict(x=xs, y=ys, label=clusters, title=df.name))
#group by cluster
groups = df_print.groupby('label')
# set up plot
fig, ax = plt.subplots(figsize=(30, 30)) # set size
ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling
#iterate through groups to layer the plot
#note that I use the cluster_name and cluster_color dicts with the 'name' lookup to return the appropriate color/label
for name, group in groups:
ax.plot(group.x, group.y, marker='o', linestyle='', ms=12, label=cluster_names[name], color=cluster_colors[name], mec='none')
ax.set_aspect('auto')
ax.tick_params(\
axis= 'x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
labelbottom='off')
ax.tick_params(\
axis= 'y', # changes apply to the y-axis
which='both', # both major and minor ticks are affected
left='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
labelleft='off')
ax.legend(numpoints=1) #show legend with only 1 point
#add label in x,y position with the label as the film title
for i in range(len(df)):
    ax.text(df_print.loc[i, 'x'], df_print.loc[i, 'y'], df_print.loc[i, 'title'], size=8)
#plt.show() #show the plot
#uncomment the below to save the plot if need be
plt.savefig('clusters_small_noaxes.png', dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tf-idf and document similarity
Step2: Lets cluster!
Step3: But what did we get?
|
1,579
|
<ASSISTANT_TASK:>
Python Code:
import nltk
nltk.download("movie_reviews")
from nltk.corpus import movie_reviews
len(movie_reviews.fileids())
movie_reviews.fileids()[:5]
movie_reviews.fileids()[-5:]
negative_fileids = movie_reviews.fileids('neg')
positive_fileids = movie_reviews.fileids('pos')
len(negative_fileids), len(positive_fileids)
print(movie_reviews.raw(fileids=positive_fileids[0]))
romeo_text = """Why then, O brawling love! O loving hate!
O any thing, of nothing first create!
O heavy lightness, serious vanity,
Misshapen chaos of well-seeming forms,
Feather of lead, bright smoke, cold fire, sick health,
Still-waking sleep, that is not what it is!
This love feel I, that feel no love in this."""
romeo_text.split()
nltk.download("punkt")
romeo_words = nltk.word_tokenize(romeo_text)
romeo_words
movie_reviews.words(fileids=positive_fileids[0])
{word:True for word in romeo_words}
type(_)
def build_bag_of_words_features(words):
return {word:True for word in words}
build_bag_of_words_features(romeo_words)
nltk.download("stopwords")
import string
string.punctuation
useless_words = nltk.corpus.stopwords.words("english") + list(string.punctuation)
#useless_words
#type(useless_words)
def build_bag_of_words_features_filtered(words):
return {
word:1 for word in words \
if not word in useless_words}
build_bag_of_words_features_filtered(romeo_words)
all_words = movie_reviews.words()
len(all_words)/1e6
filtered_words = [word for word in movie_reviews.words() if not word in useless_words]
type(filtered_words)
len(filtered_words)/1e6
from collections import Counter
word_counter = Counter(filtered_words)
most_common_words = word_counter.most_common()[:10]
most_common_words
%matplotlib inline
import matplotlib.pyplot as plt
sorted_word_counts = sorted(list(word_counter.values()), reverse=True)
plt.loglog(sorted_word_counts)
plt.ylabel("Freq")
plt.xlabel("Word Rank");
plt.hist(sorted_word_counts, bins=50);
plt.hist(sorted_word_counts, bins=50, log=True);
negative_features = [
(build_bag_of_words_features_filtered(movie_reviews.words(fileids=[f])), 'neg') \
for f in negative_fileids
]
print(negative_features[3])
positive_features = [
(build_bag_of_words_features_filtered(movie_reviews.words(fileids=[f])), 'pos') \
for f in positive_fileids
]
print(positive_features[6])
from nltk.classify import NaiveBayesClassifier
split = 800
sentiment_classifier = NaiveBayesClassifier.train(positive_features[:split]+negative_features[:split])
nltk.classify.util.accuracy(sentiment_classifier, positive_features[:split]+negative_features[:split])*100
nltk.classify.util.accuracy(sentiment_classifier, positive_features[split:]+negative_features[split:])*100
sentiment_classifier.show_most_informative_features()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can also list and download other datasets interactively just typing
Step2: The fileids method provided by all the datasets in nltk.corpus gives access to a list of all the files available.
Step3: fileids can also filter the available files based on their category, which is the name of the subfolders they are located in. Therefore we can have lists of positive and negative reviews separately.
Step4: We can inspect one of the reviews using the raw method of movie_reviews, each file is split into sentences, the curators of this dataset also removed from each review from any direct mention of the rating of the movie.
Step6: Tokenize Text in Words
Step7: The first step in Natural Language processing is generally to split the text into words, this process might appear simple but it is very tedious to handle all corner cases, see for example all the issues with punctuation we have to solve if we just start with a split on whitespace
Step8: nltk has a sophisticated word tokenizer trained on English named punkt, we first have to download its parameters
Step9: Then we can use the word_tokenize function to properly tokenize this text, compare to the whitespace splitting we used above
Step10: Good news is that the movie_reviews corpus already has direct access to tokenized text with the words method
Step11: Build a bag-of-words model
Step12: This is what we wanted, but we notice that also punctuation like "!" and words useless for classification purposes like "of" or "that" are also included.
Step13: Using the Python string.punctuation list and the English stopwords we can build better features by filtering out those words that would not help in the classification
Step14: Plotting Frequencies of Words
Step15: First we want to filter out useless_words as defined in the previous section, this will reduce the length of the dataset by more than a factor of 2
Step16: The collection package of the standard library contains a Counter class that is handy for counting frequencies of words in our list
Step17: It also has a most_common() method to access the words with the highest counts
Step18: Then we would like to have a visualization of this using matplotlib.
Step19: We can sort the word counts and plot their values on Logarithmic axes to check the shape of the distribution. This visualization is particularly useful if comparing 2 or more datasets, a flatter distribution indicates a large vocabulary while a peaked distribution a restricted vocabulary often due to a focused topic or specialized language.
Step20: Another related plot is the histogram of sorted_word_counts, which displays how many words have a count in a specific range.
Step21: Train a Classifier for Sentiment Analysis
Step22: One of the simplest supervised machine learning classifiers is the Naive Bayes Classifier, it can be trained on 80% of the data to learn what words are generally associated with positive or with negative reviews.
Step23: We can check after training what is the accuracy on the training set, i.e. the same data used for training, we expect this to be a very high number because the algorithm already "saw" those data. Accuracy is the fraction of the data that is classified correctly, we can turn it into percent
Step24: The accuracy above is mostly a check that nothing went very wrong in the training; the real measure of accuracy is on the remaining 20% of the data that wasn't used in training, the test data
Step25: Accuracy here is around 70% which is pretty good for such a simple model if we consider that the estimated accuracy for a person is about 80%.
|
1,580
|
<ASSISTANT_TASK:>
Python Code:
!qiime tools import \
--input-path ../data/deblur-feature-table.biom \
--type 'FeatureTable[Frequency]' \
--source-format BIOMV210Format \
--output-path ../data/dblr_haddad.qza
!qiime dsfdr permutation-fdr \
--i-table ../data/dblr_haddad.qza \
--m-metadata-file ../data/metadata_rare2k.txt \
--m-metadata-column 'exposure_type' \
--o-reject haddad.dsfdr --verbose
!qiime tools export haddad.dsfdr.qza --output-dir haddad.dsfdr.results
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Select the category of interest to compare using DS-FDR
Step2: Output the list of differentially abundant taxa (True indicates statistical significance)
|
1,581
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy
from numpy.random import choice
from sklearn.datasets import load_boston
from h2o.estimators.random_forest import H2ORandomForestEstimator
import h2o
h2o.init()
# transfer the boston data from pandas to H2O
boston_data = load_boston()
X = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names)
X["Median_value"] = boston_data.target
X = h2o.H2OFrame(python_obj=X.to_dict("list"))
# select 10% for valdation
r = X.runif(seed=123456789)
train = X[r < 0.9,:]
valid = X[r >= 0.9,:]
h2o.export_file(train, "Boston_housing_train.csv", force=True)
h2o.export_file(valid, "Boston_housing_test.csv", force=True)
%matplotlib inline
import matplotlib.pyplot as plt
fr = h2o.import_file("Boston_housing_train.csv")
fr.head()
fr.tail()
fr["CRIM"].head() # Tab completes
columns = ["CRIM", "RM", "RAD"]
fr[columns].head()
fr[2:7,:] # explicitly select all columns with :
# The columns attribute is exactly like Pandas
print "Columns:", fr.columns, "\n"
print "Columns:", fr.names, "\n"
print "Columns:", fr.col_names, "\n"
# There are a number of attributes to get at the shape
print "length:", str( len(fr) ), "\n"
print "shape:", fr.shape, "\n"
print "dim:", fr.dim, "\n"
print "nrow:", fr.nrow, "\n"
print "ncol:", fr.ncol, "\n"
# Use the "types" attribute to list the column types
print "types:", fr.types, "\n"
fr.shape
mask = fr["CRIM"]>1
fr[mask,:].shape
fr.describe()
x = fr.names
y="Median_value"
x.remove(y)
# Define and fit first 400 points
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=fr[:400,:])
model.predict(fr[400:fr.nrow,:]) # Predict the rest
perf = model.model_performance(fr[400:fr.nrow,:])
perf.r2() # get the r2 on the holdout data
perf.mse() # get the mse on the holdout data
perf # display the performance object
r = fr.runif(seed=12345) # build random uniform column over [0,1]
train= fr[r<0.75,:] # perform a 75-25 split
test = fr[r>=0.75,:]
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=train, validation_frame=test)
perf = model.model_performance(test)
perf.r2()
model = H2ORandomForestEstimator(nfolds=10) # build a 10-fold cross-validated model
model.train(x=x, y=y, training_frame=fr)
scores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute
print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96)
print "Scores:", scores.round(2)
from sklearn.cross_validation import cross_val_score
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
model = H2ORandomForestEstimator(seed=42)
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
scores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv)
print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96)
print "Scores:", scores.round(2)
h2o.__PROGRESS_BAR__=False
h2o.no_progress()
from sklearn import __version__
sklearn_version = __version__
print sklearn_version
%%time
from sklearn.grid_search import RandomizedSearchCV # Import grid search
from scipy.stats import randint, uniform
model = H2ORandomForestEstimator(seed=42) # Define model
params = {"ntrees": randint(20,50),
"max_depth": randint(1,10),
"min_rows": randint(1,10), # scikit's min_samples_leaf
"mtries": randint(2,fr[x].shape[1]),} # Specify parameters to test
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
random_search = RandomizedSearchCV(model, params,
n_iter=30,
scoring=scorer,
cv=custom_cv,
random_state=42,
n_jobs=1) # Define grid search object
random_search.fit(fr[x], fr[y])
print "Best R^2:", random_search.best_score_, "\n"
print "Best params:", random_search.best_params_
def report_grid_score_detail(random_search, charts=True):
"""Input a fit grid-search estimator. Returns a df of scores with details."""
df_list = []
for line in random_search.grid_scores_:
results_dict = dict(line.parameters)
results_dict["score"] = line.mean_validation_score
results_dict["std"] = line.cv_validation_scores.std()*1.96
df_list.append(results_dict)
result_df = pd.DataFrame(df_list)
result_df = result_df.sort("score", ascending=False)
if charts:
for col in get_numeric(result_df):
if col not in ["score", "std"]:
plt.scatter(result_df[col], result_df.score)
plt.title(col)
plt.show()
for col in list(result_df.columns[result_df.dtypes == "object"]):
cat_plot = result_df.score.groupby(result_df[col]).mean()
cat_plot.sort()
cat_plot.plot(kind="barh", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2))
plt.show()
return result_df
def get_numeric(X):
"""Return list of numeric-dtype variables."""
return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith(("float", "int", "bool")))].index.tolist()
report_grid_score_detail(random_search).head()
%%time
params = {"ntrees": randint(30,40),
"max_depth": randint(4,10),
"mtries": randint(4,10),}
custom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big
# impact on the std of the resulting scores. More
random_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher
n_iter=10, # variation per sample
scoring=scorer,
cv=custom_cv,
random_state=43,
n_jobs=1)
random_search.fit(fr[x], fr[y])
print "Best R^2:", random_search.best_score_, "\n"
print "Best params:", random_search.best_params_
report_grid_score_detail(random_search)
from h2o.transforms.preprocessing import H2OScaler
from h2o.transforms.decomposition import H2OPCA
y_train = train.pop("Median_value")
y_test = test.pop("Median_value")
norm = H2OScaler()
norm.fit(train)
X_train_norm = norm.transform(train)
X_test_norm = norm.transform(test)
print X_test_norm.shape
X_test_norm
pca = H2OPCA(k=5)
pca.fit(X_train_norm)
X_train_norm_pca = pca.transform(X_train_norm)
X_test_norm_pca = pca.transform(X_test_norm)
# prop of variance explained by top 5 components?
print X_test_norm_pca.shape
X_test_norm_pca[:5]
model = H2ORandomForestEstimator(seed=42)
model.train(x=X_train_norm_pca.names, y=y_train.names, training_frame=X_train_norm_pca.cbind(y_train))
y_hat = model.predict(X_test_norm_pca)
h2o_r2_score(y_test,y_hat)
from h2o.transforms.preprocessing import H2OScaler
from h2o.transforms.decomposition import H2OPCA
from sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown>
model = H2ORandomForestEstimator(seed=42)
pipe = Pipeline([("standardize", H2OScaler()), # Define pipeline as a series of steps
("pca", H2OPCA(k=5)),
("rf", model)]) # Notice the last step is an estimator
pipe.fit(train, y_train) # Fit training data
y_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator)
h2o_r2_score(y_test, y_hat) # Notice the final score is identical to before
pipe = Pipeline([("standardize", H2OScaler()),
("pca", H2OPCA()),
("rf", H2ORandomForestEstimator(seed=42))])
params = {"standardize__center": [True, False], # Parameters to test
"standardize__scale": [True, False],
"pca__k": randint(2, 6),
"rf__ntrees": randint(50,80),
"rf__max_depth": randint(4,10),
"rf__min_rows": randint(5,10), }
# "rf__mtries": randint(1,4),} # gridding over mtries is
# problematic with pca grid over
# k above
from sklearn.grid_search import RandomizedSearchCV
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
custom_cv = H2OKFold(fr, n_folds=5, seed=42)
random_search = RandomizedSearchCV(pipe, params,
n_iter=30,
scoring=make_scorer(h2o_r2_score),
cv=custom_cv,
random_state=42,
n_jobs=1)
random_search.fit(fr[x],fr[y])
results = report_grid_score_detail(random_search)
results.head()
best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search
h2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline
save_path = h2o.save_model(h2o_model, path=".", force=True)
print save_path
# assumes new session
my_model = h2o.load_model(path=save_path)
my_model.predict(fr)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Enable inline plotting in the Jupyter Notebook
Step2: Intro to H2O Data Munging
Step3: View the top of the H2O frame.
Step4: View the bottom of the H2O Frame
Step5: Select a column
Step6: Select a few columns
Step7: Select a subset of rows
Step8: Key attributes
Step9: Select rows based on value
Step10: Boolean masks can be used to subselect rows based on a criteria.
Step11: Get summary statistics of the data and additional data distribution information.
Step12: Set up the predictor and response column names
Step13: Machine Learning With H2O
Step14: The performance of the model can be checked using the holdout dataset
Step15: Train-Test Split
Step16: There was a massive jump in the R^2 value. This is because the original data is not shuffled.
Step17: However, you can still make use of the cross_val_score from Scikit-Learn
Step18: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is similar to the scikit-learn RandomForestRegressor object with its own train method.
Step19: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage.
Step20: Grid Search
Step21: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions).
Step24: We might be tempted to think that we just had a large improvement; however, we must be cautious. The function below creates a more detailed report.
Step25: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs
Step26: Transformations
Step27: Normalize Data
Step28: Then, we can apply PCA and keep the top 5 components. A user warning is expected here.
Step29: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers.
Steps
Step30: This is so much easier!!!
Steps
Step31: Currently Under Development (drop-in scikit-learn pieces)
|
1,582
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from scipy import stats
import collections
import time
from sklearn.linear_model import SGDRegressor
total_bills = np.random.randint(100, size=1000)
tips = total_bills * 0.10
x = pd.Series(tips, name='tips')
y = pd.Series(total_bills, name='total_bills')
df = pd.concat([x, y], axis=1)
df.plot(kind='scatter', x='total_bills', y='tips');
slope, intercept, r_value, p_value, std_err = stats.linregress(x=total_bills, y=tips)
print("slope is %f and intercept is %s" % (slope,intercept))
predicted_tips = (slope * 70) + intercept
print('The customer will leave a tip of $%f' % predicted_tips)
large_total_bills = np.random.randint(10000, size=100000000)
large_tips = large_total_bills * 0.10
now = time.time()
slope, intercept, r_value, p_value, std_err = stats.linregress(x=large_total_bills, y=large_tips)
predicted_tips = (slope * 700) + intercept
later = time.time()
difference = int(later - now)
print('The customer will leave a tip of $%f' % predicted_tips)
print('The time spent is %f seconds' % difference)
def compute_error_for_line_given_points (b, m, points):
totalError = 0
for i in range(0, len(points)):
totalError += (points[i].y - (m * points[i].x + b)) ** 2
return totalError / float(len(points))
def step_gradient(b_current, m_current, points, learningRate):
b_gradient = 0
m_gradient = 0
N = float(len(points))
for i in range(0, len(points)):
b_gradient += -(2/N) * (points[i].y - ((m_current*points[i].x) + b_current))
m_gradient += -(2/N) * points[i].x * (points[i].y - ((m_current * points[i].x) + b_current))
new_b = b_current - (learningRate * b_gradient)
new_m = m_current - (learningRate * m_gradient)
return [new_b, new_m]
def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
b = starting_b
m = starting_m
for i in range(num_iterations):
b, m = step_gradient(b, m, points, learning_rate)
return [b, m]
class point:
def __init__(self,x,y):
self.x=x
self.y=y
x = np.random.randint(100, size=1000)
y = x * 0.10
np.column_stack((x,y))
points = []
for i in range(len(x)):
points.append(point(x[i],y[i]))
learning_rate = 0.0001
initial_b = 0 # initial y-intercept guess
initial_m = 0 # initial slope guess
num_iterations = 1000
print("Starting gradient descent at b = {0}, m = {1}, error = {2}".format(initial_b, initial_m, compute_error_for_line_given_points(initial_b, initial_m, points)))
print("Running...")
[b, m] = gradient_descent_runner(points, initial_b, initial_m, learning_rate, num_iterations)
print("After {0} iterations b = {1}, m = {2}, error = {3}".format(num_iterations, b, m, compute_error_for_line_given_points(b, m, points)))
gradient_predicted_tips = (m * 70) + b
gradient_predicted_tips
x = np.random.randint(100, size=100000000)
y = x * 0.10
x = x[:,None]
now = time.time()
clf = SGDRegressor()
clf.fit(x, y)
later = time.time()
difference = int(later - now)
print("Time spent for SGDRegressor is %d seconds" % difference)
print("slope is %f and intercept is %s" % (clf.coef_, clf.intercept_[0]))
clf.predict([[70]]) # How much tip (scikit-learn expects a 2-D array)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate Data
Step2: It's easier if we select the correct X and Y axes. Usually, the Y axis holds the value we want to predict and X holds the input data.
Step3: As we can see from the graph, there's a strong correlation between the tip amount and the cost of the meal. Now we want to calculate the regression line. We need the slope and intercept to feed into the formula Y = MX + C.
Step4: Let's say the customer spent $70; how much will the customer tip?
Step5: Large dataset
Step6: Gradient Descent
Step7: Let's see how close we are after 1,000 iterations. Pretty close, I think.
Step8: But you really don't need to write that on your own, as scikit-learn already provides it.
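The looping implementation above can be condensed with NumPy array operations; a sketch under the same learning rate and iteration count (the function name `fit_line_gd` is ours, not from the notebook):

```python
import numpy as np

# A vectorized NumPy sketch of the same batch gradient-descent update
# (assumes 1-D arrays and the model y ≈ m*x + b).
def fit_line_gd(xs, ys, lr=0.0001, n_iter=1000):
    b, m = 0.0, 0.0
    n = float(len(xs))
    for _ in range(n_iter):
        err = ys - (m * xs + b)                  # residuals for all points at once
        b -= lr * (-2.0 / n) * err.sum()         # dMSE/db
        m -= lr * (-2.0 / n) * (xs * err).sum()  # dMSE/dm
    return b, m

xs = np.arange(100, dtype=float)
ys = 0.10 * xs
b, m = fit_line_gd(xs, ys)
print(b, m)  # m should approach the true slope 0.10
```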
|
1,583
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
path_to_csv = "data/iris.csv"
df = pd.read_csv(path_to_csv, index_col=['species', 'sample_number'])
df.head()
df.plot(kind='hist');
versicolor = df.loc['versicolor']
versicolor.plot(kind='hist');
versicolor.plot(kind='hist', subplots=True, layout=(2,2), figsize=(12,6));
versicolor.plot(kind='hist', subplots=True, layout=(2,2), figsize=(12,6), bins=30);
versicolor.boxplot();
# Here, q1 = first quartile, q3 = third quartile, iqr = interquartile range, lw = lower whisker, uw = upper whisker
q1 = versicolor.quantile(0.25)
q3 = versicolor.quantile(0.75)
iqr = q3 - q1
lw = q1 - 1.5 * iqr
uw = q3 + 1.5 * iqr
# Outliers are below the lower whisker OR above the upper whisker
outliers = (versicolor < lw) | (versicolor > uw)
# Print the last few rows of "outliers"
outliers.tail()
versicolor[~outliers].tail()
removed = versicolor[~outliers].dropna()
removed.tail() # Just show the last five rows
versicolor.median()
replaced = versicolor[~outliers].fillna(versicolor.median())
replaced.tail() # Just show the last five rows
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's load the data. Write the path to your iris.csv file (i.e. the one from Lab 02) in the cell below
Step2: Execute the cell below to load the data into a pandas data frame and index that data frame by the species and sample_number columns
Step3: Finding outliers
Step4: We also saw how data frame indexing can be used to limit our view of the data to just one species of Iris. For instance, to plot a histogram for each column in our data frame, but only for the rows corresponding to Iris versicolor, we can write
Step5: Plotting multiple histograms on one chart can be a little cluttered though. We also saw how we could create individual charts for each column by passing subplots=True when we call the plot method, like this
Step6: This is much more useful, but the histograms look a bit chunky because the default number of bins is set to ten. We can change this easily though, by passing the optional bins argument to the plot method, like in the cell below.
Step7: Increasing the number of bins gives us a more detailed view of how the data is behaving, which can often make it easier to detect outliers visually. In this instance, however, it seems that all of the data is reasonably well behaved - there are no obvious extreme values.
Step8: As you can see, pandas creates a boxplot for each column in our data frame and places all four boxplots in the same chart, so that we can compare the distributions of the data in the columns side by side.
Step9: As you can see, the outlier occurs in the 49th row of the data frame.
Step10: Next, we call the dropna method on the dataframe to remove all the rows containing outlying values
Step11: As you can see, the 49th row (where the outlier was) has now been removed.
Step12: To set the new value, we first compute a copy of the data frame without the outlying value, just like earlier. Then, we can call the fillna method to fill any missing column values with the median values of those columns, like this
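The quartile steps above can be bundled into one small reusable helper (a sketch; 1.5 is the conventional Tukey-fence multiplier):

```python
import pandas as pd

def iqr_mask(df, k=1.5):
    # True wherever a value falls outside the Tukey fences q1 - k*iqr, q3 + k*iqr
    q1, q3 = df.quantile(0.25), df.quantile(0.75)
    iqr = q3 - q1
    return (df < q1 - k * iqr) | (df > q3 + k * iqr)

data = pd.DataFrame({"x": [1.0, 2.0, 2.5, 3.0, 100.0]})
flags = iqr_mask(data)["x"].tolist()
print(flags)  # only the extreme value 100.0 is flagged
```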
|
1,584
|
<ASSISTANT_TASK:>
Python Code:
# Import Python libraries
%matplotlib inline
import pylab as pl
import seaborn as sns
sns.set(font_scale=1.7)
from plotly.offline import init_notebook_mode, iplot
from plotly.graph_objs import *
import plotly.tools as tls
#Set to True
init_notebook_mode(connected=True)
import scipy as sp
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import Pipeline
from ipywidgets import *
from IPython.display import display
# Function to plot the data
def plot_data(X,y,model=None,interactive=False):
fig = pl.figure(figsize=(10,6))
pl.plot(X,y,'o',markersize=10)
pl.xlabel("x")
pl.ylabel("y")
pl.title("")
pl.ylim([-1.1,1.1])
pl.xlim([-3.1,3.1])
pl.xticks([-3,-2,-1,0,1,2,3],["50","60","70","80","90","100","110"])
pl.yticks([-1,-0.5,0,0.5,1],["200k","400k","600k","800k","1M"])
if not model==None:
X_new=sp.linspace(-3, 3, 100).reshape(100, 1)
y_new = model.predict(X_new)
pl.plot(X_new,y_new,"r-",linewidth=4,label="Learned Regression Fit")
pl.legend()
if interactive:
plotly_fig = tls.mpl_to_plotly(fig)
iplot(plotly_fig, show_link=False)
# Function to simulate example data
def generate_data():
sp.random.seed(42)
X = sp.arange(-3,3,1.0/20.0).reshape(-1,1)
y = sp.sin(0.2*sp.pi*X+0.1*sp.random.randn(X.shape[0],1))
return X,y
def generate_polynomial_features(X,degree=1,return_transformer=True):
transformer = PolynomialFeatures(degree=degree, include_bias=False)
X_poly = transformer.fit_transform(X)
if return_transformer:
return X_poly, transformer
else:
return X_poly
# Generate data
X,y = generate_data()
print X.shape
# Plot the data
plot_data(X,y,interactive=True);
# Fit a linear regression
print "Number of training instances:\t%d"%(X.shape[0])
print "Number of features:\t\t%d"%(X.shape[1])
model = LinearRegression()
model.fit(X,y)
# Plot the data and the learned function
plot_data(X,y,model,interactive=True);
# Function to plot a polynomial regression of a given degree
def render_polynomial_regression(degree=150):
# Fit a linear regression on polynomial features
transformer = PolynomialFeatures(degree=degree, include_bias=False)
scaler = StandardScaler()
model = LinearRegression()
# Polynomial regression with feature scaling
polynomial_regression = Pipeline((
('make_poly_features',transformer),
("scale_features",scaler),
("run_linreg",model),
))
polynomial_regression.fit(X,y)
# Plot the data and the learned function
plot_data(X,y,polynomial_regression)
pl.show()
# Render an interactive plot
#interact(render_polynomial_regression,degree=IntSlider(min=1,max=300,value=100,
#                          description="Degree of polynomial:"));
render_polynomial_regression(degree=100)
# Fit a ridge regression on polynomial features with alpha=1.1
ridge_regression = Pipeline((
('make_poly_features',PolynomialFeatures(degree=100, include_bias=False)),
("scale_features",StandardScaler()),
("run_ridgereg",Ridge(alpha=1.1)),
))
ridge_regression.fit(X,y)
plot_data(X,y,ridge_regression,interactive=True)
# Function to illustrate the effect of alpha on the weights
def plot_effect_of_alpha(interactive=False):
coefs = []
alphas = sp.logspace(5,-6,200)
poly_feat = PolynomialFeatures(degree=10, include_bias=False)
scaler = StandardScaler()
for alpha in alphas:
model = Ridge(alpha=alpha)
ridge_regression = Pipeline((
('make_poly_features',poly_feat),
("scale_features",scaler),
("run_ridgereg",model),
))
ridge_regression.fit(X,y)
X_new=sp.linspace(-3, 3, 100).reshape(100, 1)
y_new = ridge_regression.predict(X_new)
coefs.append(model.coef_.flatten()[1:])
fig = pl.figure(figsize=(10,6))
ax = pl.gca()
ax.plot(alphas, coefs,linewidth=3)
ax.set_xscale('log')
if interactive:
pl.xlabel("alpha")
else:
pl.xlabel('$\\alpha$')
pl.ylabel('Gewichte')
pl.axis('tight')
if interactive:
pl.xticks(fontsize=13)
plotly_fig = tls.mpl_to_plotly(fig)
iplot(plotly_fig, show_link=False)
else:
pl.show()
#Plot Effect of Alpha
plot_effect_of_alpha(interactive=True);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fit a linear regression on the data
Step2: <span style="color
Step3: <span style="color
Step4: <span style="color
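The shrinking effect of alpha can also be seen directly in the closed-form ridge solution w = (XᵀX + αI)⁻¹Xᵀy; a minimal NumPy sketch on synthetic data (independent of the sklearn pipeline above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

def ridge_weights(X, y, alpha):
    # Closed-form ridge solution: w = (X^T X + alpha*I)^-1 X^T y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

norms = [float(np.linalg.norm(ridge_weights(X, y, a))) for a in (0.0, 1.0, 100.0)]
print(norms)  # the weight norm shrinks as alpha grows
```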
|
1,585
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import psycopg2
import paramiko
import os
import numpy as np
import json
import zipfile
DATA_DIR=os.path.join('..','data')
Path to local data directory
#read postgres connection parameters
with open('postgres_settings.json') as settings_file:
settings = json.load(settings_file)
DBNAME = settings['dbname']
USER = settings['user']
HOST = settings['host']
PASSWORD = settings['password']
conn_str = "dbname = {0} user = {1} host = {2} password = {3}".format(DBNAME, USER, HOST, PASSWORD)
try:
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
except:
print ("Cannot connect to the database. Check settings")
# TODO: add putty connection too.
#read SSH connection parameters
with open('ssh_settings.json') as settings_file:
settings = json.load(settings_file)
HOSTNAME = settings['hostname']
USERNAME = settings['username']
PASSWORD = settings['password']
LOCAL_KEY_DIR = settings['local_key_dir']
CENSUS_DIR = 'synthetic_population'  # remote directory with census data
RESULTS_DIR = 'craigslist_census'    # remote directory for results
# estbalish SSH connection
ssh = paramiko.SSHClient()
ssh.load_host_keys(LOCAL_KEY_DIR)
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOSTNAME,username=USERNAME, password=PASSWORD)
sftp = ssh.open_sftp()
BLOCK_DIR = os.path.join('..','data','urbansim')
BLOCK_ZFILE = 'ba_block_variables.csv.zip'
BLOCK_FILE = 'ba_block_variables.csv'
# make dictionary of states and fips codes.
fips_state = pd.read_csv(os.path.join(DATA_DIR,'state_fips_codes.csv'),dtype=str)
fips2state=dict(zip(fips_state['FIPS'],fips_state['USPS']))
state2fips=dict(zip(fips_state['USPS'],fips_state['FIPS']))
# Make lookup for county to MPO code
mpo_counties = pd.read_csv(os.path.join(DATA_DIR,'us_2015_mpo_regions_counties_v1.csv'), encoding='latin1', dtype={'MPO_ID':str,'COUNTYFP':str,'STFIPS':str})
mpo_counties['COUNTYFP'] = mpo_counties['COUNTYFP'].str.zfill(3)  # county FIPS is 3 digits
mpo_counties['st_co_fips'] = mpo_counties['STFIPS']+mpo_counties['COUNTYFP'] # we will want to join on 2-char state + 3-char county fips
county2mpo=dict(zip(mpo_counties['st_co_fips'],mpo_counties['MPO_ID'])) # do we want MPO_ID or do we want GEOID?
mpo_counties.head()
def run_query(q):
"""Get results for a given SQL query."""
cur.execute(q)
return(cur.fetchall())
def get_craiglist(filters, split_fips=True):
"""Get craigslist data from the database.
Args:
filters (list): list of strings containing filter criteria. Format as individual SQL WHERE statements. E.g., ["region='sandiego'","rent>100"]
split_fips (bool): if True, split fips code into block and fips12 (useful if merging with blockgroup)
Returns:
DataFrame: listings data.
"""
#q="SELECT pid,date,rent,bedrooms,bathrooms,sqft,rent_sqft,fips_block,state,region,longitude,latitude FROM rental_listings WHERE state='{}';".format(state)
filters_str = ' AND '.join([x for x in filters])
q="SELECT pid,date,rent,bedrooms,bathrooms,sqft,rent_sqft,fips_block,state,region,longitude,latitude FROM rental_listings WHERE {};".format(filters_str)
results=run_query(q)
df=pd.DataFrame(results,columns=['listing_id', 'date','rent','bedrooms','bathrooms','sqft','rent_sqft','fips_block','state','region','lng','lat'] ) # put it all into a dataframe
if split_fips==True:
# split FIPS into different columns - split off the last 3 chars
df['block']=df.fips_block.str[-4:]
df['fips12']=df.fips_block.str[:-3]
return(df)
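Note that `get_craiglist` splices its filter strings straight into the SQL text; for untrusted input, driver-level parameter binding is safer. A sketch using an in-memory SQLite table for illustration (psycopg2 uses `%s` placeholders instead of `?`):

```python
import sqlite3

lite_conn = sqlite3.connect(":memory:")
lite_cur = lite_conn.cursor()
lite_cur.execute("CREATE TABLE rental_listings (pid INTEGER, state TEXT, rent REAL)")
lite_cur.executemany("INSERT INTO rental_listings VALUES (?, ?, ?)",
                     [(1, 'CA', 1500.0), (2, 'CA', 50.0), (3, 'NY', 2000.0)])
# Values travel separately from the SQL text, so the driver escapes them.
lite_cur.execute("SELECT pid FROM rental_listings WHERE state = ? AND rent > ?",
                 ('CA', 100))
matches = [row[0] for row in lite_cur.fetchall()]
print(matches)
```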
def read_census_file(fname):
"""Read a census csv file via SFTP and return it as a dataframe."""
with sftp.open(os.path.join(CENSUS_DIR,fname)) as f:
df = pd.read_csv(f, delimiter=',',dtype={'age_of_head':float, 'block group':str, 'cars':float, 'children':float, 'county':str,
'household_id':str, 'income':float, 'persons':float, 'race_of_head':str, 'recent_mover':str,
'serialno':str, 'state':str, 'tenure':str, 'tract':str, 'workers':float})
return df
def write_results_file(data,fname):
"""Write merged data to a csv file via SFTP."""
with sftp.open(os.path.join(RESULTS_DIR,fname),'w') as f:
data.to_csv(f,index=True)
return
def get_census_by_state(state, table='households'):
"""Return all census data for a state given its two-char abbreviation. Table can be 'households' or 'persons'."""
filelist=sftp.listdir(CENSUS_DIR)
if table=='households':
files = [f for f in filelist if f[:5]=='hh_{}'.format(state)]
elif table=='persons':
files = [f for f in filelist if f[:4]=='p_{}'.format(state)]
#files = files[:3] # uncomment this line for testing.
new_df = pd.DataFrame()
for f in files:
df = read_census_file(f)
new_df = pd.concat([new_df,df])
return(new_df)
def strip_zeros(s):
"""Remove '.0' from the end of a string."""
if s.endswith('.0'):
return(s[:-2])
else:
return(s)
def format_hh_data(df):
"""Fix formatting for hhs census data. Replace '' strings with zero. Format other strings."""
df['county'] = df['county'].str.zfill(3) # make county 3-char string.
for col in ['children','workers']:
df[col] = df[col].replace('','0')
for col in ['race_of_head','recent_mover','tenure']:
df[col] = df[col].astype(str)
df[col] = df[col].map(strip_zeros) # make sure strings are formatted.
return(df)
def aggregate_census(df, groupby_cols=['county','tract','block group'],cols_to_sum=['cars','children','persons','workers'], cols_to_median=['age_of_head','income'],categ_cols=['race_of_head','recent_mover','tenure'],id_col='serialno',table='hhs'):
"""Aggregate census table to block group. Made this for hh data, may need to be revised for persons data.
Args:
groupby_cols (list): names of columns to group by (default=['county','tract','block group'])
cols_to_sum (list): names of columns for which to compute totals.
cols_to_median (list): names of columns for which to compute medians
categ_cols (list): names of categorical columns
id_col (str): name of column that serves as the id column, to use in counting rows.
table (str): 'hhs' (default) or 'per'
Returns:
DataFrame: aggregated data.
"""
# For some columns we'll want to find the sum or average/median. These will need only a simple groupby
sums = df.groupby(by=groupby_cols).sum()[cols_to_sum]
sums.columns = [x+'_tot' for x in cols_to_sum]
medians = df.groupby(by=groupby_cols).median()[cols_to_median]
medians.columns = [x+'_med' for x in cols_to_median]
counts = pd.DataFrame(df.groupby(by=groupby_cols).count()[id_col])
counts.columns=[table+'_tot']
# Categorical columns will need pivot tables.
categoricals = pd.DataFrame(index=counts.index)
for col in categ_cols:
pivoted=df.pivot_table(index = groupby_cols, columns = col, aggfunc='count')[id_col]
pivoted.columns = [col+'_'+x for x in pivoted.columns]
pivoted.columns = pivoted.columns.map(strip_zeros)
# merge back together
categoricals = pd.merge(categoricals, pivoted, left_index=True, right_index=True)
# put all back together in one table
merged = pd.merge(sums, medians, left_index=True, right_index=True)
merged = pd.merge(merged, counts, left_index=True, right_index=True)
merged = pd.merge(merged, categoricals, left_index=True, right_index=True)
# check lengths of dataframes to detect any problems in grouping or merging
lengths = [len(sums),len(medians),len(counts),len(categoricals),len(merged)]
if len(set(lengths))>1:
print('Warning: Aggregated tables have different lengths.',lengths,'for sums, medians, counts, categoricals, and merged.')
return(merged)
def match_mpo(s, mpo_dict=county2mpo):
"""Match a 5-char state-county FIPS code to an MPO code.
Args:
s (str): 5-char state-county string
mpo_dict (dict): county2mpo dictionary
Returns:
str: MPO code
"""
try:
return mpo_dict[s]
except KeyError: # in this case, the county is not in an MPO
return ''
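The try/except lookup in `match_mpo` is equivalent to `dict.get` with a default; a quick sketch with a toy lookup table:

```python
mpo_lookup = {'06001': 'MPO123'}  # toy stand-in for the county2mpo dictionary

def match_mpo_via_get(s, mpo_dict=mpo_lookup):
    # dict.get returns the default ('') when the key is missing
    return mpo_dict.get(s, '')

print(match_mpo_via_get('06001'), repr(match_mpo_via_get('99999')))
```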
def run_all(state, filters=None):
"""Get craigslist data and merge with census data, by state, and save, with additional filters if needed.
Args:
state (str): 2-char state abbreviation
filters (list): additional filters. Do not need to include state in filter list
"""
# load and prepare craiglist data
# If filters are provided, use them to filter data
if filters:
filters.append("state='{}'".format(state))
print(filters)
df_cl=get_craiglist(filters)
# If no filters provided, get all data for the specified state.
else:
df_cl=get_craiglist(["state='{}'".format(state)])
df_cl['st_co_fps'] = df_cl.fips_block.map(lambda x: x[:5])
df_cl['mpo_id'] = df_cl.st_co_fps.map(match_mpo)
# load and prepare census data for households
hhs = get_census_by_state(state, table='households')
hhs = format_hh_data(hhs)
hhs_bg = aggregate_census(hhs)
hhs_bg=hhs_bg.reset_index()
hhs_bg['fips12']=state2fips[state]+hhs_bg['county']+hhs_bg['tract']+hhs_bg['block group'] # create 12-digit FIPS code for merging.
# merge with craigslist data.
merged = pd.merge(df_cl, hhs_bg, on='fips12',how='left')
merged = merged.set_index('listing_id')
#TODO: add persons data here, if needed.
# Keep only columns we'll need.
cols_to_keep=['date','rent','bedrooms','bathrooms','sqft','rent_sqft','fips_block','state','region','mpo_id','lng','lat','cars_tot','children_tot','persons_tot','workers_tot','age_of_head_med','income_med','hhs_tot','race_of_head_1','race_of_head_2','race_of_head_3','race_of_head_4','race_of_head_5','race_of_head_6','race_of_head_7','race_of_head_8','race_of_head_9','recent_mover_0','recent_mover_1','tenure_1','tenure_2']
# This is a bit of a hack in case some columns are missing in some states.
for col in cols_to_keep:
if col not in merged.columns:
merged[col] = np.nan
# save file either locally or remotely.
print('Saving data for {s}: {m} rows'.format(s=state,m=len(merged)))
outfile = 'cl_census_{}.csv'.format(state)
#merged[cols_to_keep].to_csv(os.path.join(DATA_DIR,outfile), index=True) # uncomment to save locally
#write_results_file(merged[cols_to_keep], outfile) # uncomment to save remotely.
return merged[cols_to_keep]
df_bayarea = run_all(state='CA',filters=["region = 'sfbay'","rent>0"]) # define whatever filters you want here.
df_bayarea.head()
# save locally
outfile = 'sfbay_listings_04282017.csv'
df_bayarea.to_csv(os.path.join(DATA_DIR,outfile), index=True)
for state in fips_state['USPS']:# uncomment when done with testing.
if state != 'DC': # the DC census data is missing.
print('\n Working on',state)
df_state = run_all(state)
df_state.head()
ssh.close()
# first unzip csv file into temp dir
os.makedirs('temp', exist_ok=True) # make temp dir for unzipped files
zip_ref = zipfile.ZipFile(os.path.join(BLOCK_DIR,BLOCK_ZFILE), 'r')
zip_ref.extractall('temp')
zip_ref.close()
# temporarily read first 100 lines just to see header names
df_temp = pd.read_csv(os.path.join('temp',BLOCK_FILE), nrows=100, dtype={'block_id':str})
print(df_temp.shape)
df_temp.head()
# define the columns we need
block_cols = df_temp.columns
#df_temp.columns[:100]
# make sure to now include unneeded columns like these:
unneeded_cols = ([x for x in block_cols if x.startswith('puma10_id_is_')] +
[x for x in block_cols if (x.endswith('mean_y'))|(x.endswith('mean_x'))] +
[x for x in block_cols if (x.endswith('std_y'))|(x.endswith('std_x'))] +
[x for x in block_cols if x.startswith('pumas_prop_sector_id')] +
[x for x in block_cols if x.startswith('county_id_is_')] +
[x for x in block_cols if x.startswith('tracts_prop_sector_id')] +
[x for x in block_cols if x.startswith('counties_prop_sector_id')])
len(unneeded_cols)
cols_to_use = ['block_id','nodes_population_1500m','block_groups_total_jobs', 'block_groups_median_children',
'block_groups_median_income', 'prop_tenure_1', 'nodes_low_income_hh_1500m', 'nodes_high_income_hh_1500m',
'nodes_jobs_3000m','nodes_jobs_20km', 'nodes_population_400m', 'nodes_population_800m',
'block_groups_prop_race_of_head_1','block_groups_prop_race_of_head_2', 'block_groups_prop_race_of_head_3',
'block_groups_prop_race_of_head_7','block_groups_prop_race_of_head_8','block_groups_prop_race_of_head_6',
'pumas_density_residential_units','block_groups_density_jobs',
'nodes_jobs_1500m_4445','nodes_jobs_3000m_4445',
'nodes_du_5000m','nodes_du_800m','block_groups_median_rent',
'block_groups_median_persons', 'block_groups_median_age_of_head', 'nodes_ave_year_built_800m']
for col in cols_to_use:
if col not in block_cols:
print(col)
# Read all rows, using only the columns we want
df_blocks = pd.read_csv(os.path.join('temp',BLOCK_FILE),dtype={'block_id':str}, usecols = cols_to_use)
print(df_blocks.shape)
df_blocks.head()
df_blocks['block_id'].head()
df_listings = get_craiglist(filters = ["region='sfbay'","rent>100"])
# merge listings with vars on block_id
df_listings.fips_block.head()
print(len(df_listings), len(df_blocks))
df_merged = pd.merge(df_listings, df_blocks, left_on='fips_block', right_on='block_id', how='inner')
if len(df_merged)<len(df_listings):
print('Warning: only {0} of {1} rows matched'.format(len(df_merged), len(df_listings)))
# save to hdf
outfile = 'ba_listings.h5'
df_merged.to_hdf(os.path.join(DATA_DIR,outfile),'merged')
outfile = 'ba_listings.csv'
df_merged.to_csv(os.path.join(DATA_DIR,outfile),index=False)
# connect to local databse
DBNAME = settings['DBNAME_RENT']
# USER = settings['USER_RENT']
# PASSWORD = settings['PASSWORD_RENT']
conn_str = "dbname = {0}".format(DBNAME)
try:
conn = psycopg2.connect(conn_str)
cur = conn.cursor()
except:
    print("Cannot connect. Check settings")
# first save data as csv.
# let's use that temp dir again
FULL_PATH = '/Users/lisarayle/Dropbox/craigslist/src/' # can't use relative path in postgres, I guess
csvfile = 'blocks_temp.csv'
df_blocks.to_csv(os.path.join('temp',csvfile), index=False)
table_name = 'block_vars'
def create_db_table(col_names,t_name,id_var='block_id'):
    """Create a new table with schema to hold the block-level data.
    Args:
        col_names (list): list of names of columns to use. First one can be 'block id'
        t_name (str): name of database table
        id_var (str): name of id variable (default: 'block_id')
    """
# drop table if already exists
q = "DROP TABLE IF EXISTS {}".format(t_name)
cur.execute(q)
conn.commit()
# build the SQL string
sql_begin = "CREATE TABLE {0} (id BIGSERIAL PRIMARY KEY, {1} varchar(15) not null, ".format(t_name, id_var)
if col_names[0]==id_var:
sql_middle = " real,".join([c for c in col_names[1:]]) # leave off block id if it's there.
else:
sql_middle = " real,".join([c for c in col_names])
sql_end = " real);"
q = sql_begin+sql_middle+sql_end
cur.execute(q)
conn.commit()
return
def copy_block_data(col_names,t_name,fname):
    """Copy data from csv file into block variables table.
    Args:
        col_names (list): list of names of columns to use. First one can be 'block id'
        t_name (str): name of database table
        fname (str): name of csv file with data
    """
var_string = ','.join([c for c in col_names])
q="COPY {t}({v}) FROM '{f}' DELIMITERS ',' CSV HEADER;".format(t=t_name,v=var_string, f=os.path.join(FULL_PATH,'temp',fname))
print(q)
cur.execute(q)
conn.commit()
return
create_db_table(cols_to_use, table_name)
copy_block_data(cols_to_use, table_name, csvfile)
# test queries
q = "select count(*) from block_vars;"
run_query(q)
q = "select column_name from information_schema.columns where table_name='block_vars';"
run_query(q)
q = "select * from block_vars limit 10;"
run_query(q)
cols_to_use
DATA_DIR='/Users/lisarayle/rent_predictor/data/'
# Path to data directory
# read file with variable names
infile = 'variables.txt'
with open(os.path.join(DATA_DIR, infile), 'r') as f:
VARLIST = f.read().split(',')
print(VARLIST)
len(VARLIST)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Steps
Step4: Craigslist data table columns
Step5: Local data files
Step16: Create FIPS look-up tables
Step17: Get data for a single region
Step18: Process all data by state
Step19: Merge listings data with urbansim block-level data
Step20: Certain columns are definitely not useful, e.g. 'puma10_id_is_0609502' and 'puma10_id_is_0609503'
Step24: Store data to database
Steps
|
1,586
|
<ASSISTANT_TASK:>
Python Code:
import cantera as ct
import numpy as np
from multiprocessing import Pool
from pyked import ChemKED
# Suppress warnings from loading the mechanism file
ct.suppress_thermo_warnings()
def run_simulation(T, P, X):
gas = ct.Solution('LLNL_sarathy_butanol.cti')
gas.TPX = T, P, X
reac = ct.IdealGasReactor(gas)
netw = ct.ReactorNet([reac])
while reac.T < T + 400:
netw.step()
return netw.time
from urllib.request import urlopen
import yaml
st_link = 'https://raw.githubusercontent.com/pr-omethe-us/PyKED/master/pyked/tests/testfile_st_p5.yaml'
with urlopen(st_link) as response:
testfile_st = yaml.safe_load(response.read())
ck = ChemKED(dict_input=testfile_st)
def collect_input(dp):
T_initial = dp.temperature.to('K').magnitude
P_initial = dp.pressure.to('Pa').magnitude
species_conversion = {'t-butanol': 'tc4h9oh', 'O2': 'o2', 'Ar': 'ar'}
X_initial = dp.get_cantera_mole_fraction(species_conversion)
return (T_initial, P_initial, X_initial)
initial_conditions = list(map(collect_input, ck.datapoints))
with Pool(processes=4) as pool:
ignition_delays = pool.starmap(run_simulation, initial_conditions)
for (T, P, X), tau in zip(initial_conditions, ignition_delays):
print(f'The ignition delay for T_initial={T} K, P_initial={P} Pa is: {tau} seconds')
%matplotlib notebook
import matplotlib.pyplot as plt
inv_temps = [1000/i[0] for i in initial_conditions]
exp_ignition_delays = [dp.ignition_delay.to('ms').magnitude for dp in ck.datapoints]
sim_ignition_delays = np.array(ignition_delays)*1.0E3
plt.figure()
plt.scatter(inv_temps, exp_ignition_delays, label='Experimental ignition delays')
plt.scatter(inv_temps, sim_ignition_delays, label='Simulated ignition delays', marker='s')
plt.legend(loc='best')
plt.yscale('log')
plt.ylabel('Ignition delay [ms]')
plt.xlabel('1000/T [1/K]');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, we define a function that will be mapped onto each job. This function takes the initial
Step2: Then, we load the ChemKED file and generate a list of initial conditions that will be mapped onto
Step3: Finally, we create the processor Pool (with 4 processes) and send the jobs out to run
Step4: The simulated ignition delay results are returned in the ignition_delays list. The results
|
1,587
|
<ASSISTANT_TASK:>
Python Code:
## start qkit and import the necessary classes; here we assume a already configured qkit environment
import qkit
qkit.start()
from qkit.analysis.resonator import Resonator
r = Resonator(qkit.fid.measure_db['XXXXXX'])
r.fit_lorentzian(f_min = 5.0e9) ## set lower frequency boundary
r.fit_skewed_lorentzian(f_min = 5.2e9, f_max = 5.6e9) ## set frequency range
r.fit_fano(fit_all = True) ## fit all entries of a value matrix
r.fit_circle(reflection = True, fit_all = True) ## reflection resonator; fit all entries of a value matrix
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A resonator object takes the path of the data file as an argument (mandatory). The path can be retrieved by using the file UUID and qkit's file information database.
Step2: The fitting is done by calling one of the fit functions of the object. It is assumed that the datasets for amplitude and phase are properly named. Fitting all entries of a value matrix dataset requires an additional parameter fit_all = True. Fitting a value box is not yet possible.<br>
|
1,588
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from modsim import *
system = System(t0 = 0,
t_end = 10,
adult_pop0 = 10,
birth_rate = 0.9,
death_rate = 0.5)
system
def run_simulation(system):
    """Runs a proportional growth model.
    Adds TimeSeries to `system` as `results`.
    system: System object with t0, t_end, p0,
    birth_rate and death_rate
    """
adults = TimeSeries()
adults[system.t0] = system.adult_pop0
for t in linrange(system.t0, system.t_end):
births = system.birth_rate * adults[t]
deaths = system.death_rate * adults[t]
adults[t+1] = adults[t] + births - deaths
system.adults = adults
run_simulation(system)
system.adults
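Because the update adds births minus deaths proportional to the current population, each season simply multiplies the population by (1 + birth_rate - death_rate), so the model has the closed form p_t = p0 * (1 + b - d)**t. A minimal plain-Python check of that equivalence (no modsim needed; the rates are copied from the System above):

```python
# Proportional growth closed-form check: each season multiplies the
# population by (1 + birth_rate - death_rate).
p0, b, d, t_end = 10, 0.9, 0.5, 10  # values copied from the System above

pop = p0
for _ in range(t_end):
    pop = pop + b * pop - d * pop  # same update rule as run_simulation

closed_form = p0 * (1 + b - d) ** t_end  # 10 * 1.4**10, roughly 289.3
```

This is why the plot below looks exponential: the season-over-season growth factor is constant.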
def plot_results(system, title=None):
    """Plot the estimates and the model.
    system: System object with `results`
    """
newfig()
plot(system.adults, 'bo-', label='adults')
decorate(xlabel='Season',
ylabel='Rabbit population',
title=title)
plot_results(system, title='Proportional growth model')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Rabbit Redux
Step3: Now update run_simulation with the following changes
Step4: Test your changes in run_simulation
Step6: Next, update plot_results to plot both the adult and juvenile TimeSeries.
Step7: And test your updated version of plot_results.
|
1,589
|
<ASSISTANT_TASK:>
Python Code:
def get_glove(name):
with open(path+ 'glove.' + name + '.txt', 'r') as f: lines = [line.split() for line in f]
words = [d[0] for d in lines]
vecs = np.stack(np.array(d[1:], dtype=np.float32) for d in lines)
wordidx = {o:i for i,o in enumerate(words)}
save_array(res_path+name+'.dat', vecs)
pickle.dump(words, open(res_path+name+'_words.pkl','wb'))
pickle.dump(wordidx, open(res_path+name+'_idx.pkl','wb'))
get_glove('6B.50d')
get_glove('6B.100d')
get_glove('6B.200d')
get_glove('6B.300d')
def load_glove(loc):
return (load_array(loc+'.dat'),
pickle.load(open(loc+'_words.pkl','rb')),
pickle.load(open(loc+'_idx.pkl','rb')))
vecs, words, wordidx = load_glove(res_path+'6B.50d')
vecs.shape
' '.join(words[:25])
def w2v(w): return vecs[wordidx[w]]
w2v('of')
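Vectors looked up this way are usually compared with cosine similarity. A minimal self-contained sketch with toy vectors (in practice you would pass `w2v(...)` results; the vectors below are made up for illustration):

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity: dot product of the L2-normalized vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 4.0, 6.0])   # parallel to u, so similarity is 1
w = np.array([3.0, -1.5, 0.0])  # dot(u, w) = 0, so similarity is 0
```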
# Python 2 workaround; sys.setdefaultencoding does not exist in Python 3:
# reload(sys)
# sys.setdefaultencoding('utf8')
tsne = TSNE(n_components=2, random_state=0)
Y = tsne.fit_transform(vecs[:500])
start=0; end=350
dat = Y[start:end]
plt.figure(figsize=(15,15))
plt.scatter(dat[:, 0], dat[:, 1])
for label, x, y in zip(words[start:end], dat[:, 0], dat[:, 1]):
plt.text(x,y,label, color=np.random.rand(3)*0.7,
fontsize=14)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Looking at the vectors
Step2: Here's the first 25 "words" in glove.
Step3: This is how you can look up a word vector.
Step4: Just for fun, let's take a look at a 2d projection of the first 350 words, using T-SNE.
|
1,590
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_curve, auc
from pandas.tseries.offsets import *
import simulated_data
from graphviz import Source
cut = 0.55
window = 24
# df = simulated_data.get_simulated_data()
df = simulated_data.get_simulated_fixed_data()
df.head()
ax = df.plot(figsize=(20,7))
ax.set_xlabel("time", fontsize=16)
plt.savefig('simulated_fixed.png')
def check_for_anomaly(ref, sub):
y_ref = pd.Series([0] * ref.shape[0])
X_ref = ref
del X_ref['flag']
del X_ref['auc_score']
y_sub = pd.Series([1] * sub.shape[0])
X_sub=sub
del X_sub['flag']
del X_sub['auc_score']
# separate Reference and Subject into Train and Test
X_ref_train, X_ref_test, y_ref_train, y_ref_test = train_test_split(X_ref, y_ref, test_size=0.3, random_state=42)
X_sub_train, X_sub_test, y_sub_train, y_sub_test = train_test_split(X_sub, y_sub, test_size=0.3, random_state=42)
# combine training ref and sub samples
X_train = pd.concat([X_ref_train, X_sub_train])
y_train = pd.concat([y_ref_train, y_sub_train])
# combine testing ref and sub samples
X_test = pd.concat([X_ref_test, X_sub_test])
y_test = pd.concat([y_ref_test, y_sub_test])
clf = AdaBoostClassifier() #dtc
# clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),algorithm="SAMME",n_estimators=200)
#train an AdaBoost model to be able to tell the difference between the reference and subject data
clf.fit(X_train, y_train)
#Predict using the combined test data
y_predict = clf.predict(X_test)
# scores = cross_val_score(clf, X, y)
# print(scores)
fpr, tpr, thresholds = roc_curve(y_test, y_predict) # calculate the false positive rate and true positive rate
auc_score = auc(fpr, tpr) #calculate the AUC score
print ("auc_score = ", auc_score, "\tfeature importances:", clf.feature_importances_)
if auc_score > cut:
plot_roc(fpr, tpr, auc_score)
filename='tree_'+sub.index.min().strftime("%Y-%m-%d_%H")
tree.export_graphviz(clf.estimators_[0] , out_file=filename +'_1.dot')
tree.export_graphviz(clf.estimators_[1] , out_file=filename +'_2.dot')
return auc_score
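The AUC returned here has a direct probabilistic reading: it is the chance that a randomly chosen subject-window row is ranked above a randomly chosen reference row. A self-contained sanity check of that reading via Mann-Whitney pair counting (no sklearn required; the labels and scores are made up):

```python
# AUC as a pair-ordering probability (Mann-Whitney view).
def auc_by_pairs(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Fraction of (positive, negative) pairs ordered correctly; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

perfect = auc_by_pairs([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])  # separable -> 1.0
chance = auc_by_pairs([0, 1, 0, 1], [0.5, 0.5, 0.5, 0.5])   # all tied -> 0.5
```

This is why `cut = 0.55` is a sensible threshold: an AUC near 0.5 means the classifier cannot tell the subject window apart from the reference window.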
def plot_roc(fpr,tpr, roc_auc):
plt.figure()
plt.plot(fpr, tpr, color='darkorange', label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.plot([0, 1], [0, 1], linestyle='--', color='r',label='Luck', alpha=.8)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
df['auc_score']=0.5
#find min and max timestamps
start = df.index.min()
end = df.index.max()
# round start down to the hour (Timestamp attributes are read-only, so use replace)
start = start.replace(minute=0, second=0)
ref = window * Hour()
sub = 1 * Hour()
# loop over them
ti=start+ref+sub
count=0
while ti < end + 1 * Minute():
ref_start = ti-ref-sub
ref_end = ti-sub
ref_df = df[(df.index >= ref_start) & (df.index < ref_end)]
sub_df = df[(df.index >= ref_end) & (df.index < ti)]
auc_score = check_for_anomaly(ref_df, sub_df)
df.loc[(df.index>=ref_end) & (df.index<=ti),['auc_score']] = auc_score
print(ti,"\trefes:" , ref_df.shape[0], "\tsubjects:", sub_df.shape[0], '\tauc:', auc_score)
ti = ti + sub
count=count+1
#if count>2: break
ax = df.plot(figsize=(20,7))
ax.set_xlabel("time", fontsize=14)
plt.savefig('BDT_simulated_fixed.png')
fig, ax = plt.subplots(figsize=(20,7))
ax.set_xlabel("time", fontsize=14)
df.loc[:,'Detected'] = 0
df.loc[df.auc_score>0.55,'Detected']=1
df.head()
ax.plot(df.flag, 'r')
ax.plot(df.auc_score,'g')
ax.fill( df.Detected, 'b', alpha=0.3)
ax.legend(loc='upper left')
plt.show()
fig.savefig('BDT_shaded_simulated_fixed.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: parameters to set
Step2: generate data
Step3: plot timeseries. Can take a minute to appear due to plot size and complexity.
Step4: functions to check for anomaly and plot roc curves.
Step5: Looping over time intervals
Step6: make plot of created anomalies, auc values, and shade periods where anomaly is suspected.
|
1,591
|
<ASSISTANT_TASK:>
Python Code:
df_raw.tail()
def getForward(v,t1=1,t2=2):
return (np.power(np.power(1+v[1]/100,t2)/np.power(1+v[0]/100,t1),1/(t2-t1))-1)*100
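getForward encodes the no-arbitrage identity (1 + z1/100)**t1 * (1 + f/100)**(t2 - t1) = (1 + z2/100)**t2 for zero rates quoted in percent. A hand-computed check with assumed example rates (1% at 1y, 2% at 2y; these numbers are illustrative, not from the data):

```python
# Implied 1y-forward-1y rate from assumed zero rates z1 = 1% (1y), z2 = 2% (2y).
z1, z2 = 1.0, 2.0
t1, t2 = 1, 2

f = (((1 + z2 / 100) ** t2 / (1 + z1 / 100) ** t1) ** (1 / (t2 - t1)) - 1) * 100
# 1.02**2 / 1.01 - 1 = 0.030099..., so f is roughly 3.01%
```

Note that on an upward-sloping curve the forward comes out above the longer zero rate, as expected.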
ind1 = 0
ind2 = 1
v2 = df_raw.iloc[-1,ind2]
v1 = df_raw.iloc[-1,ind1]
t1 = int(df_raw.columns[ind1].strip('y'))
t2 = int(df_raw.columns[ind2].strip('y'))
print('v1 is {}, v2 is {}'.format(v1,v2))
v = [v1,v2]
f = getForward(v,t1,t2)
f
df_raw['1yf1y'] = pd.Series('',index = df_raw.index)
df_raw.tail()
df_raw['1yf1y'] = df_raw[['1y','2y']].apply(getForward,axis=1)
df_raw['1yf1y'] = df_raw[['1y','2y']].apply(lambda x: getForward(x,t1=1,t2=2),axis=1)
df_raw.tail()
def AddForward(df,t1=1,t2=2):
#add an empty column
df[str(t2-t1)+'yf'+str(t1)+'y'] = pd.Series('',index = df.index)
df[str(t2-t1)+'yf'+str(t1)+'y'] = df[[str(t1)+'y',str(t2)+'y']].apply(lambda x: getForward(x,t1,t2),axis=1)
AddForward(df_raw,t1=3,t2=4)
df_raw.tail()
t1,t2=1,3
#df_raw[str(t2-t1)+'yf'+str(t1)+'y'] = pd.Series('',index = df_raw.index)
df_raw.tail()
df_raw[['1y','3y']]
df_raw[str(t2-t1)+'yf'+str(t1)+'y'] = df_raw[[str(t1)+'y',str(t2)+'y']].apply(lambda x: getForward(x,t1,t2),axis=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: df_raw['2yf1y'] = df_raw[['1y','3y']].apply(lambda x
|
1,592
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.io as sio
from sklearn import datasets, linear_model
%matplotlib inline
def set_data(p, x):
temp = x.flatten()
n = len(temp[p:])
x_T = temp[p:].reshape((n, 1))
X_p = np.ones((n, p + 1))
for i in range(1, p + 1):
X_p[:, i] = temp[i - 1: i - 1 + n]
return X_p, x_T
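To make the shapes concrete, here is a self-contained replica of set_data (not imported from it) applied to a five-point series: row j pairs the target x[p+j] with an intercept column and the p preceding values.

```python
import numpy as np

# Self-contained mirror of set_data above.
def lag_design(p, x):
    x = np.asarray(x, dtype=float).flatten()
    n = len(x) - p
    x_T = x[p:].reshape((n, 1))          # targets x_t
    X_p = np.ones((n, p + 1))            # column 0 is the intercept
    for i in range(1, p + 1):
        X_p[:, i] = x[i - 1: i - 1 + n]  # lagged copies of the series
    return X_p, x_T

X_p, x_T = lag_design(1, [1, 2, 3, 4, 5])
# X_p pairs each intercept with the previous value; x_T holds the targets 2..5
```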
def AR(coeff, init, T):
offset = coeff[0]
mult_coef = np.flip(coeff, 0)[:-1]
series = np.zeros(T)
for k, x_i in enumerate(init):
series[k] = x_i
for i in range(k + 1, T):
series[i] = np.sum(mult_coef * series[i - k - 1:i]) + np.random.normal() + offset
return series
def estimated_autocorrelation(x):
n = len(x)
mu, sigma2 = np.mean(x), np.var(x)
r = np.correlate(x - mu, x - mu, mode = 'full')[-n:]
result = r/(sigma2 * (np.arange(n, 0, -1)))
return result
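A useful sanity check on this estimator: at lag 0 the normalized autocorrelation is exactly 1 by construction, and for a perfectly alternating series the lag-1 value is exactly -1. A self-contained mirror of estimated_autocorrelation demonstrating both:

```python
import numpy as np

# Self-contained mirror of estimated_autocorrelation above.
def autocorr(x):
    n = len(x)
    mu, sigma2 = np.mean(x), np.var(x)
    r = np.correlate(x - mu, x - mu, mode='full')[-n:]
    return r / (sigma2 * np.arange(n, 0, -1))

x = np.array([1.0, -1.0] * 50)  # alternating series: anti-correlated at lag 1
rho = autocorr(x)
# rho[0] is exactly 1; rho[1] is exactly -1 for this series
```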
def test_AR(x, coef, N):
x = x.flatten()
offset = coef[0]
slope = coef[1]
ave_err = np.empty((len(x) - N, N))
x_temp = np.empty(N)
for i in range(len(x) - N):
x_temp[0] = x[i] * slope + offset
for j in range(N -1):
x_temp[j + 1] = x_temp[j] * slope + offset
ave_err[i, :] = (x_temp - x[i:i+N])**2
return ave_err
x = sio.loadmat('Tut2_file1.mat')['x'].flatten()
plt.plot(x * 2, ',')
plt.xlabel('time')
plt.ylabel('x')
X_p, x_T = set_data(1, x)
model = linear_model.LinearRegression()
model.fit(X_p, x_T)
model.coef_
x_1 = AR(np.append(model.coef_, 0), [0, x[0]], 50001)
plt.plot(x_1[1:], ',')
plt.xlabel('time')
plt.ylabel('x')
rgr = linear_model.LinearRegression()
x = x.reshape((len(x)), 1)
t = np.arange(len(x)).reshape(x.shape)
rgr.fit(t, x)
x_star= x - rgr.predict(t)
plt.plot(x_star.flatten(), ',')
plt.xlabel('time')
plt.ylabel('x')
X_p, x_T = set_data(1, x_star)
model.fit(X_p, x_T)
model.coef_
x_1 = AR(np.append(model.coef_[0], 0), [0, x_star[0]], 50000)
plt.plot(x_1, ',')
plt.xlabel('time')
plt.ylabel('x')
plt.plot(x_star[1:], x_star[:-1], ',')
plt.xlabel(r'x$_{t - 1}$')
plt.ylabel(r'x$_{t}$')
err = test_AR(x_star, model.coef_[0], 10)
np.sum(err, axis=0) / err.shape[0]
plt.plot(np.sum(err, axis=0) / err.shape[0], 'o', label='Error')
plt.plot([0, 10.], np.ones(2)* np.var(x_star), 'r', label='Variance')
plt.grid(linestyle='dotted')
plt.xlabel(r'$\Delta t$')
plt.ylabel('Error')
x = sio.loadmat('Tut2_file2.mat')['x'].flatten()
plt.plot(x, ',')
plt.xlabel('time')
plt.ylabel('x')
np.mean(x)
X_p, x_T = set_data(1, x)
model = linear_model.LinearRegression()
model.fit(X_p, x_T)
model.coef_
x_1 = AR(model.coef_[0], x[:1], 50001)
plt.plot(x_1[1:], ',')
plt.xlabel('time')
plt.ylabel('x')
plt.plot(x[1:], x[:-1], ',')
plt.xlabel(r'x$_{t - 1}$')
plt.ylabel(r'x$_{t}$')
plt.plot(x_star[1:], x_star[:-1], ',')
plt.xlabel(r'x$_{t - 1}$')
plt.ylabel(r'x$_{t}$')
plt.plot(estimated_autocorrelation(x)[:200])
plt.xlabel(r'$\Delta$t')
plt.ylabel(r'$\rho$')
plt.plot(estimated_autocorrelation(x_1.flatten())[:20])
plt.xlabel(r'$\Delta$t')
plt.ylabel(r'$\rho$')
data = sio.loadmat('Tut2_file3.mat')
x_AR = data['x_AR'].flatten()
x_MA = data['x_MA'].flatten()
for i in range(3,7):
X_p, x_T = set_data(i, x_AR)
model = linear_model.LinearRegression()
model.fit(X_p, x_T)
plt.plot(estimated_autocorrelation((x_T - model.predict(X_p)).flatten())[:20], \
label='AR(' + str(i) + ')')
plt.xlabel(r'$\Delta$t')
plt.ylabel(r'$\rho$')
plt.legend()
plt.plot(estimated_autocorrelation(x_MA)[:20])
plt.xlabel(r'$\Delta$t')
plt.ylabel(r'$\rho$')
# test_AR(x, )  # leftover incomplete call: test_AR also needs coef and N arguments
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can see that simulating the data as an AR(1) model is not effective in giving us anything similar to the acquired data. This is because we made the wrong assumptions when we computed the coefficients. Our data is clearly not a stationary process and in particular cannot come from an AR(1) model alone, as there is a linear trend in time. The slope we computed shows that successive data points are strongly correlated.
Step2: 1.2
Step3: This time we obtain different coefficients, which we can use to simulate the data and see whether they give us a result similar to the real data.
Step4: In the next plot we can see that the prediction error approaches the variance exponentially the further ahead we try to predict; by about 5 time steps out it is equal to the variance.
Step5: 1.4
Step6: We tried to simulate the data with these coefficients, but it is clearly ineffective.
Step7: By plotting the return plot we can better understand what is going on. The data can be divided into two parts: successive data points always cluster around one of these two poles. If it were a real AR model, we would expect something like the return plots shown below this one.
Step8: We can see that in the autocorrelation plot the trend is exponential, which is what we would expect, but it takes too long to decay to be an AR model with a small value of $p$.
Step9: Task 2. Autocorrelation and partial autocorrelation.
Step10: To compute $\hat p$ for the AR model, we estimated the parameters $a_i$ for various AR(p) models. We find that for p = 6 there is no remaining correlation between previous and future values.
Step11: For the MA $\hat q$ could be around 4-6
|
1,593
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
from IPython.display import HTML
from IPython.display import display
assert True # leave this to grade the import statements
Image(url='http://upload.wikimedia.org/wikipedia/commons/4/43/The_Earth_seen_from_Apollo_17_with_transparent_background.png')
assert True # leave this to grade the image display
table = """<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge (e)</th>
<th>Mass (MeV/c^2)</th>
</tr>
<tr>
<td>up</td>
<td>u</td>
<td>ubar</td>
<td>+2/3</td>
<td>1.5-3.3</td>
</tr>
<tr>
<td>down</td>
<td>d</td>
<td>dbar</td>
<td>-1/3</td>
<td>3.5-6.0</td>
</tr>
<tr>
<td>charm</td>
<td>c</td>
<td>cbar</td>
<td>+2/3</td>
<td>1,160-1,340</td>
</tr>
<tr>
<td>strange</td>
<td>s</td>
<td>sbar</td>
<td>-1/3</td>
<td>70-130</td>
</tr>
<tr>
<td>top</td>
<td>t</td>
<td>tbar</td>
<td>+2/3</td>
<td>169,100-173,300</td>
</tr>
<tr>
<td>bottom</td>
<td>b</td>
<td>bbar</td>
<td>-1/3</td>
<td>4,130-4,370</td>
</tr>
</table>"""
table1 = HTML(table)
display(table1)
assert True # leave this here to grade the quark table
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic rich display
Step3: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
|
1,594
|
<ASSISTANT_TASK:>
Python Code:
import re
phoneNumRegex = re.compile(r'\d\d\d-\d\d\d-\d\d\d\d')
phoneNumRegex.search('My number is 415-555-4242') # returns a match object
mo = phoneNumRegex.search('My number is 415-555-4242') # store match object
mo.group() # print matched strings in match object
phoneNumRegex = re.compile(r'(\d\d\d)-(\d\d\d-\d\d\d\d)') # The first () is group 1, the second () is group 2
phoneNumRegex.search('My number is 415-555-4242') # returns a match object with subgroups
mo = phoneNumRegex.search('My number is 415-555-4242') # store match object with subgroups
print('The area code is ' + mo.group(1)) # print out the subgroup matching the parameter
print('The rest of it is ' + mo.group(2)) # print out the other subgroup
phoneNumRegex = re.compile(r'\(\d\d\d\)-\d\d\d-\d\d\d\d')
mo = phoneNumRegex.search('My number is 415-555-4242') # returns no match
print(mo)
mo =phoneNumRegex.search('My number is (415)-555-4242') # returns a match object
mo.group()
batRegex = re.compile(r'Bat(man|mobile|copter|cat)') # The pipe character seperates the suffixes, and allows any match.
mo = batRegex.search('Batmobile lost a wheel.')
print(mo.group()) # Print matching string
print(mo.group(1)) # Pass variable 1 into the group function to find which suffix group actually matched
#mo2 = batRegex.search('Batmotorcycle lost a wheel.')
#mo.group() # will return error because no match
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To get different sections of a string, we can use groups via parenthesis
Step2: If parenthesis are actually part of the pattern, they need to be escaped
Step3: The '|' RegEx Operater
|
1,595
|
<ASSISTANT_TASK:>
Python Code:
import vcsn
ctx = vcsn.context('law_char, q')
def exp(e):
return ctx.expression(e)
exp('a*b') * exp('ab*')
exp('<2>a') * exp('<3>\e')
exp('<2>a') * exp('\z')
exp('a') * exp('b') # Two one-letter words
exp('ab') # One two-letter word
exp('(a)(b)') # Two one-letter words
exp('ab') ** 3
exp('a*') * 3
exp('ab') ** -1
exp('ab').star()
exp('ab') ** (2, 2)
exp('ab') ** (2, 4)
exp('ab') ** (-1, 2)
exp('ab') ** (2, -1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple Multiplication
Step2: Of course, trivial identities are applied.
Step3: In the case of word labels, adjacent words are not fused
Step4: Repeated Multiplication
Step5: Use the exponent -1 to mean infinity. Alternatively, you may invoke a.star instead of a ** -1.
Step6: Sums of Repeated Multiplications
|
1,596
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 0
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
import helper
import random
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20.0, 10.0)
#%matplotlib inline
#%config InlineBackend.figure_format = 'retina'
#display 20 random images from the dataset
num_images = 20
num_cols = 4
num_rows = 5
features, labels = helper.load_cfar10_batch(cifar10_dataset_folder_path, 1)
label_names = helper._load_label_names()
rand_list = random.sample(range(len(features)), num_images)
#fig = plt.figure()
fig, axs = plt.subplots(num_rows,num_cols,figsize=(15,15))
fig.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0.2, hspace=0.25)
#fig.subplots(num_rows,num_cols)
for i in range(len(rand_list)):
sample = rand_list.pop()
sample_img = features[sample]
sample_label_name = label_names[labels[sample]]
a = fig.add_subplot(num_rows,num_cols,i+1)
imgplot = plt.imshow(sample_img)
a.set_title(sample_label_name)
a.axis('off')
def normalize(x):
    """Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # Min-Max scale raw pixel values from [0, 255] down to [0, 1]
    return x / 255
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
from sklearn.preprocessing import OneHotEncoder
import pandas as pd
def one_hot_encode(x):
    """One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
# TODO: Implement Function
x_df = pd.DataFrame(x)
enc = OneHotEncoder(n_values = 10)
return enc.fit_transform(x_df).toarray()
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
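An equivalent encoding without scikit-learn indexes an identity matrix with the labels; a minimal NumPy sketch with the same output shape as one_hot_encode above:

```python
import numpy as np

def one_hot_np(labels, n_classes=10):
    # Row i of the identity matrix is the one-hot vector for class i.
    return np.eye(n_classes)[np.asarray(labels)]

encoded = one_hot_np([0, 3, 9])
# encoded has shape (3, 10) with a single 1 per row, at columns 0, 3 and 9
```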
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
    """Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = (None, image_shape[0], image_shape[1], image_shape[2]), name = 'x')
def neural_net_label_input(n_classes):
    """Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = (None, n_classes), name = 'y')
def neural_net_keep_prob_input():
    """Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
# TODO: Implement Function
return tf.placeholder(tf.float32, name = 'keep_prob')
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
# TODO: Implement Function
#print(x_tensor)
#print(conv_num_outputs)
#print(conv_ksize)
#print(conv_strides)
#print(pool_ksize)
#print(pool_strides)
input_channel_depth = int(x_tensor.get_shape()[3])
weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], input_channel_depth, conv_num_outputs],mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
layer = tf.nn.conv2d(x_tensor, weight, strides=[1,conv_strides[0],conv_strides[1],1], padding='SAME')
layer = tf.nn.bias_add(layer,bias)
layer = tf.nn.relu(layer)
return tf.nn.max_pool(layer, ksize=[1,pool_ksize[0],pool_ksize[1],1], strides=[1,pool_strides[0],pool_strides[1],1], padding='SAME')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
shape = x_tensor.get_shape().as_list()
#print(shape)
dim = np.prod(shape[1:])
#print(dim)
return tf.reshape(x_tensor, [-1,dim])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weight = tf.Variable(tf.truncated_normal((x_tensor.get_shape().as_list()[1], num_outputs),mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.nn.relu(tf.matmul(x_tensor,weight) + bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
#print(x_tensor)
#print(num_outputs)
weight = tf.Variable(tf.truncated_normal((x_tensor.get_shape().as_list()[1], num_outputs),mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.matmul(x_tensor,weight) + bias
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
num_classes = 10
image_size = x.get_shape().as_list()
#print(image_size)
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 255
conv_ksize = [2,2]
conv_strides = [2,2]
pool_ksize = [2,2]
pool_strides = [2,2]
layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
layer = flatten(layer)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
layer = tf.nn.dropout(fully_conn(layer, 255), keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
layer = output(layer, num_classes)
# TODO: return output
return layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
train_loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
val_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print("Loss = {:>10.4f}, Accuracy = {:.04f}".format(train_loss, val_acc))
# TODO: Tune Parameters
epochs = 64
batch_size = 4096*2
keep_probability = 0.6
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
#%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
1,597
|
<ASSISTANT_TASK:>
Python Code:
import mdcs
user='admin'
pswd='admin'
host='http://127.0.0.1:8000'
template_name='TEM-Tutorial'
name_list=[
"Al6061_DF_343C",
"Al6061_SAED_343C",
"Al6061-BF-343C",
"CoAl-W-900-1000h-SAED",
"CoAlW-900-1000h",
"TRIP700-750C-carbide-SAED",
"TRIP700-carbide-750C-DF",
"TRIP700-Wferrite-750C"]
for name in name_list:
xml_name=name+".xml"
jpg_name=name+".jpg"
print "Uploading:",jpg_name
url = mdcs.blob.upload(jpg_name,host,user,pswd)
print "Reading:",xml_name
with open(xml_name, 'r') as f:
content = f.read()
content = content.replace("http://127.0.0.1:8000/rest/blob?id=REPLACE-ME-BLOB-ID",url)
print "Uploading:",xml_name
response = mdcs.curate_as(xml_name,name,host,user,pswd,template_title=template_name,content=content)
print "Response:",response
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Host and user information
Step2: List of file prefixes for micrograph images and XML metadata
Step3: For each name in the list
|
1,598
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (10, 20)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
source_id_text = []
target_id_text = []
for sentence in source_text.split('\n'):
ids = []
for w in sentence.split():
ids.append(source_vocab_to_int[w])
source_id_text.append(ids)
for sentence in target_text.split('\n'):
ids = []
for w in sentence.split():
ids.append(target_vocab_to_int[w])
ids.append(target_vocab_to_int['<EOS>'])
target_id_text.append(ids)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
inputs = tf.placeholder(tf.int32, shape = [None, None], name = "input")
targets = tf.placeholder(tf.int32, shape = [None, None])
learning_rate = tf.placeholder(tf.float32)
keep_prob = tf.placeholder(tf.float32, name = "keep_prob")
target_seq_len = tf.placeholder(tf.int32, shape = [None], name = "target_sequence_length")
max_target_len = tf.reduce_max(target_seq_len, name = "max_target_len")
src_seq_len = tf.placeholder(tf.int32, shape = [None], name = "source_sequence_length")
return inputs, targets, learning_rate, keep_prob, target_seq_len, max_target_len, src_seq_len
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
rev = tf.reverse(target_data, axis = [1]) # reverse each row to put last element first
sliced = tf.slice(rev, [0, 1], [-1, -1]) # slice to strip the last element of each row (now in first position) off
unrev = tf.reverse(sliced, axis = [1]) # reverse rows to restore original order
go = tf.constant(target_vocab_to_int['<GO>'], dtype = tf.int32, shape = [batch_size, 1])
concat = tf.concat([go, unrev], axis = 1)
return concat
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
# Taken from the github link about stacked LSTM cell above
def lstm_cell(rnn_size):
return tf.contrib.rnn.BasicLSTMCell(rnn_size)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
embedded_inputs = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
cells = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(cells, output_keep_prob = keep_prob)
outputs, state = tf.nn.dynamic_rnn(rnn, embedded_inputs, dtype = tf.float32)
return outputs, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
trainhelper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, trainhelper, encoder_state, output_layer = output_layer)
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished = True, maximum_iterations = max_summary_length)
return outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size])
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished = True, maximum_iterations = max_target_sequence_length)
return outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
cells = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(cells, output_keep_prob = keep_prob)
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode"):
training_decoder_output = decoding_layer_train(encoder_state, rnn, dec_embed_input, target_sequence_length,
max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope("decode", reuse = True):
inference_decoder_output = decoding_layer_infer(encoder_state, rnn, dec_embeddings,
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
max_target_sequence_length, target_vocab_size,
output_layer, batch_size, keep_prob)
return training_decoder_output, inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
enc_outputs, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
train_dec_output, inference_dec_output = decoding_layer(dec_input, enc_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return train_dec_output, inference_dec_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 256 # 32
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 1
# Embedding Size
encoding_embedding_size = 256 # 16
decoding_embedding_size = 256 # 16
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.5
display_step = 100
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
sentence = sentence.lower()
unknown_id = vocab_to_int['<UNK>']
ids = [vocab_to_int.get(w, unknown_id) for w in sentence.split()]
return ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a yellow old truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
1,599
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
from scipy.stats import ttest_rel, norm
# Read in data
df = pd.read_csv('stroopdata.csv')
IQR_congruent = df['Congruent'].quantile(0.75) - df['Congruent'].quantile(0.25)
IQR_incongruent = df['Incongruent'].quantile(0.75) - df['Incongruent'].quantile(0.25)
print(df['Congruent'].median(), df['Incongruent'].median(), IQR_congruent, IQR_incongruent)
%matplotlib inline
plt.plot(df['Incongruent']-df['Congruent'], 'o')
plt.title('Difference Between Congruent and \n Incongruent Conditions on Stroop Task')
plt.xlabel('Participant')
plt.ylabel('Reaction time');
x = np.linspace(-10, 30, 1000)
plt.plot(x, norm.pdf(x, 7.3, 5), 'g-', label="N(7.3, 5)")
sns.kdeplot(df['Incongruent']-df['Congruent'], label="KDE")
plt.title('Kernel Density Estimate of the Distribution\nof Differences Between the Two Conditions');
plt.legend();
ttest_rel(df['Congruent'], df['Incongruent'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is our independent variable? What is our dependent variable?
Step2: Median reaction time, congruent condition
Step3: Notice that not a single participant performed better on the incongruent task than on the congruent task (all differences are above zero).
Step4: At a glance, the difference distribution looks roughly normally-distributed (a manually fitted normal distribution is plotted for comparison), justifying the use of a paired t-test.
|