| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, 67–124k chars, may be ⌀) | cleaned_prompt (string, 168–30.3k chars, may be ⌀) |
|---|---|---|
12,700
|
<ASSISTANT_TASK:>
Python Code:
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(text_train, y_train)
pipeline.score(text_test, y_test)
# This illustrates a common mistake: the vectorizer is fit on the whole training set, so the cross-validation folds inside GridSearchCV are scored on features that already leaked information from their validation data. Don't use this code!
from sklearn.model_selection import GridSearchCV
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
grid = GridSearchCV(clf, param_grid={'C': [.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(),
LogisticRegression())
grid = GridSearchCV(pipeline,
param_grid={'logisticregression__C': [.1, 1, 10, 100]}, cv=5)
grid.fit(text_train, y_train)
grid.score(text_test, y_test)
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100],
"tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(text_train, y_train)
print(grid.best_params_)
grid.score(text_test, y_test)
# %load solutions/15A_ridge_grid.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Previously, we applied the feature extraction manually, like so
Step2: The situation where we learn a transformation and then apply it to the test data is very common in machine learning.
Step3: As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same thing as above is happening: when fit is called on the pipeline, it calls fit (and transform) on each step in turn.
Step4: 2.1.2 What did we do wrong?
Step5: Note that we need to tell the pipeline at which step we want to set the parameter C.
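A small sketch (using only standard scikit-learn calls, not part of the original notebook) of how to discover the step and parameter names a pipeline accepts — make_pipeline names each step after its lowercased class name, which is why the grid key above is 'logisticregression__C':
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
# step names are the lowercased class names
print(pipeline.named_steps.keys())   # dict_keys(['tfidfvectorizer', 'logisticregression'])
# every tunable parameter is addressable as <step>__<parameter>
print([p for p in pipeline.get_params() if p.endswith('__C')])   # ['logisticregression__C']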
Step6: <div class="alert alert-success">
|
12,701
|
<ASSISTANT_TASK:>
Python Code:
%gui qt
import vtk
from vtkviewer import SimpleVtkViewer
#help(vtk.vtkRectilinearGridReader())
# do not forget to call "Update()" at the end of the reader
rectGridReader = vtk.vtkRectilinearGridReader()
rectGridReader.SetFileName("data/jet4_0.500.vtk")
rectGridReader.Update()
rectGridOutline = vtk.vtkRectilinearGridOutlineFilter()
rectGridOutline.SetInputData(rectGridReader.GetOutput())
rectGridOutlineMapper = vtk.vtkPolyDataMapper()
rectGridOutlineMapper.SetInputConnection(rectGridOutline.GetOutputPort())
outlineActor = vtk.vtkActor()
outlineActor.SetMapper(rectGridOutlineMapper)
outlineActor.GetProperty().SetColor(0, 0, 0)
#Option 1: Default vtk render window
renderer = vtk.vtkRenderer()
renderer.SetBackground(0.5, 0.5, 0.5)
renderer.AddActor(outlineActor)
renderer.ResetCamera()
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindow.SetSize(500, 500)
renderWindow.Render()
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renderWindow)
iren.Start()
#Option 2: Using the vtk-viewer for Jupyter to interactively modify the pipeline
vtkSimpleWin = SimpleVtkViewer()
vtkSimpleWin.resize(1000,800)
vtkSimpleWin.hide_axes()
vtkSimpleWin.add_actor(outlineActor)
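# NOTE: `gridGeomActor` is referenced on the next line but never defined in this snippet.
# A minimal sketch (an assumption, not the notebook's original code) of how it could be
# built with the standard VTK pipeline: geometry filter -> polydata mapper -> actor.
rectGridGeom = vtk.vtkRectilinearGridGeometryFilter()
rectGridGeom.SetInputConnection(rectGridReader.GetOutputPort())
# optionally restrict the extracted geometry to a plane with rectGridGeom.SetExtent(...)
rectGridGeomMapper = vtk.vtkPolyDataMapper()
rectGridGeomMapper.SetInputConnection(rectGridGeom.GetOutputPort())
gridGeomActor = vtk.vtkActor()
gridGeomActor.SetMapper(rectGridGeomMapper)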
vtkSimpleWin.add_actor(gridGeomActor)
vtkSimpleWin.ren.SetBackground(0.5, 0.5, 0.5)
vtkSimpleWin.ren.ResetCamera()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Data input (source)
Step2: 2. Filters
Step3: 3. Mappers
Step4: 4. Actors
Step5: 5. Renderers and Windows
|
12,702
|
<ASSISTANT_TASK:>
Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%pylab inline
import json
import os
# Support to initialise and configure your test environment
import devlib
from env import TestEnv
# Support to configure and run RTApp based workloads
from wlgen import RTA, Periodic, Ramp, Step, Pulse
# Support for FTrace events parsing and visualization
import trappy
# Support for performance analysis of RTApp workloads
from perf_analysis import PerfAnalysis
# Setup a target configuration
my_target_conf = {
# Define the kind of target platform to use for the experiments
"platform" : 'linux', # Linux system, valid other options are:
# android - access via ADB
# linux - access via SSH
# host - direct access
# Preload settings for a specific target
"board" : 'juno', # juno - JUNO board with mainline hwmon
# Define devlib module to load
"modules" : [
'bl', # enable big.LITTLE support
'cpufreq' # enable CPUFreq support
],
# Account to access the remote target
"host" : '192.168.0.1',
"username" : 'root',
"password" : 'juno',
# Comment the following line to force rt-app calibration on your target
"rtapp-calib" : {
'0': 361, '1': 138, '2': 138, '3': 352, '4': 360, '5': 353
}
}
# Setup the required Test Environment supports
my_tests_conf = {
# Binary tools required to run this experiment
# These tools must be present in the tools/ folder for the architecture
"tools" : ['rt-app', 'taskset', 'trace-cmd'],
# FTrace events and buffer configuration
"ftrace" : {
"events" : [
"sched_switch",
"cpu_frequency"
],
"buffsize" : 10240
},
}
# Initialize a test environment using
# - the provided target configuration (my_target_conf)
# - the provided test configuration (my_test_conf)
te = TestEnv(target_conf=my_target_conf, test_conf=my_tests_conf)
target = te.target
# Create a new RTApp workload generator using the calibration values
# reported by the TestEnv module
rtapp = RTA(target, 'simple', calibration=te.calibration())
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind='profile',
# 2. define the "profile" of each task
params={
# 3. PERIODIC task
#
# This class defines a task which load is periodic with a configured
# period and duty-cycle.
#
# This class is a specialization of the 'pulse' class since a periodic
# load is generated as a sequence of pulse loads.
#
# Args:
# duty_cycle_pct (int, [0-100]): the pulse load [%]
# default: 50[%]
# duration_s (float): the duration in [s] of the entire workload
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_per20': Periodic(
period_ms=100, # period
duty_cycle_pct=20, # duty cycle
duration_s=5, # duration
cpus=None, # run on all CPUS
sched={
"policy": "FIFO", # Run this task as a SCHED_FIFO task
},
delay_s=0 # start at the start of RTApp
).get(),
# 4. RAMP task
#
# This class defines a task which load is a ramp with a configured number
# of steps according to the input parameters.
#
# Args:
# start_pct (int, [0-100]): the initial load [%], (default 0[%])
# end_pct (int, [0-100]): the final load [%], (default 100[%])
# delta_pct (int, [0-100]): the load increase/decrease [%],
# default: 10[%]
# increase if start_pct < end_pct
# decrease if start_pct > end_pct
# time_s (float): the duration in [s] of each load step
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# loops (int): number of times to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_rmp20_5-60': Ramp(
period_ms=100, # period
start_pct=5, # initial load
end_pct=65, # end load
delta_pct=20, # load % increase...
time_s=1, # ... every 1[s]
cpus="0" # run just on first CPU
).get(),
# 5. STEP task
#
# This class defines a task which load is a step with a configured
# initial and final load.
#
# Args:
# start_pct (int, [0-100]): the initial load [%]
# default 0[%])
# end_pct (int, [0-100]): the final load [%]
# default 100[%]
# time_s (float): the duration in [s] of the start and end load
# default: 1.0[s]
# period_ms (float): the period used to define the load in [ms]
# default 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default 0[s]
# loops (int): number of times to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_stp10-50': Step(
period_ms=100, # period
start_pct=0, # initial load
end_pct=50, # end load
time_s=1, # ... every 1[s]
delay_s=0.5 # start .5[s] after the start of RTApp
).get(),
# 6. PULSE task
#
# This class defines a task which load is a pulse with a configured
# initial and final load.
#
# The main difference with the 'step' class is that a pulse workload is
# by definition a 'step down', i.e. the workload switches from an initial
# load to a final one which is always lower than the initial one.
# Moreover, a pulse load does not generate a sleep phase in case of 0[%]
# load, i.e. the task ends as soon as the non null initial load has
# completed.
#
# Args:
# start_pct (int, [0-100]): the initial load [%]
# default: 0[%]
# end_pct (int, [0-100]): the final load [%]
# default: 100[%]
# NOTE: must be lower than start_pct value
# time_s (float): the duration in [s] of the start and end load
# default: 1.0[s]
# NOTE: if end_pct is 0, the task end after the
# start_pct period completed
# period_ms (float): the period used to define the load in [ms]
# default: 100.0[ms]
# delay_s (float): the delay in [s] before ramp start
# default: 0[s]
# loops (int): number of times to repeat the ramp, with the
# specified delay in between
# default: 0
# sched (dict): the scheduler configuration for this task
# cpus (list): the list of CPUs on which task can run
'task_pls5-80': Pulse(
period_ms=100, # period
start_pct=65, # initial load
end_pct=5, # end load
time_s=1, # ... every 1[s]
delay_s=0.5 # start .5[s] after the start of RTApp
).get(),
},
# 7. use this folder for task logfiles
run_dir=target.working_directory
);
# Initial phase and pinning parameters
ramp = Ramp(period_ms=100, start_pct=5, end_pct=65, delta_pct=20, time_s=1, cpus="0")
# Following phases
medium_slow = Periodic(duty_cycle_pct=10, duration_s=5, period_ms=100)
high_fast = Periodic(duty_cycle_pct=60, duration_s=5, period_ms=10)
medium_fast = Periodic(duty_cycle_pct=10, duration_s=5, period_ms=1)
high_slow = Periodic(duty_cycle_pct=60, duration_s=5, period_ms=100)
#Compose the task
complex_task = ramp + medium_slow + high_fast + medium_fast + high_slow
# Configure this RTApp instance to:
# rtapp.conf(
# # 1. generate a "profile based" set of tasks
# kind='profile',
#
# # 2. define the "profile" of each task
# params={
# 'complex' : complex_task.get()
# },
#
# # 6. use this folder for task logfiles
# run_dir='/tmp'
#)
logging.info('#### Setup FTrace')
te.ftrace.start()
logging.info('#### Start energy sampling')
te.emeter.reset()
logging.info('#### Start RTApp execution')
rtapp.run(out_dir=te.res_dir, cgroup="")
logging.info('#### Read energy consumption: %s/energy.json', te.res_dir)
nrg_report = te.emeter.report(out_dir=te.res_dir)
logging.info('#### Stop FTrace')
te.ftrace.stop()
trace_file = os.path.join(te.res_dir, 'trace.dat')
logging.info('#### Save FTrace: %s', trace_file)
te.ftrace.get_trace(trace_file)
logging.info('#### Save platform description: %s/platform.json', te.res_dir)
(plt, plt_file) = te.platform_dump(te.res_dir)
# Inspect the JSON file used to run the application
with open('{}/simple_00.json'.format(te.res_dir), 'r') as fh:
rtapp_json = json.load(fh, )
logging.info('Generated RTApp JSON file:')
print json.dumps(rtapp_json, indent=4, sort_keys=True)
# All data are produced in the output folder defined by the TestEnv module
logging.info('Content of the output folder %s', te.res_dir)
!ls -la {te.res_dir}
# Dump the energy measured for the LITTLE and big clusters
logging.info('Energy: %s', nrg_report.report_file)
print json.dumps(nrg_report.channels, indent=4, sort_keys=True)
# Dump the platform descriptor, which could be useful for further analysis
# of the generated results
logging.info('Platform description: %s', plt_file)
print json.dumps(plt, indent=4, sort_keys=True)
# NOTE: The interactive trace visualization is available only if you run
# the workload to generate a new trace-file
trappy.plotter.plot_trace(te.res_dir)
# Parse the RT-App generate log files to compute performance metrics
pa = PerfAnalysis(te.res_dir)
# For each task which has generated a logfile, plot its performance metrics
for task in pa.tasks():
pa.plotPerf(task, "Performance plots for task [{}] ".format(task))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test environment setup
Step2: Workload configuration
Step3: The output of the previous cell reports the main properties of the generated workload.
Step4: Workload execution
Step5: Collected results
Step6: Trace inspection
Step7: RTApp task performance plots
|
12,703
|
<ASSISTANT_TASK:>
Python Code:
from bs4 import BeautifulSoup
import requests
import pandas as pd
from pandas import Series,DataFrame
url = 'http://www.ucop.edu/operating-budget/budgets-and-reports/legislative-reports/2013-14-legislative-session.html'
# Request content from web page
result = requests.get(url)
c = result.content
# Set as Beautiful Soup Object
soup = BeautifulSoup(c)
# Go to the section of interest
summary = soup.find("div",{'class':'list-land','id':'content'})
# Find the tables in the HTML
tables = summary.find_all('table')
# Set up empty data list
data = []
# Set rows as first indexed object in tables with a row
rows = tables[0].findAll('tr')
# now grab every HTML cell in every row
for tr in rows:
cols = tr.findAll('td')
# Check to see if text is in the row
for td in cols:
text = td.find(text=True)
print text,
data.append(text)
data
# Set up empty lists
reports = []
date = []
# Set index counter
index = 0
# Go find the pdf cells
for item in data:
if 'pdf' in item:
# Add the date and reports
date.append(data[index-1])
# Get rid of \xa0
reports.append(item.replace(u'\xa0', u' '))
index += 1
# Set up Dates and Reports as Series
date = Series(date)
reports = Series(reports)
# Concatenate into a DataFrame
legislative_df = pd.concat([date,reports],axis=1)
# Set up the columns
legislative_df.columns = ['Date','Reports']
# Show the finished DataFrame
legislative_df
# http://docs.python-guide.org/en/latest/scenarios/scrape/
from lxml import html
import requests
page = requests.get('http://econpy.pythonanywhere.com/ex/001.html')
tree = html.fromstring(page.content)
# inspect element
# <div title="buyer-name">Carson Busses</div>
# <span class="item-price">$29.95</span>
#This will create a list of buyers:
buyers = tree.xpath('//div[@title="buyer-name"]/text()')
#This will create a list of prices
prices = tree.xpath('//span[@class="item-price"]/text()')
print 'Buyers: ', buyers
print 'Prices: ', prices
# https://www.flightradar24.com/56.16,-52.58/7
# http://stackoverflow.com/questions/39489168/how-to-scrape-real-time-streaming-data-with-python
# If you look at the network tab in the developer console in Chrome (for example), you'll see the requests to https://data-live.flightradar24.com/zones/fcgi/feed.js?bounds=59.09,52.64,-58.77,-47.71&faa=1&mlat=1&flarm=1&adsb=1&gnd=1&air=1&vehicles=1&estimated=1&maxage=7200&gliders=1&stats=1
import requests
from bs4 import BeautifulSoup
import time
def get_count():
url = "https://data-live.flightradar24.com/zones/fcgi/feed.js?bounds=57.78,54.11,-56.40,-48.75&faa=1&mlat=1&flarm=1&adsb=1&gnd=1&air=1&vehicles=1&estimated=1&maxage=7200&gliders=1&stats=1"
# Request with fake header, otherwise you will get an 403 HTTP error
r = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
# Parse the JSON
data = r.json()
counter = 0
# Iterate over the elements to get the number of total flights
for element in data["stats"]["total"]:
counter += data["stats"]["total"][element]
return counter
while True:
print(get_count())
time.sleep(8)
# Hmm, that was just my first thought. As I wrote, the code is not meant as something final
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For our quick web scraping tutorial, we'll look at some legislative reports from the University of California Web Page. Feel free to experiment with other webpages, but remember to be cautious and respectful in what you scrape and how often you do it. Always check the legality of a web scraping job.
Step2: Now let's go ahead and set up requests to grab content from the URL, and set it as a Beautiful Soup object.
Step3: Now we'll use Beautiful Soup to search for the table we want to grab!
Step4: Now we need to use Beautiful Soup to find the table entries. A 'td' tag defines a standard cell in an HTML table. The 'tr' tag defines a row in an HTML table.
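A tiny standalone illustration (hypothetical HTML, not the notebook's page) of how the tr/td nesting maps to the loop in the code above:
from bs4 import BeautifulSoup
html = "<table><tr><td>a</td><td>b</td></tr><tr><td>c</td></tr></table>"
# each 'tr' is a row; each 'td' inside it is a cell
rows = BeautifulSoup(html, "html.parser").findAll("tr")
print([td.find(text=True) for tr in rows for td in tr.findAll("td")])   # ['a', 'b', 'c']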
Step5: Let's see what the data list looks like
Step6: Now we'll use a for loop to go through the list and grab only the cells with a pdf file in them, we'll also need to keep track of the index to set up the date of the report.
Step7: You'll notice a line to take care of '\xa0'. That is a non-breaking-space character which causes Unicode issues if it is left in the text. Web pages can be messy and inconsistent, and it is very likely you'll have to do some research to take care of problems like these.
Step8: There are other, more lightweight options for web scraping
|
12,704
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,6))
b.add_dataset('mesh')
print b['times@mesh']
print b['include_times@mesh']
b['times@mesh'] = [10]
b['include_times@mesh'] = ['lc01']
b.run_compute()
print b['mesh@model'].times
print b['mesh@model'].qualifiers
print b['columns@mesh']
b['columns@mesh'] = ['teffs']
b.run_compute()
print b['mesh@model'].qualifiers
print b.get_value('teffs', time=0.0, component='primary')
afig, mplfig = b['mesh@model'].plot(time=0.2, fc='teffs', ec='none', show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: The 'Mesh' Dataset
Step3: Note that we can now manually set the times of the mesh AND/OR reference the times for existing non-mesh datasets (such as the light curve we just added) as well as any of the various t0s in the system.
Step4: By default, the mesh only exposes the geometric columns of the triangles
Step5: But we can also specify other columns to be included (by setting the columns parameter before calling run_compute)
Step6: Any of the exposed columns are then available for plotting the mesh, via b.plot.
|
12,705
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# clear old variables
tf.reset_default_graph()
# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3]) # NxHxWxC == NHWC format, not shape, not type, maybe structure
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
def simple_model(X,y):
# define our weights (e.g. init_two_layer_convnet)
# setup variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
W1 = tf.get_variable("W1", shape=[5408, 10])
b1 = tf.get_variable("b1", shape=[10])
# define our graph (e.g. two_layer_convnet)
a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1
h1 = tf.nn.relu(a1)
h1_flat = tf.reshape(h1,[-1,5408])
y_out = tf.matmul(h1_flat,W1) + b1
return y_out
y_out = simple_model(X,y)
# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)
# clear old variables
tf.reset_default_graph()
# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3]) # NxHxWxC == NHWC format, not shape, not type, maybe structure
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
def simple_model(X,y):
# define our weights (e.g. init_two_layer_convnet)
# # Fully connection input
# X_fc = tf.reshape(name='X_fc', shape=[-1, 32*32*3], tensor=X)
# # X: N, 32*32*3
# W2 = tf.get_variable("W2", shape=[32*32*3, 10])
# b2 = tf.get_variable("b2", shape=[10])
# y_out = tf.matmul(X_fc, W2) + b2
# # y: N, 10
# setup conv variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
y_conv = tf.nn.conv2d(X, Wconv1, strides=[1, 2, 2, 1], padding='VALID') + bconv1
# y: N, H, W, C = N, 13, 13, 32
# tf.nn.max_pool(data_format=, input=, ksize=, name=, padding=, strides=, Targmax=, value=)
y_pool = tf.nn.max_pool(value=y_conv, ksize=[1, 2, 2, 1], padding='VALID', strides=[1, 1, 1, 1])
# y: N, 12, 12, 32
y_nl = tf.nn.relu(y_pool)
# y: N, 12, 12, 32
y_bn = tf.layers.batch_normalization(inputs=y_nl)
# y: N, 12, 12, 32
X_fc = tf.reshape(name='X_fc', shape=[-1, 12*12*32], tensor=y_bn)
# X: N, 12*12*32
W1 = tf.get_variable("W1", shape=[12*12*32, 10]) # 5408 = 13 * 13 * 32
b1 = tf.get_variable("b1", shape=[10])
y_out = tf.matmul(X_fc, W1) + b1
# y: N, 10
return y_out
y_out = simple_model(X,y)
# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)
def run_model(session, predict, loss_val, Xd, yd,
epochs=1, batch_size=64, print_every=100,
training=None, plot_losses=False):
# have tensorflow compute accuracy
correct_prediction = tf.equal(tf.argmax(predict,1), y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# shuffle indicies
train_indicies = np.arange(Xd.shape[0])
np.random.shuffle(train_indicies)
training_now = training is not None
# setting up variables we want to compute (and optimizing)
# if we have a training function, add that to things we compute
variables = [mean_loss,correct_prediction,accuracy]
if training_now:
variables[-1] = training
# counter
iter_cnt = 0
for e in range(epochs):
# keep track of losses and accuracy
correct = 0
losses = []
# make sure we iterate over the dataset once
for i in range(int(math.ceil(Xd.shape[0]/batch_size))):
# generate indicies for the batch
start_idx = (i*batch_size)%Xd.shape[0]
idx = train_indicies[start_idx:start_idx+batch_size]
# create a feed dictionary for this batch
feed_dict = {X: Xd[idx,:],
y: yd[idx],
is_training: training_now }
# get batch size
actual_batch_size = yd[idx].shape[0]
# have tensorflow compute loss and correct predictions
# and (if given) perform a training step
loss, corr, _ = session.run(variables,feed_dict=feed_dict)
# aggregate performance stats
losses.append(loss*actual_batch_size)
correct += np.sum(corr)
# print every now and then
if training_now and (iter_cnt % print_every) == 0:
print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"\
.format(iter_cnt,loss,np.sum(corr)/actual_batch_size))
iter_cnt += 1
total_correct = correct/Xd.shape[0]
total_loss = np.sum(losses)/Xd.shape[0]
print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"\
.format(total_loss,total_correct,e+1))
if plot_losses:
plt.plot(losses)
plt.grid(True)
plt.title('Epoch {} Loss'.format(e+1))
plt.xlabel('minibatch number')
plt.ylabel('minibatch loss')
plt.show()
return total_loss,total_correct
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# clear old variables
tf.reset_default_graph()
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X,y,is_training):
# # Fully connection input
# X_fc = tf.reshape(name='X_fc', shape=[-1, 32*32*3], tensor=X)
# # X: N, 32*32*3
# W2 = tf.get_variable("W2", shape=[32*32*3, 10])
# b2 = tf.get_variable("b2", shape=[10])
# y_out = tf.matmul(X_fc, W2) + b2
# # y: N, 10
# setup conv variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
y_conv = tf.nn.conv2d(X, Wconv1, strides=[1, 1, 1, 1], padding='VALID') + bconv1
# (32 - 7)//1 +1 == 25//1 +1 == 26
# y: N, H, W, C = N, 26, 26, 32
# Non-linearity, activation layer, or non-linear function
y_nl = tf.nn.relu(y_conv)
# y: N, 26, 26, 32
# Regularizer as well: 0-mean and 1-var
y_bn = tf.layers.batch_normalization(inputs=y_nl)
# y: N, 26, 26, 32
# Max pooling: non-linear conv
# tf.nn.max_pool(data_format=, input=, ksize=, name=, padding=, strides=, Targmax=, value=)
y_pool = tf.nn.max_pool(value=y_bn, ksize=[1, 2, 2, 1], padding='VALID', strides=[1, 2, 2, 1])
# (26 - 2)//2 +1 == 24//2 +1 == 12 +1 == 13
# y: N, 13, 13, 32
# Fully connection input
X_fc = tf.reshape(name='X_fc', shape=[-1, 13*13*32], tensor=y_pool)
# X: N, 13*13*32
# FC layer or affine layer
W1 = tf.get_variable("W1", shape=[13*13*32, 1024])
b1 = tf.get_variable("b1", shape=[1024])
y_fc = tf.matmul(X_fc, W1) + b1
# y: N, 1024
# Non-linearity or activation function
y_fc_nl = tf.nn.relu(y_fc)
# y: N, 1024
W2 = tf.get_variable("W2", shape=[1024, 10])
b2 = tf.get_variable("b2", shape=[10])
y_out = tf.matmul(y_fc_nl, W2) + b2
# y: N, 10
return y_out
y_out = complex_model(X,y,is_training)
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, 32, 32,3)
with tf.Session() as sess:
with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
print(ans.shape)
print(np.array_equal(ans.shape, np.array([64, 10])))
try:
with tf.Session() as sess:
with tf.device("/gpu:0") as dev: #"/cpu:0" or "/gpu:0"
tf.global_variables_initializer().run()
ans = sess.run(y_out,feed_dict={X:x,is_training:True})
%timeit sess.run(y_out,feed_dict={X:x,is_training:True})
except tf.errors.InvalidArgumentError:
print("no gpu found, please use Google Cloud if you want GPU acceleration")
# rebuild the graph
# trying to start a GPU throws an exception
# and also trashes the original graph
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = complex_model(X,y,is_training)
# Inputs
# y_out: is what your model computes
# y: is your TensorFlow variable with label information
# Outputs
# mean_loss: a TensorFlow variable (scalar) with numerical loss
# optimizer: a TensorFlow optimizer
# This should be ~3 lines of code!
# define our loss
# total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)
total_loss = tf.losses.softmax_cross_entropy(logits=y_out, onehot_labels=tf.one_hot(y, 10))
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
# optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-3)
# train_step = optimizer.minimize(mean_loss)
pass
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_step = optimizer.minimize(mean_loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# Feel free to play with this cell
def my_model(X,y,is_training):
# setup conv variables
Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
bconv1 = tf.get_variable("bconv1", shape=[32])
y_conv = tf.nn.conv2d(X, Wconv1, strides=[1, 1, 1, 1], padding='VALID') + bconv1
# (32 - 7)//1 +1 == 25//1 +1 == 26
# y: N, H, W, C = N, 26, 26, 32
# Non-linearity, activation layer, or non-linear function
y_nl = tf.nn.relu(y_conv)
# y: N, 26, 26, 32
# Regularizer as well: 0-mean and 1-var
y_bn = tf.layers.batch_normalization(inputs=y_nl)
# y: N, 26, 26, 32
# Max pooling: non-linear conv
# tf.nn.max_pool(data_format=, input=, ksize=, name=, padding=, strides=, Targmax=, value=)
y_pool = tf.nn.max_pool(value=y_bn, ksize=[1, 2, 2, 1], padding='VALID', strides=[1, 2, 2, 1])
# (26 - 2)//2 +1 == 24//2 +1 == 12 +1 == 13
# y: N, 13, 13, 32
# Fully connection input
X_fc = tf.reshape(name='X_fc', shape=[-1, 13*13*32], tensor=y_pool)
# X: N, 13*13*32
# FC layer or affine layer
W1 = tf.get_variable("W1", shape=[13*13*32, 1024])
b1 = tf.get_variable("b1", shape=[1024])
y_fc = tf.matmul(X_fc, W1) + b1
# y: N, 1024
# Non-linearity or activation function
y_fc_nl = tf.nn.relu(y_fc)
# y: N, 1024
W2 = tf.get_variable("W2", shape=[1024, 10])
b2 = tf.get_variable("b2", shape=[10])
y_out = tf.matmul(y_fc_nl, W2) + b2
# y: N, 10
return y_out
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = my_model(X,y,is_training)
total_loss = tf.losses.softmax_cross_entropy(logits=y_out, onehot_labels=tf.one_hot(y, 10))
mean_loss = tf.reduce_mean(total_loss)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)
pass
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_step = optimizer.minimize(mean_loss)
# Feel free to play with this cell
# This default code creates a session
# and trains your model for 10 epochs
# then prints the validation set accuracy
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,10,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# Test your model here, and make sure
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What's this TensorFlow business?
Step2: Example Model
Step3: TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).
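For instance, here is a minimal sketch (an illustration, not part of the assignment code; it assumes the same TensorFlow 1.x API used above, and the layer sizes are arbitrary) of swapping the hand-rolled affine weights for tf.layers.dense, the hinge loss for softmax cross-entropy, and Adam for RMSProp:
import tensorflow as tf
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
def dense_model(X):
    # flatten the NHWC batch and stack two dense layers (TF 1.x layers API)
    X_flat = tf.reshape(X, [-1, 32 * 32 * 3])
    h1 = tf.layers.dense(X_flat, 1024, activation=tf.nn.relu)
    return tf.layers.dense(h1, 10)
y_out = dense_model(X)
# softmax cross-entropy instead of hinge loss, RMSProp instead of Adam
mean_loss = tf.reduce_mean(tf.losses.softmax_cross_entropy(onehot_labels=tf.one_hot(y, 10), logits=y_out))
train_step = tf.train.RMSPropOptimizer(learning_rate=1e-3).minimize(mean_loss)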
Step4: Training a specific model
Step5: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes)
Step6: You should see the following from the run above
Step7: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
Step8: Train the model
Step9: Check the accuracy of the model.
Step10: Train a great model on CIFAR-10!
Step11: Describe what you did here
|
12,706
|
<ASSISTANT_TASK:>
Python Code:
from collections import defaultdict, Counter
from itertools import zip_longest
import json
import os
import re
import sys
import urllib
import numpy as np
import requests
from consequence_prediction.vep_mapping_pipeline.consequence_mapping import *
from eva_cttv_pipeline.clinvar_xml_io.clinvar_xml_io import *
from eva_cttv_pipeline.evidence_string_generation.clinvar_to_evidence_strings import convert_allele_origins
%matplotlib inline
import matplotlib.pyplot as plt
PROJECT_ROOT = '/home/april/projects/opentargets/complex-events'
# dump of all records with no functional consequences and no complete coordinates
# uses June consequence pred + ClinVar 6/26/2021
no_consequences_path = os.path.join(PROJECT_ROOT, 'no-conseq_no-coords.xml.gz')
dataset = ClinVarDataset(no_consequences_path)
def get_somatic_germline_counts(dataset):
all_allele_origins = [convert_allele_origins(record.valid_allele_origins) for record in dataset]
# Our pipeline's definition for distinguishing somatic & germline
def is_somatic(allele_origins):
return allele_origins == ['somatic']
phenotypes_counts = Counter()
for allele_origins in all_allele_origins:
germline = False
somatic = False
for ao in allele_origins:
if is_somatic(ao):
somatic = True
else:
germline = True
if germline and somatic:
phenotypes_counts['both'] += 1
if germline and not somatic:
phenotypes_counts['germline'] += 1
if somatic and not germline:
phenotypes_counts['somatic'] += 1
# flat count of allele origins
flattened_allele_origins = [x for allele_origins in all_allele_origins for ao in allele_origins for x in ao]
flat_pheno_counts = Counter(flattened_allele_origins)
return phenotypes_counts, flat_pheno_counts
complex_phenotypes, complex_flat_aos = get_somatic_germline_counts(dataset)
complex_phenotypes
# check if these are enriched in somatic relative to full set
full_dataset = ClinVarDataset(os.path.join(PROJECT_ROOT, 'ClinVarFullRelease_2021-07.xml.gz'))
full_phenotypes, full_flat_aos = get_somatic_germline_counts(full_dataset)
full_phenotypes
def percent_somatic(c):
return c['somatic'] / sum(c.values()) * 100.0
print('percent somatic for complex:', percent_somatic(complex_phenotypes))
print('percent somatic for all:', percent_somatic(full_phenotypes))
(67974756 - 67967551) / (67974774 - 67967534)
(67967551 - 67967534) + (67974774 - 67974756)
(26775295 - 26547773) / 101991189 # length of chr 15
sequence_identifier = r'[a-zA-Z0-9_.]+'
genomic_sequence = f'^({sequence_identifier}):g\.'
# only INS, DEL, DUP supported by VEP
variant_type_regex = {
re.compile(f'{genomic_sequence}.*?del(?!ins).*?') : 'DEL',
re.compile(f'{genomic_sequence}.*?dup.*?') : 'DUP',
re.compile(f'{genomic_sequence}.*?(?<!del)ins.*?') : 'INS',
}
# for this we EXCLUDE unknown bounds, and capture all numeric bounds on endpoints
def_range = r'([0-9]+)_([0-9]+)'
var_range = r'\(([0-9]+)_([0-9]+)\)_\(([0-9]+)_([0-9]+)\)'
ch = r'[^?_+-]'
def_span_regex = re.compile(f'{genomic_sequence}{ch}*?{def_range}{ch}*?$')
var_span_regex = re.compile(f'{genomic_sequence}{ch}*?{var_range}{ch}*?$')
def endpoint_bounds(dataset, include_precise=False, limit=None):
Returns inner and outer bounds on endpoints (duplicating inner/outer if precise).
n = 0
all_bounds = []
all_hgvs = []
for record in dataset:
if not record.measure or not record.measure.hgvs:
continue
hs = [h for h in record.measure.hgvs if h is not None]
n += 1
if limit and n > limit:
break
for h in hs:
# NC_000011.8:g.(67967534_67967551)_(67974756_67974774)del
var_match = var_span_regex.match(h)
if var_match and all(var_match.group(i) for i in range(2,6)):
# use terminology from dbVar data model
# see https://www.ncbi.nlm.nih.gov/core/assets/dbvar/files/dbVar_VCF_Submission.pdf
outer_start = int(var_match.group(2))
inner_start = int(var_match.group(3))
inner_stop = int(var_match.group(4))
outer_stop = int(var_match.group(5))
all_bounds.append(((outer_start, inner_start), (inner_stop, outer_stop)))
all_hgvs.append(h)
# presumably all hgvs expressions for one record have the same span, don't double count
break
elif include_precise:
# NC_000016.10:g.12595039_12636793del
def_match = def_span_regex.match(h)
if def_match and def_match.group(2) and def_match.group(3):
outer_start = inner_start = int(def_match.group(2))
inner_stop = outer_stop = int(def_match.group(3))
all_bounds.append(((outer_start, inner_start), (inner_stop, outer_stop)))
all_hgvs.append(h)
break
return all_hgvs, all_bounds
all_hgvs, all_bounds = endpoint_bounds(dataset)
def is_valid(bounds):
# invalid if any range is negative
return (bounds[0][1] >= bounds[0][0]
and bounds[1][1] >= bounds[1][0]
and bounds[1][0] >= bounds[0][1])
def certainty_ratio(bounds):
For an HGVS range (A_B)_(C_D), this computes (C-B) / (D-A)
return (bounds[1][0] - bounds[0][1]) / (bounds[1][1] - bounds[0][0])
def uncertain_bounds_region(bounds):
For an HGVS range (A_B)_(C_D), this computes (B-A) + (D-C)
return (bounds[0][1] - bounds[0][0]) + (bounds[1][1] - bounds[1][0])
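# A quick usage sketch for the two helpers above (added for illustration), using the
# inner/outer bounds of the example span NC_000011.8:g.(67967534_67967551)_(67974756_67974774)del
# quoted earlier; it reproduces the bare arithmetic cells near the top of the notebook.
example_bounds = ((67967534, 67967551), (67974756, 67974774))
print(certainty_ratio(example_bounds))           # (67974756 - 67967551) / (67974774 - 67967534) ≈ 0.995
print(uncertain_bounds_region(example_bounds))   # (67967551 - 67967534) + (67974774 - 67974756) == 35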
len(all_bounds)
all_valid_bounds = [bounds for bounds in all_bounds if is_valid(bounds)]
len(all_valid_bounds)
all_certainty_ratios = [certainty_ratio(bounds) for bounds in all_valid_bounds]
all_uncertain_ranges = [uncertain_bounds_region(bounds) for bounds in all_valid_bounds]
# 1.0 is the most certain
print(min(all_certainty_ratios))
print(max(all_certainty_ratios))
plt.figure(figsize=(15,10))
plt.grid(visible=True)
plt.title(f'Variants per certainty ratio (imprecise, known bounds)')
plt.hist(all_certainty_ratios, bins=100)
print(min(all_uncertain_ranges))
print(max(all_uncertain_ranges))
# exclude the max
i = all_uncertain_ranges.index(max(all_uncertain_ranges))
plt.figure(figsize=(15,10))
plt.grid(visible=True)
plt.title('Variants per total size of uncertain bounds region')
plt.hist(all_uncertain_ranges[:i] + all_uncertain_ranges[i+1:], bins=100)
# the max is screwing up all my plots, get rid of it
i = all_uncertain_ranges.index(max(all_uncertain_ranges))
xs = all_certainty_ratios[:i] + all_certainty_ratios[i+1:]
ys = all_uncertain_ranges[:i] + all_uncertain_ranges[i+1:]
print(all_uncertain_ranges[i])
print(all_certainty_ratios[i])
plt.figure(figsize=(12,10))
plt.grid(visible=True)
plt.title('Certainty ratio vs. size of uncertain bounds region')
plt.xlabel('Certainty ratio')
plt.ylabel('Size of uncertain bounds region')
plt.scatter(xs, ys, marker='.')
plt.figure(figsize=(12,10))
plt.grid(visible=True)
plt.title('Certainty ratio vs. size of uncertain bounds region (log scale)')
plt.xlabel('Certainty ratio')
plt.ylabel('Size of uncertain bounds region')
plt.yscale('log')
plt.scatter(xs, ys, marker='.')
def hgvs_and_bounds_to_vep_identifier(all_hgvs, all_bounds):
for hgvs, bounds in zip(all_hgvs, all_bounds):
m = def_span_regex.match(hgvs)
if not m:
m = var_span_regex.match(hgvs)
if not m:
continue
seq = m.group(1)
# not everything accepted by VEP, for now we'll be lazy
if not (seq.startswith('NC') or seq.startswith('LRG') or seq.startswith('NW') or seq.startswith('AC')):
continue
variant_type = None
for r, s in variant_type_regex.items():
if r.match(hgvs):
variant_type = s
break
if not variant_type:
continue
# yield both inner and outer bounds
# include inner/outer in the identifier so we can connect them later
yield f'{seq} {bounds[0][1]} {bounds[1][0]} {variant_type} + {hgvs}###INNER'
yield f'{seq} {bounds[0][0]} {bounds[1][1]} {variant_type} + {hgvs}###OUTER'
# modified from previous notebook...
def grouper(iterable, n):
args = [iter(iterable)] * n
return [x for x in zip_longest(*args, fillvalue=None) if x is not None]
def get_vep_results(all_hgvs, all_bounds):
variants = [v for v in hgvs_and_bounds_to_vep_identifier(all_hgvs, all_bounds) if v]
print(f'{len(variants)} parsed into chrom/start/end/type')
# VEP only accepts batches of 200
vep_results = []
for group in grouper(variants, n=200):
vep_results.extend(query_vep(variants=group, search_distance=VEP_SHORT_QUERY_DISTANCE))
return vep_results
def extract_genes(vep_results):
results_by_variant = defaultdict(list)
for result in vep_results:
variant_identifier = result['id']
consequences = result.get('transcript_consequences', [])
results_by_variant[variant_identifier].extend({c['gene_id'] for c in consequences})
return results_by_variant
def gene_counts(all_hgvs, all_bounds, limit=None):
Return a map: hgvs -> {'INNER': num affected genes, 'OUTER': num affected genes}
if limit:
vep_results = get_vep_results(all_hgvs[:limit], all_bounds[:limit])
else:
vep_results = get_vep_results(all_hgvs, all_bounds)
identifiers_to_genes = extract_genes(vep_results)
print(f'{len(identifiers_to_genes)} successfully mapped by VEP')
result = defaultdict(dict)
for identifier, genes in identifiers_to_genes.items():
hgvs, inner_or_outer = identifier.split('###')
result[hgvs][inner_or_outer] = len(genes)
return result
result = gene_counts(all_hgvs, all_bounds)
def certainty_ratio_genes(genes_dict):
return genes_dict['INNER'] / genes_dict['OUTER']
# don't think the UBR size measurement makes sense
all_genes_ratios = [certainty_ratio_genes(x) for x in result.values()]
print(len(all_genes_ratios))
print(min(all_genes_ratios))
print(max(all_genes_ratios))
plt.figure(figsize=(15,10))
plt.grid(visible=True)
plt.title(f'Variants per certainty ratio (target genes version)')
plt.hist(all_genes_ratios, bins=100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Phenotypes
Step2: Summary for phenotypes
Step3: Precise
Step7: Uncertainty from spans
Step9: Uncertainty from genes
|
12,707
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
data = pd.read_csv("../data/iris.data")
# convert to NumPy arrays because they are the easiest to handle in sklearn
# (note: DataFrame.as_matrix() was removed in pandas 1.0; use .to_numpy() on newer pandas)
variables = data.drop(["class"], axis=1).as_matrix()
classes = data[["class"]].as_matrix().reshape(-1)
# import cross-validation scorer and KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
train_X, test_X, train_Y, test_Y = train_test_split(variables, classes)
# initialize classifier object
classifier = KNeighborsClassifier()
# fit the object using training data and sample labels
classifier.fit(train_X, train_Y)
# evaluate the results for held-out test sample
classifier.score(test_X, test_Y)
# value is the mean accuracy
# if we wanted to predict values for unseen data, we would use the predict()-method
classifier.predict(test_X) # note no known Y-values passed
from sklearn.decomposition import PCA # pca is a subspace method that projects the data into a lower-dimensional space
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
pca = PCA(n_components=2)
knn = KNeighborsClassifier(n_neighbors=3)
from sklearn.pipeline import Pipeline
pipeline = Pipeline([("pca", pca), ("kneighbors", knn)])
parameters_grid = dict(
pca__n_components=[1,2,3,4],
kneighbors__n_neighbors=[1,2,3,4,5,6]
)
grid_search = GridSearchCV(pipeline, parameters_grid)
grid_search.fit(train_X, train_Y)
grid_search.best_estimator_
# you can now test agains the held out part
grid_search.best_estimator_.score(test_X, test_Y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
|
12,708
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-3', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Aod
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
12,709
|
<ASSISTANT_TASK:>
Python Code:
import geosoft.gxpy.gx as gx
import geosoft.gxpy.utility as gxu
gxc = gx.GXpy()
url = 'https://github.com/GeosoftInc/gxpy/raw/9.3/examples/tutorial/Geosoft%20modules%20-%20gxapi%20and%20gxpy/'
gxu.url_retrieve(url + 'test.grd')
gxu.url_retrieve(url + 'test.grd.gi')
gxc = None
import geosoft.gxapi as gxapi
import geosoft.gxpy.utility as gxu
gxc = gxapi.GXContext.create('grid_dimension', '0.1')
# create an instance of the GXIMG class from grid file 'test.grd'
img = gxapi.GXIMG.create_file(gxapi.GS_FLOAT,
'test.grd(GRD)',
gxapi.IMG_FILE_READONLY)
# create reference items to support return immutable values from GXIMG.get_info()
x_sep = gxapi.float_ref()
y_sep = gxapi.float_ref()
x_origin = gxapi.float_ref()
y_origin = gxapi.float_ref()
rotation = gxapi.float_ref()
img.get_info(x_sep, y_sep, x_origin, y_origin, rotation)
# report
print('\n dimension (nx, ny): ({}, {})'.format(img.nx(), img.ny()),
'\n separation (x, y): ({}, {})'.format(x_sep.value, y_sep.value),
'\n origin (x, y): ({}, {})'.format(x_origin.value, y_origin.value),
'\n rotation: {}'.format(rotation.value))
gxc = None
import geosoft.gxpy as gxpy
gxc = gxpy.gx.GXpy()
grid = gxpy.grid.Grid.open('test.grd(GRD)')
print(' dimension (nx, ny): ({}, {})'.format(grid.nx, grid.ny),
'\n separation (x, y): ({}, {})'.format(grid.dx, grid.dy),
'\n origin (x, y): ({}, {})'.format(grid.x0, grid.y0),
'\n rotation: {}'.format(grid.rot))
gxc = None
import geosoft.gxpy as gxpy
import geosoft.gxapi as gxapi
gxc = gxpy.gx.GXpy()
grid = gxpy.grid.Grid.open('test.grd(GRD)')
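# grid.gximg is the wrapped gxapi.GXIMG instance; use it directly for GXIMG methods
# that gxpy does not expose (here, querying the grid compression ratio)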
cr = grid.gximg.query_double(gxapi.IMG_QUERY_rCOMPRESSION_RATIO)
print('compression ratio: {}'.format(cr))
gxc = None
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: GX API (geosoft.gxapi)
Step2: GXPY (geosoft.gxpy)
Step3: You will find that many gxpy classes map closely to underlying gxapi classes, but with a simpler, more consistent and often more flexible interface. For example, instances of the geosoft.gxpy.grid class map directly to a single instance of the geosoft.gxapi.GXIMG class. In these cases, one attribute of the gxpy instance will be the gxapi instance, usually with the lower-case name of the gxapi class. For example, an instance of the geosoft.gxpy.Grid class will have attribute gximg, which is an instance of the geosoft.gxapi.GXIMG class. This can be used to directly call a gxapi method to handle situations that are not handled by the gxpy module. For example, to determine the compression ratio of a compressed grid, call the gxapi.GXIMG.query_double() method using the gximg attribute of the grid instance
|
12,710
|
<ASSISTANT_TASK:>
Python Code:
# Imports and directives
%matplotlib inline
import numpy as np
from math import log
import matplotlib.pyplot as plt
from matplotlib.mlab import PCA as mlabPCA
import javalang
import os, re, requests, zipfile, json, operator
from collections import Counter
import colorsys
import random
from StringIO import StringIO
from subprocess import Popen, PIPE
from sklearn.cluster import KMeans
from tabulate import tabulate
from sklearn import svm
# Variables
USER = 'apache' # github user of the repo that is analysed
REPO = 'tomcat' # repository to investigate
BASE_PATH = '/Users/philippepossemiers/Documents/Dev/Spark/data/analyzer/' # local expansion path
COMMENT_LINES = ['/*', '//', '*/', '* '] # remove comments from code
KEY_WORDS = ['abstract','continue','for','new','switch','assert','default','goto','synchronized',
'boolean','do','if','private','this','break','double','implements','protected','throw',
'byte','else','public','throws','case','enum','instanceof','return','transient',
'catch','extends','int','short','try','char','final','interface','static','void',
             'class','finally','long','strictfp','volatile','const','float','native','super','while',
'true','false','null']
TOP = 25 # number of items to show in graphs
# list of operators to find in source code
OPERATORS = ['\+\+','\-\-','\+=','\-=','\*\*','==','!=','>=','<=','\+','=','\-','\*','/','%','!','&&', \
'\|\|','\?','instanceof','~','<<','>>','>>>','&','\^','<','>']
# list of variable types to find in source code
OPERANDS = ['boolean','byte','char','short','int','long','float','double','String']
GIT_COMMIT_FIELDS = ['author_name', 'committer name', 'date', 'message', 'name']
GIT_LOG_FORMAT = ['%an', '%cn', '%ad', '%s']
GIT_LOG_FORMAT = '%x1f'.join(GIT_LOG_FORMAT) + '%x1e'
# List of Apache Java projects on github
APACHE_PROJECTS = ['abdera', 'accumulo', 'ace', 'activemq', 'airavata', 'ambari', 'ant', 'ant-antlibs-antunit', \
'any23', 'archiva', 'aries', 'webservices-axiom', 'axis2-java', \
'bigtop', 'bookkeeper', 'bval', 'calcite', 'camel', 'cassandra', 'cayenne', \
'chainsaw', 'chukwa', 'clerezza', 'commons-bcel', \
'commons-beanutils', 'commons-bsf', 'commons-chain', 'commons-cli', 'commons-codec', \
'commons-collections', 'commons-compress', 'commons-configuration', 'commons-daemon', \
'commons-dbcp', 'commons-dbutils', 'commons-digester', 'commons-discovery', \
'commons-email', 'commons-exec', 'commons-fileupload', 'commons-functor', 'httpcomponents-client', \
'commons-io', 'commons-jci', 'commons-jcs', 'commons-jelly', 'commons-jexl', 'commons-jxpath', \
'commons-lang', 'commons-launcher', 'commons-logging', 'commons-math', \
'commons-net', 'commons-ognl', 'commons-pool', 'commons-proxy', 'commons-rng', 'commons-scxml', \
'commons-validator', 'commons-vfs', 'commons-weaver', 'continuum', 'crunch', \
'ctakes', 'curator', 'cxf', 'derby', 'directmemory', \
'directory-server', 'directory-studio', 'drill', 'empire-db', 'falcon', 'felix', 'flink', \
'flume', 'fop', 'directory-fortress-core', 'ftpserver', 'geronimo', 'giraph', 'gora', \
'groovy', 'hadoop', 'hama', 'harmony', 'hbase', 'helix', 'hive', 'httpcomponents-client', \
'httpcomponents-core', 'jackrabbit', 'jena', 'jmeter', 'lens', 'log4j', \
'lucene-solr', 'maven', 'maven-doxia', 'metamodel', 'mina', 'mrunit', 'myfaces', 'nutch', 'oozie', \
'openjpa', 'openmeetings', 'openwebbeans', 'orc', 'phoenix', 'pig', 'poi','rat', 'river', \
'shindig', 'sling', \
'sqoop', 'struts', 'synapse', 'syncope', 'tajo', 'tika', 'tiles', 'tomcat', 'tomee', \
'vxquery', 'vysper', 'whirr', 'wicket', 'wink', 'wookie', 'xmlbeans', 'zeppelin', 'zookeeper']
print len(APACHE_PROJECTS)
# Global dictionaries
joined = [] # list with all source files
commit_dict = {} # commits per class
reference_dict = {} # number of times a class is referenced
lines_dict = {} # number of lines per class
methods_dict = {} # number of functions per class
operators_dict = {} # number of operators per class
operands_dict = {} # number of operands per class
halstead_dict = {} # Halstead complexity measures
cyclomatic_dict = {} # cyclomatic complexity
# Utility functions
# TODO : check if we can use this
def sanitize(contents):
lines = contents.split('\n')
# remove stop lines
for stop_line in COMMENT_LINES:
lines = [line.lower().lstrip().replace(';', '') for line in lines if stop_line not in line and line <> '']
return '\n'.join(lines)
def find_whole_word(word):
return re.compile(r'\b({0})\b'.format(word), flags=re.IGNORECASE).search
def all_files(directory):
for path, dirs, files in os.walk(directory):
for f in files:
yield os.path.join(path, f)
def build_joined(repo):
src_list = []
repo_url = 'https://github.com/' + repo[0] + '/' + repo[1]
os.chdir(BASE_PATH)
os.system('git clone {}'.format(repo_url))
# get all java source files
src_files = [f for f in all_files(BASE_PATH + repo[1]) if f.endswith('.java')]
for f in src_files:
try:
# read contents
code = open(f, 'r').read()
# https://github.com/c2nes/javalang
tree = javalang.parse.parse(code)
# create tuple with package + class name and code + tree + file path
src_list.append((tree.package.name + '.' + tree.types[0].name, (code, tree, f)))
except:
pass
return src_list
def parse_git_log(repo_dir, src):
# first the dictionary with all classes
# and their commit count
total = 0
p = Popen('git log --name-only --pretty=format:', shell=True, stdout=PIPE, cwd=repo_dir)
(log, _) = p.communicate()
log = log.strip('\n\x1e').split('\x1e')
log = [r.strip().split('\n') for r in log]
log = [r for r in log[0] if '.java' in r]
log2 = []
for f1 in log:
for f2 in src:
if f2[1][2].find(f1) > -1:
log2.append(f2[0])
cnt_dict = Counter(log2)
for key, value in cnt_dict.items():
total += value
cnt_dict['total'] = total
# and then the list of commits as dictionaries
p = Popen('git log --format="%s"' % GIT_LOG_FORMAT, shell=True, stdout=PIPE, cwd=repo_dir)
(log, _) = p.communicate()
log = log.strip('\n\x1e').split("\x1e")
log = [row.strip().split("\x1f") for row in log]
log = [dict(zip(GIT_COMMIT_FIELDS, row)) for row in log]
# now get list of distinct committers
committers = len(set([x['committer name'] for x in log]))
cnt_dict['committers'] = committers
return cnt_dict
def count_inheritance(src):
count = 0
for name, tup in src:
if find_whole_word('extends')(tup[0]):
count += 1
return count
def count_references(src):
names, tups = zip(*src)
dict = {e : 0 for i, e in enumerate(names)}
total = 0
for name in names:
        c_name = name[name.rfind('.') + 1:]  # bare class name; a leading dot would break the whole-word regex
for tup in tups:
if find_whole_word(c_name)(tup[0]):
dict[name] += 1
total += 1
dict['total'] = total
# sort by amount of references
return {k: v for k, v in dict.iteritems() if v > 1}
def count_lines(src):
    dict = {name: 0 for name, tup in src}  # initialise with class names, not (name, tuple) pairs
total = 0
for name, tup in src:
dict[name] = 0
lines = tup[0].split('\n')
for line in lines:
if line != '\n':
dict[name] += 1
total += 1
dict['total'] = total
# sort by amount of lines
return {k: v for k, v in dict.iteritems()}
# constructors not counted
def count_methods(src):
    dict = {name: 0 for name, tup in src}  # initialise with class names, not (name, tuple) pairs
total = 0
for name, tup in src:
dict[name] = len(tup[1].types[0].methods)
total += dict[name]
dict['total'] = total
# sort by amount of functions
return {k: v for k, v in dict.iteritems()}
def count_operators(src):
dict = {key: 0 for key in OPERATORS}
for name, tup in src:
for op in OPERATORS:
# if operator is in list, match it without anything preceding or following it
# eg +, but not ++ or +=
if op in ['\+','\-','!','=']:
# regex excludes followed_by (?!) and preceded_by (?<!)
dict[op] += len(re.findall('(?!\-|\*|&|>|<|>>)(?<!\-|\+|=|\*|&|>|<)' + op, tup[0]))
else:
dict[op] += len(re.findall(op, tup[0]))
# TODO : correct bug with regex for the '++'
dict['\+'] -= dict['\+\+']
total = 0
distinct = 0
for key in dict:
if dict[key] > 0:
total += dict[key]
distinct += 1
dict['total'] = total
dict['distinct'] = distinct
return dict
def count_operands(src):
dict = {key: 0 for key in OPERANDS}
for name, tup in src:
lines = tup[0].split('\n')
for line in lines:
for op in OPERANDS:
if op in line:
dict[op] += 1 + line.count(',')
total = 0
distinct = 0
for key in dict:
if dict[key] > 0:
total += dict[key]
distinct += 1
dict['total'] = total
dict['distinct'] = distinct
return dict
def calc_cyclomatic_complexity(src):
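    # Approximate McCabe cyclomatic complexity: start at 1 per class and add one
    # for every occurrence of a branching keyword (if/else/for/switch/while) in the source.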
dict = {}
total = 0
for name, tup in src:
dict[name] = 1
dict[name] += len(re.findall('if|else|for|switch|while', tup[0]))
total += dict[name]
dict['total'] = total
# sort by amount of complexity
return {k: v for k, v in dict.iteritems()}
def make_hbar_plot(dictionary, title, x_label, top=TOP):
# show top classes
vals = sorted(dictionary.values(), reverse=True)[:top]
lbls = sorted(dictionary, key=dictionary.get, reverse=True)[:top]
# make plot
fig = plt.figure(figsize=(10, 7))
fig.suptitle(title, fontsize=15)
ax = fig.add_subplot(111)
# set ticks
y_pos = np.arange(len(lbls)) + 0.5
ax.barh(y_pos, vals, align='center', alpha=0.4, color='lightblue')
ax.set_yticks(y_pos)
ax.set_yticklabels(lbls)
ax.set_xlabel(x_label)
plt.show()
pass
# Clustering
def random_centroid_selector(total_clusters , clusters_plotted):
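    # pick a random subset of centroid indices so that only some cluster centres are highlighted in the plot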
random_list = []
for i in range(0, clusters_plotted):
random_list.append(random.randint(0, total_clusters - 1))
return random_list
def plot_cluster(kmeansdata, centroid_list, names, num_cluster, title):
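    # project the high-dimensional metric vectors (and the centroids) onto the first two
    # principal components so they can be drawn in a 2-D scatter plot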
mlab_pca = mlabPCA(kmeansdata)
cutoff = mlab_pca.fracs[1]
users_2d = mlab_pca.project(kmeansdata, minfrac=cutoff)
centroids_2d = mlab_pca.project(centroid_list, minfrac=cutoff)
# make plot
fig = plt.figure(figsize=(20, 15))
fig.suptitle(title, fontsize=15)
ax = fig.add_subplot(111)
plt.xlim([users_2d[:, 0].min() - 3, users_2d[:, 0].max() + 3])
plt.ylim([users_2d[:, 1].min() - 3, users_2d[:, 1].max() + 3])
random_list = random_centroid_selector(num_cluster, 50)
for i, position in enumerate(centroids_2d):
if i in random_list:
plt.scatter(centroids_2d[i, 0], centroids_2d[i, 1], marker='o', c='red', s=100)
for i, position in enumerate(users_2d):
plt.scatter(users_2d[i, 0], users_2d[i, 1], marker='o', c='lightgreen')
for label, x, y in zip(names, users_2d[:, 0], users_2d[:, 1]):
ax.annotate(
label,
xy = (x, y), xytext=(-15, 15),
textcoords = 'offset points', ha='right', va='bottom',
bbox = dict(boxstyle='round,pad=0.5', fc='white', alpha=0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle='arc3,rad=0'))
pass
# first build list of source files
joined = build_joined((USER, REPO))
commit_dict = parse_git_log(BASE_PATH + REPO, joined)
make_hbar_plot(commit_dict, 'Commit frequency', 'Commits', TOP)
print 'Distinct committers : ' + str(commit_dict['committers'])
reference_dict = count_references(joined)
make_hbar_plot(reference_dict, 'Top 25 referenced classes', 'References', TOP)
inheritance_count = count_inheritance(joined)
print 'Inheritance count : ' + str(inheritance_count)
lines_dict = count_lines(joined)
make_hbar_plot(lines_dict, 'Largest 25 classes', 'Lines of code', TOP)
methods_dict = count_methods(joined)
make_hbar_plot(methods_dict, 'Top 25 classes in nr of methods', 'Number of methods', TOP)
operators_dict = count_operators(joined)
make_hbar_plot(operators_dict, 'Top 25 operators', 'Number of operators', TOP)
operands_dict = count_operands(joined)
make_hbar_plot(operands_dict, 'Top 25 operand types', 'Number of operands', TOP)
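# Halstead-style aggregate measures computed from the operator/operand counts above:
# vocabulary = distinct operators + distinct operands; length = total operators + operands;
# volume = length * log2(vocabulary); difficulty = (distinct operators / 2) * (total operands / distinct operands);
# effort = volume * difficulty; time (seconds) = effort / 18; estimated bugs = volume / 3000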
halstead_dict['PROGRAM_VOCABULARY'] = operators_dict['distinct'] + operands_dict['distinct']
halstead_dict['PROGRAM_LENGTH'] = round(operators_dict['total'] + operands_dict['total'], 0)
halstead_dict['VOLUME'] = round(halstead_dict['PROGRAM_LENGTH'] * log(halstead_dict['PROGRAM_VOCABULARY'], 2), 0)
halstead_dict['DIFFICULTY'] = (operators_dict['distinct'] / 2) * (operands_dict['total'] / operands_dict['distinct'])
halstead_dict['EFFORT'] = round(halstead_dict['VOLUME'] * halstead_dict['DIFFICULTY'], 0)
halstead_dict['TIME'] = round(halstead_dict['EFFORT'] / 18, 0)
halstead_dict['BUGS'] = round(halstead_dict['VOLUME'] / 3000, 0)
print halstead_dict
cyclomatic_dict = calc_cyclomatic_complexity(joined)
make_hbar_plot(cyclomatic_dict, 'Top 25 classes with cyclomatic complexity', 'Level of complexity', TOP)
# featurize all metrics
def make_features(repo, dict):
features = []
for key, value in dict.items():
features.append(int(value))
return features
# iterate all repos and build
# dictionary with all metrics
def make_rows(repos):
rows = []
try:
for repo in repos:
dict = {}
joined = build_joined(repo)
github_dict = parse_git_log(BASE_PATH + repo[1], joined)
dict['commits'] = github_dict['total']
#dict['committers'] = github_dict['committers'] Uncomment this line for the next run.
# Was added at the last minute
dict['references'] = count_references(joined)['total']
dict['inheritance'] = count_inheritance(joined)
dict['lines'] = count_lines(joined)['total']
dict['methods'] = count_methods(joined)['total']
operators_dict = count_operators(joined)
operands_dict = count_operands(joined)
dict['program_vocabulary'] = operators_dict['distinct'] + operands_dict['distinct']
dict['program_length'] = round(operators_dict['total'] + operands_dict['total'], 0)
dict['volume'] = round(dict['program_length'] * log(dict['program_vocabulary'], 2), 0)
dict['difficulty'] = (operators_dict['distinct'] / 2) * (operands_dict['total'] / operands_dict['distinct'])
dict['effort'] = round(dict['volume'] * dict['difficulty'], 0)
dict['time'] = round(dict['effort'] / 18, 0)
dict['bugs'] = round(dict['volume'] / 3000, 0)
dict['cyclomatic'] = calc_cyclomatic_complexity(joined)['total']
rows.append(make_features(repo, dict))
except:
pass
return rows
def cluster_repos(arr, nr_clusters):
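    # run k-means on the per-repository metric vectors; return the fitted centroids
    # and the cluster label assigned to each repository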
kmeans = KMeans(n_clusters=nr_clusters)
kmeans.fit(arr)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
return (centroids, labels)
repositories = [('apache', x) for x in APACHE_PROJECTS]
rows = make_rows(repositories[:5])
rows.extend(make_rows(repositories[5:10]))
rows.extend(make_rows(repositories[10:15]))
rows.extend(make_rows(repositories[15:20]))
rows.extend(make_rows(repositories[20:25]))
rows.extend(make_rows(repositories[25:30]))
rows.extend(make_rows(repositories[30:35]))
rows.extend(make_rows(repositories[35:40]))
rows.extend(make_rows(repositories[40:45]))
rows.extend(make_rows(repositories[45:50]))
rows.extend(make_rows(repositories[50:55]))
rows.extend(make_rows(repositories[55:60]))
rows.extend(make_rows(repositories[60:65]))
rows.extend(make_rows(repositories[65:70]))
rows.extend(make_rows(repositories[70:75]))
rows.extend(make_rows(repositories[75:80]))
rows.extend(make_rows(repositories[80:85]))
rows.extend(make_rows(repositories[85:90]))
rows.extend(make_rows(repositories[90:95]))
rows.extend(make_rows(repositories[95:100]))
rows.extend(make_rows(repositories[100:105]))
rows.extend(make_rows(repositories[105:110]))
rows.extend(make_rows(repositories[110:115]))
rows.extend(make_rows(repositories[115:120]))
rows.extend(make_rows(repositories[120:125]))
rows.extend(make_rows(repositories[125:130]))
rows.extend(make_rows(repositories[130:133]))
rows.extend(make_rows(repositories[133:134]))
print rows
# TWO clusters
NR_CLUSTERS = 2
arr = np.array(rows)
tup = cluster_repos(arr, NR_CLUSTERS)
centroids = tup[0]
plot_cluster(arr, centroids, APACHE_PROJECTS, NR_CLUSTERS, str(NR_CLUSTERS) + ' Clusters')
# THREE clusters
NR_CLUSTERS = 3
arr = np.array(rows)
tup = cluster_repos(arr, NR_CLUSTERS)
centroids = tup[0]
plot_cluster(arr, centroids, APACHE_PROJECTS, NR_CLUSTERS, str(NR_CLUSTERS) + ' Clusters')
# FOUR clusters
NR_CLUSTERS = 4
arr = np.array(rows)
tup = cluster_repos(arr, NR_CLUSTERS)
centroids = tup[0]
plot_cluster(arr, centroids, APACHE_PROJECTS, NR_CLUSTERS, str(NR_CLUSTERS) + ' Clusters')
names = [x[1] for x in repositories]
print names.index('synapse')
print names.index('tomcat')
print names.index('groovy')
print names.index('hama')
headers = ['Repo', 'Com', 'Ref', 'Inh', 'Line', 'Meth', 'Voc', \
'Len', 'Vol', 'Diff', 'Eff', 'Time', 'Bug','Cycl']
print tabulate([[names[118]] + [x for x in rows[118]], [names[123]] + [x for x in rows[123]], \
[names[82]] + [x for x in rows[82]], [names[84]] + [x for x in rows[84]]], headers=headers)
# THREE clusters
NR_CLUSTERS = 4
arr = np.array(rows)
tup = cluster_repos(arr, NR_CLUSTERS)
labels = tup[1]
clf = svm.SVC(gamma=0.001, C=100.)
clf.fit(rows, labels)
print labels
print clf.predict(rows[3])
print clf.predict(rows[34])
#repositories = [('qos-ch', 'slf4j'), ('mockito', 'mockito'), ('elastic', 'elasticsearch')]
repositories = [('JetBrains', 'kotlin')]
rows = make_rows(repositories)
print clf.predict(rows[0])
print tabulate([['Kotlin'] + [x for x in rows[0]]], headers=headers)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Analyzing one project
Step2: 1. Commit frequency
Step3: 2. Distinct committers
Step4: 3. Class reference count
Step5: 4. Inheritance count
Step6: 5. Lines of code
Step7: 6. Number of methods
Step8: 7. Halstead complexity measures
Step9: b) Number of operands
Step10: Complexity measures
Step11: 8. Cyclomatic complexity
Step12: Analyzing Apache Java projects
Step13: Construct model with Apache projects
Step14: We break the projects down in batches of five to make the analysis manageable
Step15: Clustering Apache Java projects
Step16: Clustering results
Step17: Tabulating groovy and synapse
Step18: Construct a prediction model with the Apache projects
Step19: Construct a Support Vector Classification model
Step20: Test it
Step21: Analyze JetBrains kotlin project
|
12,711
|
<ASSISTANT_TASK:>
Python Code:
workDir = '../../t/SIPSim_example/'
nprocs = 3
import os
# Note: you will need to install `rpy2.ipython` and the necessary R packages (see next cell)
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
workDir = os.path.abspath(workDir)
if not os.path.isdir(workDir):
os.makedirs(workDir)
%cd $workDir
genomeDir = os.path.join(workDir, 'genomes_rn')
%%bash
source activate SIPSim
# creating example config
SIPSim incorp_config_example \
--percTaxa 34 \
--percIncorpUnif 50 \
--n_reps 1 \
> incorp.config
!cat incorp.config
%%bash
source activate SIPSim
SIPSim communities \
--config incorp.config \
./genomes_rn/genome_index.txt \
> comm.txt
!cat comm.txt
%%bash
source activate SIPSim
SIPSim gradient_fractions \
--BD_min 1.67323 \
--BD_max 1.7744 \
comm.txt \
> fracs.txt
!head -n 6 fracs.txt
# primers = >515F
# GTGCCAGCMGCCGCGGTAA
# >806R
# GGACTACHVGGGTWTCTAAT
#
# F = os.path.join(workDir, '515F-806R.fna')
# with open(F, 'wb') as oFH:
# oFH.write(primers)
# print 'File written: {}'.format(F)
%%bash -s $genomeDir
source activate SIPSim
# skewed-normal
SIPSim fragments \
$1/genome_index.txt \
--fp $1 \
--fld skewed-normal,9000,2500,-5 \
--flr None,None \
--nf 1000 \
--debug \
--tbl \
> shotFrags.txt
!head -n 5 shotFrags.txt
!tail -n 5 shotFrags.txt
%%R -w 700 -h 350
df = read.delim('shotFrags.txt')
p = ggplot(df, aes(fragGC, fragLength, color=taxon_name)) +
geom_density2d() +
scale_color_discrete('Taxon') +
labs(x='Fragment G+C', y='Fragment length (bp)') +
theme_bw() +
theme(
text = element_text(size=16)
)
plot(p)
%%bash
source activate SIPSim
SIPSim fragment_KDE \
shotFrags.txt \
> shotFrags_kde.pkl
!ls -thlc shotFrags_kde.pkl
%%bash
source activate SIPSim
SIPSim diffusion \
shotFrags_kde.pkl \
--np 3 \
> shotFrags_kde_dif.pkl
!ls -thlc shotFrags_kde_dif.pkl
n = 100000
%%bash -s $n
source activate SIPSim
SIPSim KDE_sample -n $1 shotFrags_kde.pkl > shotFrags_kde.txt
SIPSim KDE_sample -n $1 shotFrags_kde_dif.pkl > shotFrags_kde_dif.txt
ls -thlc shotFrags_kde*.txt
%%R
df1 = read.delim('shotFrags_kde.txt', sep='\t')
df2 = read.delim('shotFrags_kde_dif.txt', sep='\t')
df1$data = 'no diffusion'
df2$data = 'diffusion'
df = rbind(df1, df2) %>%
gather(Taxon, BD, Clostridium_ljungdahlii_DSM_13528,
Escherichia_coli_1303, Streptomyces_pratensis_ATCC_33331) %>%
mutate(Taxon = gsub('_(ATCC|DSM)', '\n\\1', Taxon))
df %>% head(n=3)
%%R -w 800 -h 300
p = ggplot(df, aes(BD, fill=data)) +
geom_density(alpha=0.25) +
facet_wrap( ~ Taxon) +
scale_fill_discrete('') +
theme_bw() +
theme(
text=element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.text.x = element_text(angle=50, hjust=1)
)
plot(p)
%%bash
source activate SIPSim
SIPSim DBL \
shotFrags_kde_dif.pkl \
--np 3 \
> shotFrags_kde_dif_DBL.pkl
# viewing DBL logs
!ls -thlc *pkl
%%bash
source activate SIPSim
SIPSim isotope_incorp \
--comm comm.txt \
--np 3 \
shotFrags_kde_dif_DBL.pkl \
incorp.config \
> shotFrags_KDE_dif_DBL_inc.pkl
!ls -thlc *.pkl
%%R
df = read.delim('BD-shift_stats.txt', sep='\t')
df
%%bash
source activate SIPSim
SIPSim OTU_table \
--abs 1e7 \
--np 3 \
shotFrags_KDE_dif_DBL_inc.pkl \
comm.txt \
fracs.txt \
> OTU.txt
!head -n 7 OTU.txt
%%R -h 350 -w 750
df = read.delim('OTU.txt', sep='\t')
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
scale_x_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Shotgun fragment counts') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
plot(p)
%%R -h 350 -w 750
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_area(stat='identity', position='fill') +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Shotgun fragment counts') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
plot(p)
%%bash
source activate SIPSim
SIPSim OTU_PCR OTU.txt > OTU_PCR.txt
!head -n 5 OTU_PCR.txt
!tail -n 5 OTU_PCR.txt
%%bash
source activate SIPSim
SIPSim OTU_subsample OTU_PCR.txt > OTU_PCR_sub.txt
!head -n 5 OTU_PCR_sub.txt
%%R -h 350 -w 750
df = read.delim('OTU_PCR_sub.txt', sep='\t')
p = ggplot(df, aes(BD_mid, rel_abund, fill=taxon)) +
geom_area(stat='identity', position='fill') +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Taxon relative abundances') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
plot(p)
%%bash
source activate SIPSim
SIPSim OTU_wide_long -w \
OTU_PCR_sub.txt \
> OTU_PCR_sub_wide.txt
!head -n 4 OTU_PCR_sub_wide.txt
%%bash
source activate SIPSim
SIPSim OTU_sample_data \
OTU_PCR_sub.txt \
> OTU_PCR_sub_meta.txt
!head OTU_PCR_sub_meta.txt
%%bash
source activate SIPSim
SIPSim -l
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Experimental design
Step3: Pre-fractionation communities
Step4: Note
Step6: Simulating fragments
Step7: Simulation
Step8: Plotting fragments
Step9: Note
Step10: Note
Step11: Plotting fragment distribution w/ and w/out diffusion
Step12: Plotting
Step13: Adding diffusive boundary layer (DBL) effects
Step14: Adding isotope incorporation
Step15: Note
Step16: Making an OTU table
Step17: Plotting fragment count distributions
Step18: Notes
Step19: Adding effects of PCR
Step20: Notes
Step21: Notes
Step22: Misc
Step23: SIP metadata
Step24: Other SIPSim commands
|
12,712
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from pandas import *
import matplotlib.pyplot as plt
%matplotlib inline
from ggplot import *
from numpy import random
plt.style.use('ggplot')
data = pd.read_csv("../Data/Histogram/pared_down.csv")
data
data.columns
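# reshape to one row per tree and one column per parameter (pivot_table aggregates duplicates with the mean by default)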
table = pivot_table(data, index=['Tree'], columns=['Parameter'])
table
table.plot(kind='bar', width=.7, sort_columns=True).set_ylim(0,75)
plt.tight_layout()
plt.savefig('exTotal.svg', bbox_inches='tight', dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read data using pandas.
Step2: Pivot the table to group the data by tree.
Step3: Plot using native pandas plotting.
|
12,713
|
<ASSISTANT_TASK:>
Python Code:
def quad_func (x):
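    # true underlying relationship used to generate the synthetic target: y = 5x^2 - 23x + 47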
    return 5 * x ** 2 - 23 * x + 47
# Training Set + Eval Set: 200 samples (70%, 30% split)
# Test Set: 60 samples
# Total: 260 samples
np.random.seed(5)
samples = 260
x_vals = pd.Series(np.random.rand(samples) * 20)
x2_vals = x_vals ** 2
y_vals = x_vals.map(quad_func)
y_noisy_vals = y_vals + np.random.randn(samples) * 50
df = pd.DataFrame({'x': x_vals,
'x2': x2_vals ,
'y': y_vals,
'y_noisy': y_noisy_vals})
df.head()
df.corr()
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df['x'],
y = df['y'],
color = 'r',
label = 'y',)
plt.scatter(x = df['x'],
y = df['y_noisy'],
color = 'b',
label = 'y noisy',
marker = '+')
plt.xlabel('x')
plt.ylabel('Target Attribute')
plt.grid(True)
plt.legend()
data_path = '..\Data\RegressionExamples\quadratic'
df.to_csv(os.path.join(data_path,'quadratic_example_all.csv'),
index = True,
index_label = 'Row')
df[df.index < 200].to_csv(os.path.join(data_path, 'quadratic_example_train_underfit.csv'),
index = True,
index_label = 'Row',
columns = ['x', 'y_noisy'])
df[df.index < 200].to_csv(os.path.join(data_path, 'quadratic_example_train_normal.csv'),
index = True,
index_label = 'Row',
columns= ['x', 'x2', 'y_noisy'])
df.to_csv(os.path.join(data_path, 'quadratic_example_test_all_underfit.csv'),
index = True,
index_label = 'Row',
columns = ['x'])
df.to_csv(os.path.join(data_path, 'quadratic_example_test_all_normal.csv'),
index = True,
index_label = 'Row',
columns = ['x', 'x2'])
# Pull Predictions
# Prediction without quadratic term
df = pd.read_csv(os.path.join(data_path,'quadratic_example_all.csv'),
index_col = 'Row')
df_predicted_underfit = pd.read_csv(os.path.join(data_path, 'output_underfit',
'bp-pNYIAR35aSV-quadratic_example_test_all_underfit.csv.gz'))
df_predicted_underfit.columns = ["Row", "y_predicted"]
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df.x,
y = df.y_noisy,
color = 'b',
label = 'actual',
marker = '+')
plt.scatter(x = df.x,
y = df_predicted_underfit.y_predicted ,
color = 'g',
label = 'Fit (x)',
marker = '^')
plt.title('Quadratic - underfit')
plt.xlabel('x')
plt.ylabel('Target Attribute')
plt.grid(True)
plt.legend()
fig = plt.figure(figsize = (12, 8))
plt.boxplot([df.y_noisy, df_predicted_underfit.y_predicted],
labels = ['actual','predicted-underfit'])
plt.title('Box Plot - Actual, Predicted')
plt.ylabel('y')
plt.grid(True)
df.y_noisy.describe()
df_predicted_underfit.y_predicted.describe()
df_predicted_normal = pd.read_csv(os.path.join(data_path,'output_normal',
'bp-In6EUvWaCw2-quadratic_example_test_all_normal.csv.gz'))
df_predicted_normal.columns = ["Row", "y_predicted"]
fig = plt.figure(figsize = (12, 8))
plt.scatter(x = df.x,
y = df.y_noisy,
color = 'b',
label = 'actual',
marker ='+')
plt.scatter(x = df.x,
y = df_predicted_underfit.y_predicted,
color = 'g',
label = 'Fit (x)',
marker = '^')
plt.scatter(x = df.x ,
y = df_predicted_normal.y_predicted ,
color = 'r',
label = 'Fit (x,x^2)')
plt.title('Quadratic - normal fit')
plt.grid(True)
plt.xlabel('x')
plt.ylabel('Target Attribute')
#plt.legend()
fig = plt.figure(figsize = (12, 8))
plt.boxplot([df.y_noisy,df_predicted_underfit.y_predicted, df_predicted_normal.y_predicted],
labels = ['actual','predicted-underfit','predicted-normal'])
plt.title('Box Plot - Actual, Predicted')
plt.ylabel('y')
plt.grid(True)
df_predicted_underfit.head()
df_predicted_normal.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h4>Training and Evaluation Set</h4>
Step2: Test 1
Step3: Test 1
|
12,714
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.spatial
import scipy.optimize
points1 = np.array([(x, y) for x in np.linspace(-1,1,7) for y in np.linspace(-1,1,7)])
N = points1.shape[0]
points2 = 2*np.random.rand(N,2)-1
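# pairwise Manhattan (L1) distances between the regular grid points and the random points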
C = scipy.spatial.distance.cdist(points1, points2, metric='minkowski', p=1)
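# solve the assignment problem (Hungarian algorithm): result[i] is the index of the point
# in points2 matched to points1[i] so that the total distance is minimal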
_, result = scipy.optimize.linear_sum_assignment(C)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,715
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image, display
Image('images/08_transfer_learning_flowchart.png')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import time
from datetime import timedelta
import os
# Functions and classes for loading and using the Inception model.
import inception
# We use Pretty Tensor to define the new classifier.
import prettytensor as pt
tf.__version__
import cifar10
from cifar10 import num_classes
# cifar10.data_path = "data/CIFAR-10/"
cifar10.maybe_download_and_extract()
class_names = cifar10.load_class_names()
class_names
images_train, cls_train, labels_train = cifar10.load_training_data()
images_test, cls_test, labels_test = cifar10.load_test_data()
print("Size of:")
print("- Training-set:\t\t{}".format(len(images_train)))
print("- Test-set:\t\t{}".format(len(images_test)))
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true)
# Create figure with sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
for i, ax in enumerate(axes.flat):
# There may be less than 9 images, ensure it doesn't crash.
if i < len(images):
# Plot image.
ax.imshow(images[i],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name, cls_pred_name)
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get the first images from the test-set.
images = images_test[0:9]
# Get the true classes for those images.
cls_true = cls_test[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=False)
# inception.data_dir = 'inception/'
inception.maybe_download()
model = inception.Inception()
from inception import transfer_values_cache
file_path_cache_train = os.path.join(cifar10.data_path, 'inception_cifar10_train.npy')
file_path_cache_test = os.path.join(cifar10.data_path, 'inception_cifar10_test.npy')
print("Processing Inception transfer-values for training-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_train * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_train = transfer_values_cache(file_path=file_path_cache_train,
images=images_scaled,
model=model)
print("Processing Inception transfer-values for test-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_test * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_test = transfer_values_cache(file_path=file_path_cache_test,
images=images_scaled,
model=model)
transfer_values_train.shape
transfer_values_test.shape
def plot_transfer_values(i):
print("Input image:")
# Plot the i'th image from the test-set.
plt.imshow(images_test[i], interpolation='nearest')
plt.show()
print("Transfer-values for the image using Inception model:")
# Transform the transfer-values into an image.
img = transfer_values_test[i]
img = img.reshape((32, 64))
# Plot the image for the transfer-values.
plt.imshow(img, interpolation='nearest', cmap='Reds')
plt.show()
plot_transfer_values(i=16)
plot_transfer_values(i=17)
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
transfer_values = transfer_values_train[0:3000]
transfer_values.shape
transfer_values_reduced = pca.fit_transform(transfer_values)
transfer_values_reduced.shape
def plot_scatter(values):
# Create a color-map with a different color for each class.
import matplotlib.cm as cm
cmap = cm.rainbow(np.linspace(0.0, 1.0, num_classes))
# Get the color for each sample.
colors = cmap[cls_train]
# Extract the x- and y-values.
x = values[:, 0]
y = values[:, 1]
# Plot it.
plt.scatter(x, y, color=colors)
plt.show()
plot_scatter(transfer_values_reduced)
from sklearn.manifold import TSNE
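# t-SNE is slow on high-dimensional data, so first reduce the 2048 transfer-values to 50 dimensions with PCA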
pca = PCA(n_components=50)
transfer_values_50d = pca.fit_transform(transfer_values)
tsne = TSNE(n_components=2)
transfer_values_reduced = tsne.fit_transform(transfer_values_50d)
transfer_values_reduced.shape
plot_scatter(transfer_values_reduced)
transfer_len = model.transfer_len
x = tf.placeholder(tf.float32, shape=[None, transfer_len], name='x')
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
# Wrap the transfer-values as a Pretty Tensor object.
x_pretty = pt.wrap(x)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
fully_connected(size=1024, name='layer_fc1').\
softmax_classifier(class_count=num_classes, labels=y_true)
global_step = tf.Variable(initial_value=0,
name='global_step', trainable=False)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step)
y_pred_cls = tf.argmax(y_pred, dimension=1)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.initialize_all_variables())
train_batch_size = 64
def random_batch():
# Number of images (transfer-values) in the training-set.
num_images = len(transfer_values_train)
# Create a random index.
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
# Use the random index to select random x and y-values.
# We use the transfer-values instead of images as x-values.
x_batch = transfer_values_train[idx]
y_batch = labels_train[idx]
return x_batch, y_batch
def optimize(num_iterations):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images (transfer-values) and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = random_batch()
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
# We also want to retrieve the global_step counter.
i_global, _ = session.run([global_step, optimizer],
feed_dict=feed_dict_train)
# Print status to screen every 100 iterations (and last).
if (i_global % 100 == 0) or (i == num_iterations - 1):
# Calculate the accuracy on the training-batch.
batch_acc = session.run(accuracy,
feed_dict=feed_dict_train)
# Print status.
msg = "Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
print(msg.format(i_global, batch_acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = images_test[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = cls_test[incorrect]
n = min(9, len(images))
# Plot the first n images.
plot_images(images=images[0:n],
cls_true=cls_true[0:n],
cls_pred=cls_pred[0:n])
# Import a function from sklearn to calculate the confusion-matrix.
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_test, # True class for test-set.
y_pred=cls_pred) # Predicted class.
# Print the confusion matrix as text.
for i in range(num_classes):
# Append the class-name to each line.
class_name = "({}) {}".format(i, class_names[i])
print(cm[i, :], class_name)
# Print the class-numbers for easy reference.
class_numbers = [" ({0})".format(i) for i in range(num_classes)]
print("".join(class_numbers))
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(transfer_values, labels, cls_true):
# Number of images.
num_images = len(transfer_values)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: transfer_values[i:j],
y_true: labels[i:j]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
def predict_cls_test():
return predict_cls(transfer_values = transfer_values_test,
labels = labels_test,
cls_true = cls_test)
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
# Return the classification accuracy
# and the number of correct classifications.
return correct.mean(), correct.sum()
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = classification_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# model.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step3: Load Data for CIFAR-10
Step4: The data dimensions have already been defined in the cifar10 module, so we just need to import the ones we need.
Step5: Set the path for storing the data-set on your computer.
Step6: The CIFAR-10 data-set is about 163 MB and will be downloaded automatically if it is not located in the given path.
Step7: Load the class-names.
Step8: Load the training-set. This returns the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.
Step9: Load the test-set.
Step10: The CIFAR-10 data-set has now been loaded and consists of 60,000 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
Step11: Helper-function for plotting images
Step12: Plot a few images to see if data is correct
Step13: Download the Inception Model
Step14: Download the data for the Inception model if it doesn't already exist in the directory. It is 85 MB.
Step15: Load the Inception Model
Step16: Calculate Transfer-Values
Step17: Set the file-paths for the caches of the training-set and test-set.
Step18: Check the shape of the array with the transfer-values. There are 50,000 images in the training-set and for each image there are 2048 transfer-values.
Step19: Similarly, there are 10,000 images in the test-set with 2048 transfer-values for each image.
Step20: Helper-function for plotting transfer-values
Step21: Analysis of Transfer-Values using PCA
Step22: Create a new PCA-object and set the target array-length to 2.
Step23: It takes a while to compute the PCA so the number of samples has been limited to 3000. You can try and use the full training-set if you like.
Step24: Check that the array has 3000 samples and 2048 transfer-values for each sample.
Step25: Use PCA to reduce the transfer-value arrays from 2048 to 2 elements.
Step26: Check that it is now an array with 3000 samples and 2 values per sample.
Step27: Helper-function for plotting the reduced transfer-values.
Step28: Plot the transfer-values that have been reduced using PCA. There are 10 different colors for the different classes in the CIFAR-10 data-set. The colors are grouped together but with very large overlap. This may be because PCA cannot properly separate the transfer-values.
Step29: Analysis of Transfer-Values using t-SNE
Step30: Another method for doing dimensionality reduction is t-SNE. Unfortunately, t-SNE is very slow so we first use PCA to reduce the transfer-values from 2048 to 50 elements.
Step31: Create a new t-SNE object for the final dimensionality reduction and set the target to 2-dim.
Step32: Perform the final reduction using t-SNE. The current implementation of t-SNE in scikit-learn cannot handle data with many samples so this might crash if you use the full training-set.
Step33: Check that it is now an array with 3000 samples and 2 transfer-values per sample.
Step34: Plot the transfer-values that have been reduced to 2-dim using t-SNE, which shows better separation than the PCA-plot above.
Step35: New Classifier in TensorFlow
Step36: Now create a placeholder variable for inputting the transfer-values from the Inception model into the new network that we are building. The shape of this variable is [None, transfer_len] which means it takes an input array with an arbitrary number of samples as indicated by the keyword None and each sample has 2048 elements, equal to transfer_len.
Step37: Create another placeholder variable for inputting the true class-label of each image. These are so-called One-Hot encoded arrays with 10 elements, one for each possible class in the data-set.
Step38: Calculate the true class as an integer. This could also be a placeholder variable.
Step39: Neural Network
Step40: Optimization Method
Step41: Method for optimizing the new neural network.
Step42: Classification Accuracy
Step43: Create an array of booleans whether the predicted class equals the true class of each image.
Step44: The classification accuracy is calculated by first type-casting the array of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
Step45: TensorFlow Run
Step46: Initialize Variables
Step47: Helper-function to get a random training-batch
Step48: Function for selecting a random batch of transfer-values from the training-set.
Step49: Helper-function to perform optimization
Step50: Helper-Functions for Showing Results
Step51: Helper-function to plot confusion matrix
Step52: Helper-functions for calculating classifications
Step53: Calculate the predicted class for the test-set.
Step54: Helper-functions for calculating the classification accuracy
Step55: Helper-function for showing the classification accuracy
Step56: Results
Step57: Performance after 10,000 optimization iterations
Step58: Close TensorFlow Session
|
12,716
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import eland as el
import numpy as np
ES_URL = 'http://localhost:9200/'
df = el.read_es(ES_URL, 'ecs-search-metrics')
df.dtypes
print(df.info_es())
df.head()
df['SearchMetrics.click.result.rank'].describe()
df['SearchMetrics.click.result.rank'].hist()
df['source.user.id'].nunique()
df['event.action'].value_counts()
df_queries = df[df['event.action'] == 'SearchMetrics.query']
df_pages = df[df['event.action'] == 'SearchMetrics.page']
df_clicks = df[df['event.action'] == 'SearchMetrics.click']
df_queries[['SearchMetrics.results.size']].hist(figsize=[10,5], bins=10)
df_tf_query = el.read_es(ES_URL, 'ecs-search-metrics_transform_queryid')
df_tf_query.head()
df_tf_query.select_dtypes(include=[np.number])\
.drop(['query_event.SearchMetrics.query.page'], axis=1)\
.hist(figsize=[17,11], bins=10)
# queries that have no results
df_tf_query_without_results = df_tf_query[df_tf_query['query_event.SearchMetrics.results.size'] == 0]
# queries that have results
df_tf_query_with_results = df_tf_query[df_tf_query['query_event.SearchMetrics.results.size'] > 0]
# queries that have results but no clicks
df_tf_query_without_clicks = df_tf_query_with_results[df_tf_query_with_results['metrics.clicks.count'] == 0]
# queries that have results and clicks
df_tf_query_with_clicks = df_tf_query_with_results[df_tf_query_with_results['metrics.clicks.count'] > 0]
num_queries = df_tf_query.shape[0]
num_queries_without_results = df_tf_query_without_results.shape[0]
num_queries_with_results = df_tf_query_with_results.shape[0]
num_queries_without_clicks = df_tf_query_without_clicks.shape[0]
num_queries_with_clicks = df_tf_query_with_clicks.shape[0]
zero_result_rate = num_queries_without_results / num_queries * 100
print(f"Zero result rate: {round(zero_result_rate, 2)}%")
abandonment_rate = num_queries_without_clicks / num_queries * 100
print(f"Abandonment rate: {round(abandonment_rate, 2)}%")
mean_clicks_per_query = df_tf_query_with_results['metrics.clicks.count'].mean()
print(f"Clicks per Query: {round(mean_clicks_per_query, 2)}")
num_queries_with_clicks_at_3 = df_tf_query_with_clicks[df_tf_query_with_clicks['metrics.clicks.exist_at_3'] == True].shape[0]
ctr_at_3 = num_queries_with_clicks_at_3 / num_queries_with_clicks
print(f"CTR@3: {round(ctr_at_3, 2)}")
max_reciprocal_rank = df_tf_query_with_clicks['metrics.clicks.max_reciprocal_rank'].mean()
print(f"Max Reciprocal Rank: {round(max_reciprocal_rank, 2)}")
mean_mean_reciprocal_rank = df_tf_query_with_clicks['metrics.clicks.mean_reciprocal_rank'].mean()
print(f"Mean Per-Query Mean Reciprocal Rank: {round(mean_mean_reciprocal_rank, 2)}")
time_to_first_click = df_tf_query_with_clicks['metrics.clicks.time_to_first_click'].mean()
print(f"Time to First Click: {round(time_to_first_click / 1000, 2)} seconds")
time_to_last_click = df_tf_query_with_clicks['metrics.clicks.time_to_last_click'].mean()
print(f"Time to Last Click: {round(time_to_last_click / 1000, 2)} seconds")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Loading and Preparation
Step2: What is the distribution of ranks of results clicked on?
Step3: How many users are in the dataset?
Step4: How many of each event type?
Step5: Split the dataset into separate dataframes (queries, pages, clicks) based on action type.
Step6: What is the distribution of search result sizes in query events?
Step7: Metrics Index
Step8: What are the distributions of the numeric fields?
Step9: Metrics
Step10: Provide basic counts for all datasets.
Step11: Zero Result Rate
Step12: Abandonment Rate
Step13: Clicks per Query
Step14: Click Through Rate at Position 3 (CTR@3)
Step15: Max Reciprocal Rank
Step16: Mean Reciprocal Rank
Step17: Time to First Click
Step18: Time to Last Click
|
12,717
|
<ASSISTANT_TASK:>
Python Code:
%run 'ipython_startup.py'
import seaborn as sns
dspr = pd.read_csv(os.path.join(PROJ, 'analysis_output/mmc/dsrp_sex_det_genes_for_mmc.csv'), index_col='_NAME_')
cegs = pd.read_csv(os.path.join(PROJ, 'analysis_output/mmc/cegsV_sex_det_gene_for_mmc.csv'), index_col='_NAME_')
dspr.drop('Rm62', inplace=True)
# Plot of Covariance Matrix
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
d = sns.heatmap(dspr.T.cov(), vmin=-1, vmax=1, square=True, ax=ax1, cbar=False)
ax1.set_title('DSPR')
ax1.set_xlabel(''); ax1.set_ylabel('')
c = sns.heatmap(cegs.T.cov(), vmin=-1, vmax=1, square=True, ax=ax2, cbar=False)
ax2.set_title('CEGS')
ax2.set_xlabel(''); ax2.set_ylabel('')
plt.tight_layout()
plt.savefig(os.path.join(PROJ, 'analysis_output/correlation/covariance_dspr_cegs.png'), dpi=300)
# DSPR plot of distribution of covariance
p = dspr.T.cov().plot(kind='hist', subplots=True, layout=(5, 4), sharex=True,
sharey=True, figsize=(8, 8), rot=90,
title='DSPR Distribution of Covariances')
#plt.tight_layout(rect=[0, 0, 1, .98])
# CEGS plot of distribution of covariance
p = cegs.T.cov().plot(kind='hist', subplots=True, layout=(5, 4), sharex=True,
sharey=True, figsize=(8, 8), rot=90,
title='CEGS Distribution of Covariances')
plt.tight_layout(rect=[0, 0, 1, .98])
# Plot of Correlation Matrix
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
d = sns.heatmap(dspr.T.corr(), vmin=-1, vmax=1, square=True, ax=ax1, cbar=False)
ax1.set_title('DSPR')
ax1.set_xlabel(''); ax1.set_ylabel('')
c = sns.heatmap(cegs.T.corr(), vmin=-1, vmax=1, square=True, ax=ax2, cbar=False)
ax2.set_title('CEGS')
ax2.set_xlabel(''); ax2.set_ylabel('')
plt.tight_layout()
plt.savefig(os.path.join(PROJ, 'analysis_output/correlation/correlation_dspr_cegs.png'), dpi=300)
dspr.T.var()
cegs.T.var()
# DSPR plot of distribution of correlation
p = dspr.T.corr().plot(kind='hist', subplots=True, layout=(5, 4), sharex=True,
sharey=True, figsize=(8, 8), rot=90,
title='DSPR Distribution of Correlation')
#plt.tight_layout(rect=[0, 0, 1, .98])
# CEGS plot of distribution of correlation
p = cegs.T.corr().plot(kind='hist', subplots=True, layout=(5, 4), sharex=True,
sharey=True, figsize=(8, 8), rot=90,
title='CEGS Distribution of Correlation')
plt.tight_layout(rect=[0, 0, 1, .98])
print dspr.T.corr().max().max(), dspr.T.corr().min().min()
print cegs.T.corr().max().max(), cegs.T.corr().min().min()
covMax = max(dspr.T.cov().max().max(), cegs.T.cov().max().max())
# covMin is needed below for the shared colour scale but was never defined
covMin = min(dspr.T.cov().min().min(), cegs.T.cov().min().min())
covMax
# Plot of Covariance Matrix
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
d = sns.heatmap(dspr.T.cov(), vmin=covMin, vmax=covMax, square=True, ax=ax1, cbar=False)
ax1.set_title('DSPR')
c = sns.heatmap(cegs.T.cov(), vmin=covMin, vmax=covMax, square=True, ax=ax2, cbar=False)
ax2.set_title('CEGS')
plt.tight_layout()
plt.savefig(os.path.join(PROJ, 'analysis_output/correlation/covariance_dspr_cegs.png'), dpi=300)
print dspr.T.corr().max().max(), dspr.T.corr().min().min()
print cegs.T.corr().max().max(), cegs.T.corr().min().min()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Data
Step2: Variation among genes in sex hierarchy
Step3: Correlation
|
12,718
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'mri-agcm3-2', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
12,719
|
<ASSISTANT_TASK:>
Python Code:
birthdays = dict()
print( birthdays )
birthdays['0704'] = 'Steve'
birthdays['0529'] = 'Tony'
print( birthdays )
print( birthdays['0529'] )
# Get the number of key-value pairs
print( len( birthdays ) )
# Get the values in the dictionary
print( birthdays.values() )
# Get the keys in the dictionary
print( birthdays.keys() )
for a_date in birthdays:
print( a_date )
def reverse_lookup( a_dict, value ):
for key in a_dict:
if a_dict[key] == value:
return key
raise ValueError
birthdays['0704'] = [ 'Steve', 'Nick' ]
def invert_dict( a_dict ):
inverted_dict = dict()
for key in a_dict:
value = a_dict[key]
if value not in inverted_dict:
inverted_dict[value] = [ key ]
else:
inverted_dict[value].append( key )
return inverted_dict
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To add an item to the dictionary, use square brackets like a list
Step2: Note that a dictionary is not positionally ordered like a list (and before Python 3.7 it did not even preserve insertion order)
Step3: The Python documentation for dictionaries is quite extensive
Step4: Dictionary as a set of counters
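The code cell above does not include a counter example, so here is a minimal sketch of the idea (the input string is illustrative):
def histogram(items):
    counts = dict()
    for item in items:
        # start a new counter at 1, or bump an existing one
        counts[item] = counts.get(item, 0) + 1
    return counts

print(histogram('brontosaurus'))  # e.g. {'b': 1, 'r': 2, 'o': 2, 'n': 1, 't': 1, 's': 2, 'a': 1, 'u': 2}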
Step5: Reverse lookup
Step6: Why is this approach inefficient? (It has to scan every key-value pair, so a lookup by value is O(n), unlike the O(1) lookup by key.)
Step7: Sometimes you may want to invert a dictionary
|
12,720
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from collections import defaultdict
import json
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib import rcParams
import matplotlib.cm as cm
import matplotlib as mpl
#colorbrewer2 Dark2 qualitative color table
dark2_colors = [(0.10588235294117647, 0.6196078431372549, 0.4666666666666667),
(0.8509803921568627, 0.37254901960784315, 0.00784313725490196),
(0.4588235294117647, 0.4392156862745098, 0.7019607843137254),
(0.9058823529411765, 0.1607843137254902, 0.5411764705882353),
(0.4, 0.6509803921568628, 0.11764705882352941),
(0.9019607843137255, 0.6705882352941176, 0.00784313725490196),
(0.6509803921568628, 0.4627450980392157, 0.11372549019607843)]
rcParams['figure.figsize'] = (10, 6)
rcParams['figure.dpi'] = 150
rcParams['axes.color_cycle'] = dark2_colors
rcParams['lines.linewidth'] = 2
rcParams['axes.facecolor'] = 'white'
rcParams['font.size'] = 14
rcParams['patch.edgecolor'] = 'white'
rcParams['patch.facecolor'] = dark2_colors[0]
rcParams['font.family'] = 'StixGeneral'
def remove_border(axes=None, top=False, right=False, left=True, bottom=True):
"""Minimize chartjunk by stripping out unnecessary plot borders and axis ticks.
The top/right/left/bottom keywords toggle whether the corresponding plot border is drawn."""
ax = axes or plt.gca()
ax.spines['top'].set_visible(top)
ax.spines['right'].set_visible(right)
ax.spines['left'].set_visible(left)
ax.spines['bottom'].set_visible(bottom)
#turn off all ticks
ax.yaxis.set_ticks_position('none')
ax.xaxis.set_ticks_position('none')
#now re-enable visibles
if top:
ax.xaxis.tick_top()
if bottom:
ax.xaxis.tick_bottom()
if left:
ax.yaxis.tick_left()
if right:
ax.yaxis.tick_right()
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
fulldf=pd.read_csv("bigdf.csv")
fulldf.head(2)
#your code here
urc=fulldf.groupby('user_id').review_id.count()
ax=urc.hist(bins=50, log=True)
remove_border(ax)
plt.xlabel("Reviews per user")
plt.grid(False)
plt.grid(axis = 'y', color ='white', linestyle='-')
plt.title("Review Count per User");
#your code here
brc=fulldf.groupby('business_id').review_id.count()
ax=brc.hist(bins=50, log=True)
remove_border(ax)
plt.xlabel("Reviews per restaurant")
plt.grid(False)
plt.grid(axis = 'y', color ='white', linestyle='-')
plt.title("Review Count per Restaurant");
#your code here
print "Number of Reviews",fulldf.shape[0]
print "Number of Users", fulldf.user_id.unique().shape[0], "Number of Businesses", fulldf.business_id.unique().shape[0]
#your code here
print "Mean stars over all reviews:",fulldf.stars.mean()
stars=fulldf.stars
ax=stars.hist(bins=5)
remove_border(ax)
plt.xlabel("Star rating")
plt.grid(False)
plt.grid(axis = 'y', color ='white', linestyle='-')
plt.title("Star ratings over all reviews");
def recompute_frame(ldf):
"""takes a dataframe ldf, makes a copy of it, and returns the copy
with all averages and review counts recomputed
this is used when a frame is subsetted."""
ldfu=ldf.groupby('user_id')
ldfb=ldf.groupby('business_id')
user_avg=ldfu.stars.mean()
user_review_count=ldfu.review_id.count()
business_avg=ldfb.stars.mean()
business_review_count=ldfb.review_id.count()
nldf=ldf.copy()
nldf.set_index(['business_id'], inplace=True)
nldf['business_avg']=business_avg
nldf['business_review_count']=business_review_count
nldf.reset_index(inplace=True)
nldf.set_index(['user_id'], inplace=True)
nldf['user_avg']=user_avg
nldf['user_review_count']=user_review_count
nldf.reset_index(inplace=True)
return nldf
#your code here
smallidf=fulldf[(fulldf.user_review_count > 60) & (fulldf.business_review_count > 150)]
smalldf=recompute_frame(smallidf)
#your code here
print "Total Number of Reviews", smalldf.shape[0]
print "Users in this set", smalldf.user_id.unique().shape[0], "Restaurants",smalldf.business_id.unique().shape[0]
plt.figure()
ax=smalldf.groupby('user_id').review_id.count().hist()
remove_border(ax)
plt.xlabel("Reviews per user")
plt.grid(False)
plt.grid(axis = 'y', color ='white', linestyle='-')
plt.figure()
ax=smalldf.groupby('business_id').review_id.count().hist()
remove_border(ax)
plt.xlabel("Reviews per restaurant")
plt.grid(False)
plt.grid(axis = 'y', color ='white', linestyle='-')
#your code here
plt.figure()
avg_ratings_by_user=smalldf.groupby('user_id').stars.mean()
ax=avg_ratings_by_user.hist()
remove_border(ax)
plt.xlabel("Average review score")
plt.grid(False)
plt.grid(axis = 'y', color ='white', linestyle='-')
plt.title("Average User Rating")
plt.figure()
avg_ratings_by_biz=smalldf.groupby('business_id').stars.mean()
ax=avg_ratings_by_biz.hist()
remove_border(ax)
plt.xlabel("Average review score")
plt.grid(False)
plt.grid(axis = 'y', color ='white', linestyle='-')
plt.title("Average Restaurant Rating")
plt.figure()
print smalldf.stars.mean()
plt.figure()
restaurants=smalldf.business_id.unique()
supports=[]
for i,rest1 in enumerate(restaurants):
for j,rest2 in enumerate(restaurants):
if i < j:
rest1_reviewers = smalldf[smalldf.business_id==rest1].user_id.unique()
rest2_reviewers = smalldf[smalldf.business_id==rest2].user_id.unique()
common_reviewers = set(rest1_reviewers).intersection(rest2_reviewers)
supports.append(len(common_reviewers))
print "Mean support is:",np.mean(supports)
plt.hist(supports)
from scipy.stats.stats import pearsonr
def pearson_sim(rest1_reviews, rest2_reviews, n_common):
"""Given a subframe of restaurant 1 reviews and a subframe of restaurant 2 reviews,
where the reviewers are those who have reviewed both restaurants, return
the pearson correlation coefficient between the user average subtracted ratings.
The case for zero common reviewers is handled separately. It is
ok to return a NaN if any of the individual variances are 0."""
if n_common==0:
rho=0.
else:
diff1=rest1_reviews['stars']-rest1_reviews['user_avg']
diff2=rest2_reviews['stars']-rest2_reviews['user_avg']
rho=pearsonr(diff1, diff2)[0]
return rho
def get_restaurant_reviews(restaurant_id, df, set_of_users):
"""given a restaurant id and a set of reviewers, return the sub-dataframe of their
reviews."""
mask = (df.user_id.isin(set_of_users)) & (df.business_id==restaurant_id)
reviews = df[mask]
reviews = reviews[reviews.user_id.duplicated()==False]
return reviews
"""
Function
--------
calculate_similarity
Parameters
----------
rest1 : string
The id of restaurant 1
rest2 : string
The id of restaurant 2
df : DataFrame
A dataframe of reviews, such as the smalldf above
similarity_func : func
A function like pearson_sim above which takes two dataframes of individual
restaurant reviews made by a common set of reviewers, and the number of
common reviews. This function returns the similarity of the two restaurants
based on the common reviews.
Returns
--------
A tuple
The first element of the tuple is the similarity and the second the
common support n_common. If the similarity is a NaN, set it to 0
"""
#your code here
def calculate_similarity(rest1, rest2, df, similarity_func):
# find common reviewers
rest1_reviewers = df[df.business_id==rest1].user_id.unique()
rest2_reviewers = df[df.business_id==rest2].user_id.unique()
common_reviewers = set(rest1_reviewers).intersection(rest2_reviewers)
n_common=len(common_reviewers)
#get reviews
rest1_reviews = get_restaurant_reviews(rest1, df, common_reviewers)
rest2_reviews = get_restaurant_reviews(rest2, df, common_reviewers)
sim=similarity_func(rest1_reviews, rest2_reviews, n_common)
if np.isnan(sim):
return 0, n_common
return sim, n_common
class Database:
"A class representing a database of similaries and common supports"
def __init__(self, df):
"the constructor, takes a reviews dataframe like smalldf as its argument"
database={}
self.df=df
self.uniquebizids={v:k for (k,v) in enumerate(df.business_id.unique())}
keys=self.uniquebizids.keys()
l_keys=len(keys)
self.database_sim=np.zeros([l_keys,l_keys])
self.database_sup=np.zeros([l_keys, l_keys], dtype=np.int)
def populate_by_calculating(self, similarity_func):
"""a populator for every pair of businesses in df. takes similarity_func like
pearson_sim as argument"""
items=self.uniquebizids.items()
for b1, i1 in items:
for b2, i2 in items:
if i1 < i2:
sim, nsup=calculate_similarity(b1, b2, self.df, similarity_func)
self.database_sim[i1][i2]=sim
self.database_sim[i2][i1]=sim
self.database_sup[i1][i2]=nsup
self.database_sup[i2][i1]=nsup
elif i1==i2:
nsup=self.df[self.df.business_id==b1].user_id.count()
self.database_sim[i1][i1]=1.
self.database_sup[i1][i1]=nsup
def get(self, b1, b2):
"returns a tuple of similarity,common_support given two business ids"
sim=self.database_sim[self.uniquebizids[b1]][self.uniquebizids[b2]]
nsup=self.database_sup[self.uniquebizids[b1]][self.uniquebizids[b2]]
return (sim, nsup)
db=Database(smalldf)
db.populate_by_calculating(pearson_sim)
db.get("z3yFuLVrmH-3RJruPEMYKw", "zruUQvFySeXyEd7_rQixBg")
def shrunk_sim(sim, n_common, reg=3.):
"takes a similarity and shrinks it down by using the regularizer"
ssim=(n_common*sim)/(n_common+reg)
return ssim
"""
Function
--------
knearest
Parameters
----------
restaurant_id : string
The id of the restaurant whose nearest neighbors we want
set_of_restaurants : array
The set of restaurants from which we want to find the nearest neighbors
dbase : instance of Database class.
A database of similarities, on which the get method can be used to get the similarity
of two businesses. e.g. dbase.get(rid1,rid2)
k : int
the number of nearest neighbors desired, default 7
reg: float
the regularization.
Returns
--------
A sorted list of the top k similar restaurants. The list is a list of tuples
(business_id, shrunken similarity, common support).
"""
#your code here
from operator import itemgetter
def knearest(restaurant_id, set_of_restaurants, dbase, k=7, reg=3.):
"""Given a restaurant_id, dataframe, and database, get a sorted list of the
k most similar restaurants from the entire database."""
similars=[]
for other_rest_id in set_of_restaurants:
if other_rest_id!=restaurant_id:
sim, nc=dbase.get(restaurant_id, other_rest_id)
ssim=shrunk_sim(sim, nc, reg=reg)
similars.append((other_rest_id, ssim, nc ))
similars=sorted(similars, key=itemgetter(1), reverse=True)
return similars[0:k]
testbizid="eIxSLxzIlfExI6vgAbn2JA"
testbizid2="L-uPZxooP_ziXCtRrWi8Pw"
def biznamefromid(df, theid):
return df['biz_name'][df['business_id']==theid].values[0]
def usernamefromid(df, theid):
return df['user_name'][df['user_id']==theid].values[0]
print testbizid, biznamefromid(smalldf,testbizid)
print testbizid2, biznamefromid(smalldf, testbizid2)
tops=knearest(testbizid, smalldf.business_id.unique(), db, k=7, reg=3.)
print "For ",biznamefromid(smalldf, testbizid), ", top matches are:"
for i, (biz_id, sim, nc) in enumerate(tops):
print i,biznamefromid(smalldf,biz_id), "| Sim", sim, "| Support",nc
tops2=knearest(testbizid2, smalldf.business_id.unique(), db, k=7, reg=3.)
print "For ",biznamefromid(smalldf, testbizid2), ", top matches are:"
for i, (biz_id, sim, nc) in enumerate(tops2):
print i,biznamefromid(smalldf,biz_id), "| Sim", sim, "| Support",nc
def get_user_top_choices(user_id, df, numchoices=5):
"get the sorted top 5 restaurants for a user by the star rating the user gave them"
udf=df[df.user_id==user_id][['business_id','stars']].sort(['stars'], ascending=False).head(numchoices)
return udf
testuserid="7cR92zkDv4W3kqzii6axvg"
print "For user", usernamefromid(smalldf,testuserid), "top choices are:"
bizs=get_user_top_choices(testuserid, smalldf)['business_id'].values
[biznamefromid(smalldf, biz_id) for biz_id in bizs]
"""
Function
--------
get_top_recos_for_user
Parameters
----------
userid : string
The id of the user for whom we want the top recommendations
df : Dataframe
The dataframe of restaurant reviews such as smalldf
dbase : instance of Database class.
A database of similarities, on which the get method can be used to get the similarity
of two businesses. e.g. dbase.get(rid1,rid2)
n: int
the n top choices of the user by star rating
k : int
the number of nearest neighbors desired, default 8
reg: float
the regularization.
Returns
--------
A sorted list
of the top recommendations. The list is a list of tuples
(business_id, business_avg). You are combining the k-nearest recommendations
for each of the user's n top choices, removing duplicates and the ones the user
has already rated.
"""
#your code here
def get_top_recos_for_user(userid, df, dbase, n=5, k=7, reg=3.):
bizs=get_user_top_choices(userid, df, numchoices=n)['business_id'].values
rated_by_user=df[df.user_id==userid].business_id.values
tops=[]
for ele in bizs:
t=knearest(ele, df.business_id.unique(), dbase, k=k, reg=reg)
for e in t:
if e[0] not in rated_by_user:
tops.append(e)
#there might be repeats. unique it
ids=[e[0] for e in tops]
uids={k:0 for k in list(set(ids))}
topsu=[]
for e in tops:
if uids[e[0]] == 0:
topsu.append(e)
uids[e[0]] =1
topsr=[]
for r, s,nc in topsu:
avg_rate=df[df.business_id==r].stars.mean()
topsr.append((r,avg_rate))
topsr=sorted(topsr, key=itemgetter(1), reverse=True)
if n < len(topsr):
return topsr[0:n]
else:
return topsr
print "For user", usernamefromid(smalldf,testuserid), "the top recommendations are:"
toprecos=get_top_recos_for_user(testuserid, smalldf, db, n=5, k=7, reg=3.)
for biz_id, biz_avg in toprecos:
print biznamefromid(smalldf,biz_id), "| Average Rating |", biz_avg
"""
Function
--------
knearest_amongst_userrated
Parameters
----------
restaurant_id : string
The id of the restaurant whose nearest neighbors we want
user_id : string
The id of the user, in whose reviewed restaurants we want to find the neighbors
df: Dataframe
The dataframe of reviews such as smalldf
dbase : instance of Database class.
A database of similarities, on which the get method can be used to get the similarity
of two businesses. e.g. dbase.get(rid1,rid2)
k : int
the number of nearest neighbors desired, default 7
reg: float
the regularization.
Returns
--------
A sorted list
of the top k similar restaurants. The list is a list of tuples
(business_id, shrunken similarity, common support).
"""
#your code here
def knearest_amongst_userrated(restaurant_id, user_id, df, dbase, k=7, reg=3.):
dfuser=df[df.user_id==user_id]
bizsuserhasrated=dfuser.business_id.unique()
return knearest(restaurant_id, bizsuserhasrated, dbase, k=k, reg=reg)
"""
Function
--------
rating
Parameters
----------
df: Dataframe
The dataframe of reviews such as smalldf
dbase : instance of Database class.
A database of similarities, on which the get method can be used to get the similarity
of two businesses. e.g. dbase.get(rid1,rid2)
restaurant_id : string
The id of the restaurant whose nearest neighbors we want
user_id : string
The id of the user, in whose reviewed restaurants we want to find the neighbors
k : int
the number of nearest neighbors desired, default 7
reg: float
the regularization.
Returns
--------
A float
which is the imputed rating that we predict that user_id will make for restaurant_id
"""
#your code here
def rating(df, dbase, restaurant_id, user_id, k=7, reg=3.):
mu=df.stars.mean()
users_reviews=df[df.user_id==user_id]
nsum=0.
scoresum=0.
nears=knearest_amongst_userrated(restaurant_id, user_id, df, dbase, k=k, reg=reg)
restaurant_mean=df[df.business_id==restaurant_id].business_avg.values[0]
user_mean=users_reviews.user_avg.values[0]
scores=[]
for r,s,nc in nears:
scoresum=scoresum+s
scores.append(s)
r_reviews_row=users_reviews[users_reviews['business_id']==r]
r_stars=r_reviews_row.stars.values[0]
r_avg=r_reviews_row.business_avg.values[0]
rminusb=(r_stars - (r_avg + user_mean - mu))
nsum=nsum+s*rminusb
baseline=(user_mean +restaurant_mean - mu)
#we might have nears, but there might be no commons, giving us a pearson of 0
if scoresum > 0.:
val = nsum/scoresum + baseline
else:
val=baseline
return val
rating(smalldf, db, '53YGfwmbW73JhFiemNeyzQ', '7cR92zkDv4W3kqzii6axvg', k=7, reg=3.)
print "User Average", smalldf[smalldf.user_id==testuserid].stars.mean(),"for",usernamefromid(smalldf,testuserid)
print "Predicted ratings for top choices calculated earlier:"
for biz_id,biz_avg in toprecos:
print biznamefromid(smalldf, biz_id),"|",rating(smalldf, db, biz_id, testuserid, k=7, reg=3.),"|","Average",biz_avg
def get_other_ratings(restaurant_id, user_id, df):
"get a user's rating for a restaurant and the restaurant's average rating"
choice=df[(df.business_id==restaurant_id) & (df.user_id==user_id)]
users_score=choice.stars.values[0]
average_score=choice.business_avg.values[0]
return users_score, average_score
print "for user",usernamefromid(smalldf,testuserid), 'avg', smalldf[smalldf.user_id==testuserid].stars.mean()
for biz_id in bizs:
print "----------------------------------"
print biznamefromid(smalldf, biz_id)
print "Predicted Rating:",rating(smalldf, db, biz_id, testuserid, k=7, reg=3.)
u,a=get_other_ratings(biz_id, testuserid, smalldf)
print "Actual User Rating:",u,"Avg Rating",a
def compare_results(stars_actual, stars_predicted, ylow=-10, yhigh=15, title=""):
"""plot predicted results against actual results. Takes 2 arguments: a
numpy array of actual ratings and a numpy array of predicted ratings.
scatterplots the predictions, a unit slope line, line segments joining the mean,
and a filled in area of the standard deviations."""
fig=plt.figure()
df=pd.DataFrame(dict(actual=stars_actual, predicted=stars_predicted))
ax=plt.scatter(df.actual, df.predicted, alpha=0.2, s=30, label="predicted")
plt.ylim([ylow,yhigh])
plt.plot([1,5],[1,5], label="slope 1")
xp=[1,2,3,4,5]
yp=df.groupby('actual').predicted.mean().values
plt.plot(xp,yp,'k', label="means")
sig=df.groupby('actual').predicted.std().values
plt.fill_between(xp, yp - sig, yp + sig,
color='k', alpha=0.2)
plt.xlabel("actual")
plt.ylabel("predicted")
plt.legend(frameon=False)
remove_border()
plt.grid(False)
plt.title(title)
print "fraction between -15 and 15 rating", np.mean(np.abs(df.predicted) < 15)
#your code here
def make_results_plot(df,k,reg):
uid=smalldf.user_id.values
bid=smalldf.business_id.values
actual=smalldf.stars.values
predicted=np.zeros(len(actual))
counter=0
for user_id, biz_id in zip(uid,bid):
predicted[counter]=rating(smalldf, db, biz_id, user_id, k=k, reg=reg)
counter=counter+1
compare_results(actual, predicted)
#your code here
print "k=3, reg=3."
make_results_plot(smalldf,3,3.)
plt.title("k=3, reg=3.")
print "k=3, reg=15."
make_results_plot(smalldf,3,15.,)
plt.title("k=3, reg=15.")
print "k=10, reg=3."
make_results_plot(smalldf,10,3.)
plt.title("k=10, reg=3.")
print "k=10, reg=15."
make_results_plot(smalldf,10,15.,)
plt.title("k=10, reg=15.")
def knearest_pos(restaurant_id, set_of_restaurants, dbase, k=7, reg=3.):
"""Given a restaurant_id, dataframe, and database, get a sorted list of the
k most similar restaurants from the entire database."""
similars=[]
for other_rest_id in set_of_restaurants:
if other_rest_id!=restaurant_id:
sim, nc=dbase.get(restaurant_id, other_rest_id)
ssim=shrunk_sim(sim, nc, reg=reg)
similars.append((other_rest_id, ssim/2.0 + float(nc)/(float(nc)+reg), nc ))
similars=sorted(similars, key=itemgetter(1), reverse=True)
return similars[0:k]
def knearest_amongst_userrated_pos(restaurant_id, user_id, df, dbase, k=7, reg=3.):
dfuser=df[df.user_id==user_id]
bizsuserhasrated=dfuser.business_id.unique()
return knearest_pos(restaurant_id, bizsuserhasrated, dbase, k=k, reg=reg)
def rating_pos(df, dbase, restaurant_id, user_id, k=7, reg=3.):
mu=df.stars.mean()
users_reviews=df[df.user_id==user_id]
nsum=0.
scoresum=0.
nears=knearest_amongst_userrated_pos(restaurant_id, user_id, df, dbase, k=k, reg=reg)
restaurant_mean=df[df.business_id==restaurant_id].business_avg.values[0]
user_mean=users_reviews.user_avg.values[0]
scores=[]
for r,sold,nc in nears:
s=sold/2.0
shrink_factor=float(nc)/(float(nc)+reg)
s=s+shrink_factor/2.0
scoresum=scoresum+s
scores.append(s)
r_reviews_row=users_reviews[users_reviews['business_id']==r]
r_stars=r_reviews_row.stars.values[0]
r_avg=r_reviews_row.business_avg.values[0]
rminusb=(r_stars - (r_avg + user_mean - mu))
nsum=nsum+s*rminusb
baseline=(user_mean +restaurant_mean - mu)
#we might have nears, but there might be no commons, giving us a pearson of 0
if scoresum > 0.:
val = nsum/scoresum + baseline
else:
val=baseline
return val
def make_results_plot_pos(df,k,reg):
uid=smalldf.user_id.values
bid=smalldf.business_id.values
actual=smalldf.stars.values
predicted=np.zeros(len(actual))
counter=0
for user_id, biz_id in zip(uid,bid):
predicted[counter]=rating_pos(smalldf, db, biz_id, user_id, k=k, reg=reg)
counter=counter+1
compare_results(actual, predicted, ylow=1, yhigh=5)
print "k=2, reg=1."
make_results_plot_pos(smalldf,2,1.)
plt.title("k=2, reg=1.")
print "k=2, reg=15."
make_results_plot_pos(smalldf,2,15.,)
plt.title("k=2, reg=15.")
print "k=15, reg=1."
make_results_plot_pos(smalldf,15,1.)
plt.title("k=15, reg=1.")
print "k=15, reg=15."
make_results_plot_pos(smalldf,15,15.,)
plt.title("k=15, reg=15.")
"""
Function
--------
gamma_m_draw
Draw a single sample from the conditional posterior distribution
of gamma_m.
Inputs
-------
X_m: A g-by-L+1 matrix, defined above.
Y_m: A 1D vector of length g, defined above.
sig2: Residual _variance_, as defined above.
Lambda_gamma: Prior precision matrix.
Outputs
--------
Single draw from conditional posterior, defined above.
"""
#Item-specific parameters given all else
#your code here
def gamma_m_draw(X_m, Y_m, sig2, Lambda_gamma):
#Compute matrices that define conditional posterior.
Q_m_inv = np.linalg.inv(np.dot(X_m.T, X_m)/sig2+Lambda_gamma)
XtY = np.dot(X_m.T, Y_m)
#Draw item-specific parameters.
return np.random.multivariate_normal(np.dot(Q_m_inv, XtY)/sig2, Q_m_inv)
"""
Function
--------
theta_u_draw
Draw a single sample from the conditional posterior distribution
of theta_u.
Inputs
-------
X_u: A g-by-L+1 matrix, defined above.
Y_u: A 1D vector of length g, defined above.
sig2: Residual _variance_, as defined above.
Lambda_theta: Prior precision matrix.
Outputs
--------
Single draw from conditional posterior, defined above.
"""
#User-specific parameters given all else
#your code here
def theta_u_draw(X_u, Y_u, sig2, Lambda_theta):
#Compute matrices that define conditional posterior.
Q_u_inv = np.linalg.inv(np.dot(X_u.T, X_u)/sig2+Lambda_theta)
XtY = np.dot(X_u.T, Y_u)
#Draw the user-specific parameters
return np.random.multivariate_normal(np.dot(Q_u_inv, XtY)/sig2, Q_u_inv)
"""
Function
--------
factor_gibbs
Runs a gibbs sampler to infer mean, variance, user-specific, and item-specific
parameters.
Inputs
-------
data: A dataframe containing ratings data.
L: Dimension of latent factors.
maxit: Number of samples to draw from posterior.
Lambda_theta_diag: Hyperparameter controlling regularization of Theta.
Lambda_gamma_diag: Hyperparameter controlling regularization of Gamma.
progress: if true, print iteration number every 100 iterations.
Outputs
--------
Dictionary with elements
mu: Draws of mu. 1D array of length maxiter.
sig2: Draws of sig2, residual _variance_. 1D array of length maxiter.
theta: Draws of Theta. U-by-L-by-maxiter array.
gamma: Draws of Gamma. M-by-L-by-maxiter array.
EY: Draws of fitted values of Y. N-by-maxiter array.
"""
def factor_gibbs(data, L, maxit, Lambda_theta_diag, Lambda_gamma_diag, progress=True):
data = data.copy()
N = data.shape[0]
#Create indices that allow us to map users and restaurants to rows
#in parameter vectors.
uusers, uidx = np.unique(data.user_id, return_inverse=True)
uitems, midx = np.unique(data.business_id, return_inverse=True)
nusers = uusers.size
nitems = uitems.size
#Add numerical indices to dataframe.
data["uidx"] = uidx
data["midx"] = midx
#Group observations by user and by business.
ugroups = data.groupby("uidx")
mgroups = data.groupby("midx")
all_avg = data.stars.mean()
u_avg = ugroups.stars.mean()
m_avg = mgroups.stars.mean()
#Initialize parameters and set up data structures for
#holding draws.
#Overall mean
mu = all_avg
mu_draws = np.zeros(maxit)
#Residual variance
sig2 = 0.5
sig2_draws = np.zeros(maxit)
#Matrix of user-specific bias and L latent factors.
theta = np.zeros([nusers, L+1])
theta[:,0] = u_avg-all_avg
theta_draws = np.zeros([nusers, L+1, maxit])
#Matrix of item-specific bias and L latent factors.
gamma = np.zeros([nitems, L+1])
gamma[:,0] = m_avg-all_avg
gamma_draws = np.zeros([nitems, L+1, maxit])
#Matrix for holding the expected number of stars
#for each observation at each draw from the posterior.
EY_draws = np.zeros([data.shape[0], maxit])
#Inverse covariance matrices from the prior on each theta_u
#and gamma_b. These are diagonal, like Ridge regression.
Lambda_theta = np.eye(L+1)*Lambda_theta_diag
Lambda_gamma = np.eye(L+1)*Lambda_gamma_diag
#Main sampler code
for i in range(maxit):
if i%100==0 and progress:
print i
#The entire regression equation except for the overall mean.
nomu = np.sum(theta[data.uidx,1:]*gamma[data.midx,1:], axis=1) +\
theta[data.uidx,0] + gamma[data.midx,0]
#Compute the expectation of each observation given the current
#parameter values.
EY_draws[:,i]=mu+nomu
#Draw overall mean from a normal distribution
mu = np.random.normal(np.mean(data.stars-nomu), np.sqrt(sig2/N))
#Draw overall residual variance from a scaled inverse-Chi squared distribution.
sig2 = np.sum(np.power(data.stars-nomu-mu,2))/np.random.chisquare(N-2)
#For each item
for mi,itemdf in mgroups:
#Gather relevant observations, and subtract out overall mean and
#user-specific biases, which we are holding fixed.
Y_m = itemdf.stars-mu-theta[itemdf.uidx,0]
#Build the regression design matrix implied by holding user factors
#fixed.
X_m = np.hstack((np.ones([itemdf.shape[0],1]),
theta[itemdf.uidx,1:]))
gamma[mi,:] = gamma_m_draw(X_m, Y_m, sig2, Lambda_gamma)
#For each user
for ui,userdf in ugroups:
#Gather relevant observations, and subtract out overall mean and
#business-specific biases, which we are holding fixed.
Y_u = userdf.stars-mu-gamma[userdf.midx,0]
#Build the regression design matrix implied by holding business factors
#fixed.
X_u = np.hstack((np.ones([userdf.shape[0],1]),
gamma[userdf.midx,1:]))
theta[ui,:] = theta_u_draw(X_u, Y_u, sig2, Lambda_theta)
#Record draws
mu_draws[i] = mu
sig2_draws[i] = sig2
theta_draws[:,:,i] = theta
gamma_draws[:,:,i] = gamma
return {"mu": mu_draws, "sig2": sig2_draws,
"theta": theta_draws, "gamma": gamma_draws,
"EY": EY_draws}
#your code here
gibbs_out = factor_gibbs(smalldf, 2, 1000, 0.1, 0.1)
burnin = 200
predicted=np.mean(gibbs_out['EY'][:,burnin:], axis=1)
#your code here
compare_results(smalldf.stars.values, predicted, ylow=1, yhigh=5, title="From Gibbs Sampler")
gibbs_out = factor_gibbs(smalldf, 15, 1000, 0.1, 0.1)
burnin = 200
predicted=np.mean(gibbs_out['EY'][:,burnin:], axis=1)
compare_results(smalldf.stars.values, predicted, ylow=1, yhigh=5, title="From Gibbs Sampler")
subsetoffull=fulldf[['user_id','business_id', 'stars','business_avg','user_avg']]
subsetoffull.to_csv("subset-full.csv", index=False, header=False)
subsetofsmall=smalldf[['user_id','business_id', 'stars','business_avg','user_avg']]
subsetofsmall.to_csv("subset-small.csv", index=False, header=False)
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter
from IPython.display import HTML
import urllib
skelcode = urllib.urlopen("https://raw.github.com/cs109/content/master/skeleton.py").read()
skelhtml=highlight(skelcode, PythonLexer(), HtmlFormatter())
HTML(skelhtml)
def upper_generator(words):
for word in words:
yield word.upper()
words = ['a', 'couple', 'of', 'words', 'to', 'process']
print upper_generator(words)
print list(upper_generator(words))
for u in upper_generator(words):
print u
thecode = open("computesim.py").read()
thehtml=highlight(thecode, PythonLexer(), HtmlFormatter())
HTML(thehtml)
output_small_local=[[json.loads(j) for j in line.strip().split("\t")] for line in open("./output.small.local.txt")]
output_small_local[0]
def make_database_from_pairs(df, bizpairs):
"""make the database from the pairs returned from mrjob.
df is the dataframe, smalldf or fulldf.
bizpairs are a list of elements, each of which is a list of two
lists. The first of these lists has the two business id's, while
the second has the similarity and the common support
Returns an instance of the Database class."""
dbase=Database(df)
cache={}
for bp,corrs in bizpairs:
b1,b2=bp
i1=dbase.uniquebizids[b1]
i2=dbase.uniquebizids[b2]
sim,nsup=corrs
dbase.database_sim[i1][i2]=sim
dbase.database_sim[i2][i1]=sim
dbase.database_sup[i1][i2]=nsup
dbase.database_sup[i2][i1]=nsup
if cache.has_key(b1):
nsup1=cache[b1]
else:
nsup1=dbase.df[dbase.df.business_id==b1].user_id.count()
cache[b1]=nsup1
if cache.has_key(b2):
nsup2=cache[b2]
else:
nsup2=dbase.df[dbase.df.business_id==b2].user_id.count()
cache[b2]=nsup2
dbase.database_sim[i1][i1]=1.0
dbase.database_sim[i2][i2]=1.0
dbase.database_sup[i1][i1]=nsup1
dbase.database_sup[i2][i2]=nsup2
return dbase
db_mrjob_local=make_database_from_pairs(smalldf, output_small_local)
print db.get("zruUQvFySeXyEd7_rQixBg", "z3yFuLVrmH-3RJruPEMYKw")
print db_mrjob_local.get("zruUQvFySeXyEd7_rQixBg", "z3yFuLVrmH-3RJruPEMYKw")
sums=0.
count=0
for k in db.uniquebizids.keys():
for k2 in db.uniquebizids.keys():
count=count+1
sums=sums+db.get(k,k2)[0]-db_mrjob_local.get(k,k2)[0]
print sums, count
output_full_emr=[[json.loads(j) for j in l.strip().split("\t")] for l in open("./output.full.emr.txt")]
dbfull=make_database_from_pairs(fulldf, output_full_emr)
#your code here
print "for user",usernamefromid(fulldf,testuserid), 'avg', fulldf[fulldf.user_id==testuserid].stars.mean()
for i in bizs:
print "========="
print biznamefromid(fulldf, i), i
print rating(fulldf, dbfull, i, testuserid, k=7, reg=3.)
u,a=get_other_ratings(i, testuserid, fulldf)
print "User Score:",u,"Avg score",a
thecode = open("computesim2.py").read()
thehtml=highlight(thecode, PythonLexer(), HtmlFormatter())
HTML(thehtml)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: HW4
Step2: Description of the data set
Step3: The data frame is a frame of reviews. We have joined in information about users and businesses into this frame so that you have only one frame to work with.
Step4: your answer here
Step6: The following function is used to re-compute review counts and averages whenever you subset a reviews data frame. We'll use it soon to construct a smaller, more computationally tractable data frame.
Step7: 1.3 Create a smaller data set in dataframe smalldf by looking for those businesses with more than 150 reviews and those users with more than 60 reviews. Include all the columns that were there in the parent dataframe. Since you have created a subset of the data set, use the method provided above to recalculate the averages. Print the number of unique users and items in this data set.
Step8: How does this compare to the parent data set, in terms of size and sparsity? Once again, plot histograms of the review count grouped by user, and by the review count grouped by business, respectively, and describe the results
Step9: your answer here
Step10: Common Support
Step12: As you can see, even though we chose a subset of the dataframe in which every restaurant had 150 reviews and every user had atleast made 60, the common support of most pairs of restaurants is really low, indeed less than 10!.
Step14: The function get_restaurant_reviews defined below takes a restaurant business_id and a set of users, and returns the reviews of that restaurant by those users. You will use this function in calculating a similarity function, in 1.5.
Step16: 1.5 Write a function calculate_similarity that operates between two restaurants and calculates a similarity for them, taking a dataframe and a similarity function similarity_func. An example of the similarity_func is the pearson_sim we defined above. calculate_similarity operates as follows
Step18: Making a database of similarities
Step19: Lets run make_database and store the result in the global variable db. Lets print out an example entry. Running this function will take a bit of time.
Step20: K-Nearest restaurants (in similarity)
Step23: 1.6 Now we can move to writing a knearest function, which finds the k nearest neighbors of a given restaurant based on the shrunk similarities we calculate. Note that as defined here, the nearest neighbors are global over the entire set of restaurants, as opposed to being restricted to the restaurants a user has reviewed(we shall do that in the next problem). Thus, this is an expensive function!
Step24: Ok it's time to recommend!
Step25: We provide functions to look up a business name given a business id, and a username given a user id.
Step26: Get top matches
Step27: We can see that these two restaurants are in somewhat different orbits
Step29: Get top recommendations for user.
Step30: Lets print the top recommendations for testuserid, with a regularization of 3.
Step32: Problem 2
Step34: 2.2 Now write a function that returns the predicted rating for a user and an item using the formula at the beginning of this problem. Include code to deal with the possibility that the sum of scores that goes in the denominator is 0
Step35: For the top-recommendations in the variable toprecos from the previous section, we compute the predicted rating and compare it with the average rating over all users available inside the tuples that make up toprecos. We use a k of 7 and regularization 3. For comparision we also print this users' average rating. Do you notice anything interesting about how the order has changed from when we did this with the global similarities? (for you to think, not to answer)
Step36: Testing the ratings
Step37: For the user testuserid, we loop over the variable bizs (which is a set of restaurants the user has rated) and print the predicted rating, and the actual rating and restaurant average rating obtained using the function above. We again use k=7 and a regularization of 3.
Step39: 2.3 Explain in words why the predicted ratings are lower than the actual ratings. How do the user average rating and restaurant average rating affect this? How does sparsity affect the predicted ratings?
Step40: 2.4 For each review in the data set, obtain a prediction from the entire dataframe smalldf. Use the function compare_results above to plot the predicted ratings against the observed ones. Make 4 such graphs, at k=3 and k=10, and for reg=3. and reg=15.
Step42: your answer here
Step43: NOTICE
Step46: Notice firstly, that at low regularization, at low k (variance limit), the predictions follow the green line more than at high k ( bias limit). This is, as mentioned earlier, due to the bias limit pulling the extreme groups of ratings in towards the mean of all the reviews. Note that increasing bias decreases the variation between the rating groups, resulting in a flatter black curve.
Step48: Here is the Gibbs sampler skeleton that your functions fit into. Look over the structure to see how for each draw from the posterior, the sampler iterates through $\mu$, $\sigma$, $\gamma_m$ for each item, and $\theta_u$ for each user.
Step49: Posterior Summaries
Step50: Plot the predictions against the observed data.You can use the compare_results function defined in the previous section. How do the fitted values compare to those from the KNN procedure?
Step51: your answer here
Step52: The fit here looks much better, both in terms of tracking the green line and in terms of the within-group precision (as measured by the grey band). A model at the variance limit (that is, one that is extremely overfit) with low bias would have a mean line that tracks exactly with the green line and an almost non-existent gray area. To tell whether this plot represents simply a good prediction model or a one that is woefully overfit, we would need to look at out-of-sample data.
Step53: Running mrjob locally
Step54: Explanation for those funny yield keywords
Step55: You can read more here. Also see Thu Oct 17th's class video for information about classes and generators.
Step56: Checking the results
Step58: We will Implement a function make_database_from_pairs which takes a dataframe of restaurants smalldf and the output parsed in the previous command to create the database like before. By the nature of the map-reduce algorithms these only contain those restaurant pairs with common support. The Database constructor initializes the remaining similarities to 0.
Step59: We will store the output in variable db_mrjob_local.
Step60: We print a pair to see that our answers are identical.
Step61: 4.2 Let's test that our results are overall the same as before
Step62: Running on Amazon Elastic Map Reduce(EMR)
Step63: This function will take a very long time to run, on the order of 5 minutes or more, depending on your computer
Step64: 4.4 For testuserid, once again, print out the ratings using the bizs list as before. How have they changed with respect to Question 2? Why might this be?
Step65: your answer here
|
12,721
|
<ASSISTANT_TASK:>
Python Code:
# Data path/filename
t_ind = 38
data_path = '../data/'
file_name = data_path + 'data_sim_low.hdf5'
data_options = {'flag_cell': True, 'flag_electode': False}
data = data_in(file_name, **data_options)
localization_options = {'p_vres':20, 'p_jlen':0, 'p_erad': 5, 't_ind': 38, 'flag_depthweighted': False}
loc = data_out(data, **localization_options)
loc.cmp_sloreta()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And choose the localization parameters. You can check the necessary parameters in the documentation.
Step2: You can see the different functions of the inverse problem object with IPython easily
|
12,722
|
<ASSISTANT_TASK:>
Python Code:
#!pip install --user miepython
import numpy as np
import matplotlib.pyplot as plt
try:
import miepython
except ModuleNotFoundError:
print('miepython not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
def rayleigh(m,x):
    """
    Calculate the efficiencies for a small sphere.

    Based on equations 5.7 - 5.9 in Bohren and Huffman.

    Args:
        m: the complex index of refraction of the sphere
        x: the size parameter of the sphere

    Returns:
        qext: the total extinction efficiency
        qsca: the scattering efficiency
        qback: the backscatter efficiency
        g: the average cosine of the scattering phase function
    """
ratio = (m**2-1)/(m**2+2)
qsca = 8/3*x**4*abs(ratio)**2
qext = 4*x*ratio*(1+x**2/15*ratio*(m**4+27*m**2+38)/(2*m**2+3))
qext = abs(qext.imag + qsca)
qback = 4*x**4*abs(ratio)**2
g = 0
return qext, qsca, qback, g
def rayleigh_S1_S2(m,x,mu):
    """
    Calculate the scattering amplitude functions for small spheres.

    Based on equation 5.4 in Bohren and Huffman. The amplitude functions
    are normalized so that when integrated over all 4*pi solid angles,
    the integral will be qext*pi*x**2. The units are weird, sr**(-0.5).

    Args:
        m: the complex index of refraction of the sphere
        x: the size parameter of the sphere
        mu: the angles, cos(theta), to calculate scattering amplitudes

    Returns:
        S1, S2: the scattering amplitudes at each angle mu [sr**(-0.5)]
    """
a1 = (2*x**3)/3 * (m**2-1)/(m**2+2)*1j
a1 += (2*x**5)/5 * (m**2-2)*(m**2-1)/(m**2+2)**2 *1j
s1 = (3/2)*a1*np.ones_like(mu)
s2 = (3/2)*a1*mu
## scale so integral over all angles is single scattering albedo
qext, qsca, qback, g = rayleigh(m,x)
factor = np.sqrt(np.pi*qext)*x
return s1/factor, s2/factor
def rayleigh_unpolarized(m,x,mu):
    """
    Return the unpolarized scattered intensity for small spheres.

    This is the average value for randomly polarized incident light.
    The intensity is normalized so the integral of the unpolarized
    intensity over 4pi steradians is equal to the single scattering albedo.

    Args:
        m: the complex index of refraction of the sphere
        x: the size parameter
        mu: the cos(theta) of each direction desired

    Returns:
        The intensity at each angle in the array mu. Units [1/sr]
    """
s1, s2 = rayleigh_S1_S2(m,x,mu)
return (abs(s1)**2+abs(s2)**2)/2
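# Quick numerical check of the normalization claimed above: integrating the
# unpolarized intensity over 4*pi steradians should recover the single
# scattering albedo (qsca/qext). The m and x values below are arbitrary test values.
m_chk, x_chk = 1.5 - 0.1j, 0.1
mu_chk = np.linspace(-1, 1, 2001)
integral_chk = 2*np.pi*np.trapz(rayleigh_unpolarized(m_chk, x_chk, mu_chk), mu_chk)
qext_chk, qsca_chk, _, _ = rayleigh(m_chk, x_chk)
print(integral_chk, qsca_chk/qext_chk)  # the two numbers should roughly agree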
for x in [0.1,0.2,0.3,0.4]:
m = 1.5-1j
theta = np.linspace(-180,180,180)
mu = np.cos(theta*np.pi/180)
rscat = rayleigh_unpolarized(m,x,mu)
mscat = miepython.i_unpolarized(m,x,mu)
plt.plot(theta,rscat,'--b')
plt.plot(theta,mscat,'r')
plt.annotate('x=%.1f '%x,(theta[-20],mscat[-20]),ha='right',va='bottom')
plt.xlim(-180,180)
plt.xlabel('Angle [degrees]')
plt.ylabel('Scattered Light [1/sr]')
plt.title('Solid Mie, Dashed Rayleigh')
plt.show()
m = 1.5
x = 0.1
theta = np.linspace(-180,180,180)
mu = np.cos(theta/180*np.pi)
unp = rayleigh_unpolarized(m,x,mu)
s1,s2 = rayleigh_S1_S2(m,x,mu)
par = abs(s1)**2
per = abs(s2)**2
fig,ax = plt.subplots(1,2,figsize=(12,5))
ax=plt.subplot(121, projection='polar')
ax.plot(theta/180*np.pi,unp)
ax.plot(theta/180*np.pi,par)
ax.plot(theta/180*np.pi,per)
ax.set_rticks([0.05, 0.1,0.15])
plt.subplot(122)
#plt.plot(theta,scat)
plt.plot(theta,unp)
plt.plot(theta,par)
plt.plot(theta,per)
plt.xlabel('Exit Angle [degrees]')
plt.ylabel('Unpolarized Scattered light [1/sr]')
plt.title("m=1.5, x = %.2f"%x)
plt.ylim(0.00,0.2)
plt.xlim(0,180)
plt.show()
m = 1.5
x = 0.1
qext, qsca, qback, g = miepython.mie(m,x)
rext, rsca, rback, rg = rayleigh(m,x)
print('Qext Qsca Qback g')
print("%.5e %.5e %.5e %.5f Mie"%(qext, qsca, qback, g))
print("%.5e %.5e %.5e %.5f Rayleigh"%(rext, rsca, rback, rg))
m = 1.5
x = 0.1
theta = np.linspace(-180,180,19)
mu = np.cos(np.deg2rad(theta))
s1,s2 = miepython.mie_S1_S2(m,x,mu)
rs1, rs2 = rayleigh_S1_S2(m,x,mu)
# the real part of the Rayleigh scattering is always zero
print(" Mie Rayleigh | Mie Rayleigh")
print(" angle | S1.imag S1.imag | S2.imag S2.imag")
print("------------------------------------------------")
for i,angle in enumerate(theta):
print("%7.2f | %8.5f %8.5f | %8.5f %8.5f " % (angle,s1[i].imag,rs1[i].imag, s2[i].imag ,rs2[i].imag))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Goals for this notebook
Step5: Mie scattering describes the special case of the interaction of light passing through a non-absorbing medium with a single embedded spherical object. The sphere itself can be non-absorbing, moderately absorbing, or perfectly absorbing.
Step6: Polar plots for fun
Step7: Compare Rayleigh and Mie efficiencies
Step8: Compare scattering amplitudes S1 and S2
|
12,723
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
from keras.applications import vgg16
from keras.layers import Input
from dream import *
from scipy.misc import imread
img_dir = '../images/dream/sky1024px.jpg'
I = imread(img_dir)
plt.imshow(I)
plt.axis('off')
plt.show()
settings = {'features': {'block5_conv1': 0.05,
'block5_conv2': 0.1},
'continuity': 0.1,
'dream_l2': 0.02}
from keras.preprocessing.image import load_img
width, height = load_img(img_dir).size
img_height = 224
img_width = int(width * img_height / height)
img_size = (img_height, img_width, 3)
dream_in = Input(batch_shape=(1,) + img_size)
model = vgg16.VGG16(input_tensor=dream_in,weights='imagenet', include_top=False)
# dictionary with all layers
layer_dict = dict([(layer.name, layer) for layer in model.layers])
# define the loss
loss = K.variable(0.)
for layer_name in settings['features']:
assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.'
coeff = settings['features'][layer_name]
x = layer_dict[layer_name].output
shape = layer_dict[layer_name].output_shape
# Maximize L2 norm of activations: loss is -activations
# we avoid border artifacts by only involving non-border pixels in the loss
loss -= coeff * K.sum(K.square(x[:, 2: shape[1] - 2, 2: shape[2] - 2, :])) / np.prod(shape[1:])
# add continuity loss (gives image local coherence, can result in an artful blur)
loss += settings['continuity'] * continuity_loss(dream_in,img_height, img_width) / np.prod(img_size)
# add image L2 norm to loss (prevents pixels from taking very high values, makes image darker)
loss += settings['dream_l2'] * K.sum(K.square(dream_in)) / np.prod(img_size)
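# Note: continuity_loss comes from the accompanying dream.py helper module.
# For readers without that file, a plausible total-variation style definition
# (an assumption -- the real helper may differ in detail) would look like:
#
#   def continuity_loss(x, h, w):
#       a = K.square(x[:, :h - 1, :w - 1, :] - x[:, 1:, :w - 1, :])
#       b = K.square(x[:, :h - 1, :w - 1, :] - x[:, :h - 1, 1:, :])
#       return K.sum(K.pow(a + b, 1.25))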
# compute the gradients of the dream wrt the loss
grads = K.gradients(loss, dream_in)
outputs = [loss]
if isinstance(grads, (list, tuple)):
outputs += grads
else:
outputs.append(grads)
f_outputs = K.function([dream_in], outputs)
import time
evaluator = Evaluator(img_size,f_outputs)
# run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the loss
ims = []
iterations = 5
x = preprocess_image(img_dir,img_height, img_width)
for i in range(iterations):
t = time.time()
# run L-BFGS
x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
fprime=evaluator.grads, maxfun=7)
print(i,'Current loss value:', min_val,time.time()-t,'seconds.')
# decode the dream and save it
x = x.reshape(img_size)
img = deprocess_image(np.copy(x),img_height, img_width)
ims.append(img)
f, axarr = plt.subplots(1, len(ims[:5]),figsize=(20,20))
for i,im in enumerate(ims[:5]):
axarr[i].imshow(im)
axarr[i].axis('off')
plt.show()
plt.figure(figsize=(20,20))
plt.imshow(ims[-1])
plt.axis('off')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use the same image for the example.
Step2: Here are the settings we will use, including the layers of the network we want to "dream" and the weights for each loss term.
Step3: We load the pretrained network
Step4: Deep Dream is a gradient ascent process that tries to maximize the L2 norm of activations of certain layer(s) of the network. Let's define the loss
Step5: Some additional loss terms are added to make the image look nicer
Step6: We define the function that will compute the gradients grads of the image in dream_in based on the loss we just defined. This function is the one that will be used iteratively to update the image based on the gradients.
Step7: Let's run it. We will run 5 iterations, in which we will forward the image, compute the gradients based on the loss and apply the gradients to the image.
Step8: We can display the image for the last 5 iterations
Step9: And let's display the final image with higher resolution.
|
12,724
|
<ASSISTANT_TASK:>
Python Code:
% matplotlib inline
from __future__ import division
import os
import nibabel as nib
import numpy as np
from neuropower import peakdistribution
import scipy.integrate as integrate
import pandas as pd
import matplotlib.pyplot as plt
import palettable.colorbrewer as cb
if not 'FSLDIR' in os.environ.keys():
raise Exception('This notebook requires that FSL is installed and the FSLDIR environment variable is set')
# From smoothness + mask to ReselCount
FWHM = 3
ReselSize = FWHM**3
MNI_mask = nib.load(os.path.join(os.getenv('FSLDIR'),'data/standard/MNI152_T1_2mm_brain_mask.nii.gz')).get_data()
Volume = np.sum(MNI_mask)
ReselCount = Volume/ReselSize
print("ReselSize: "+str(ReselSize))
print("Volume: "+str(Volume))
print("ReselCount: "+str(ReselCount))
print("------------")
# From ReselCount to FWE treshold
FweThres_cmd = 'ptoz 0.05 -g %s' %ReselCount
FweThres = os.popen(FweThres_cmd).read()
print("FWE voxelwise GRF threshold: "+str(FweThres))
Power = 0.8
muRange = np.arange(1.8,5,0.01)
muSingle = []
for muMax in muRange:
# what is the power to detect a maximum
power = 1-integrate.quad(lambda x:peakdistribution.peakdens3D(x,1),-20,float(FweThres)-muMax)[0]
if power>Power:
muSingle.append(muMax)
break
print("The power is sufficient for one region if mu equals: "+str(muSingle[0]))
# Read in data
Data = pd.read_csv("../SampleSize/neurosynth_sampsizedata.txt",sep=" ",header=None,names=['year','n'])
Data['source']='Tal'
Data=Data[Data.year!=1997] #remove year with 1 entry
David = pd.read_csv("../SampleSize/david_sampsizedata.txt",sep=" ",header=None,names=['year','n'])
David['source']='David'
Data=Data.append(David)
# add detectable effect
Data['deltaSingle']=muSingle[0]/np.sqrt(Data['n'])
# add jitter for figure
stdev = 0.01*(max(Data.year)-min(Data.year))
Data['year_jitter'] = Data.year+np.random.randn(len(Data))*stdev
# Compute medians per year (for smoother)
Medians = pd.DataFrame({'year':
np.arange(start=np.min(Data.year),stop=np.max(Data.year)+1),
'TalMdSS':'nan',
'DavidMdSS':'nan',
'TalMdDSingle':'nan',
'DavidMdDSingle':'nan',
'MdSS':'nan',
'DSingle':'nan'
})
for yearInd in (range(len(Medians))):
# Compute medians for Tal's data
yearBoolTal = np.array([a and b for a,b in zip(Data.source=="Tal",Data.year==Medians.year[yearInd])])
Medians.TalMdSS[yearInd] = np.median(Data.n[yearBoolTal])
Medians.TalMdDSingle[yearInd] = np.median(Data.deltaSingle[yearBoolTal])
# Compute medians for David's data
yearBoolDavid = np.array([a and b for a,b in zip(Data.source=="David",Data.year==Medians.year[yearInd])])
Medians.DavidMdSS[yearInd] = np.median(Data.n[yearBoolDavid])
Medians.DavidMdDSingle[yearInd] = np.median(Data.deltaSingle[yearBoolDavid])
# Compute medians for all data
yearBool = np.array(Data.year==Medians.year[yearInd])
Medians.MdSS[yearInd] = np.median(Data.n[yearBool])
Medians.DSingle[yearInd] = np.median(Data.deltaSingle[yearBool])
Medians[0:5]
# add logscale
Medians['MdSSLog'] = [np.log(x) for x in Medians.MdSS]
Medians['TalMdSSLog'] = [np.log(x) for x in Medians.TalMdSS]
Medians['DavidMdSSLog'] = [np.log(x) for x in Medians.DavidMdSS]
Data['nLog']= [np.log(x) for x in Data.n]
twocol = cb.qualitative.Paired_12.mpl_colors
fig,axs = plt.subplots(1,2,figsize=(12,5))
fig.subplots_adjust(hspace=.5,wspace=.3)
axs=axs.ravel()
axs[0].plot(Data.year_jitter[Data.source=="Tal"],Data['nLog'][Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[0].plot(Data.year_jitter[Data.source=="David"],Data['nLog'][Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[0].plot(Medians.year,Medians.TalMdSSLog,color=twocol[1],lw=3,label="Neurosynth")
axs[0].plot(Medians.year,Medians.DavidMdSSLog,color=twocol[3],lw=3,label="David et al.")
axs[0].set_xlim([1993,2016])
axs[0].set_ylim([0,8])
axs[0].set_xlabel("Year")
axs[0].set_ylabel("Median Sample Size")
axs[0].legend(loc="upper left",frameon=False)
#labels=[1,5,10,20,50,150,500,1000,3000]
labels=[1,4,16,64,256,1024,3000]
axs[0].set_yticks(np.log(labels))
axs[0].set_yticklabels(labels)
axs[1].plot(Data.year_jitter[Data.source=="Tal"],Data.deltaSingle[Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[1].plot(Data.year_jitter[Data.source=="David"],Data.deltaSingle[Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[1].plot(Medians.year,Medians.TalMdDSingle,color=twocol[1],lw=3,label="Neurosynth")
axs[1].plot(Medians.year,Medians.DavidMdDSingle,color=twocol[3],lw=3,label="David et al.")
axs[1].set_xlim([1993,2016])
axs[1].set_ylim([0,3])
axs[1].set_xlabel("Year")
axs[1].set_ylabel("Effect Size with 80% power")
axs[1].legend(loc="upper right",frameon=False)
plt.savefig('Figure1.svg',dpi=600)
plt.show()
Medians.loc[:, lambda df: ['year', 'TalMdSS', 'TalMdDSingle']]
yearBoolTal = np.array([a and b for a,b in zip(Data.source=="Tal",Data.year>2010)])
print('Median sample size (2011-2015):',np.median(Data.n[yearBoolTal]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. What is the voxelwise threshold?
Step2: 2. Definition of alternative
Step3: 3. How large statistic in a field be to exceed the threshold with power 0.80?
Step4: 5. From the required voxel statistic to Cohen's D for a given sample size
Step5: The figure per List (Tal or David)
Step6: Print median sample size and power for Neurosynth data
Step7: Compute median of sample sizes over last 5 years, for use in correlation simulation notebook.
|
12,725
|
<ASSISTANT_TASK:>
Python Code:
# Import modules needed to reproduce results
import os
import plotnine
from plotnine import *
import pandas as pd
from scipy import stats
import numpy as np
from statsmodels.stats.proportion import proportion_confint as prop_CI
def tdist_2dist(mu1, mu2, se1, se2, n1, n2, var_eq=False):
var1, var2 = se1**2, se2**2
num = mu1 - mu2
if var_eq:
nu = n1 + n2 - 2
sp2 = ((n1-1)*var1 + (n2-1)*var2) / nu
den = np.sqrt(sp2*(1/n1 + 1/n2))
else:
nu = (var1/n1 + var2/n2)**2 / ( (var1/n1)**2/(n1-1) + (var2/n2)**2/(n2-1) )
den = np.sqrt(var1/n1 + var2/n2)
dist_null = stats.t(df=nu)
tstat = num / den
pvals = 2*np.minimum(dist_null.sf(tstat), dist_null.cdf(tstat))
return tstat, pvals
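# Illustrative call on made-up summary statistics (not real data): group one
# has mean 0.5, SD 1.2, n=40; group two has mean 0.0, SD 0.9, n=60.
t_demo, p_demo = tdist_2dist(0.5, 0.0, 1.2, 0.9, 40, 60, var_eq=False)
print(t_demo, p_demo)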
# Useful short wrappers for making row or columns vectors
def rvec(x):
return np.atleast_2d(x)
def cvec(x):
return rvec(x).T
# Parameters of simulations
nsim = 100000
alpha = 0.05
nlow, nhigh = 25, 75
n1, n2 = np.random.randint(nlow, nhigh+1, nsim), np.random.randint(nlow, nhigh+1, nsim)
se1, se2 = np.exp(np.random.randn(nsim)), np.exp(np.random.randn(nsim))
mu_seq = np.arange(0,0.21,0.01)
tt_seq, method_seq = np.repeat(['eq','neq'],2), np.tile(['neq','eq'],2)
holder = []
np.random.seed(1234)
for mu in mu_seq:
# Generate random data
x1 = mu + se1*np.random.randn(nhigh, nsim)
x2a = se1 * np.random.randn(nhigh, nsim)
x2b = se2 * np.random.randn(nhigh, nsim)
idx = np.tile(np.arange(nhigh),[nsim,1]).T
# Find which rows to set to missing
idx1, idx2 = idx < rvec(n1), idx < rvec(n2)
x1, x2a, x2b = np.where(idx1, x1, np.nan), np.where(idx2, x2a, np.nan), np.where(idx2, x2b, np.nan)
mu_hat1, mu_hat2a, mu_hat2b = np.nanmean(x1, 0), np.nanmean(x2a, 0), np.nanmean(x2b, 0)
se_hat1, se_hat2a, se_hat2b = np.nanstd(x1, 0, ddof=1), np.nanstd(x2a, 0, ddof=1), np.nanstd(x2b, 0, ddof=1)
# Calculate statistics and p-values
tstat_neq_a, pval_neq_a = tdist_2dist(mu_hat1, mu_hat2a, se_hat1, se_hat2a, n1, n2, False)
tstat_eq_a, pval_eq_a = tdist_2dist(mu_hat1, mu_hat2a, se_hat1, se_hat2a, n1, n2, True)
tstat_neq_b, pval_neq_b = tdist_2dist(mu_hat1, mu_hat2b, se_hat1, se_hat2b, n1, n2, False)
tstat_eq_b, pval_eq_b = tdist_2dist(mu_hat1, mu_hat2b, se_hat1, se_hat2b, n1, n2, True)
# Find hypothesis rejection probability
power_neq_a, power_eq_a = np.mean(pval_neq_a < alpha), np.mean(pval_eq_a < alpha)
power_neq_b, power_eq_b = np.mean(pval_neq_b < alpha), np.mean(pval_eq_b < alpha)
power_seq = np.array([power_neq_a, power_eq_a, power_neq_b, power_eq_b])
holder.append(pd.DataFrame({'mu':mu,'tt':tt_seq,'method':method_seq, 'power':power_seq}))
# Power comparison
di_method = {'eq':'Equal','neq':'Not Equal'}
res_power = pd.concat(holder).assign(nsim=nsim)
res_power[['tt','method']] = res_power[['tt','method']].apply(lambda x: x.map(di_method))
res_power = res_power.rename(columns={'tt':'Variance'}).assign(nreject=lambda x: (x.power*x.nsim).astype(int))
res_power = pd.concat([res_power.drop(columns=['nsim','nreject']),
pd.concat(prop_CI(count=res_power.nreject,nobs=nsim,method='beta'),1)],1)
res_power.rename(columns={0:'lb',1:'ub'}, inplace=True)
plotnine.options.figure_size = (8, 3.5)
gg_power_ttest = (ggplot(res_power,aes(x='mu',y='power',color='method')) +
theme_bw() + geom_line() +
geom_hline(yintercept=0.05,linetype='--') +
scale_color_discrete(name='Variance assumption') +
geom_linerange(aes(ymin='lb',ymax='ub')) +
ggtitle('Vertical lines show 95% CI') +
labs(y='Prob. of rejecting null',x='Mean difference') +
facet_wrap('~Variance',labeller=label_both) +
theme(legend_position=(0.5,-0.1),legend_direction='horizontal'))
gg_power_ttest
n1, n2 = 25, 75
se1 = 1
se2a, se2b = se1, se1 + 1
var1, var2a, var2b = se1**2, se2a**2, se2b**2
# ddof under different assumptions
nu_a = n1 + n2 - 2
nu_b = (var1/n1 + var2b/n2)**2 / ( (var1/n1)**2/(n1-1) + (var2b/n2)**2/(n2-1) )
mu_seq = np.round(np.arange(0, 1.1, 0.1),2)
# Pre-calculate power
crit_ub_a, crit_lb_a = stats.t(df=nu_a).ppf(1-alpha/2), stats.t(df=nu_a).ppf(alpha/2)
crit_ub_b, crit_lb_b = stats.t(df=nu_b).ppf(1-alpha/2), stats.t(df=nu_b).ppf(alpha/2)
lam_a = np.array([mu/np.sqrt(var1*(1/n1 + 1/n2)) for mu in mu_seq])
lam_b = np.array([mu/np.sqrt((var1/n1 + var2b/n2)) for mu in mu_seq])
dist_alt_a, dist_alt_b = stats.nct(df=nu_a, nc=lam_a), stats.nct(df=nu_b, nc=lam_b)
power_a = (1-dist_alt_a.cdf(crit_ub_a)) + dist_alt_a.cdf(crit_lb_a)
power_b = (1-dist_alt_b.cdf(crit_ub_b)) + dist_alt_b.cdf(crit_lb_b)
dat_theory = pd.concat([pd.DataFrame({'mu':mu_seq,'theory':power_a,'method':'eq'}),
pd.DataFrame({'mu':mu_seq,'theory':power_b,'method':'neq'})])
# Run simulations to confirm
np.random.seed(1234)
holder = []
for mu in mu_seq:
x1 = mu + se1 * np.random.randn(n1, nsim)
x2a = se2a * np.random.randn(n2, nsim)
x2b = se2b * np.random.randn(n2, nsim)
mu_hat1, mu_hat2a, mu_hat2b = x1.mean(0), x2a.mean(0), x2b.mean(0)
se_hat1, se_hat2a, se_hat2b = x1.std(0,ddof=1), x2a.std(0, ddof=1), x2b.std(0, ddof=1)
stat_a, pval_a = tdist_2dist(mu_hat1, mu_hat2a, se_hat1, se_hat2a, n1, n2, var_eq=True)
stat_b, pval_b = tdist_2dist(mu_hat1, mu_hat2b, se_hat1, se_hat2b, n1, n2, var_eq=False)
reject_a, reject_b = np.mean(pval_a < 0.05), np.mean(pval_b < 0.05)
holder.append(pd.DataFrame({'mu': mu,'method':['eq','neq'], 'power': [reject_a, reject_b]}))
res_theory = pd.concat(holder).merge(dat_theory).sort_values(['method','mu']).reset_index(None, True)
res_theory = res_theory.assign(nreject=lambda x: (x.power*nsim).astype(int))
res_theory = pd.concat([res_theory.drop(columns='nreject'),
pd.concat(prop_CI(count=res_theory.nreject,nobs=nsim,method='beta'),1)],1)
res_theory.rename(columns={0:'lb',1:'ub','method':'Variance'}, inplace=True)
res_theory = res_theory.assign(Variance=lambda x: x.Variance.map(di_method))
plotnine.options.figure_size = (8, 3.5)
gg_power_theory = (ggplot(res_theory,aes(x='theory',y='power')) +
theme_bw() + geom_point() +
geom_linerange(aes(ymin='lb',ymax='ub')) +
facet_wrap('~Variance', labeller=label_both) +
theme(legend_position=(0.5, -0.1), legend_direction='horizontal') +
labs(x='Expected power',y='Actual power') +
scale_y_continuous(limits=[0,1]) + scale_x_continuous(limits=[0,1]) +
geom_abline(slope=1,intercept=0,color='blue',linetype='--'))
gg_power_theory
def fdist_anova(mus, ses, ns, var_eq=False):
lshape = len(mus.shape)
assert lshape <= 2
assert mus.shape == ses.shape
if len(ns.shape) == 1:
ns = cvec(ns.copy())
else:
assert ns.shape == mus.shape
if lshape == 1:
mus = cvec(mus.copy())
ses = cvec(ses.copy())
vars = ses ** 2 # variance
n, k = ns.sum(0), len(ns) # Total samples and groups
df1, df2 = (k - 1), (n - k)
if var_eq: # classical anova
xbar = np.atleast_2d(np.sum(mus * ns, 0) / n)
vb = np.sum(ns*(xbar - mus)**2,0) / df1 # numerator is variance between
vw = np.sum((vars * (ns - 1)), 0) / df2 # den is variance within
fstat = vb / vw
pval = stats.f(dfn=df1,dfd=df2).sf(fstat)
else:
w = ns / vars
xbar = np.sum(w * mus, 0) / np.sum(w,0)
num = np.sum(w * (xbar - mus) ** 2,0) / df1
v = 3*np.sum((1-w/w.sum(0))**2 / (ns-1),0) / (k**2 - 1)
den = 1 + 2*((k-2)*v)/3
fstat = num / den
pval = stats.f(dfn=df1, dfd=1/v).sf(fstat)
return fstat, pval
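# Illustrative call on three groups summarised only by their mean/SD/count
# (the numbers are made up for demonstration purposes):
mus_demo = np.array([0.0, 0.2, 0.5])
ses_demo = np.array([1.0, 1.1, 0.9])
ns_demo = np.array([30, 40, 50])
print(fdist_anova(mus_demo, ses_demo, ns_demo, var_eq=True))
print(fdist_anova(mus_demo, ses_demo, ns_demo, var_eq=False))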
nlow, niter = 25, 5
k_seq = [5, 7, 9]
disp_seq = np.round(np.arange(0, 0.51, 0.1),2)
dgp_seq = np.repeat(['eq', 'neq'], 2)
method_seq = np.tile(['eq', 'neq'], 2)
holder = []
np.random.seed(1)
for k in k_seq:
n_seq = np.arange(nlow, nlow+k * niter, niter)
n_seq = np.tile(n_seq, [nsim, 1]).T
nhigh = np.max(n_seq)
dim_3d = [1, 1, k]
for disp in disp_seq:
mu_k = np.linspace(-disp, disp, num=k)
se_k1 = np.repeat(1,k).reshape(dim_3d)
se_k2 = np.exp(np.random.randn(k)).reshape(dim_3d)
X1 = mu_k + se_k1 * np.random.randn(nhigh,nsim,k)
X2 = mu_k + se_k2 * np.random.randn(nhigh, nsim, k)
idx = np.tile(np.arange(nhigh),[k,nsim,1]).T <= np.atleast_3d(n_seq).T
X1, X2 = np.where(idx, X1, np.nan), np.where(idx, X2, np.nan)
# Calculate means and variance : (k x nsim)
mu_X1, mu_X2 = np.nanmean(X1, 0).T, np.nanmean(X2, 0).T
se_X1, se_X2 = np.nanstd(X1, 0, ddof=1).T, np.nanstd(X2, 0, ddof=1).T
assert n_seq.shape == mu_X1.shape == se_X1.shape
# Calculate significance
fstat_eq1, pval_eq1 = fdist_anova(mus=mu_X1, ses=se_X1, ns=n_seq, var_eq=True)
fstat_neq1, pval_neq1 = fdist_anova(mus=mu_X1, ses=se_X1, ns=n_seq, var_eq=False)
fstat_eq2, pval_eq2 = fdist_anova(mus=mu_X2, ses=se_X2, ns=n_seq, var_eq=True)
fstat_neq2, pval_neq2 = fdist_anova(mus=mu_X2, ses=se_X2, ns=n_seq, var_eq=False)
reject_eq1, reject_neq1 = np.mean(pval_eq1 < alpha), np.mean(pval_neq1 < alpha)
reject_eq2, reject_neq2 = np.mean(pval_eq2 < alpha), np.mean(pval_neq2 < alpha)
reject_seq = [reject_eq1, reject_neq1, reject_eq2, reject_neq2]
tmp = pd.DataFrame({'k':k,'disp':disp,'dgp':dgp_seq,'method':method_seq,'reject':reject_seq})
# print(tmp)
holder.append(tmp)
res_f = pd.concat(holder).reset_index(None,True)
res_f[['dgp','method']] = res_f[['dgp','method']].apply(lambda x: x.map(di_method),0)
res_f.rename(columns={'dgp':'Variance'}, inplace=True)
plotnine.options.figure_size = (8, 6)
gg_fdist = (ggplot(res_f, aes(x='disp',y='reject',color='method.astype(str)')) +
theme_bw() + geom_line() + geom_point() +
facet_grid('k~Variance',labeller=label_both) +
labs(x='Mean dispersion',y='Prob. of rejecting null') +
geom_hline(yintercept=0.05,linetype='--') +
scale_y_continuous(limits=[0,1]) +
scale_color_discrete(name='Variance assumption'))
gg_fdist
from sklearn import datasets
ix, iy = datasets.load_iris(return_X_y=True)
v1, v2 = ix[:,0], ix[:,1]
k = 1
all_stats = [stats.ttest_ind(v1, v2, equal_var=True)[k],
tdist_2dist(v1.mean(), v2.mean(), v1.std(ddof=1), v2.std(ddof=1), len(v1), len(v2), var_eq=True)[k],
stats.ttest_ind(v1, v2, equal_var=False)[k],
tdist_2dist(v1.mean(), v2.mean(), v1.std(ddof=1), v2.std(ddof=1), len(v1), len(v2), var_eq=False)[k]]
pd.DataFrame({'test':'t-test',
'method':np.tile(['scipy','custom'],2),
'pval':all_stats})
import rpy2.robjects as robjects
moments_x = pd.DataFrame({'x':ix[:,0],'y':iy}).groupby('y').x.describe()[['mean','std','count']]
all_stats = [np.array(robjects.r('summary(aov(Sepal.Length~Species,iris))[[1]][1, 5]'))[0],
fdist_anova(moments_x['mean'], moments_x['std'], moments_x['count'], var_eq=True)[1][0],
np.array(robjects.r('oneway.test(Sepal.Length~Species,iris)$p.value'))[0],
fdist_anova(moments_x['mean'], moments_x['std'], moments_x['count'], var_eq=False)[1][0]]
pd.DataFrame({'test':'F-test',
'method':np.tile(['R','custom'],2),
'pval':all_stats})
n1, n0 = 100, 200
n = n1 + n0
n1n0 = n1 * n0
mu_seq = np.round(np.arange(0, 1.01, 0.1),2)
def se_auroc_hanley(auroc, n1, n0):
q1 = (n1 - 1) * ((auroc / (2 - auroc)) - auroc ** 2)
q0 = (n0 - 1) * ((2 * auroc ** 2) / (1 + auroc) - auroc ** 2)
se_auroc = np.sqrt((auroc * (1 - auroc) + q1 + q0) / (n1 * n0))
return se_auroc
def se_auroc_normal(n1, n0):
return np.sqrt( (n1 + n0 + 1) / (12 * n1 * n0) )
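# Quick look at how the Hanley-McNeil and normal-approximation standard errors
# compare at a few AUROC values for this class split (n1=100, n0=200):
for auc_demo in (0.5, 0.7, 0.9):
    print(auc_demo, se_auroc_hanley(auc_demo, n1, n0), se_auroc_normal(n1, n0))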
np.random.seed(1)
holder = []
for mu in mu_seq:
x1_null, x0 = np.random.randn(n1, nsim), np.random.randn(n0, nsim)
x1 = mu + np.random.randn(n1, nsim)
x, x_null = np.concatenate((x1, x0)), np.concatenate((x1_null, x0))
auc = (np.sum(stats.rankdata(x, axis=0)[:n1],0) - n1*(n1+1)/2) / n1n0
auc_null = (np.sum(stats.rankdata(x_null, axis=0)[:n1], 0) - n1 * (n1 + 1) / 2) / n1n0
se_HM, se_null_HM = se_auroc_hanley(auc, n1, n0), se_auroc_hanley(auc_null, n1, n0)
se_N = se_auroc_normal(n1, n0)
# Do pairwise t-test
dauc = auc - auc_null
t_score_HM = dauc / np.sqrt(se_HM**2 + se_null_HM**2)
t_score_N = dauc / np.sqrt(2 * se_N**2)
dist_null = stats.t(df=2*n - 2)
pval_HM = 2 * np.minimum(dist_null.sf(t_score_HM), dist_null.cdf(t_score_HM))
pval_N = 2 * np.minimum(dist_null.sf(t_score_N), dist_null.cdf(t_score_N))
reject_HM, reject_N = np.mean(pval_HM < alpha), np.mean(pval_N < alpha)
tmp = pd.DataFrame({'method':['HM','N'],'mu':mu, 'reject':[reject_HM, reject_N]})
holder.append(tmp)
# Merge and analyse
res_auc = pd.concat(holder).reset_index(None, True)
res_auc = res_auc.assign(auc=lambda x: stats.norm.cdf(x.mu/np.sqrt(2)),
nreject=lambda x: (x.reject*nsim).astype(int))
res_auc = pd.concat([res_auc.drop(columns='nreject'),
pd.concat(prop_CI(count=res_auc.nreject,nobs=nsim,method='beta'),1)],1)
res_auc.rename(columns={0:'lb',1:'ub'},inplace=True)
# plot
plotnine.options.figure_size = (5, 4)
gg_auc = (ggplot(res_auc,aes(x='auc',y='reject',color='method')) + theme_bw() +
geom_line() +
labs(x='Alternative hypothesis AUROC',y='Prob. of rejecting null') +
geom_hline(yintercept=0.05,linetype='--') +
geom_linerange(aes(ymin='lb',ymax='ub')) +
scale_color_discrete(name='Method',labels=['Hanley-McNeil','Normal']))
gg_auc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As a rule, I always conduct statistical simulations to make sure the functions I have written actually perform the way I expect them to when the null is known. If you can't get your method to work on a data generating procedure of your choosing, it should not leave the statistical laboratory! In the simulations below, $\mu_y = 0$, and $\mu_x$ will vary from zero to 0.2. At the same time, both variance homoskedasticity ($\sigma_y = \sigma_x$) and heteroskedasticity ($\sigma_y \neq \sigma_x$) will be assessed. To further ensure the approach works, the respective sample sizes, $n$ and $m$, for each of the nsim=100K experiments will be a random integer between 25 and 75. In order to avoid an inner loop and rely of pure numpy vectorization, a data matrix of dimension 75 x 100000 will be generated. To account for the different sample sizes, if $n$ or $m$ is less than 75, the corresponding difference in rows will be set as a missing value np.NaN. The np.nanmean and np.nanstd functions will be used to handle missing values.
Step2: Figure 1 above shows that the tdist_2dist function is working as expected. When the variances of $x$ and $y$ are equivalent, there is no difference in performance between approaches. When the mean difference is zero, the probability of rejecting the null is exactly equivalent to the level of the test (5%). However, when the variances differ, using the degrees of freedom calculation assuming they are equal leads to an inflated type-I error rate. Whereas using the adjustment from Welch's t-test gets to the right nominal level.
Step3: Figure 2 shows that the power calculations line up exactly with the analytical expectations for both equal and unequal variances. Having thoroughly validated the type-I and type-II errors of this function we can now move onto testing whether the means from multiple normal distributions are equal.
Step4: The simulations in Figure 3 show a similar finding to that of the t-test
Step5: So far so good. Next, we'll use rpy2 to get the results in R which supports equal and unequal variances with two different functions.
Step6: Once again the results are identical to the benchmark functions.
|
12,726
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn,nltk
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
import pyprind
import pandas as pd
import os
labels = {'pos':1, 'neg':0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path ='./aclImdb/%s/%s' % (s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining and the weather is sweet'])
bag = count.fit_transform(docs)
print(count.vocabulary_)
print(bag.toarray())
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
tf_is = 2
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1) )
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) + \
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:] if w not in stop]
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1,1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1,1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5, verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
import numpy as np
import re
from nltk.corpus import stopwords
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
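# Example: score a brand-new (made-up) review with the out-of-core model;
# class 1 corresponds to a positive review, class 0 to a negative one.
example = ['I loved this movie, the story and the acting were wonderful']
X_example = vect.transform(example)
print('Prediction: %d, probability: %.3f'
      % (clf.predict(X_example)[0], clf.predict_proba(X_example).max()))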
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <br>
Step2: Shuffling the DataFrame
Step3: Optional
Step4: <br>
Step5: Assessing word relevancy via term frequency-inverse document frequency
Step6: Cleaning text data
Step7: Processing documents into tokens
Step8: <br>
Step9: <br>
|
12,727
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
# Assumes pyoptools is installed; it provides the optical components used
# below (library, CCD, System, point_source_c).
from pyoptools.all import *
#imshow(C.get_optical_path_map())
#colorbar()
#poly,error=C.get_optical_path_map_lsq(order=2)
#print(error)
#print(poly)
def opsystem(lp):
L=library.Edmund.get("32494")
C=CCD()
S=System(complist=[(L,(0,0,lp),(0,-pi,0)),(C,(0,0,570),(0,0,0))],n=1.)
R=point_source_c(span=(0.06,0.06),wavelength=.65)
S.ray_add(R)
S.propagate()
X,Y,Z=C.get_optical_path_data()
return array(Z).std()
#opsystem(158)
#from scipy.optimize import fmin
#fmin(opsystem,158)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Using other Python libraries to optimize the system
|
12,728
|
<ASSISTANT_TASK:>
Python Code:
# Download example dataset
from msmbuilder.example_datasets import FsPeptide
fs_peptide = FsPeptide()
fs_peptide.cache()
# Work in a temporary directory
import tempfile
import os
os.chdir(tempfile.mkdtemp())
from msmbuilder.dataset import dataset
xyz = dataset(fs_peptide.data_dir + "/*.xtc",
topology=fs_peptide.data_dir + '/fs_peptide.pdb')
len(xyz)
from msmbuilder.featurizer import DihedralFeaturizer
featurizer = DihedralFeaturizer(types=['phi', 'psi'])
diheds = xyz.fit_transform_with(featurizer, 'diheds/', fmt='dir-npy')
print(xyz[0].xyz.shape)
print(diheds[0].shape)
from msmbuilder.decomposition import tICA
tica_model = tICA(lag_time=1, n_components=4)
# fit and transform can be done in seperate steps:
tica_model = diheds.fit_with(tica_model)
tica_trajs = diheds.transform_with(tica_model, 'ticas/', fmt='dir-npy')
print(diheds[0].shape)
print(tica_trajs[0].shape)
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
txx = np.concatenate(tica_trajs)
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1)
from msmbuilder.cluster import MiniBatchKMeans
clusterer = MiniBatchKMeans(n_clusters=100)
clustered_trajs = tica_trajs.fit_transform_with(clusterer,
'kmeans/',
fmt='dir-npy')
print(tica_trajs[0].shape)
print(clustered_trajs[0].shape)
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1)
plt.scatter(clusterer.cluster_centers_[:,0],
clusterer.cluster_centers_[:,1],
s=100, c='w')
from msmbuilder.msm import MarkovStateModel
from msmbuilder.utils import dump
msm = MarkovStateModel(lag_time=5)
msm.fit(clustered_trajs)
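# A quick look at the slowest implied relaxation timescales of the fitted MSM
# (reported in frames here, since lag_time above is given in frames).
print(msm.timescales_[:5])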
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap="Greys")
plt.scatter(clusterer.cluster_centers_[:,0],
clusterer.cluster_centers_[:,1],
s=1e4 * msm.populations_, # size by population
c=msm.left_eigenvectors_[:,1], # color by eigenvector
cmap="RdBu")
plt.colorbar(label='First dynamical eigenvector')
plt.xlabel('tIC 1')
plt.ylabel('tIC 2')
plt.tight_layout()
from msmbuilder.lumping import PCCAPlus
pcca = PCCAPlus.from_msm(msm, n_macrostates=5)
macro_trajs = pcca.transform(clustered_trajs)
plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap="Greys")
plt.scatter(clusterer.cluster_centers_[:,0],
clusterer.cluster_centers_[:,1],
s=100,
c=pcca.microstate_mapping_,
)
plt.xlabel('tIC 1')
plt.ylabel('tIC 2')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dataset object
Step2: Featurization
Step3: Intermediate kinetic model
Step4: tICA Heatmap
Step5: Clustering
Step6: MSM
Step7: Macrostate Model
|
12,729
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from pandas import Series, DataFrame
weather = pd.read_table('daily_weather.tsv')
weather.groupby('season_desc').agg({'temp': np.mean})
fix = weather.replace("Fall", "Summer_").replace("Summer", "Spring_").replace("Winter", "Fall_").replace("Spring", "Winter_")
fix.groupby('season_desc').agg({'temp': np.mean})
weather['months'] = pd.DatetimeIndex(weather.date).month
weather.groupby('months').agg({'total_riders': np.sum})
weather[['total_riders', 'temp', 'months']].groupby('months').corr()
weather[['no_casual_riders', 'no_reg_riders', 'temp']].corr()
weather[['no_casual_riders', 'no_reg_riders']].corr()
weather[['is_holiday', 'total_riders']].sum()
weather[['is_holiday', 'total_riders']].corr()
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(weather['months'], weather['temp'])
plt.xlabel("This is just an x-axis")
plt.ylabel("This is just a y-axis")
plt.show()
x = weather.groupby('months').agg({"humidity":np.mean})
plt.bar([n for n in range(1, 13)], x['humidity'])
plt.title("weather and humidity by months")
plt.show()
xs = range(10)
plt.scatter(xs, 5 * np.random.rand(10) + xs, color='r', marker='*', label='series1')
plt.scatter(xs, 5 * np.random.rand(10) + xs, color='g', marker='o', label='series2')
plt.title("A scatterplot with two series")
plt.legend(loc=9)
plt.show()
w = weather[['season_desc', 'temp', 'total_riders']]
fall = w.loc[w['season_desc'] == 'Fall']
winter = w.loc[w['season_desc'] == 'Winter']
spring = w.loc[w['season_desc'] == 'Spring']
summer = w.loc[w['season_desc'] == 'Summer']
plt.scatter(fall['temp'], fall['total_riders'], color='orange', marker='^', label='fall', s=100, alpha=.41)
plt.scatter(winter['temp'], winter['total_riders'], color='blue', marker='*', label='winter', s=100, alpha=.41)
plt.scatter(spring['temp'], spring['total_riders'], color='purple', marker='d', label='spring', s=100, alpha=.41)
plt.scatter(summer['temp'], summer['total_riders'], color='red', marker='o', label='summer', s=100, alpha=.41)
plt.legend(loc='lower right')
plt.xlabel('temperature')
plt.ylabel('rental volume')
plt.show()
w = weather[['season_desc', 'windspeed', 'total_riders']]
fall = w.loc[w['season_desc'] == 'Fall']
winter = w.loc[w['season_desc'] == 'Winter']
spring = w.loc[w['season_desc'] == 'Spring']
summer = w.loc[w['season_desc'] == 'Summer']
plt.scatter(fall['windspeed'], fall['total_riders'], color='orange', marker='^', label='fall', s=100, alpha=.41)
plt.scatter(winter['windspeed'], winter['total_riders'], color='blue', marker='*', label='winter', s=100, alpha=.41)
plt.scatter(spring['windspeed'], spring['total_riders'], color='purple', marker='d', label='spring', s=100, alpha=.41)
plt.scatter(summer['windspeed'], summer['total_riders'], color='red', marker='o', label='summer', s=100, alpha=.41)
plt.legend(loc='lower right')
plt.xlabel('windspeed x1000 mph')
plt.ylabel('rental volume')
usage = pd.read_table('usage_2012.tsv')
stations = pd.read_table('stations.tsv')
stations.head()
u = pd.concat([usage['station_start']], axis=1, keys=['station'])
counts = u['station'].value_counts()
c = DataFrame(counts.index, columns=['station'])
c['counts'] = counts.values
s = stations[['station','lat','long']]
m = pd.merge(s, c, on='station')
plt.scatter(m['long'], m['lat'], c='b', label='Location', s=(m['counts'] * .05), alpha=.2)
plt.legend(loc='lower right')
plt.xlabel('longitude')
plt.ylabel('latitude')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Various of the columns represent dates or datetimes, but out of the box pd.read_table won't treat them correctly. This makes it hard to (for example) compute the number of rentals by month. Fix the dates and compute the number of rentals by month.
Step2: weather[['total_riders', 'temp']].corr()
Step3: weather[['total_riders', 'temp', 'season_desc']].groupby('season_desc').corr()
Step4: 4. There are various types of users in the usage data sets. What sorts of things can you say about how they use the bikes differently?
Step5: Part 2
Step6: Plot the daily temperature over the course of the year. (This should probably be a line chart.) Create a bar chart that shows the average temperature and humidity by month.
Step7: Use a scatterplot to show how the daily rental volume varies with temperature. Use a different series (with different colors) for each season.
Step8: Create another scatterplot to show how daily rental volume varies with windspeed. As above, use a different series for each season.
Step9: How do the rental volumes vary with geography? Compute the average daily rentals for each station and use this as the radius for a scatterplot of each station's latitude and longitude.
|
12,730
|
<ASSISTANT_TASK:>
Python Code::
# Imports added for completeness; vocab_size is assumed to be defined earlier
# (e.g. the size of the tokenizer's vocabulary).
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(vocab_size, 10, input_length=1))
model.add(LSTM(1000, return_sequences=True))
model.add(LSTM(1000))
model.add(Dense(1000, activation="relu"))
model.add(Dense(vocab_size, activation="softmax"))
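# A typical compile step for this next-word model (a sketch; the optimizer and
# loss are assumptions -- use sparse_categorical_crossentropy for integer targets).
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()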
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,731
|
<ASSISTANT_TASK:>
Python Code:
# For standard output we use the print() function
print("Hallo Welt!")
# We can also print several values, separated by commas:
print("foo", "bar")
# With the help() function we display
# the documentation of the print() function:
help(print)
# Output with a separator string:
print("foo", "bar", sep="#")
# Output with an end string:
print("foo", "bar", end="##\n")
print("test")
4 + 34
print(3 + 4)  # addition
print(4 - 6)  # subtraction
print(3 * 7)  # multiplication
print(3 // 2) # integer division
print(3 % 2)  # division remainder (modulo)
print(3 / 2)  # division
print(2 ** 4) # power, alternatively pow(2, 4)
print(4 << 1) # bit shift to the left, alternatively 4 * (2 ** 1)
print(4 >> 1) # bit shift to the right, alternatively 4 // (2 ** 1)
print(5 ^ 1)  # bitwise XOR
print(4.5 + 3.8)
print(int(3.5))
print(float(4))
ham = 4
egg = 12
ham_price = 2.99
egg_price = 0.49
print(ham, egg)
print(ham_price, egg_price)
print()
print("ham: ", ham * ham_price)
print("egg: ", egg * egg_price)
summ = ham * ham_price + egg * egg_price
print("sum: ", summ)
# the type of a value can be determined with type():
print(type("a"))
print(type(2))
print(type(4.8))
s = "String"
print(type(s))
s = 4
print(type(s))
hallo = 'Hallo Welt!'
text = "Programmieren mit Python."
print(hallo, text, sep="\n")
multiline = """
Dies ist ein
mehrzeiliger
    String
mit Einrückung.
"""
print(multiline)
# Strings can be "added" together; this is also called concatenation
foo = "foo"
bar = "bar"
foobar = foo + bar
print(foobar)
# Strings can also be "multiplied":
print(10*"#" + " foo " + 10*"#")
# len() returns the length of an object:
text = "Programmieren mit Python."
length = len(text)
print(text)
print(length*"*")
print(length)
# with the str() function, objects can be converted into a string:
s = str(12)
print(s)
eingabe = input("Bitte etwas eingeben: ")
print(eingabe)
print(type(eingabe))
import keyword
print(keyword.kwlist)
True = 0 # number of attempts (assigning to a keyword raises a SyntaxError)
# Compute the sum of two numbers
sum1 = 5 # first summand
sum2 = 7 # second summand
print(sum1 + sum2)
import this
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple operations
Step2: Looked at more closely, the line 4 + 34 consists of two literals (4 and 34) and one operator (+), which combined form the expression. A literal is the direct representation of a value. Operators combine values and return values.
Step3: Listed above are the most important operators for values of the integer type. It is worth noting that there are three kinds of division
Step4: A float represents floating-point numbers. The same operators that were applied to integers above can also be applied to floats. Importantly, the result is then always of type float.
Step5: Variables
Step6: When naming variables, use short but understandable variable names, so that it is clear what the variable is used for. Never use variable names such as l, O or I, since depending on the font they can look like 0 or 1.
Step7: Python is strictly and dynamically typed, which means
Step8: Strings
Step10: Strings are not limited to a single line, however. Multiline strings are defined with triple quotation marks.
Step11: Input
Step12: Keywords
Step13: Comments
Step14: Zen of Python
|
12,732
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from matplotlib.pylab import *
from pymc3 import *
import numpy as np
d = np.random.normal(size=(3, 30))
d1 = d[0] + 4
d2 = d[1] + 4
yd = .2*d1 +.3*d2 + d[2]
lam = 3
with Model() as model:
s = Exponential('s', 1)
tau = Uniform('tau', 0, 1000)
b = lam * tau
m1 = Laplace('m1', 0, b)
m2 = Laplace('m2', 0, b)
p = d1*m1 + d2*m2
y = Normal('y', mu=p, sd=s, observed=yd)
import pymc3
pymc3.variational.advi( model=model, n=100000)
with model:
start = find_MAP()
step1 = Metropolis([m1, m2])
step2 = Slice([s, tau])
trace = sample(10000, [step1, step2], start=start)
traceplot(trace);
hexbin(trace[m1],trace[m2], gridsize = 50)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then define the random variables.
Step2: For most samplers, including Metropolis and HamiltonianMC, simply pass a list of variables to sample as a block. This works with both scalar and array parameters.
|
12,733
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
from scipy.cluster.hierarchy import dendrogram, linkage
import ggplot as gg
import networkx as nx
%matplotlib inline
data_dir = os.path.join(os.getenv('MDA_DATA_DIR', '/home/mattmcd/Work/Data'), 'LoveActually')
def read_script():
    """
    Read the Love Actually script from a text file into a list of lines.

    The script is the first Google hit for 'Love Actually script' as a doc
    file. Use catdoc or Libre Office to save it to text format.
    """
with open(os.path.join(data_dir, 'love_actually.txt'), 'r') as f:
lines = [line.strip() for line in f]
return lines
def read_actors():
    """Read the mapping from character to actor using the varianceexplained data file.
    Used curl -O http://varianceexplained.org/files/love_actually_cast.csv to get a local copy.
    """
return pd.read_csv(os.path.join(data_dir, 'love_actually_cast.csv'))
def parse_script(raw):
df = pd.DataFrame(raw, columns=['raw'])
df = df.query('raw != ""')
    df = df[~df.raw.str.contains("(song)")]  # note: the parentheses form a regex group, so this drops any line containing "song"
lines = (df.
assign(is_scene=lambda d: d.raw.str.contains(" Scene ")).
assign(scene=lambda d: d.is_scene.cumsum()).
query('not is_scene'))
speakers = lines.raw.str.extract('(?P<speaker>[^:]*):(?P<dialogue>.*)')
lines = (pd.concat([lines, speakers], axis=1).
dropna().
assign(line=lambda d: np.cumsum(~d.speaker.isnull())))
lines.drop(['raw', 'is_scene'], axis=1, inplace=True)
return lines
def read_all():
lines = parse_script(read_script())
cast = read_actors()
combined = lines.merge(cast).sort('line').assign(
character=lambda d: d.speaker + ' (' + d.actor + ')').reindex()
# Decode bytes to unicode
combined['character'] = map(lambda s: s.decode('utf-8'), combined['character'])
return combined
# Read in script and cast into a dataframe
lines = read_all()
# Print the first few rows
lines.head(10)
def get_scene_speaker_matrix(lines):
by_speaker_scene = lines.groupby(['character', 'scene'])['line'].count()
speaker_scene_matrix = by_speaker_scene.unstack().fillna(0)
return by_speaker_scene, speaker_scene_matrix
# Group by speaker and scene and construct the speaker-scene matrix
by_speaker_scene, speaker_scene_matrix = get_scene_speaker_matrix(lines)
def plot_dendrogram(mat, normalize=True):
# Cluster and plot dendrogram. Return order after clustering.
if normalize:
# Normalize by number of lines
mat = mat.div(mat.sum(axis=1), axis=0)
Z = linkage(mat, method='complete', metric='cityblock')
labels = mat.index
f = plt.figure()
ax = f.add_subplot(111)
R = dendrogram(Z, leaf_rotation=90, leaf_font_size=8,
labels=labels, ax=ax, color_threshold=-1)
f.tight_layout()
ordering = R['ivl']
return ordering
# Hierarchical cluster and return order of leaves
ordering = plot_dendrogram(speaker_scene_matrix)
print(ordering)
def get_scenes_with_multiple_characters(by_speaker_scene):
# Filter speaker scene dataframe to remove scenes with only one speaker
# n_scene x 1 Series with index 'scene'
filt = by_speaker_scene.count('scene') > 1
# n_scene x n_character Index
scene_index = by_speaker_scene.index.get_level_values('scene')
# n_scene x n_character boolean vector
ind = filt[scene_index].values
return by_speaker_scene[ind]
def order_scenes(scenes, ordering=None):
# Order scenes by e.g. leaf order after hierarchical clustering
scenes = scenes.reset_index()
scenes['scene'] = scenes['scene'].astype('category')
scenes['character'] = scenes['character'].astype('category', categories=ordering)
scenes['character_code'] = scenes['character'].cat.codes
return scenes
# Order the scenes by cluster leaves order
scenes = order_scenes(get_scenes_with_multiple_characters(by_speaker_scene), ordering)
def plot_timeline(scenes):
# Plot character vs scene timelime
# NB: due to limitations in Python ggplot we need to plot with scene on y-axis
# in order to label x-ticks by character.
# scale_x_continuous and scale_y_continuous behave slightly differently.
print (gg.ggplot(gg.aes(y='scene', x='character_code'), data=scenes) +
gg.geom_point() + gg.labs(x='Character', y='Scene') +
gg.scale_x_continuous(
labels=scenes['character'].cat.categories.values.tolist(),
breaks=range(len(scenes['character'].cat.categories))) +
gg.theme(axis_text_x=gg.element_text(angle=30, hjust=1, size=10)))
# Plot a timeline of characters vs scene
plot_timeline(scenes);
def get_cooccurrence_matrix(speaker_scene_matrix, ordering=None):
# Co-occurrence matrix for the characters, ignoring last scene where all are present
scene_ind = speaker_scene_matrix.astype(bool).sum() < 10
if ordering:
mat = speaker_scene_matrix.loc[ordering, scene_ind]
else:
mat = speaker_scene_matrix.loc[:, scene_ind]
return mat.dot(mat.T)
cooccur_mat = get_cooccurrence_matrix(speaker_scene_matrix, ordering)
def plot_heatmap(cooccur_mat):
# Plot co-ccurrence matrix as heatmap
plt.figure()
plt.pcolor(cooccur_mat)
# Plot heatmap of co-occurrence matrix
plot_heatmap(cooccur_mat)
def plot_network(cooccur_mat):
# Plot co-occurence matrix as network diagram
G = nx.Graph(cooccur_mat.values)
pos = nx.graphviz_layout(G) # NB: needs pydot installed
plt.figure()
nx.draw_networkx_nodes(G, pos, node_size=700, node_color='c')
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(
G, pos,
labels={i: s for (i, s) in enumerate(cooccur_mat.index.values)},
font_size=10)
plt.axis('off')
plt.show()
# Plot network graph of co-occurrence matrix
plot_network(cooccur_mat)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Data import
Step4: The cell below reproduces the logic in the first cell of the original article. It doesn't feel quite as nice to me as the dplyr syntax but is not too bad.
Step5: Constructing the n_character x n_scene matrix showing how many lines each character has in each scene is quite easy using pandas groupby method to create a hierarchical index, followed by the unstack method to convert the second level of the index into columns.
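As a tiny illustration of the groupby/unstack pattern (the toy data here is made up for this example, not taken from the script):
toy = pd.DataFrame({'character': ['A', 'A', 'B'], 'scene': [1, 1, 2], 'line': [1, 2, 3]})
toy.groupby(['character', 'scene'])['line'].count().unstack().fillna(0)
# -> rows = characters, columns = scenes, values = number of lines (A spoke twice in scene 1)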
Step6: Analysis
Step7: Timeline
Step8: Co-occurrence matrix
Step9: The heatmap below is not as nice as the default R heatmap as it is missing the dendrograms on each axis and also the character names, so could be extended e.g. following Hierarchical Clustering Heatmaps in Python.
Step10: The network plot gives similar results to the original article. This could be extended, for example by adding weights to the graph edges.
|
12,734
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import pandas as pd
# `candles` is a DataFrame of price candles (with the gap columns) prepared in earlier cells of the notebook
gap_fill_by_month = candles.groupby(["month", "gap_filled"]).size()
gap_fill_by_month.groupby("month").apply(lambda g: g / g.sum() * 100)
gap_fill_by_day_of_week = candles.groupby(["day_of_week", "gap_filled"]).size()
gap_fill_by_day_of_week.groupby("day_of_week").apply(lambda g: g / g.sum() * 100)
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
# One-hot encode categorical features like day_of_week and month
day_of_week = pd.get_dummies(candles["day_of_week"])
month = pd.get_dummies(candles["month"])
x = candles[["gap_size"]].join([day_of_week, month])
x
y = candles["gap_filled"].replace({True: "Filled", False: "NoFill"})
y
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
logistic = LogisticRegression()
logistic.fit(x_train, y_train)
logistic_predictions = logistic.predict(x_test)
print("Logistic Regression accuracy {:.1%}".format(metrics.accuracy_score(y_test, logistic_predictions)))
metrics.plot_confusion_matrix(logistic, x_test, y_test)
svm_model = svm.LinearSVC()
svm_model.fit(x_train, y_train)
svm_predictions = svm_model.predict(x_test)
print("SVM accuracy {:.1%}".format(metrics.accuracy_score(y_test, svm_predictions)))
tree = DecisionTreeClassifier(max_depth=4)
tree.fit(x_train, y_train)
tree_predictions = tree.predict(x_test)
print("Random Forest accuracy {:.1%}".format(metrics.accuracy_score(y_test, tree_predictions)))
# Partial gap fill
conditions = [candles["gap_filled"] == True, candles["gap_percent"] > 0, candles["gap_percent"] < 0]
choices = [math.nan, candles["low"] - candles["prev_close"], candles["high"] - candles["prev_close"]]
candles["remaining_gap"] = np.select(conditions, choices)
candles["partial_gap_fill_percent"] = 1 - candles["remaining_gap"] / candles["gap"]
partial_fill = candles.dropna()
partial_fill
print("Naive partial gap fill reaches {:.2f}% on average".format(partial_fill["partial_gap_fill_percent"].mean()*100))
print(partial_fill.groupby(["gap_size"])["partial_gap_fill_percent"].quantile(0.3))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Month has no discernible effect on gap fill rate.
Step2: Monday has a slightly lower gap fill rate.
|
12,735
|
<ASSISTANT_TASK:>
Python Code:
import requests
url = 'http://www.tripadvisor.com/'
response = requests.get(url)
print(response.status_code)
#print(response.headers)
import requests
url = 'http://www.tripadvisor.com/robots.txt'
response = requests.get(url)
if response.status_code == 200:
print(response.status_code)
print(response.text)
else:
print('Failed to get a response from the url. Error code: ',resp.status_code )
import requests
url = 'http://tripadvisor.com'
response = requests.get(url)
if response.status_code == 200:
print(response.status_code)
print(response.text)
else:
print('Failed to get a response from the url. Error code: ',resp.status_code )
<h1 id="HEADING" property="name" class="heading_name ">
<div class="heading_height"></div>
"
Le Jardin Napolitain
"
</h1>
import requests
from bs4 import BeautifulSoup
scrape_url = 'https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-Cafe_Le_Dome-Paris_Ile_de_France.html'
response = requests.get(scrape_url)
print(response.status_code)
if response.status_code == 200:
soup = BeautifulSoup(response.text, 'html.parser') # Soup
print(soup.prettify)
<div class="entry">
<p class="partial_entry">
Popped in on way to Eiffel Tower for lunch, big mistake.
Pizza was disgusting and service was poor.
It’s a shame Trip Advisor don’t let you score venues zero....
<span class="taLnk ulBlueLinks" onclick="widgetEvCall('handlers.clickExpand',event,this);">More
</span>
</p>
</div>
import requests
from bs4 import BeautifulSoup
def scrapecontent(url):
    """This function parses the HTML page representing the url using the BeautifulSoup module
    and returns the created python readable data structure (soup)"""
scrape_response = requests.get(url)
print(scrape_response.status_code)
if scrape_response.status_code == 200:
soup = BeautifulSoup(scrape_response.text, 'html.parser')
return soup
else:
print('Error accessing url : ',scrape_response.status_code)
return None
def main():
scrape_url = 'https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-Cafe_Le_Dome-Paris_Ile_de_France.html'
ret_soup = scrapecontent(scrape_url)
if ret_soup:
for review in ret_soup.find_all('p', class_='partial_entry'):
print(review.text) #We are interested only in the text data, since the reviews are stored as text
main()
import requests
from bs4 import BeautifulSoup
def scrapecontent(url):
    """This function parses the HTML page representing the url using the BeautifulSoup module
    and returns the created python readable data structure (soup)"""
scrape_response = requests.get(url)
print(scrape_response.status_code)
if scrape_response.status_code == 200:
soup = BeautifulSoup(scrape_response.text, 'html.parser')
return soup
else:
print('Error accessing url : ',scrape_response.status_code)
return None
def main():
page_no = 0
while(page_no < 60):
scrape_url = 'https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-or'+str(page_no)+'-Cafe_Le_Dome-Paris_Ile_de_France.html'
ret_soup = scrapecontent(scrape_url)
if ret_soup:
for review in ret_soup.find_all('p', class_='partial_entry'):
print(review.text) #We are interested only in the text data, since the reviews are stored as text
page_no = page_no + 10
main()
#Enter your code here
import requests
from bs4 import BeautifulSoup
def scrapecontent(url):
    """This function parses the HTML page representing the url using the BeautifulSoup module
    and returns the created python readable data structure (soup)"""
scrape_response = requests.get(url)
print(scrape_response.status_code)
if scrape_response.status_code == 200:
soup = BeautifulSoup(scrape_response.text, 'html.parser')
return soup
else:
print('Error accessing url : ',scrape_response.status_code)
return None
def main():
scrape_url = 'https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-Cafe_Le_Dome-Paris_Ile_de_France.html'
ret_soup = scrapecontent(scrape_url)
if ret_soup:
for rev_data in ret_soup.find_all('div', class_= 'review-container'):
date = rev_data.find('span', class_ ='ratingDate')# Get the date if the review
print(date.text)
review = rev_data.find('p') # Get the review text
print(review.text)
rating = rev_data.find('span',class_='ui_bubble_rating') #Get the rating of the review
print(int(rating['class'][1][7:])/10)
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the '/robots.txt' file contents
Step2: Get the HTML content from the website
Step3: Scraping websites
Step4: Step 1
Step5: Step 2
Step7: Step 3
Step9: Step 4
Step10: Using yesterday's sentiment analysis code and the corpus of sentiment found in the word_sentiment.csv file, calculate the sentiment of the reviews.
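A minimal sketch of one possible approach (the word,score layout of word_sentiment.csv is an assumption here, not taken from the exercise files):
import csv
def load_sentiments(path='word_sentiment.csv'):
    # Build a {word: score} lookup table from the csv file
    sentiments = {}
    with open(path) as f:
        for word, score in csv.reader(f):
            sentiments[word.lower()] = float(score)
    return sentiments
def review_sentiment(text, sentiments):
    # Sum the scores of the known words in the review; unknown words count as 0
    return sum(sentiments.get(w, 0) for w in text.lower().split())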
Step12: Expanding this further
|
12,736
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-2', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
12,737
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pylab as pylab
train = pd.read_csv("train.csv")
train.describe()
# Cleanup Gender and Embarked
train['Sex'] = np.where(train['Sex'] == 'male', 0, 1)
train['Embarked'] = train['Embarked'].fillna('Z').map(dict(C=0, S=1, Q=2, Z=3))
# AGE -- quickly look at data
train['hasage'] = np.isnan(train['Age'])
train.hist('Age', by='Survived', bins=25)
train.groupby('Survived').mean()
# Age is missing values
train['Age'] = np.where(np.isfinite(train['Age']), train['Age'], -1)
# Remap cabin to a numeric value depending on the letter
m = {chr(i+97).upper():i for i in range(26)}
shortenmap = lambda x: m[x[0]]
train['cleancabin'] = train['Cabin'].fillna('Z').apply(shortenmap)
train['cleancabin'].hist()
# Get person title / family name
# These might be overfitting the data since the title is correlated with gender
# and family name and siblings, however they seems to add more information.
train['family'] = train['Name'].apply(lambda x: x.split(',')[0])
train['title'] = train['Name'].apply(lambda x: x.split(',')[1].split()[0])
nfamily = dict(train['family'].value_counts())
train['nfamily'] = train['family'].map(nfamily)
ntitle = {title:i for i,title in enumerate(np.unique(train['title']))}
train['ntitle'] = train['title'].map(ntitle)
train.groupby('Survived').mean()
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", 'cleancabin', 'nfamily', 'ntitle']
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostRegressor
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn import cross_validation
scores = cross_validation.cross_val_score(
LogisticRegression(random_state=0),
train[predictors],
train["Survived"],
cv=3
)
print('{:0.1f}'.format(100*scores.mean()))
scores = cross_validation.cross_val_score(
RandomForestClassifier(
random_state=0,
n_estimators=150,
min_samples_split=4,
min_samples_leaf=2
),
train[predictors],
train["Survived"],
cv=3
)
print('{:0.1f}'.format(100*scores.mean()))
scores = cross_validation.cross_val_score(
GradientBoostingClassifier(n_estimators=100,
learning_rate=1.0,
max_depth=1,
random_state=0),
train[predictors],
train["Survived"],
cv=3
)
print('{:0.1f}'.format(100*scores.mean()))
scores = cross_validation.cross_val_score(
SVC(random_state=0),
train[predictors],
train["Survived"],
cv=3
)
print('{:0.1f}'.format(100*scores.mean()))
scores = cross_validation.cross_val_score(
AdaBoostRegressor(SVC(kernel='poly', random_state=0), random_state=0, n_estimators=500, learning_rate=0.5),
train[predictors],
train["Survived"],
cv=3
)
print('{:0.1f}'.format(100*scores.mean()))
scores = cross_validation.cross_val_score(
AdaBoostClassifier(random_state=0, n_estimators=100),
train[predictors],
train["Survived"],
cv=3
)
print('{:0.1f}'.format(100*scores.mean()))
bagging = BaggingClassifier(KNeighborsClassifier(), max_samples=0.5, max_features=0.5, random_state=0)
scores = cross_validation.cross_val_score(
bagging,
train[predictors],
train["Survived"],
cv=3
)
print('{:0.1f}'.format(100*scores.mean()))
est = [('GNB', GaussianNB()),
('LR', LogisticRegression(random_state=1)),
('RFC',RandomForestClassifier(random_state=1))]
alg = BaggingClassifier(VotingClassifier(est, voting='soft'), max_samples=0.5, max_features=0.5)
scores = cross_validation.cross_val_score(
alg,
train[predictors],
train["Survived"],
cv=3
)
print('{:0.1f}'.format(100*scores.mean()))
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(train[predictors], train['Survived'])
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
ind = np.arange(len(indices))
pylab.errorbar(ind, importances[indices], yerr=std, fmt='s')
pylab.xticks(ind, [predictors[i] for i in indices], rotation='vertical')
pylab.axhline(0.0, color='orange', lw=2)
for index in indices:
    print(index, predictors[index], importances[index])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Clean Data
Step2: There is a clear difference in the age distributions between those who survived and those who did not. The table also shows differences in the mean values of the passenger class (pclass), ages, and fares. Note that a missing age is also more likely if the passenger did not survive. Rather than attempting to model the missing ages, I include a -1 age class
Step3: Feature Creation
Step4: Classify!
Step5: Logistic Regression
Step6: Random Forest
Step7: Gradient Boost
Step8: Support Vector Machine Classifier
Step9: Support Vector Machine Classifier with AdaBoost?!
Step10: AdaBoost
Step11: K Nearest Neighbors + Bagging
Step12: Voting Classifier with multiple classifiers
Step13: Measure feature Strength
|
12,738
|
<ASSISTANT_TASK:>
Python Code:
! gsutil ls gs://pyspark-workshop/so-posts
lines = sc.textFile("gs://pyspark-workshop/so-posts/*")
# or a smaller piece of them
lines = sc.textFile("gs://pyspark-workshop/so-posts/Posts.xml-*a")
lines.take(5)
rows = lines.filter(lambda x: x.lstrip().startswith('<row'))
import xml.etree.ElementTree as ET
parsed = lines.map(lambda x: x.lstrip()).filter(lambda x: x.startswith('<row')).map(lambda x: ET.fromstring(x))
from pprint import pprint
pprint(parsed.take(2))
pprint(parsed.map(lambda x: x.attrib).take(3))
def parse_tags(x):
return x[1:-1].split("><")
tags = parsed.map(lambda x: parse_tags(x.attrib['Tags']) if 'Tags' in x.attrib else [])
tags.take(5)
counts = tags.flatMap(lambda x: x).groupBy(lambda x: x).map(lambda x: (x[0], len(x[1])))
counts.sortBy(lambda x: x[1], ascending=False).take(10)
# if you hate xml (you do), then save it as json on hdfs!
import json
parsed.map(lambda x: json.dumps(x.attrib)).saveAsTextFile("posts.jsons")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's check what's inside these files...
Step2: Only proper rows with posts
Step3: Let's parse this mess...
Step4: Better
Step5: Let's compute tag counts!
Step6: Taking long? go to
Step7: Shout if you're the first one here! Congrats!
|
12,739
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer
from sklearn.svm import SVR
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
def FrankeFunction(x,y):
term1 = 0.75*np.exp(-(0.25*(9*x-2)**2) - 0.25*((9*y-2)**2))
term2 = 0.75*np.exp(-((9*x+1)**2)/49.0 - 0.1*(9*y+1))
term3 = 0.5*np.exp(-(9*x-7)**2/4.0 - 0.25*((9*y-3)**2))
term4 = -0.2*np.exp(-(9*x-4)**2 - (9*y-7)**2)
return term1 + term2 + term3 + term4
def create_X(x, y, n ):
if len(x.shape) > 1:
x = np.ravel(x)
y = np.ravel(y)
N = len(x)
l = int((n+1)*(n+2)/2) # Number of elements in beta
X = np.ones((N,l))
for i in range(1,n+1):
q = int((i)*(i+1)/2)
for k in range(i+1):
X[:,q+k] = (x**(i-k))*(y**k)
return X
# Making meshgrid of datapoints and compute Franke's function
n = 5
N = 1000
x = np.sort(np.random.uniform(0, 1, N))
y = np.sort(np.random.uniform(0, 1, N))
z = FrankeFunction(x, y)
X = create_X(x, y, n=n)
# split in training and test data
X_train, X_test, y_train, y_test = train_test_split(X,z,test_size=0.2)
svm = SVR(gamma='auto',C=10.0)
svm.fit(X_train, y_train)
# The mean squared error and R2 score
print("MSE before scaling: {:.2f}".format(mean_squared_error(svm.predict(X_test), y_test)))
print("R2 score before scaling {:.2f}".format(svm.score(X_test,y_test)))
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
print("Feature min values before scaling:\n {}".format(X_train.min(axis=0)))
print("Feature max values before scaling:\n {}".format(X_train.max(axis=0)))
print("Feature min values after scaling:\n {}".format(X_train_scaled.min(axis=0)))
print("Feature max values after scaling:\n {}".format(X_train_scaled.max(axis=0)))
svm = SVR(gamma='auto',C=10.0)
svm.fit(X_train_scaled, y_train)
print("MSE after scaling: {:.2f}".format(mean_squared_error(svm.predict(X_test_scaled), y_test)))
print("R2 score for scaled data: {:.2f}".format(svm.score(X_test_scaled,y_test)))
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)
print(X_train.shape)
print(X_test.shape)
svm = SVC(C=100)
svm.fit(X_train, y_train)
print("Test set accuracy: {:.2f}".format(svm.score(X_test,y_test)))
from sklearn.preprocessing import MinMaxScaler, StandardScaler
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
print("Feature min values before scaling:\n {}".format(X_train.min(axis=0)))
print("Feature max values before scaling:\n {}".format(X_train.max(axis=0)))
print("Feature min values before scaling:\n {}".format(X_train_scaled.min(axis=0)))
print("Feature max values before scaling:\n {}".format(X_train_scaled.max(axis=0)))
svm.fit(X_train_scaled, y_train)
print("Test set accuracy scaled data: {:.2f}".format(svm.score(X_test_scaled,y_test)))
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
svm.fit(X_train_scaled, y_train)
print("Test set accuracy scaled data: {:.2f}".format(svm.score(X_test_scaled,y_test)))
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
cancer = load_breast_cancer()
fig, axes = plt.subplots(15,2,figsize=(10,20))
malignant = cancer.data[cancer.target == 0]
benign = cancer.data[cancer.target == 1]
ax = axes.ravel()
for i in range(30):
_, bins = np.histogram(cancer.data[:,i], bins =50)
    ax[i].hist(malignant[:,i], bins = bins, alpha = 0.5)
    ax[i].hist(benign[:,i], bins = bins, alpha = 0.5)
ax[i].set_title(cancer.feature_names[i])
ax[i].set_yticks(())
ax[0].set_xlabel("Feature magnitude")
ax[0].set_ylabel("Frequency")
ax[0].legend(["Malignant", "Benign"], loc ="best")
fig.tight_layout()
plt.show()
X_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)
print(X_train.shape)
print(X_test.shape)
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
print("Test set accuracy: {:.2f}".format(logreg.score(X_test,y_test)))
from sklearn.preprocessing import MinMaxScaler, StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
logreg.fit(X_train_scaled, y_train)
#svm.fit(X_train_scaled, y_train)
print("Test set accuracy scaled data: {:.2f}".format(logreg.score(X_test_scaled,y_test)))
# X is assumed to be an (n_samples, n_features) data matrix from the surrounding discussion
X_centered = X - X.mean(axis=0)
U, s, V = np.linalg.svd(X_centered)
c1 = V.T[:, 0]
c2 = V.T[:, 1]
W2 = V.T[:, :2]
X2D = X_centered.dot(W2)
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X2D = pca.fit_transform(X)
# The first principal component is available as pca.components_.T[:, 0]
pca = PCA()
pca.fit(X)
cumsum = np.cumsum(pca.explained_variance_ratio_)
d = np.argmax(cumsum >= 0.95) + 1
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
from sklearn.decomposition import KernelPCA
rbf_pca = KernelPCA(n_components = 2, kernel="rbf", gamma=0.04)
X_reduced = rbf_pca.fit_transform(X)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple preprocessing examples, breast cancer data and classification
Step2: More on Cancer Data
Step3: Principal Component Analysis
Step4: PCA assumes that the dataset is centered around the origin. Scikit-Learn’s PCA classes take care of centering
Step5: <!-- !split -->
Step6: After fitting the PCA transformer to the dataset, you can access the principal components using the
Step7: Another very useful piece of information is the explained variance ratio of each principal component,
Step8: You could then set $n_components=d$ and run PCA again. However, there is a much better option
Step9: Incremental PCA
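No code for this step appears above, so here is a minimal sketch using scikit-learn's IncrementalPCA (the component count and number of batches are illustrative assumptions):
from sklearn.decomposition import IncrementalPCA
inc_pca = IncrementalPCA(n_components=2)
for X_batch in np.array_split(X, 10):
    inc_pca.partial_fit(X_batch) # feed the data in mini-batches instead of all at once
X_reduced = inc_pca.transform(X)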
|
12,740
|
<ASSISTANT_TASK:>
Python Code:
# Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
# Read the forward solutions with surface orientation
fwd = mne.read_forward_solution(fwd_fname)
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
leadfield = fwd['sol']['data']
print("Leadfield size : %d x %d" % leadfield.shape)
grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')
mag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)
picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)
fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)
fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)
for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):
im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',
cmap='RdBu_r')
ax.set_title(ch_type.upper())
ax.set_xlabel('sources')
ax.set_ylabel('sensors')
fig.colorbar(im, ax=ax, cmap='RdBu_r')
fig_2, ax = plt.subplots()
ax.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],
bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],
color=['c', 'b', 'k'])
fig_2.legend()
ax.set(title='Normal orientation sensitivity',
xlabel='sensitivity', ylabel='count')
grad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir,
clim=dict(lims=[0, 50, 100]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute sensitivity maps
Step2: Show gain matrix a.k.a. leadfield matrix with sensitivity map
|
12,741
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from __future__ import division, print_function
from collections import Counter, defaultdict
import re
import itertools
import random
Set = frozenset # Data will be frozensets, so they can't be mutated.
def words(text):
"All space-separated words in text."
return Set(text.split())
def phrases(text, sep='/'):
"All sep-separated phrases in text, uppercased and stripped."
return Set(p.upper().strip() for p in text.split(sep))
def mistakes(regex, winners, losers):
"The set of mistakes made by this regex in classifying winners and losers."
return ({"Should have matched: " + W
for W in winners if not re.search(regex, W)} |
{"Should not have matched: " + L
for L in losers if re.search(regex, L)})
def findregex(winners, losers, k=4):
"Find a regex that matches all winners but no losers (sets of strings)."
# Make a pool of regex parts, then pick from them to cover winners.
# On each iteration, add the 'best' part to 'solution',
# remove winners covered by best, and keep in 'pool' only parts
# that still match some winner.
pool = regex_parts(winners, losers)
solution = []
def score(p): return k * len(matches(p, winners)) - len(p)
while winners:
best = max(pool, key=score)
solution.append(best)
winners = winners - matches(best, winners)
pool = {p for p in pool if matches(p, winners)}
return OR(solution)
def matches(regex, strings):
"Return a set of all the strings that are matched by regex."
return {s for s in strings if re.search(regex, s)}
OR = '|'.join # Join a sequence of strings with '|' between them
cat = ''.join # Join a sequence of strings with nothing between them
def regex_parts(winners, losers):
"Return parts that match at least one winner, but no loser."
wholes = {'^' + w + '$' for w in winners}
parts = {d for w in wholes for p in subparts(w) for d in dotify(p)}
return wholes | {p for p in parts if not matches(p, losers)}
def subparts(word):
"Return a set of subparts of word: consecutive characters up to length 4."
return set(word[i:i+n] for i in range(len(word)) for n in (1, 2, 3, 4))
def dotify(part):
"Return all ways to replace a subset of chars in part with '.'."
choices = map(replacements, part)
return {cat(chars) for chars in itertools.product(*choices)}
def replacements(c):
"All ways to replace character c with something interesting: for now, 'c' or '.'."
return c if c in '^$' else c + '.'
def report(winners, losers):
"Find a regex to match A but not B, and vice-versa. Print summary."
solution = findregex(winners, losers)
assert not mistakes(solution, winners, losers)
print('Chars: {}, ratio: {:.1f}, inputs: {}:{}'.format(
len(solution), len(trivial(winners)) / len(solution) , len(winners), len(losers)))
return solution
def trivial(winners): return '^(' + OR(winners) + ')$'
winners = words('''washington adams jefferson jefferson madison madison monroe
monroe adams jackson jackson van-buren harrison polk taylor pierce buchanan
lincoln lincoln grant grant hayes garfield cleveland harrison cleveland mckinley
mckinley roosevelt taft wilson wilson harding coolidge hoover roosevelt
roosevelt roosevelt roosevelt truman eisenhower eisenhower kennedy johnson nixon
nixon carter reagan reagan bush clinton clinton bush bush obama obama''')
losers = words('''clinton jefferson adams pinckney pinckney clinton king adams
jackson adams clay van-buren van-buren clay cass scott fremont breckinridge
mcclellan seymour greeley tilden hancock blaine cleveland harrison bryan bryan
parker bryan roosevelt hughes cox davis smith hoover landon wilkie dewey dewey
stevenson stevenson nixon goldwater humphrey mcgovern ford carter mondale
dukakis bush dole gore kerry mccain romney''') - winners
boys = words('jacob mason ethan noah william liam jayden michael alexander aiden')
girls = words('sophia emma isabella olivia ava emily abigail mia madison elizabeth')
pharma = words('lipitor nexium plavix advair ablify seroquel singulair crestor actos epogen')
cities = words('paris trinidad capetown riga zurich shanghai vancouver chicago adelaide auckland')
foo = words('''afoot catfoot dogfoot fanfoot foody foolery foolish fooster footage foothot footle footpad footway
hotfoot jawfoot mafoo nonfood padfoot prefool sfoot unfool''')
bar = words('''Atlas Aymoro Iberic Mahran Ormazd Silipan altared chandoo crenel crooked fardo folksy forest
hebamic idgah manlike marly palazzi sixfold tarrock unfold''')
nouns = words('''time year people way day man thing woman life child world school
state family student group country problem hand part place case week company
system program question work government number night point home water room
mother area money story fact month lot right study book eye job word business
issue side kind head house service friend father power hour game line end member
law car city community name president team minute idea kid body information
back parent face others level office door health person art war history party result
change morning reason research girl guy moment air teacher force education''')
adverbs = words('''all particularly just less indeed over soon course still yet before
certainly how actually better to finally pretty then around very early nearly now
always either where right often hard back home best out even away enough probably
ever recently never however here quite alone both about ok ahead of usually already
suddenly down simply long directly little fast there only least quickly much forward
today more on exactly else up sometimes eventually almost thus tonight as in close
clearly again no perhaps that when also instead really most why ago off
especially maybe later well together rather so far once''') - nouns
verbs = words('''ask believe borrow break bring buy can be able cancel change clean
comb complain cough count cut dance draw drink drive eat explain fall
fill find finish fit fix fly forget give go have hear hurt know learn
leave listen live look lose make do need open close shut organise pay
play put rain read reply run say see sell send sign sing sit sleep
smoke speak spell spend stand start begin study succeed swim take talk
teach tell think translate travel try turn off turn on type understand
use wait wake up want watch work worry write''') - nouns
randoms = Set(vars(random))
builtins = Set(vars(__builtin__)) - randoms
starwars = phrases('''The Phantom Menace / Attack of the Clones / Revenge of the Sith /
A New Hope / The Empire Strikes Back / Return of the Jedi''')
startrek = phrases('''The Wrath of Khan / The Search for Spock / The Voyage Home /
The Final Frontier / The Undiscovered Country / Generations / First Contact /
Insurrection / Nemesis''')
dogs = phrases('''Labrador Retrievers / German Shepherd Dogs / Golden Retrievers / Beagles / Bulldogs /
Yorkshire Terriers / Boxers / Poodles / Rottweilers / Dachshunds / Shih Tzu / Doberman Pinschers /
Miniature Schnauzers / French Bulldogs / German Shorthaired Pointers / Siberian Huskies / Great Danes /
Chihuahuas / Pomeranians / Cavalier King Charles Spaniels / Shetland Sheepdogs / Australian Shepherds /
Boston Terriers / Pembroke Welsh Corgis / Maltese / Mastiffs / Cocker Spaniels / Havanese /
English Springer Spaniels / Pugs / Brittanys / Weimaraners / Bernese Mountain Dogs / Vizslas / Collies /
West Highland White Terriers / Papillons / Bichons Frises / Bullmastiffs / Basset Hounds /
Rhodesian Ridgebacks / Newfoundlands / Russell Terriers / Border Collies / Akitas /
Chesapeake Bay Retrievers / Miniature Pinschers / Bloodhounds / St. Bernards / Shiba Inu / Bull Terriers /
Chinese Shar-Pei / Soft Coated Wheaten Terriers / Airedale Terriers / Portuguese Water Dogs / Whippets /
Alaskan Malamutes / Scottish Terriers / Australian Cattle Dogs / Cane Corso / Lhasa Apsos /
Chinese Crested / Cairn Terriers / English Cocker Spaniels / Dalmatians / Italian Greyhounds /
Dogues de Bordeaux / Samoyeds / Chow Chows / German Wirehaired Pointers / Belgian Malinois /
Great Pyrenees / Pekingese / Irish Setters / Cardigan Welsh Corgis / Staffordshire Bull Terriers /
Irish Wolfhounds / Old English Sheepdogs / American Staffordshire Terriers / Bouviers des Flandres /
Greater Swiss Mountain Dogs / Japanese Chin / Tibetan Terriers / Brussels Griffons /
Wirehaired Pointing Griffons / Border Terriers / English Setters / Basenjis / Standard Schnauzers /
Silky Terriers / Flat-Coated Retrievers / Norwich Terriers / Afghan Hounds / Giant Schnauzers / Borzois /
Wire Fox Terriers / Parson Russell Terriers / Schipperkes / Gordon Setters / Treeing Walker Coonhounds''')
cats = phrases('''Abyssinian / Aegean cat / Australian Mist / American Curl / American Bobtail /
American Polydactyl / American Shorthair / American Wirehair / Arabian Mau / Asian / Asian Semi-longhair /
Balinese / Bambino / Bengal / Birman / Bombay / Brazilian Shorthair / British Shorthair / British Longhair /
Burmese / Burmilla / California Spangled Cat / Chantilly/Tiffany / Chartreux / Chausie / Cheetoh /
Colorpoint Shorthair / Cornish Rex / Cymric / Cyprus cat / Devon Rex / Donskoy or Don Sphynx / Dragon Li /
Dwelf / Egyptian Mau / European Shorthair / Exotic Shorthair / German Rex / Havana Brown / Highlander /
Himalayan-Colorpoint Persian / Japanese Bobtail / Javanese / Khao Manee / Korat / Korn Ja /
Kurilian Bobtail / LaPerm / Maine Coon / Manx / Mekong bobtail / Minskin / Munchkin / Nebelung / Napoleon /
Norwegian Forest Cat / Ocicat / Ojos Azules / Oregon Rex / Oriental Bicolor / Oriental Shorthair /
Oriental Longhair / Persian / Peterbald / Pixie-bob / Ragamuffin / Ragdoll / Russian Blue / Russian Black /
Sam Sawet / Savannah / Scottish Fold / Selkirk Rex / Serengeti cat / Serrade petit / Siamese / Siberian /
Singapura / Snowshoe / Sokoke / Somali / Sphynx / Swedish forest cat / Thai / Tonkinese / Toyger /
Turkish Angora / Turkish Van / Ukrainian Levkoy / York Chocolate Cat''')
movies = phrases('''Citizen Kane / The Godfather / Vertigo / 2001: A Space Odyssey / The Searchers / Sunrise /
Singin’ in the Rain / Psycho / Casablanca / The Godfather Part II / The Magnificent Ambersons / Chinatown /
North by Northwest / Nashville / The Best Years of Our Lives / McCabe & Mrs Miller / The Gold Rush /
City Lights / Taxi Driver / Goodfellas / Mulholland Drive / Greed / Annie Hall / The Apartment /
Do the Right Thing / Killer of Sheep / Barry Lyndon / Pulp Fiction / Raging Bull / Some Like It Hot /
A Woman Under the Influence / The Lady Eve / The Conversation / The Wizard of Oz / Double Indemnity /
Star Wars / Imitation of Life / Jaws / The Birth of a Nation / Meshes of the Afternoon / Rio Bravo /
Dr Strangelove / Letter from an Unknown Woman / Sherlock Jr / The Man Who Shot Liberty Valance /
It’s a Wonderful Life / Marnie / A Place in the Sun / Days of Heaven / His Girl Friday / Touch of Evil /
The Wild Bunch / Grey Gardens / Sunset Boulevard / The Graduate / Back to the Future / Crimes and Misdemeanors /
The Shop Around the Corner / One Flew Over the Cuckoo’s Nest / Blue Velvet / Eyes Wide Shut / The Shining /
Love Streams / Johnny Guitar / The Right Stuff / Red River / Modern Times / Notorious / Koyaanisqatsi /
The Band Wagon / Groundhog Day / The Shanghai Gesture / Network / Forrest Gump /
Close Encounters of the Third Kind / The Empire Strikes Back / Stagecoach / Schindler’s List /
The Tree of Life / Meet Me in St Louis / Thelma & Louise / Raiders of the Lost Ark / Bringing Up Baby /
Deliverance / Night of the Living Dead / The Lion King / Eternal Sunshine of the Spotless Mind /
West Side Story / In a Lonely Place / Apocalypse Now / ET: The Extra-Terrestrial / The Night of the Hunter /
Mean Streets / 25th Hour / Duck Soup / The Dark Knight / Gone With the Wind / Heaven’s Gate / 12 Years a Slave /
Ace in the Hole''')
tv = phrases('''The Abbott and Costello Show / ABC’s Wide World of Sports / Alfred Hitchcock Presents /
All in the Family / An American Family / American Idol / Arrested Development / Battlestar Galactica /
The Beavis and Butt-Head Show / The Bob Newhart Show / Brideshead Revisited / Buffalo Bill /
Buffy the Vampire Slayer / The Carol Burnett Show / The CBS Evening News with Walter Cronkite /
A Charlie Brown Christmas / Cheers / The Cosby Show / The Daily Show / Dallas / The Day After /
Deadwood / The Dick Van Dyke Show / Dragnet / The Ed Sullivan Show / The Ernie Kovacs Show /
Felicity / Freaks and Geeks / The French Chef / Friends / General Hospital /
The George Burns and Gracie Allen Show / Gilmore Girls / Gunsmoke / Hill Street Blues /
Homicide: Life on the Street / The Honeymooners / I, Claudius / I Love Lucy / King of the Hill /
The Larry Sanders Show / Late Night with David Letterman / Leave It to Beaver / Lost /
Married with Children / Mary Hartman, Mary Hartman / The Mary Tyler Moore Show / MASH / The Monkees /
Monty Python’s Flying Circus / Moonlighting / My So-Called Life / Mystery Science Theater 3000 /
The Odd Couple / The Office / The Oprah Winfrey Show / Pee Wee’s Playhouse / Playhouse 90 /
The Price Is Right / Prime Suspect / The Prisoner / The Real World / Rocky and His Friends / Roots /
Roseanne / Sanford and Son / Saturday Night Live / Second City Television / See It Now / Seinfeld /
Sesame Street / Sex and the City / The Shield / The Simpsons / The Singing Detective / Six Feet Under /
60 Minutes / Soap / The Sopranos / South Park / SpongeBob SquarePants / SportsCenter / Star Trek /
St Elsewhere / The Super Bowl / Survivor / Taxi /The Tonight Show Starring Johnny Carson /
24 / The Twilight Zone / Twin Peaks / The West Wing / What’s My Line / WKRP in Cincinnati /
The Wire / Wiseguy / The X-Files''')
stars = phrases('''Humphrey Bogart / Cary Grant / James Stewart / Marlon Brando / Fred Astaire / Henry Fonda /
Clark Gable / James Cagney / Spencer Tracy / Charlie Chaplin / Gary Cooper / Gregory Peck / John Wayne /
Laurence Olivier / Gene Kelly / Orson Welles / Kirk Douglas / James Dean / Burt Lancaster / Marx Brothers /
Buster Keaton / Sidney Poitier / Robert Mitchum / Edward G. Robinson / William Holden / Katharine Hepburn /
Bette Davis / Audrey Hepburn / Ingrid Bergman / Greta Garbo / Marilyn Monroe / Elizabeth Taylor / Judy Garland /
Marlene Dietrich / Joan Crawford / Barbara Stanwyck / Claudette Colbert / Grace Kelly / Ginger Rogers /
Mae West / Vivien Leigh / Lillian Gish / Shirley Temple / Rita Hayworth / Lauren Bacall / Sophia Loren /
Jean Harlow / Carole Lombard / Mary Pickford / Ava Gardner''')
scientists = phrases('''Alain Aspect / Martin Karplus / David Baltimore / Donald Knuth / Allen Bard /
Robert Marks II / Timothy Berners-Lee / Craig Mello / John Tyler Bonner / Luc Montagnier / Dennis Bray /
Gordon Moore / Sydney Brenner / Kary Mullis / Pierre Chambon / C Nüsslein-Volhard / Simon Conway Morris /
Seiji Ogawa / Mildred Dresselhaus / Jeremiah Ostriker / Gerald M Edelman / Roger Penrose / Ronald Evans /
Stanley Prusiner / Anthony Fauci / Henry F Schaefer III / Anthony Fire / Thomas Südhof / Jean Frechet /
Jack Szostak / Margaret Geller / James Tour / Jane Goodall / Charles Townes / Alan Guth / Harold Varmus /
Lene Vestergaard Hau / Craig Venter / Stephen Hawking / James Watson / Peter Higgs / Steven Weinberg /
Leroy Hood / George Whitesides / Eric Kandel / Edward Wilson / Andrew Knoll / Edward Witten / Charles Kao /
Shinya Yamanaka''')
solution = findregex(starwars, startrek)
solution
not mistakes(solution, starwars, startrek)
%timeit findregex(adverbs, nouns)
import cProfile
cProfile.run('findregex(adverbs, nouns)', sort='cumulative')
def matches(regex, strings):
"Return a set of all the strings that are matched by regex."
searcher = re.compile(regex).search
return set(filter(searcher, strings))
re.purge()
%timeit findregex(adverbs, nouns)
re.purge()
%timeit regex_parts(adverbs, nouns)
def regex_parts(winners, losers):
"Return parts that match at least one winner, but no loser."
losers_str = '\n'.join(losers)
def no_losers(part): return not re.compile(part, re.MULTILINE).search(losers_str)
wholes = {'^' + w + '$' for w in winners}
parts = {d for w in wholes for p in subparts(w) for d in dotify(p)}
return wholes | set(filter(no_losers, parts))
re.purge()
%timeit regex_parts(adverbs, nouns)
def findregex(winners, losers, k=4):
"Find a regex that matches all winners but no losers (sets of strings)."
# Make a pool of regex parts, then pick from them to cover winners.
# On each iteration, add the 'best' part to 'solution',
# remove winners covered by best, and keep in 'pool' only parts
# that still match some winner.
pool = regex_parts(winners, losers)
solution = []
def score(p): return k * len(matches(p, winners)) - len(p)
while winners:
best = max(pool, key=score)
solution.append(best)
winners = winners - matches(best, winners)
pool = {p for p in pool if matches(p, winners)}
return OR(solution)
re.purge()
%timeit findregex(adverbs, nouns)
def findregex(winners, losers, k=4):
"Find a regex that matches all winners but no losers (sets of strings)."
# Initialize covers = {regex: {winner,...}} for a large set of regex components.
# On each iteration, add the 'best' component to 'solution',
# remove winners covered by best, and keep in 'pool' only components
# that still match some winner.
covers = regex_covers(winners, losers)
pool = list(covers)
solution = []
def score(p): return k * len(covers[p] & winners) - len(p)
while winners:
best = max(pool, key=score)
solution.append(best)
winners = winners - covers[best]
pool = [p for p in pool
if not covers[p].isdisjoint(winners)]
return OR(solution)
def regex_covers(winners, losers):
"""Generate regex parts and return a dict of {regex: {winner,...}}.
Each regex matches at least one winner and no loser."""
losers_str = '\n'.join(losers)
wholes = {'^' + w + '$' for w in winners}
parts = {d for w in wholes for p in subparts(w) for d in dotify(p)}
pool = wholes | parts
searchers = {p: re.compile(p, re.MULTILINE).search for p in pool}
return {p: Set(filter(searchers[p], winners))
for p in pool
if not searchers[p](losers_str)}
re.purge()
%timeit findregex(adverbs, nouns)
EXAMPLES = [ex for example in [
(winners, 'win', 'lose', losers),
(boys, 'boy', 'girl', girls),
(pharma, 'drug', 'city', cities),
(foo, 'foo', 'bar', bar),
(starwars, 'wars', 'trek', startrek),
(nouns, 'noun', 'adj', adverbs),
(nouns, 'noun', 'verb', verbs),
(randoms, 'rand', 'built', builtins),
(dogs, 'dog', 'cat', cats),
(movies, 'movie', 'tv', tv),
(scientists, 'sci', 'star', stars)]
for ex in (example, example[::-1])] # Do each example both ways
SOLUTION = {} # A cached table of solutions; SOLUTION[W, L] will hold a regex
def benchmark(examples=EXAMPLES):
"Run examples; print summaries; return total of solution lengths."
totalchars = 0
for (W, Wname, Lname, L) in examples:
re.purge()
solution = SOLUTION[W, L] = findregex(W, L)
assert not mistakes(solution, W, L)
legend = '{}({}):{}({})'.format(Wname, len(W), Lname, len(L))
print('{:20} {:3}: "{}"'.format(legend, len(solution), truncate(solution, 50)))
totalchars += len(solution)
print('Total characters: {:6}'.format(totalchars))
return totalchars
def truncate(text, nchars, ellipsis=' ...'):
"Return whole string, or version truncated to nchars."
return text if len(text) < nchars else text[:nchars-len(ellipsis)] + ellipsis
%time benchmark()
parts = [p for regex in SOLUTION.values() for p in regex.split('|')]
lengths = [len(p) for p in parts]
Counter(lengths)
max(parts, key=len)
Counter(cat(parts)).most_common(20)
Counter(parts).most_common(20)
def show(W, L, N=10):
"Summarize the links between the set of winners, W, and its parts."
covers = regex_covers(W, L)
inv = invert_multimap(covers)
for ((n1, w), (n2, r)) in zip(top(N, covers), top(N, inv)):
print("{:8} {:2} | {:3}| {}".format(w, n1, n2, r))
print("TOTAL %5d | %3d TOTAL" % (len(covers), len(W)))
plt.subplot(1, 2, 1); histmm(covers, "parts", "winners")
plt.subplot(1, 2, 2); histmm(inv, "winners", "parts")
def top(N, multimap):
"The top N longest items in a dict of {key: {val,...})"
return sorted([(len(vals), key) for (key, vals) in multimap.items()], reverse=True)[:N]
def histmm(multimap, key, val):
"Display a histogram of how many values each key has."
plt.rcParams['figure.figsize'] = (8.0, 4.0)
plt.hist([len(v) for v in multimap.values()])
plt.xlabel('set size of {' + val + ',...}')
plt.ylabel('number of ' + key + ' mapping to that size')
def invert_multimap(multimap):
"Covert {key: {val,...}} to {val: [key,...]}."
result = defaultdict(set)
for key in multimap:
for val in multimap[key]:
result[val].add(key)
return result
show(winners, losers)
show(dogs, cats)
show(adverbs, nouns)
# Pseudocode
def greedy_search(parts, partial_solution=None):
if is_complete(partial_solution):
return partial_solution
else:
best = max(parts, key=score)
return greedy_search(parts - {best}, partial_solution + best)
# Pseudocode
def exhaustive_search(parts, partial_solution=None):
if is_complete(partial_solution):
return partial_solution
else:
best = max(parts, key=score)
return min(exhaustive_search(parts - {best}, partial_solution + best),
exhaustive_search(parts - {best}, partial_solution),
key=cost)
# Pseudocode
def branch_and_bound_search(parts, partial_solution=None):
if is_complete(partial_solution):
CHEAPEST = min(CHEAPEST, partial_solution, key=cost)
elif cost(partial_solution) < cost(CHEAPEST):
best = select_best(parts)
branch_and_bound_search(parts - {best}, partial_solution + best)
branch_and_bound_search(parts - {best}, partial_solution)
return CHEAPEST
class BranchBound(object):
"Hold state information for a branch and bound search."
def __init__(self, winners, max_num_calls, k=4):
self.cheapest = trivial(winners)
self.calls = max_num_calls
self.k = k
def search(self, covers, partial=None):
"""Recursively extend partial regex until it matches all winners in covers.
Try all reasonable combinations until we run out of calls."""
if self.calls <= 0:
return self.cheapest
self.calls -= 1
covers, partial = simplify_covers(covers, partial)
if not covers: # Nothing left to cover; solution is complete
self.cheapest = min(partial, self.cheapest, key=len)
elif len(OR(partial, min(covers, key=len))) < len(self.cheapest):
def score(p): return self.k * len(covers[p]) - len(p)
best = max(covers, key=score) # Best part
covered = covers[best] # Set of winners covered by best
covers.pop(best)
# Try with and without the greedy-best part
self.search({c:covers[c]-covered for c in covers}, OR(partial, best))
self.search(covers, partial)
return self.cheapest
def simplify_covers(covers, partial=None):
"Eliminate dominated regexes, and select ones that uniquely cover a winner."
previous = None
while covers != previous:
previous = covers
covers = eliminate_dominated(covers)
covers, necessary = select_necessary(covers)
partial = OR(partial, necessary)
return covers, partial
def OR(*regexes):
"""OR together component regexes. Ignore 'None' components.
Allows both OR(a, b, c) and OR([a, b, c]), similar to max."""
if len(regexes) == 1:
regexes = regexes[0]
return '|'.join(r for r in regexes if r)
def eliminate_dominated(covers):
"""Given a dict of {regex: {winner...}}, make a new dict with only the regexes
that are not dominated by any others. A regex part p is dominated by p2 if p2 covers
a superset of the matches covered by p, and p2 is no longer than p."""
newcovers = {}
def signature(p): return (-len(covers[p]), len(p))
for p in sorted(covers, key=signature):
if not covers[p]: break # All remaining parts cover nothing
# p goes in newcovers if it is not dominated by any other regex
if not any(covers[p2] >= covers[p] and len(p2) <= len(p)
for p2 in newcovers):
newcovers[p] = covers[p]
return newcovers
def select_necessary(covers):
"""Select winners covered by only one part; remove from covers.
Return a pair of (covers, necessary)."""
counts = Counter(w for p in covers for w in covers[p])
necessary = {p for p in covers if any(counts[w] == 1 for w in covers[p])}
if necessary:
covered = {w for p in necessary for w in covers[p]}
covers = {p: covers[p] - covered
for p in covers if p not in necessary}
return covers, OR(necessary)
else:
return covers, None
def test_bb():
assert OR(['a', 'b', 'c']) == OR('a', 'b', 'c') == 'a|b|c'
assert OR(['a|b', 'c|d']) == OR('a|b', 'c|d') == 'a|b|c|d'
assert OR(None, 'c') == 'c'
covers1 = {'a': {'ann', 'abe'}, 'ab': {'abe'}}
assert eliminate_dominated(covers1) == {'a': {'ann', 'abe'}}
assert simplify_covers(covers1) == ({}, 'a')
covers2 = {'a': {'abe', 'cab'}, 'b': {'abe', 'cab', 'bee'},
'c': {'cab', 'cee'}, 'c.': {'cab', 'cee'}, 'abe': {'abe'},
'ab': {'abe', 'cab'}, '.*b': {'abe', 'cab', 'bee'},
'e': {'abe', 'bee', 'cee'}, 'f': {}, 'g': {}}
assert eliminate_dominated(covers2) == simplify_covers(covers2)[0] == {
'c': {'cab', 'cee'}, 'b': {'cab', 'abe', 'bee'}, 'e': {'cee', 'abe', 'bee'}}
covers3 = {'1': {'w1'}, '.1': {'w1'}, '2': {'w2'}}
assert eliminate_dominated(covers3) == {'1': {'w1'}, '2': {'w2'}}
assert simplify_covers(covers3) in (({}, '2|1'), ({}, '1|2'))
covers, nec = select_necessary({'a': {'abe'}, 'c': {'cee'}})
assert covers == {} and (nec == 'c|a' or nec == 'a|c')
assert {0, 1, 2} >= {1, 2}
assert {1, 2} >= {1, 2}
assert not ({1, 2, 4} >= {1, 3})
return 'test_bb passes'
test_bb()
covers = regex_covers(winners, losers)
len(covers)
covers2, partial = simplify_covers(covers)
len(covers2)
1 - len(covers2)/len(covers)
simplify_covers(covers)
def bbenchmark(examples=EXAMPLES, calls=10**4):
"Run these data sets; print summaries; return total of solution lengths."
totalchars = 0
for (W, Wname, Lname, L) in examples: # Do both ways
re.purge()
bb = bb_findregex(W, L, calls)
SOLUTION[W, L] = bb.cheapest
assert not mistakes(bb.cheapest, W, L)
legend = '{}({}):{}({})'.format(Wname, len(W), Lname, len(L))
print('{:20} {:6} calls, {:3}: "{}"'.format(
legend, (calls - bb.calls), len(bb.cheapest), truncate(bb.cheapest, 45)))
totalchars += len(bb.cheapest)
return totalchars
def bb_findregex(winners, losers, calls=10**4):
"Return a BranchBound object which contains the shortest regex that covers winners but not losers."
bb = BranchBound(winners, calls)
bb.search(regex_covers(winners, losers))
return bb
def findregex(winners, losers): return bb_findregex(winners, losers).cheapest
%time bbenchmark(calls=1000)
%time bbenchmark(calls=10000)
%time bbenchmark(calls=100000)
def regex_covers(winners, losers):
"""Generate regex components and return a dict of {regex: {winner...}}.
Each regex matches at least one winner and no loser."""
losers_str = '\n'.join(losers)
wholes = {'^'+winner+'$' for winner in winners}
parts = {d for w in wholes for p in subparts(w) for d in dotify(p)}
reps = {r for p in parts for r in repetitions(p)}
pool = wholes | parts | pairs(winners) | reps
searchers = {p:re.compile(p, re.MULTILINE).search for p in pool}
return {p: Set(filter(searchers[p], winners))
for p in pool
if not searchers[p](losers_str)}
def pairs(winners, special_chars=Set('*+?^$.[](){}|\\')):
chars = Set(cat(winners)) - special_chars
return {A+'.'+q+B
for A in chars for B in chars for q in '*+?'}
def repetitions(part):
"""Return a set of strings derived by inserting a single repetition character
('+' or '*' or '?'), after each non-special character.
Avoid redundant repetition of dots."""
splits = [(part[:i], part[i:]) for i in range(1, len(part)+1)]
return {A + q + B
for (A, B) in splits
# Don't allow '^*' nor '$*' nor '..*' nor '.*.'
if not (A[-1] in '^$')
if not A.endswith('..')
if not (A.endswith('.') and B.startswith('.'))
for q in '*+?'}
def test_new_parts():
assert repetitions('a') == {'a+', 'a*', 'a?'}
assert repetitions('ab') == {'a+b', 'a*b', 'a?b',
'ab+', 'ab*', 'ab?'}
assert repetitions('a.c') == {'a+.c', 'a*.c', 'a?.c',
'a.c+', 'a.*c', 'a.?c',
'a.+c', 'a.c*', 'a.c?'}
assert repetitions('^a..d$') == {'^a+..d$', '^a*..d$', '^a?..d$',
'^a..d+$', '^a..d*$', '^a..d?$'}
assert pairs({'ab', 'c'}) == {
'a.*a', 'a.*b', 'a.*c',
'a.+a', 'a.+b', 'a.+c',
'a.?a', 'a.?b', 'a.?c',
'b.*a', 'b.*b', 'b.*c',
'b.+a', 'b.+b', 'b.+c',
'b.?a', 'b.?b', 'b.?c',
'c.*a', 'c.*b', 'c.*c',
'c.+a', 'c.+b', 'c.+c',
'c.?a', 'c.?b','c.?c'}
assert len(pairs({'1...2...3', '($2.34)', '42', '56', '7-11'})) == 8 * 8 * 3
covers = regex_covers({'one', 'on'}, {'won', 'wuan', 'juan'})
assert (eliminate_dominated(covers) == {'e': {'one'}, '^o': {'on', 'one'}})
return 'test_new_parts passes'
test_new_parts()
findregex(starwars, startrek)
starwars = starwars | {'THE FORCE AWAKENS'}
startrek = startrek | {'BEYOND'}
findregex(starwars, startrek)
covers = regex_covers(winners, losers)
len(covers)
covers2, partial = simplify_covers(covers)
len(covers2)
SUBPARTS = 5 # Maximum length of a subpart
def subparts(word):
"Return a set of subparts of word, consecutive characters up to length 5."
return set(word[i:i+1+s] for i in range(len(word)) for s in range(SUBPARTS))
covers = regex_covers(winners, losers)
len(covers)
covers2, partial = simplify_covers(covers)
len(covers2)
%time bbenchmark(calls=1000)
matches('H.*N.S', dogs)
%time bbenchmark(calls=10000)
def bb_findregex(winners, losers, calls=10000, restarts=10):
"Find the shortest disjunction of regex components that covers winners but not losers."
bb = BranchBoundRandomRestart(winners, calls)
covers = eliminate_dominated(regex_covers(winners, losers))
for _ in range(restarts):
bb.calls = calls
bb.search(covers.copy())
if bb.calls > 0: # If search was not cut off, we have optimal solution
return bb
return bb
class BranchBoundRandomRestart(BranchBound):
def search(self, covers, partial=None):
"""Recursively extend partial regex until it matches all winners in covers.
Try all reasonable combinations until we run out of calls."""
if self.calls <= 0:
return partial, covers
self.calls -= 1
covers, partial = simplify_covers(covers, partial)
if not covers: # Nothing left to cover; solution is complete
self.cheapest = min(partial, self.cheapest, key=len)
elif len(OR(partial, min(covers, key=len))) < len(self.cheapest):
# Try with and without the greedy-best component
K = random.choice((2, 3, 4, 4, 4, 5, 6))
F = random.choice((0.1, 0.1, 2.0))
def score(c): return K * len(covers[c]) - len(c) + random.uniform(0., F)
best = max(covers, key=score) # Best component
covered = covers[best] # Set of winners covered by r
covers.pop(best)
self.search({c:covers[c]-covered for c in covers}, OR(partial, best))
self.search(covers, partial)
return self.cheapest
%time bbenchmark(calls=100)
%time bbenchmark(calls=1000)
%time bbenchmark(calls=10000)
not_foo = '^(?!.*foo)'
not mistakes(not_foo, bar, foo)
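# Added sanity check of the negative-lookahead trick: '^(?!.*foo)' matches a
# line only when it does NOT contain 'foo' anywhere.
assert re.search('^(?!.*foo)', 'altared')        # a 'bar' word: matches
assert not re.search('^(?!.*foo)', 'catfoot')    # a 'foo' word: no match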
def consider_negative_lookahead(W, L):
"Return either SOLUTION[W, L] or negative lookup of SOLUTION[L, W], whichever is shorter."
solution = min(SOLUTION[W, L], '^(?!.*(' + SOLUTION[L, W] + '))',
key=len)
assert not mistakes(solution, W, L)
return solution
%time sum(len(consider_negative_lookahead(W, L)) for (W, _, _, L) in EXAMPLES)
# Algorithm Line Times Total character counts
DATA = [('Greedy', 'gD-', [6], [1740]),
('BB (1/10/100K)', 'bo-', [8, 21, 196], [1676, 1674, 1669]),
('BB + Parts (1/10K)', 'kd-', [272, 288], [1595, 1587]),
('BB Restart (1/10/100K)', 'rs-', [289, 303, 493], [1560, 1556, 1547]),
('BB Restart NegLook (100K)','m*-', [493], [1426])]
def display(data=DATA):
fig = plt.figure()
ax = fig.add_subplot(111)
for (label, line, times, counts) in data:
plt.plot(times, counts, line, label=label)
x, y = times[-1], counts[-1]
offset = (-22, -15) if line in ('rs-', 'm*-') else (10, -5)
ax.annotate(str(y), xy=(x, y), xytext=offset, textcoords='offset points')
plt.xlabel('Time (seconds)'); plt.ylabel('Solution size (total chars)')
plt.legend(loc='lower left');
display()
cProfile.run('findregex(adverbs, nouns)', sort='cumulative')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here are some "arbitrary lists" (see panel two of the comic) which we will be using to test out the code.
Step2: And here we show how it works
Step3: Plan of Attack
Step4: On my computer it was 4 seconds. I have some ideas for how to make this faster, but I know that I shouldn't waste effort trying to speed up parts of a program that don't take much of the total time. I'll use the cProfile module to see where the time goes
Step5: About 99% of the time was spent in matches. And most of the time in matches goes to re.search. So my thoughts are
Step6: One thing to be careful about
Step7: That was almost twice as fast! Not bad for changing two lines.
Step8: We've made regex_parts almost two times faster.
Step10: Notice that we call matches twice (once within score) for every part in the pool on every iteration of the main loop. If there are 50 iterations of the loop, and 1000 parts, that's 100,000 calls to matches. Instead of repeating all these calls, I propose that, for each part in the pool, we limit ourselves to this
Step11: Wow! That's a dozen times faster! But I don't want to draw too many conclusions from just this one example.
Step12: In 5 seconds we solved 22 problems, getting a score of 1749 for the total number of characters in all the solutions.
Step13: This says that there are 17 parts of length 1, 198 parts of length 2, etc. There's also one part of length 10; that's such an outlier, I wonder if it is an error?
Step14: I happen to own a Havanese, but I don't think the program knew that. Rather, the issue is that in the set of cats we have 'HAVANA BROWN' and 'JAVANESE', which means that any 4-letter-or-less subsequence of 'HAVANESE' matches one of these two. So the only component that doesn't match one of these two cats is the whole, '^HAVANESE$'.
Step15: I wonder if there are parts that appear in many solutions?
Step16: Most parts appear in only 2 solutions or less.
Step17: What does this say? First the table says that there are only three regexes that have 4 winners; all the rest have 3 or fewer winners. So the links going left-to-right are rather sparse. There are more links going the other direction; 110 to van-buren, for example. The histograms give the whole picture mapping set sizes to the number of keys that have that set size. So, on the left we see that over 1400 regexes have a set size of one winner, and 100 or fewer have set sizes of 2 and 3, and we saw that only 3 regexes have a set size of 4 winners. On the right, the histogram shows a wide spread of winners that have set sizes between 1 and 110 regexes.
Step18: My immediate reaction to this is "there are a lot of retrievers and terriers." All ten of the parts in the table recognize this fact, but the left-hand histogram shows that almost all the parts match fewer than 5 dog breeds. In contrast, the right-hand histogram shows that most of the dog breeds have 9 or more parts that match them.
Step19: The pattern looks similar here. Almost all the parts match only one winner, but most of the winners are matched by many parts. What can I conclude from this?
Step20: An exhaustive search considers every possible choice of parts, and selects the best solution. On each iteration exhaustive search picks a part (just like greedy search), but then it considers both using the part and not using the part. You can see that exhaustive search is almost identical to greedy search, except that it has two recursive calls (on lines 7 and 8) instead of one (on line 7). (If you are viewing this in an IPython notebook, not just a web page, you can toggle line numbers by pressing 'ctrl-M L' within a cell.) How do we choose between the results of the two calls? We need a cost function that we are trying to minimize. (For regex golf the cost of a solution is the length of the string.)
Step21: Here's an interesting piece of trivia
Step23: Searching
Step24: Simplifying Covers
Step26: For convenience we modified the function OR slightly; we made it work just like the function max in that you can call it two ways
Step28: Simplifying Covers
Step30: Simplifying Covers
Step31: Testing and Benchmarking
Step32: Let's investigate how much the cover simplification process is helping
Step33: We see that simplify_covers gives us a 97% reduction in the number of parts! What do the remaining covers look like?
Step34: Now let's benchmark branch and bound. I'm going to introduce a modified version of benchmark, called bbenchmark, to print the number of calls taken. The easiest way to do that is to introduce a new version of findregex, called bb_findregex, that returns the BranchBound object it creates, rather than returning the solution. (In general, I tend to introduce a new function when I change the call/return signature, but not when I just improve the mechanism.)
Step35: Remember, the old benchmark took about 5 seconds and totaled 1749 characters. Let's see how the branch and bound benchmark compares
Step36: This is encouraging! We took about 2 seconds longer, but improved on the total number of characters by about 4%. For 10 of the 22 cases we didn't reach the cutoff of 1000 calls, which means that we searched every feasible combination of the parts we have. Let's try with 10,000 calls
Step37: We doubled the time and gained a few characters. How about 100,000 calls?
Step40: We're slowly decreasing the total number of characters, but at the expense of increasing run time. I think if we want to make more progress, it is time to stop worrying about the search algorithm, and start worrying about what we're searching over.
Step41: Let's take our new parts out for a spin and see what they can do
Step42: Awesome!
Step43: Too bad; back to ten characters.
Step44: So the total number of regexes is increased almost 10-fold, but after simplification the number is only double what it was before. (To put it another way, simplify_covers eliminated 99.3% of the parts.) That's not too bad; let's add more components.
Step45: But how many more regex components does this give us?
Step46: In the end the longer parts add only 22 new components, rejecting nearly 40,000 other parts. It is time to benchmark again. Remember our previous best was Branch and Bound with 100,000 calls, which yielded 1669 total characters in 170 seconds.
Step47: We reduced the total characters by about 4%, although it did take longer, even with only 1,000 calls. And notice we no longer need '^HAVANESE$'; instead we have 'H.*N.S', which saves 4 characters, and, look how many breeds it matches
Step48: Let's continue benchmarking
Step50: We're making good progress. What next? We now have so many parts, that many of the searches do not complete. Suppose one of our first choices was wrong, and the search took a part that does not belong in an optimal solution. We might well spend all our calls on the half of the search tree that keeps the part, never exploring a part of the tree without it. I'd like to alter the search algorithm to get us out of that rut.
Step51: Remember, when it says "100 calls" here, it really means 1,000 calls, because we are doing 10 random restarts. This looks very promising; even with only 100 calls per restart, we're doing better than 100,000 calls without restarts. Let's continue
Step52: Just One More Thing
Step53: Whoa—what just happened there? The form '^(?!.*foo)' means "starting from the beginning of the line, look ahead for '.*foo' (that is, any number of characters followed by 'foo'). If it is there, then fail, else succeeed." That's just what we need to match all the non-foo strings in the bar collection. Let's apply this idea to the complete benchmark and see how many characters we save. We'll use the SOLUTION dict that was compiled by the previous call to benchmark
Step54: Very nice! We've improved from 1163 to 1079 characters, and it took almost no time at all.
Step55: We've made great strides, decreasing the total solution length by 20%. (It is up to you whether that 20% improvement is worth the increase in time from 6 seconds to 8 minutes.)
|
12,742
|
<ASSISTANT_TASK:>
Python Code:
a = [4,5,6,8,10]
for i in a:
print(i)
# A fragment of `One Hundred Years of Solitude`
GGM = 'Many years later, as he faced the firing squad, \
Colonel Aureliano Buendía was to remember that distant \
afternoon when his father took him to discover ice. \
At that time Macondo was a village of twenty adobe houses,\
built on the bank of a river of clear water that ran along \
a bed of polished stones, which were white and enormous,\
like prehistoric eggs.'
print(GGM)
dot = GGM.split() # we create a list where each element is a word
print(dot)
for i in dot:
print(i)
a = {} # empty dictionary
a[1] = 'one'
a[2] = 'two'
a[3] = 'three'
a[4] = 'four'
a[5] = 'five'
print(a)
for k in a.keys(): # iterate over the keys
print(a[k])
for v in a.values(): #iterate over the values
print(v)
print(range(10)) # range itself returns an iterable object
a = list(range(10)) # this translates that iterable object into a list
print(a) # be careful! the lists has 10 objects starting with 0
for i in range(10): # if you give a single argument, the iteration starts at 0.
print(i)
for i in range(4,10): # you can also give two arguments: range(start, end).
print(i)
for i in range(0,10,3): # if you give three arguments they are interpreted as range(start, end, step)
print(i)
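# One more pattern worth knowing (added example): iterate over key/value pairs with .items()
d = {1: 'one', 2: 'two', 3: 'three'}
for k, v in d.items():
    print(k, v)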
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Iterating over dictionaries
Step2: Iterating over a sequence
|
12,743
|
<ASSISTANT_TASK:>
Python Code:
def regexp_sum(S):
n = len(S)
if n == 0:
return 0
elif n == 1:
r, = S
return r
else:
r, *Rs = S
return ('+', r, regexp_sum(Rs))
def rpq(p1, p2, Σ, 𝛿, Allowed):
if len(Allowed) == 0:
AllChars = { c for c in Σ
if 𝛿.get((p1, c)) == p2
}
r = regexp_sum(AllChars)
if p1 == p2:
if AllChars == set():
return ''
else:
return ('+', '', r)
else:
return r
else:
q, *RestAllowed = Allowed
rp1p2 = rpq(p1, p2, Σ, 𝛿, RestAllowed)
rp1q = rpq(p1, q, Σ, 𝛿, RestAllowed)
rqq = rpq( q, q, Σ, 𝛿, RestAllowed)
rqp2 = rpq( q, p2, Σ, 𝛿, RestAllowed)
return ('+', rp1p2, ('&', ('&', rp1q, ('*', rqq)), rqp2))
def dfa_2_regexp(F):
States, Σ, 𝛿, q0, Accepting = F
r = regexp_sum({ rpq(q0, p, Σ, 𝛿, States) for p in Accepting })
return r
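# Added usage sketch (the DFA encoding below is assumed from the unpacking in dfa_2_regexp):
# a two-state DFA over {a, b} that accepts exactly the strings ending in 'a'.
States = [1, 0]
Sigma = {'a', 'b'}
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
F = (States, Sigma, delta, 0, {1})
dfa_2_regexp(F)   # returns a nested-tuple regular expression built from '+', '&' and '*'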
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The function rpq assumes there is some DFA (deterministic finite automaton) given by its states, alphabet, transition function, start state, and accepting states.
Step2: The function dfa_2_regexp takes a deterministic finite automaton and converts it into an equivalent regular expression.
|
12,744
|
<ASSISTANT_TASK:>
Python Code:
cat = True
dog = False
print(type(cat))
from cities import cities
print(cities)
first_alb = cities[0] == 'Albuquerque'
second_alb = cities[1] == 'Albuquerque'
first_last = cities[0] == cities[-1]
print(first_alb, second_alb, first_last)
crime_rates = [749, 371, 828, 503, 1379, 425, 408, 542, 1405, 835, 1288, 647, 974, 1383, 455, 658, 675, 615, 2122, 423, 362, 587, 543, 563, 168, 992, 1185, 617, 734, 1263, 784, 352, 397, 575, 481, 598, 1750, 399, 1172, 1294, 992, 522, 1216, 815, 639, 1154, 1993, 919, 594, 1160, 636, 752, 130, 517, 423, 443, 738, 503, 413, 704, 363, 401, 597, 1776, 722, 1548, 616, 1171, 724, 990, 169, 1177, 742]
print(crime_rates)
first = crime_rates[0]
first_500 = first > 500
first_749 = first >= 749
first_last = first >= crime_rates[-1]
print(first_500, first_749, first_last)
second = crime_rates[1]
second_500 = second < 500
second_371 = second <= 371
second_last = second <= crime_rates[-1]
print(second_500, second_371, second_last)
result = 0
if cities[2] == u"Anchorage":
result = 1
assert result == 1
results = 0
if crime_rates[0] > 500:
if crime_rates[0] > 300:
results = 3
five_hundred_list = []
for cr in crime_rates:
if cr > 500:
five_hundred_list.append(cr)
assert all([_>500 for _ in five_hundred_list])
print(crime_rates)
highest = crime_rates[0]
for cr in crime_rates:
if cr > highest:
highest = cr
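# Added cross-check: the loop above should agree with the built-in max()
assert highest == max(crime_rates)
print(highest)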
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2
Step2: 3
Step3: 4
Step4: 5
Step5: 6
Step6: 7
Step7: 8
|
12,745
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)}
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
to_categorical(Y.values[test_split], 2)
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 400, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Intializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
12,746
|
<ASSISTANT_TASK:>
Python Code:
def odd_number(num):
L=[]
for i in range(num):
if i%2 == 1:
L.append(i)
return L
%time odd_sample1 = odd_number(100000000)
odd_sample1[:20]
odd_number1 = [x for x in range(100000000) if x % 2 == 1]
odd_number1 = []
for x in range(100000000):
if x % 2 == 1:
odd_number1.append(x)
else:
pass
odd_number1[:20]
odd_number2 = [2 * y + 1 for y in range(50000000)]
odd_number2 = []
for x in range(50000000):
odd_number2.append(2*x+1)
odd_number2[:20]
import math
[math.exp(n) for n in range(11)]
[math.exp(3*n) for n in range(11)]
words = 'The quick brown fox jumps \
over the lazy dog'.split()
words
L =[]
for x in words:
L.append([x.upper(), x.lower(), len(x)])
L
[[x.upper(), x.lower(), len(x)] for x in words]
[[x.upper(), x.lower(), len(x)] for x in words[:2]]
[[words[n].upper(), words[n].lower(), len(words[n])] \
for n in range(len(words)) if n < 2]
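# Added: the same restriction expressed with enumerate instead of explicit indexing
[[w.upper(), w.lower(), len(w)] for n, w in enumerate(words) if n < 2]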
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's measure how long it takes to build the list of odd numbers up to 100 million.
Step2: On the computer used here this takes about 9 seconds.
Step3: Now let's phrase the question a little differently.
Step4: The key part of the code above is the following.
Step5: Now let's implement the second set-builder (comprehension) form.
Step6: This approach looks a bit simpler, because it needs no if clause.
Step7: Example
Step8: It can also be used in more complex ways.
Step9: Example
Step10: For each string in the words list above, we want to build a list of lists whose items are the upper-case word, the lower-case word, and the length of that string.
Step11: With a list comprehension this can be written more concisely, as below.
Step12: To handle only the first two words, do it as shown below.
Step13: It is also possible to restrict the indices as below, i.e. by using an additional if clause.
|
12,747
|
<ASSISTANT_TASK:>
Python Code:
from pyspark import SparkContext
sc = SparkContext('local[*]')
from pyspark.sql import SQLContext
sqlc = SQLContext(sc)
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import PCA
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.clustering import GaussianMixture
from pyspark.mllib.classification import LogisticRegressionWithLBFGS, LogisticRegressionModel
df = (sqlc.read.format('com.databricks.spark.csv')
.options(header='false', inferschema='true')
.load('data/sonar.all-data.txt'))
df.printSchema()
df = df.withColumnRenamed("C60","label")
assembler = VectorAssembler(
inputCols=['C%d' % i for i in range(60)],
outputCol="features")
output = assembler.transform(df)
standardizer = StandardScaler(withMean=True, withStd=True,
inputCol='features',
outputCol='std_features')
model = standardizer.fit(output)
output = model.transform(output)
indexer = StringIndexer(inputCol="label", outputCol="label_idx")
indexed = indexer.fit(output).transform(output)
sonar = indexed.select(['std_features', 'label', 'label_idx'])
sonar.show(n=3)
pca = PCA(k=2, inputCol="std_features", outputCol="pca")
model = pca.fit(sonar)
transformed = model.transform(sonar)
features = transformed.select('pca').rdd.map(lambda x: np.array(x))
features.take(3)
gmm = GaussianMixture.train(features, k=2)
predict = gmm.predict(features).collect()
labels = sonar.select('label_idx').rdd.map(lambda r: r[0]).collect()
np.corrcoef(predict, labels)
xs = np.array(features.collect()).squeeze()
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(xs[:, 0], xs[:,1], c=predict)
axes[0].set_title('Predicted')
axes[1].scatter(xs[:, 0], xs[:,1], c=labels)
axes[1].set_title('Labels')
pass
sonar.show(n=3)
data = sonar.map(lambda x: LabeledPoint(x[2], x[0]))
train, test = data.randomSplit([0.7, 0.3])
model = LogisticRegressionWithLBFGS.train(train)
y_yhat = test.map(lambda x: (x.label, model.predict(x.features)))
err = y_yhat.filter(lambda x: x[0] != x[1]).count() / float(test.count())
print("Error = " + str(err))
transformer = VectorAssembler(inputCols=['C%d' % i for i in range(60)],
outputCol="features")
standardizer = StandardScaler(withMean=True, withStd=True,
inputCol='features',
outputCol='std_features')
indexer = StringIndexer(inputCol="C60", outputCol="label_idx")
pca = PCA(k=5, inputCol="std_features", outputCol="pca")
lr = LogisticRegression(featuresCol='std_features', labelCol='label_idx')
pipeline = Pipeline(stages=[transformer, standardizer, indexer, pca, lr])
df = (sqlc.read.format('com.databricks.spark.csv')
.options(header='false', inferschema='true')
.load('data/sonar.all-data.txt'))
train, test = df.randomSplit([0.7, 0.3])
model = pipeline.fit(train)
import warnings
with warnings.catch_warnings():
warnings.simplefilter('ignore')
prediction = model.transform(test)
score = prediction.select(['label_idx', 'prediction'])
score.show(n=score.count())
acc = score.map(lambda x: x[0] == x[1]).sum() / score.count()
acc
from sklearn import svm, grid_search, datasets
from spark_sklearn import GridSearchCV
iris = datasets.load_iris()
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
svr = svm.SVC()
clf = GridSearchCV(sc, svr, parameters)
clf.fit(iris.data, iris.target)
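# Added (assumes the spark_sklearn wrapper exposes the usual sklearn search attributes):
# inspect the best hyper-parameters found by the distributed grid search
print(clf.best_params_)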
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Spark MLLib imports
Step2: Unsupervised Learning
Step3: Pre-process the data
Step4: Transform 60 features into MMlib vectors
Step5: Scale features to have zero mean and unit standard deviation
Step6: Convert label to numeric index
Step7: Extract only columns of interest
Step8: Data conversion
Step9: Build Model
Step10: Optimize and fit the model to data
Step11: Post-processing and model evaluation
Step12: Plot discrepancy between predicted and labels
Step13: Supervised Learning
Step14: Using mllib and RDDs
Step15: Split into test and train sets
Step16: Fit model to training data
Step17: Evaluate on test data
Step18: Using the newer ml pipeline
Step19: Spark MLLIb and sklearn integration
|
12,748
|
<ASSISTANT_TASK:>
Python Code:
age = 33
print(age)
nouvelAge = age + 1
print(nouvelAge)
input(a)
a = input()
print(a)
print(a*3)
b = int(input())
print(b*5)
r_cercle = int(input ("Rayon du cercle ?"))
pi = 3.14
d_cercle = r_cercle *2
p_cercle =pi*d_cercle
a_cercle = pi*r_cercle*r_cercle
print("Diametre du cercle =", d_cercle, "cm")
print("Périmetre du cercle =", p_cercle, "cm")
print("Aire du cercle = ", a_cercle, "cm²")
a = int(input())
b = int(input())
c = int(input())
delt = b*b-4*a*c
print (delt)
if delt < 0:
print ("Aucune solution")
elif delt > 0:
print ("2 solutions : ", (-b-delt**0.5)/2*a, " et ", (-b+delt**0.5)/2*a)
else:
print ("une seule solution : ", -b/2*a)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below, we display the value of the variable age
Step2: Below, we create a variable nouvelAge and assign it the value of the variable age increased by 1. Then we display the contents of the variable nouvelAge
Step3: Now we will ask the user for a value. The first example does not work (reminder
Step4: When we try to display the result of a multiplied by 3, we get a curious result... That is because Python did not understand that 5 is an integer. The silly thing believes it is a character string, and prints it three times in a row.
Step5: To tell Python that the entered value is an integer, we can say so at the moment of input... And it works the same way for floats (floating-point values, i.e. numbers with a decimal point).
Step6: Exercise 1. Write an algorithm that asks the user for the radius of a circle and prints its diameter, perimeter, and area on standard output.
Step7: In this first exercise, we learned a lot of things
|
12,749
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset(phoebe.dataset.orb, compute_times=np.linspace(0,10,10), dataset='orb01', component=['primary', 'secondary'])
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
# test.lc.in has 1000 datapoints... let's use every 10 just for brevity
times, fluxes, sigmas = times[::10], fluxes[::10], sigmas[::10]
b.add_dataset(phoebe.dataset.lc, times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')
print(b.computes)
print(b.filter(context='compute'))
b.set_value('irrad_method', 'none')
b.add_compute(phoebe.compute.phoebe, compute='preview', irrad_method='none')
print(b['preview@compute'])
b.add_compute('phoebe', compute='detailed', irrad_method='wilson')
print(b.get_compute('detailed'))
print(b['enabled@lc01'])
b['enabled@lc01@preview'] = False
print(b['enabled@lc01'])
b.set_value_all('enabled@lc01', True)
print(b['enabled@lc01'])
b.run_compute(compute='preview')
print(b.models)
b.set_value('incl@orbit', 90)
b.run_compute(compute='preview', model='run_with_incl_90')
b.set_value('incl@orbit', 85)
b.run_compute(compute='preview', model='run_with_incl_85')
b.set_value('incl@orbit', 80)
b.run_compute(compute='preview', model='run_with_incl_80')
print(b.models)
b.remove_model('latest')
print(b.models)
b.run_compute(compute='preview',
times=[0,0.1,0.2],
model='override_times')
print("dataset times: {}\nmodel times: {}".format(
b.get_value('times', dataset='lc01', context='dataset'),
b.get_value('times', dataset='lc01', model='override_times')))
b.set_value('compute_times', dataset='lc01', value=[0, 0.2, 0.4])
b.run_compute(compute='preview',
model='override_compute_times')
print("dataset times: {}\ndataset compute_times: {}\ndataset compute_phases: {}\n model times: {}".format(
b.get_value('times', dataset='lc01', context='dataset'),
b.get_value('compute_times', dataset='lc01', context='dataset'),
b.get_value('compute_phases', dataset='lc01', context='dataset'),
b.get_value('times', dataset='lc01', model='override_compute_times')))
print(b['enabled@orb01'])
b.set_value_all('enabled@orb01@detailed', False)
b.set_value_all('enabled@orb01@preview', True)
print(b['enabled@orb01'])
print(b['enabled@lc01'])
b.set_value_all('enabled@lc01@detailed', True)
b.set_value_all('enabled@lc01@preview', False)
print(b['enabled@lc01'])
b.run_compute(compute=['detailed', 'preview'], model='multiplecompute')
print(b.models)
b['run_with_incl_90']
b['primary@run_with_incl_90']
b['us@primary@run_with_incl_90']
print(b.get_value(qualifier='us', dataset='orb01', component='primary', model='run_with_incl_90')[:10])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See the building a system tutorial for more details.
Step2: And we'll attach some dummy datasets. See the datasets tutorial for more details.
Step3: Default Compute Options
Step4: Adding Compute Options
Step5: Editing Compute Options
Step6: as you can see, there is a copy for both of our compute options ('preview' and 'detailed').
Step7: or to enable/disable a dataset for all sets of compute options, we can use the set_value_all method
Step8: If the enabled parameter is missing for a set of compute options - it is likely that that particular backend does not support that dataset type.
Step9: Storing Models
Step10: We will now have three new sets of synthetics which can be compared, plotted, or removed.
Step11: To remove a model, call remove_model.
Step12: Overriding Times
Step13: compute_times parameter
Step14: for more details, see the advanced
Step15: We probably have the same problem with 'lc01', but just didn't get far enough to raise the error. So let's fix that as well
Step16: So in this case, 'lc01' will be computed using the options in 'detailed' while 'orb01' will use the options in 'preview'.
Step17: Accessing Synthetics from Models
Step18: or of course through method access
|
12,750
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%pylab inline
pylab.style.use('ggplot')
import seaborn as sns
pp_data = pd.read_csv('ccpp.csv')
pp_data.head()
for c in pp_data.columns:
_ = pylab.figure()
pp_data.loc[:, c].plot(kind='hist')
feature_data = pp_data.drop('AT', axis=1)
corrs = feature_data.corrwith(pp_data.loc[:, 'AT'])
corrs.sort_values(ascending=False).plot(kind='barh')
f_corrs = feature_data.corr()
sns.heatmap(f_corrs, annot=True)
fig, axes = pylab.subplots(1, 4, figsize=(16, 8))
for i, c in enumerate(feature_data.columns):
sns.regplot(x=c, y='AT', data=pp_data, ax=axes[i])
from sklearn.feature_selection import f_regression
f_scores, f_probs = f_regression(feature_data, pp_data.loc[:, 'AT'])
f_imp_df = pd.DataFrame({'scores': f_scores, 'probs': f_probs}, index=feature_data.columns)
f_imp_df.plot(kind='barh', subplots=True)
import statsmodels.formula.api as sm
model = sm.ols(formula='AT ~ PE + RH + V', data=pp_data)
result = model.fit()
result.summary()
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
n_splits = 10
fold = KFold(n_splits=n_splits, shuffle=True)
scores = []
for train_idx, test_idx in fold.split(pp_data):
model = sm.ols(formula='AT ~ PE + RH + V', data=pp_data.loc[train_idx])
result = model.fit()
test_features = pp_data.loc[test_idx].drop('AT', axis=1)
predictions = result.predict(test_features)
actual = pp_data.loc[test_idx, 'AT']
score = r2_score(actual, predictions)
scores.append(score)
scores = pd.Series(scores)
scores.plot(kind='bar')
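# Added one-line summary of the cross-validation runs
print('mean R^2 = {:.3f} +/- {:.3f}'.format(scores.mean(), scores.std()))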
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Attribute Information
Step2: Correlation With the Target column
Step3: Feature Correlations
Step4: Bivariate Analysis
Step5: OLS Regression
Step6: Cross-Validation
|
12,751
|
<ASSISTANT_TASK:>
Python Code:
def split_data(data, prob):
split data into fractions [prob, 1 - prob]
results = [], []
for row in data:
results[0 if random.random() < prob else 1].append(row)
return results
def train_test_split(x, y, test_pct):
data = zip(x, y) # pair corresponding values
train, test = split_data(data, 1 - test_pct) # split the data set of pairs
x_train, y_train = zip(*train) # magical un-zip trick
x_test, y_test = zip(*test)
return x_train, x_test, y_train, y_test
true_positives = 70
false_positives = 4930
true_negatives = 981070
false_negatives = 13930
def accuracy(tp, fp, fn, tn):
correct = tp + tn
total = tp + fp + fn + tn
return correct / total
accuracy(true_positives, false_positives, false_negatives, true_negatives) # 0.98114
def precision(tp, fp, fn, tn):
return tp / (tp + fp)
precision(true_positives, false_positives, false_negatives, true_negatives) # 0.014
def recall(tp, fp, fn, tn):
return tp / (tp + fn)
recall(true_positives, false_positives, false_negatives, true_negatives) # 0.005
def f1_score(tp, fp, fn, tn):
p = precision(tp, fp, fn, tn)
r = recall(tp, fp, fn, tn)
return 2 * p * r / (p + r)
f1_score(true_positives, false_positives, false_negatives, true_negatives) # 0.007
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modeling
Step2: When splitting data, it's important to keep input data and target data in the same order
Step3: Correctness
Step4: Precision measures the accuracy of our positive predictions
Step5: Recall measures the fraction of the actual positives that our model correctly identified
Step6: The precision and recall scores clearly point to the inadequacy of our model. The F1 score combines precision and recall into a single new measure
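As a quick sanity check against the counts above: accuracy = (70 + 981070) / 1000000 = 0.98114, precision = 70 / (70 + 4930) = 0.014, recall = 70 / (70 + 13930) = 0.005, and F1 = 2 * 0.014 * 0.005 / (0.014 + 0.005) ≈ 0.00736, matching the values noted in the code comments.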
|
12,752
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import pandas as pd
from scipy import stats
from scipy import optimize
import emcee
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
clr_plt = sns.color_palette()
import models
# the true parameters
eps_true = 5e-4
t_true = 3e5
rho_true = 2.
inh_true = 5e4
# depths and sample size
depth_minmax = [50, 500]
N = 8
# perturbations
err_magnitude = 20.
err_variability = 5.
import gendata
profile_data = gendata.generate_dataset(
models.C_10Be,
(eps_true, t_true, rho_true, inh_true),
zlimits=depth_minmax,
n=N,
err=(err_magnitude, err_variability)
)
sns.set_context('notebook')
fig, ax = plt.subplots()
profile_data.plot(
y='depth', x='C', xerr='std',
kind="scatter", ax=ax, rot=45
)
ax.invert_yaxis()
param_names = 'erosion rate', 'time exposure', 'soil density'
param_true = pd.Series((eps_true, t_true, rho_true), index=param_names)
eps_prior = stats.uniform(loc=0., scale=1e-3)
t_prior = stats.uniform(loc=0., scale=8e5)
rho_prior = stats.uniform(loc=1.6, scale=1.)
priors = eps_prior, t_prior, rho_prior
param_priors = pd.Series(priors, index=param_names)
def get_bounds(f, lower_qtl=0., upper_qtl=1.):
return f.ppf(lower_qtl), f.ppf(upper_qtl)
eps_bounds = get_bounds(eps_prior, 0, 1)
t_bounds = get_bounds(t_prior, 0, 1)
rho_bounds = get_bounds(rho_prior, 0, 1)
bounds = eps_bounds, t_bounds, rho_bounds
param_bounds = pd.DataFrame(
np.array(bounds), columns=('min', 'max'), index=param_names
)
param_bounds
fig, axes = plt.subplots(1, 3, figsize=(13, 3))
for ax, p, b, name in zip(axes.flatten(),
param_priors.values,
param_bounds.values,
param_names):
xmin, xmax = b
eps = 0.1 * (xmax - xmin)
x = np.linspace(xmin - eps, xmax + eps, 200)
d = p.pdf(x)
ax.plot(x, d)
ax.fill(x, d, alpha=0.4)
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
plt.setp(ax, ylim=(0, None), yticklabels=[],
xlabel=name)
plt.subplots_adjust()
def lnprior(m):
lps = [p.logpdf(v) for (p, v) in zip(priors, m)]
if not np.all(np.isfinite(lps)):
return -np.inf
return np.sum(lps)
def lnlike(m):
eps, t, rho = m
mean = models.C_10Be(profile_data['depth'].values,
eps, t, rho, inh_true)
var = profile_data['std']**2
lngauss = -0.5 * np.sum(
np.log(2. * np.pi * var) +
(profile_data['C'] - mean)**2 / var
)
return lngauss
def lnprob(m):
lp = lnprior(m)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(m)
n_params, n_walkers = len(param_names), 100
# randomly choose initial guesses according to the prior
init_guesses = np.array(
[p.rvs(size=n_walkers) for p in priors]
).T
# perform bounded non-linear optimization from each initial guess
op_lnlike = lambda *args: -lnlike(*args)
init_walkers = np.empty_like(init_guesses)
for i, g in enumerate(init_guesses):
res = optimize.minimize(op_lnlike, g,
method='TNC',
bounds=bounds)
init_walkers[i] = res['x']
df_init_guesses = pd.DataFrame(init_guesses, columns=param_names)
df_init_walkers = pd.DataFrame(init_walkers, columns=param_names)
def scatter_pos(xcol, ycol, ax):
df_init_guesses.plot(
kind='scatter', x=xcol, y=ycol,
alpha=0.5, ax=ax, color=clr_plt[0], label='init guesses'
)
df_init_walkers.plot(
kind='scatter', x=xcol, y=ycol,
alpha=0.8, ax=ax, color=clr_plt[1], label='init walkers'
)
legend = ax.legend(frameon=True, loc='lower right')
legend.get_frame().set_facecolor('w')
plt.setp(ax, xlim=param_bounds.loc[xcol],
ylim=param_bounds.loc[ycol])
fig, ax = plt.subplots(2, 2, figsize=(12,12))
scatter_pos('erosion rate', 'time exposure', ax[0][0])
scatter_pos('soil density', 'time exposure', ax[0][1])
scatter_pos('erosion rate', 'soil density', ax[1][0])
sampler = emcee.EnsembleSampler(n_walkers, n_params, lnprob)
n_steps = 500
sampler.run_mcmc(init_walkers, n_steps)
mcmc_samples = pd.DataFrame(sampler.flatchain,
columns=param_names)
sample_plot_range = slice(None)
axes = mcmc_samples[sample_plot_range].plot(
kind='line', subplots=True,
figsize=(10, 8), color=clr_plt[0]
)
for i, ax in enumerate(axes):
ax.axhline(param_true.iloc[i], color='r')
nburn = 100
mcmc_kept_samples = pd.DataFrame(
sampler.chain[:, nburn:, :].reshape((-1, n_params)),
columns=param_names
)
def jointplot_density(xcol, ycol):
p = sns.jointplot(
xcol, ycol,
data=mcmc_kept_samples,
xlim=(mcmc_kept_samples[xcol].min(),
mcmc_kept_samples[xcol].max()),
ylim=(mcmc_kept_samples[ycol].min(),
mcmc_kept_samples[ycol].max()),
joint_kws={'alpha': 0.02}
)
p.ax_joint.axhline(param_true.loc[ycol], color='r')
p.ax_joint.axvline(param_true.loc[xcol], color='r')
jointplot_density('erosion rate', 'time exposure')
jointplot_density('soil density', 'time exposure')
jointplot_density('erosion rate', 'soil density')
mcmc_kept_samples.mean()
max_ppd = sampler.lnprobability[:, nburn:].reshape((-1)).argmax()
mcmc_kept_samples.iloc[max_ppd]
percentiles = np.array([2.5, 5, 25, 50, 75, 95, 97.5])
mcmc_kept_samples.quantile(percentiles * 0.01)
fig, ax = plt.subplots()
# plot the profile data with error bars
profile_data.plot(
y='depth', x='C', xerr='std',
kind="scatter", ax=ax, rot=45
)
# plot 50 randomly chosen profiles from MCMC samples
depths = np.linspace(profile_data['depth'].min(),
profile_data['depth'].max(),
100)
for i in np.random.randint(len(mcmc_kept_samples), size=100):
eps, t, rho = mcmc_kept_samples.iloc[i]
c = models.C_10Be(depths, eps, t, rho, inh_true)
ax.plot(c, depths, color='grey', alpha=0.1)
# plot the true profile
c_true = models.C_10Be(depths, eps_true, t_true,
rho_true, inh_true)
ax.plot(c_true, depths, color='r', label='true model')
ax.invert_yaxis()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The mathematical (deterministic, forward) model
Step2: The data
Step3: The gendata Python module is used to generate the dataset (see the notebook Datasets).
Step4: Make a plot of the dataset
Step5: The statistical model used for computing the posterior probability density PPD
Step6: Create a pd.Series with the true parameter values. It will be used for plotting purposes.
Step7: Define the prior probability distribution for each free parameter. Here the uniform distribution is used, with given bounds (loc and scale arguments of scipy.stats.uniform are the lower bound and the range, respectively)
Step8: Define (min, max) bounds for each free parameter. It should be given by lower and upper quantiles (lower_qtl, upper_qtl) of the prior distribution. Choose the extreme quantiles (0, 1) if the distribution is uniform. It will be used for plotting purposes and also for constrained optimization (see below).
Step9: Plot the prior probability density for each parameter.
Step10: Define a function that returns the (logarithm of the) prior probability density for a given data model m.
Step11: Define a function that returns the log-likelihood. It is an $n$-dimensional Gaussian ($n$ nucleide concentrations sampled along the depth profile) with the mean given by the forward model and the variance given by the error estimated from the measurements of the nucleide concentration of each sample. This Gaussian implies that (1) the error on each measurement is random, (2) the sampled nucleide concentrations are measured independently of each other, (3) the forward model - i.e., the deterministic model that predicts the nucleide concentration profile - represents the real physics and (4) the values of the non-free parameters of the forward model - e.g., nucleide surface production rate, attenuation lengths... - are exactly known.
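In symbols, this just restates the lnlike function above: $\ln L(m) = -\frac{1}{2}\sum_i \left[ \ln(2\pi\sigma_i^2) + \frac{(C_i - C_\mathrm{model}(z_i; m))^2}{\sigma_i^2} \right]$, where $C_i$ and $\sigma_i$ are the measured concentrations and their errors and $C_\mathrm{model}$ is the forward model evaluated at sample depth $z_i$.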
Step12: Define a function that returns the log-posterior probability density, according to the Bayes's theorem.
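Equivalently, $\ln p(m \mid d) = \ln p(m) + \ln L(d \mid m) + \mathrm{const}$, which is what lnprob returns (up to the additive constant).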
Step13: Sampling the posterior probability density using MCMC
Step14: We show below the initial guesses and the initial positions of the walkers in a scatter plot.
Step15: We can then set up the emcee sampler and run the MCMC for n_steps iterations starting from the initial positions defined above.
Step16: Let's plot the trace of the MCMC iterations. The red lines show the true values.
Step17: Try plotting only the first samples (e.g., sample_plot_range = slice(0, 1000)). We see that thanks to the initial positions of the walkers, the emcee sampler quickly starts exploring the full posterior distribution. The “burn-in” period is small and we can therefore set a small value for nburn below.
Step18: We can visualize the sampled posterior probability density by joint plots of the MCMC samples. The red lines show the true values.
Step19: Given the samples, it is straightforward to characterize the posterior probability density and estimate its moments.
Step20: the sample which has the max PPD value (i.e., the most probable sampled model)
Step21: the PPD quantiles (useful for delineating the Bayesian confidence intervals or credible intervals for each free parameter)
Step22: We finally plot the nucleide concentration profiles (blue dots
|
12,753
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matlotlib backend as plotting inline in IPython
%matplotlib inline
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress."""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
from IPython.display import Image, display
b_img = Image('notMNIST_small/B/MDEtMDEtMDAudHRm.png')
j_img = Image('notMNIST_small/J/Nng3b2N0IEFsdGVybmF0ZSBSZWd1bGFyLnR0Zg==.png')
print('B:')
display(b_img)
print('J:')
display(j_img)
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
import matplotlib.pyplot as plt
def load_dataset(filename):
with open(filename, 'rb') as f:
return pickle.load(f)
# Display a random matrix with a specified figure number and a grayscale colormap
largeNameA = train_datasets[0]
print(largeNameA)
largeDataA = load_dataset(largeNameA)
img1 = largeDataA[0, :, :]
plt.matshow(img1, cmap=plt.cm.gray)
plt.show()
smallNameJ = test_datasets[9]
print(smallNameJ)
smallDataJ = load_dataset(smallNameJ)
img2 = smallDataJ[0, :, :]
plt.matshow(img2, cmap=plt.cm.gray)
plt.show()
for name in train_datasets:
dataset = load_dataset(name)
print(name, ' size:', dataset.shape)
for name in test_datasets:
dataset = load_dataset(name)
print(name, ' size:', dataset.shape)
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
def show_images(dataset, labels, count):
for i in range(0,count):
print(labels[i])
plt.matshow(dataset[i,:,:], cmap=plt.cm.gray)
plt.show()
show_images(train_dataset, train_labels, 3)
show_images(test_dataset, test_labels, 3)
show_images(valid_dataset, valid_labels, 3)
pickle_file = 'notMNIST.pickle'
if not os.path.exists(pickle_file):
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
from sklearn.metrics.pairwise import cosine_similarity
def sanitize_dataset(dataset, labels, filter_dataset, similarity_epsilon):
similarity = cosine_similarity(np.reshape(dataset, (dataset.shape[0],-1)), np.reshape(filter_dataset, (filter_dataset.shape[0],-1)))
same_filter = np.sum(similarity == 1, axis=1) > 0
similar_filter = np.sum(similarity > 1-similarity_epsilon, axis=1) > 0
same_count = np.sum(same_filter)
similar_count = np.sum(similar_filter)
filtered_dataset = dataset[same_filter==False]
filtered_labels = labels[same_filter==False]
return filtered_dataset, filtered_labels, same_count, similar_count
sanit_pickle_file = 'notMNIST_sanit.pickle'
if not os.path.exists(sanit_pickle_file):
filtered_valid_dataset, filtered_valid_labels, train_valid_same, train_valid_similar = \
sanitize_dataset(valid_dataset, valid_labels, train_dataset, 0.001)
print("training-validation: same=", train_valid_same, "similar=", train_valid_similar)
filtered_test_dataset, filtered_test_labels, train_test_same, train_test_similar = \
sanitize_dataset(test_dataset, test_labels, train_dataset, 0.001)
print("training-testing: same=", train_test_same, "similar=", train_test_similar)
filtered_test_dataset, filtered_test_labels, valid_test_same, valid_test_similar = \
sanitize_dataset(filtered_test_dataset, filtered_test_labels, filtered_valid_dataset, 0.001)
print("validation-testing: same=", valid_test_same, "similar=", valid_test_similar)
try:
f = open(sanit_pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': filtered_valid_dataset,
'valid_labels': filtered_valid_labels,
'test_dataset': filtered_test_dataset,
'test_labels': filtered_test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
def load_datasets(pickle_file):
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
f = open(pickle_file, 'rb')
save = pickle.load(f)
f.close()
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
return train_dataset, train_labels, valid_dataset, valid_labels, test_dataset, test_labels
train_dataset, train_labels, filtered_valid_dataset, filtered_valid_labels, filtered_test_dataset, filtered_test_labels = load_datasets(sanit_pickle_file)
print('Training (sanitized):', train_dataset.shape, train_labels.shape)
print('Validation (sanitized):', filtered_valid_dataset.shape, filtered_valid_labels.shape)
print('Testing (sanitized):', filtered_test_dataset.shape, filtered_test_labels.shape)
def train_model(dataset, labels, size=None):
maxSize = dataset.shape[0]
if size is None:
size = maxSize
else:
if size > maxSize:
size = maxSize
indices = np.arange(maxSize)
np.random.shuffle(indices)
indices = indices[0:size]
dataset = dataset[indices]
labels = labels[indices]
X = np.reshape(dataset, (size,-1))
y = labels
lr = LogisticRegression(n_jobs=4)
lr.fit(X, y)
return lr
def model_score(model, dataset, labels):
X = np.reshape(dataset, (dataset.shape[0],-1))
y = labels
return model.score(X, y)
def train(size=None):
if size is None:
print("Training with all examples:")
else:
print("Training with ", size, " examples:")
model = train_model(train_dataset, train_labels, size)
print(" validation score: ", model_score(model, valid_dataset, valid_labels))
print(" test score: ", model_score(model, test_dataset, test_labels))
print(" validation score (sanitized): ", model_score(model, filtered_valid_dataset, filtered_valid_labels))
print(" test score (sanitized): ", model_score(model, filtered_test_dataset, filtered_test_labels))
for size in [50, 100, 1000, 5000]:
train(size)
# training on all examples:
#train()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the test set 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file.
Step5: Problem 1
Step7: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
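Note that load_letter above also rescales each pixel as (value - 255/2) / 255, so raw intensities in [0, 255] are mapped to roughly [-0.5, 0.5]; this keeps the merged dataset approximately centred around zero, which makes later training easier.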
Step8: Problem 2
Step9: Problem 3
Step10: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Step11: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step12: Problem 4
Step13: Finally, let's save the data for later reuse
Step14: Problem 5
Step15: Problem 6
|
12,754
|
<ASSISTANT_TASK:>
Python Code:
all_df=[]
nfiles=15
for i in range(nfiles):
filename = 'msample%d.csv' % i
print i
all_df.append(pd.read_csv(filename, header=None))
all_df[0]
Y=[]
for i in range(nfiles):
Y.append(all_df[i][8]=='Success')
Y[1]
def map_user(x):
if x.startswith('C'):
return 'C'
elif x.startswith('U'):
return 'U'
else:
return x
X=[]
for i in range(nfiles):
df=all_df[i]
df["source_user"], df["source_domain"] = zip(*df[1].str.split('@').tolist())
df["source_user"]=df["source_user"].str.rstrip('$')
df["destination_user"], df["destination_domain"] = zip(*df[2].str.split('@').tolist())
df["destination_user"]=df["destination_user"].str.rstrip('$')
df['source_class']=df['source_user'].map(map_user)
df['destination_class']=df['destination_user'].map(map_user)
x=pd.DataFrame.from_items([
('time', (df[0]%(24*60*60)).astype(int))])
x['same_user']= (df['destination_user']==df['source_user'])
x['same_domain']=(df['destination_domain']==df['source_domain'])
x['source_user_comp_same']=(df[3]==df['source_user'])
x['destination_user_comp_same']=(df['destination_user']==df[4])
x['same_comp']=(df[3]==df[4])
x['source_domain_comp_same']=(df[3]==df['source_domain'])
x['destination_domain_comp_same']=(df['destination_domain']==df[4])
for j in [5,6, 7]:
for label in sorted(df[j].unique()):
if label=='?':
if j==5:
x['?_authentication type']=(df[j]==label)
elif j==6:
x['?_logon type']=(df[j]==label)
else:
x[label]=(df[j]==label)
for cl in ['source_class', 'destination_class']:
for label in sorted(df[cl].unique()):
if cl=='source_class':
x['source_'+label]=(df[cl]==label)
else:
x['destination_'+label]=(df[cl]==label)
X.append(x)
X[1]
X[0].columns
[len(entry.columns) for entry in X]
all_col = set(sum([list(entry.columns) for entry in X], []))
[all_col.difference(list(entry.columns)) for entry in X]
col_set = [set(entry.columns) for entry in X]
common_subset = set.intersection(*col_set)
drop_cols = [e.difference(common_subset) for e in col_set]
for entry, to_drop in zip(X, drop_cols):
print 'dropping', to_drop
for item in to_drop:
del entry[item]
col0 = list(X[0].columns)
for i in range(1,nfiles):
col_i = list(X[i].columns)
assert col0 == col_i, 'mismatch in %r:\n%s\n%s' % (i, col0, col_i)
from sklearn import linear_model
clf_l1_LR = linear_model.LogisticRegression(C=1000, penalty='l1', tol=0.001).fit(X[0], Y[0])
scores=[]
scores.append(clf_l1_LR.score(X[0], Y[0]))
print 'score for training set', scores[0]
for i in range(1,nfiles):
scores.append(clf_l1_LR.score(X[i], Y[i]))
print 'score for test set', i, scores[i]
print 'mean', np.mean(scores), 'std', np.std(scores)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I here repeat my procedure for generating labeled data and features for training/test data.
Step2: I just discovered that my sample sets do not contain the same number of features.
Step3: These may simply be different spellings of the same commands/labels. For now I will just remove all the labels that are not present in all 15 files of data I have just downloaded. If the machine-learning scores change noticeably, I will look into ways to clean and incorporate this data.
Step4: Machine learning with logistic regression with Lasso
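A note on the C parameter used in the LogisticRegression call above (this is scikit-learn's convention as I understand it, not something stated in this notebook): C is the inverse of the regularization strength, i.e. the solver roughly minimizes $\|w\|_1 + C \sum_i \log(1 + e^{-y_i x_i^T w})$, so C=1000 corresponds to a fairly mild L1 (Lasso) penalty.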
|
12,755
|
<ASSISTANT_TASK:>
Python Code:
paired_bp_tn_split??
cc = codes.ix[matched_rna.columns.get_level_values(0)].dropna().unique()
r = pd.DataFrame({c: ttest_rel(matched_rna.ix['PLAU'].ix[ti(codes==c)])
for c in cc}).T
fig, ax = subplots(figsize=(7,3))
cc = ['HNSC','LUSC','LUAD','BLCA','THCA','BRCA','COAD','READ']
paired_bp_tn_split(matched_rna.ix['PLAU'], codes[codes.isin(cc)], ax=ax)
fig.savefig('/cellar/users/agross/figures/plau.pdf')
r.sort('p')
ttest_rel(matched_rna.ix['PLAU'])
paired_bp_tn_split(matched_rna.ix['PLAT'], codes)
paired_bp_tn_split(matched_rna.ix['MMP1'], codes)
g = ['CELA1','CELA2A','CELA2B','CELA3A','CELA3B','CTRC','ELANE','MMP12']
paired_bp_tn_split?
fig, axs = subplots(8, 1, figsize=(15,20), sharex=True)
for i,gene in enumerate(g):
paired_bp_tn_split(matched_rna.ix[gene], codes, ax=axs[i],
data_type='')
g = ['CTSA','CTSB','CTSC','CTSD','CTSE','CTSF','CTSG','CTSH',
'CTSK','CTSL1','CTSL2','CTSO','CTSS','CTSW','CTSZ']
len(g)
fig, axs = subplots(15, 1, figsize=(15,40), sharex=True)
for i,gene in enumerate(g):
paired_bp_tn_split(matched_rna.ix[gene], codes, ax=axs[i],
data_type='')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TPA protease
Step2: Collagenase
Step3: elastases
Step4: Cathepsin
|
12,756
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
def derivs(y, t, a, b, omega0):
"""Compute the derivatives of the damped, driven pendulum.
Parameters
----------
y : ndarray
The solution vector at the current time t[i]: [theta[i],omega[i]].
t : float
The current time t[i].
a, b, omega0: float
The parameters in the differential equation.
Returns
-------
dy : ndarray
The vector of derivatives at t[i]: [dtheta[i],domega[i]].
"""
dtheta = y[1]
domega = -g/l*np.sin(y[0])-a*y[1]-b*np.sin(omega0*t)
return [dtheta, domega]
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
"""Compute the energy for the state array y.
The state array y can have two forms:
1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
2. It could be an ndim=2 array where each row is the [theta,omega] at a single
time.
Parameters
----------
y : ndarray, list, tuple
A solution vector
Returns
-------
E/m : float (ndim=1) or ndarray (ndim=2)
The energy per mass.
"""
try:
if y.shape[1]:
return g*l*(1-np.cos(y[:,0]))+.5*l**2*y[:,1]**2
except:
Epm = g*l*(1-np.cos(y[0]))+.5*l**2*y[1]**2
return Epm
assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
a=0
b=0
omega0=2
pend = odeint(derivs, [0,0], t, args=(a,b,omega0),atol=1e-3, rtol=1e-2)
f=plt.figure(figsize=(15,10))
ax = plt.subplot(311)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(t,pend[:,0]);
plt.title(r"$\theta$ vs. time")
plt.xlabel("Time")
plt.ylabel(r"$\theta$")
ax = plt.subplot(312)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(t,pend[:,1]);
plt.title(r"$\omega$ vs. time")
plt.xlabel("Time")
plt.ylabel(r"$\omega$")
ax = plt.subplot(313)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(t,energy(pend));
plt.title(r"Energy vs. time")
plt.xlabel("Time")
plt.ylabel("$Energy$")
plt.tight_layout()
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the two plots and their tuning of atol, rtol.
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
"""Integrate the damped, driven pendulum and make a phase plot of the solution."""
pend = odeint(derivs, [-np.pi+0.1,0], t, args=(a,b,omega0),atol=1e-11, rtol=1e-10)
f=plt.figure(figsize=(15,10))
ax = plt.subplot(111)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(pend[:,0], pend[:,1]);
plt.xlim(-2*np.pi, 2*np.pi)
plt.ylim(-10, 10)
plt.title(r"$\theta$ vs. $\omega$")
plt.xlabel(r"$\omega$")
plt.ylabel(r"$\theta$")
plot_pendulum(0.5, 0.0, 0.0)
interact(plot_pendulum, a=(0.0,1.0, 0.1), b=(0.0,1.0,0.1), omega0=(0.0,10.0,0.1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Damped, driven nonlinear pendulum
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
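Written out, the system that derivs above encodes is $\dot\theta = \omega$ and $\dot\omega = -\frac{g}{\ell}\sin\theta - a\,\omega - b\,\sin(\omega_0 t)$, and the energy function computes the energy per unit mass $E/m = g\ell\,(1-\cos\theta) + \tfrac{1}{2}\ell^2\omega^2$; these equations simply restate the code, they are not part of the assignment text.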
Step5: Simple pendulum
Step7: Damped pendulum
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with
|
12,757
|
<ASSISTANT_TASK:>
Python Code:
apikey = '34b41fe7b9db6c1bd5f8ea3492bca332'
# TA-COMMENT: Nice!
coordinates = {'San Antonio': '29.4241,-98.4936', 'Miami': '25.7617,-80.1918', 'Central Park': '40.7829,-73.9654'}
import requests
url = 'https://api.forecast.io/forecast/' + apikey + '/' + coordinates['San Antonio']
response = requests.get(url)
data = response.json()
# #Is it in my time zone?
# #temp. Answer: dict
# print(type(data))
# #temp. Answer: ['offset', 'latitude', 'hourly', 'flags', 'minutely', 'longitude', 'timezone', 'daily', 'currently']
# print(data.keys())
# #temp. Answer: dict
# print(type(data['currently']))
# #temp. Answer: ['windSpeed', 'time', 'dewPoint', 'icon', 'temperature', 'apparentTemperature', 'precipProbability',
#'visibility', 'cloudCover', 'nearestStormDistance', 'pressure', 'windBearing', 'ozone', 'humidity', 'precipIntensity',
#'summary', 'nearestStormBearing']
# print(data['currently'].keys())
# #temp. It's in my time zone!
# print(data['currently']['time'])
#Oh, this would have been easier:
#temp. Answer: America/Chicago
print(data['timezone'])
print('The current wind speed is', data['currently']['windSpeed'], 'miles per hour.')
print('It feels', round(data['currently']['apparentTemperature'] - data['currently']['temperature'], 2), 'degrees Fahrenheit warmer than it actually is.')
# #temp. Answer: dict
# print(type(data['daily']))
# #temp. Answer: ['summary', 'data', 'icon']
# print(data['daily'].keys())
# #temp. Answer: list
# print(type(data['daily']['data']))
# #temp. It's a list of dictionaries
# #this time means Wed, 08 Jun 2016 05:00:00 GMT, which is currently today
# print(data['daily']['data'][0])
# #this time means Thu, 09 Jun 2016 05:00:00 GMT
# print(data['daily']['data'][1])
# #temp. Answer: 8
# print(len(data['daily']['data']))
# #temp. Answer: ['windSpeed', 'time', 'sunsetTime', 'precipIntensityMaxTime', 'apparentTemperatureMax', 'windBearing',
# #'temperatureMinTime', 'precipIntensityMax', 'precipProbability', 'sunriseTime', 'temperatureMin',
# #'apparentTemperatureMaxTime', 'precipIntensity', 'apparentTemperatureMinTime', 'temperatureMax', 'dewPoint',
# #'temperatureMaxTime', 'icon', 'moonPhase', 'precipType', 'visibility', 'cloudCover', 'pressure',
# #'apparentTemperatureMin', 'ozone', 'humidity', 'summary']
# print(data['daily']['data'][0].keys())
today_moon = data['daily']['data'][0]['moonPhase']
print(100 * (1 - abs(1 - (today_moon * 2))), 'percent of the moon is visible today.')
print('The difference between today\'s high and low temperatures is', round(data['daily']['data'][0]['temperatureMax'] - data['daily']['data'][0]['temperatureMin'], 2), 'degrees Fahrenheit.')
daily_forecast = data['daily']['data']
print('Starting with today\'s, the forecasts for the next week are for highs of:')
for day in daily_forecast:
if 85 <= day['temperatureMax']:
warmth = 'hot'
elif 70 <= day['temperatureMax'] < 85:
warmth = 'warm'
else:
warmth = 'cold'
print(day['temperatureMax'], 'degrees Fahrenheit, a pretty', warmth, 'day.')
fl_url = 'https://api.forecast.io/forecast/' + apikey + '/' + coordinates['Miami']
fl_response = requests.get(url)
fl_data = fl_response.json()
# #temp. Answer: dict
# print(type(fl_data['hourly']))
# #temp. Answer: ['summary', 'data', 'icon']
# print(fl_data['hourly'].keys())
# #temp. Answer: list
# print(type(fl_data['hourly']['data']))
# #temp. Answer: 49
# print(len(fl_data['hourly']['data']))
# #temp. It's a list of dictionaries
# #the top of this hour
# print(fl_data['hourly']['data'][0])
# #the top of next hour
# print(fl_data['hourly']['data'][1])
# #temp. Answer: ['precipType', 'time', 'apparentTemperature', 'windSpeed', 'icon', 'summary', 'precipProbability',
# #'visibility', 'cloudCover', 'pressure', 'windBearing', 'ozone', 'humidity', 'precipIntensity', 'temperature',
# #'dewPoint']
# print(fl_data['hourly']['data'][0].keys())
# # how many hours are left in the day in EDT: (24 - ((time % 86400)/3600 - 4))
# times = [1465423200, 1465426800]
# for time in times:
# print (24 - ((time % 86400)/3600 - 4))
hourly_data = fl_data['hourly']['data']
hours_left = range(int(24 - ((hourly_data[0]['time'] % 86400)/3600 - 4)))
print('Starting with this hour, the hourly forecasts for the rest of the day are for:')
for hour in hours_left:
if hourly_data[hour]['cloudCover'] > .5:
print(hourly_data[hour]['temperature'], 'degrees Fahrenheit and cloudy')
else:
print(hourly_data[hour]['temperature'], 'degrees Fahrenheit')
decades = range(3)
for decade in decades:
cp_url = 'https://api.forecast.io/forecast/' + apikey + '/' + coordinates['Central Park'] + ',' + str(10 * decade + 1980) + '-12-25T12:00:00'
cp_response = requests.get(cp_url)
cp_data = cp_response.json()
print('On Christmas Day in', str(1980 + decade * 10) + ', the high in Central Park was', cp_data['daily']['data'][0]['temperatureMax'], 'degrees Fahrenheit.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) What's the current wind speed? How much warmer does it feel than it actually is?
Step2: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
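(For reference on the calculation used above: as I understand the forecast.io / Dark Sky convention, moonPhase runs from 0 to 1 with 0 = new moon, 0.5 = full moon and 1 back to new, so the visible fraction can be approximated linearly as 1 - |1 - 2 * moonPhase|, which is exactly what the expression 100 * (1 - abs(1 - (today_moon * 2))) computes.)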
Step3: 4) What's the difference between the high and low temperatures for today?
Step4: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step5: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step6: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
|
12,758
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import copy
from sklearn.datasets import fetch_mldata
from sklearn import cross_validation
from sklearn import base
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
#mnist = fetch_mldata('iris')
import matplotlib.pyplot as plt
ds = sklearn.datasets.make_classification(n_samples=20000,
n_features=30, # 30 features
n_informative=5, # only 5 informatives ones
n_redundant=0,
n_repeated=3, # and 3 duplicate
n_classes=2,
n_clusters_per_class=1,
weights=None,
flip_y=0.03,
class_sep=0.8,
hypercube=True,
shift=0.0,
scale=1.0,
shuffle=True,
random_state=None)
X= ds[0]
y= ds[1]
# labels: [0,1] -> [-1,1]
for idx,i in enumerate(y):
if (i==0):
y[idx]=-1
print(X[0])
print(y[0])
class GradientDescent(base.BaseEstimator):
def __init__(self,theta,lamb,eps):
self.theta=theta
self.eps=eps
self.lamb=lamb
self.used_features=len(theta)
def fit(self,X,y,nbIt=1000,printevery=-1):
l=len(X)
xTrans = X.transpose()
for i in xrange(0,nbIt):
#index = np.random.randint(l)
loss = np.dot(X, self.theta) - y
cost = np.sum(loss ** 2) * (1 / l) + (self.lamb*np.linalg.norm(self.theta))
gradient = np.dot(xTrans,(np.dot(self.theta,xTrans)-y))
if i%(nbIt/100)==0:
thetaprime = self.theta - self.eps * (np.sign(theta)*self.lamb)
else:
thetaprime = self.theta - self.eps * gradient
for k in xrange(0,len(theta)):
self.theta[k] = 0 if thetaprime[k]*theta[k]<0 else thetaprime[k]
if printevery!=-1 and i%printevery==0:
print("Iteration %s | Cost: %f | Score: %.03f" % (str(i).ljust(6), cost,self.score(X,y)))
ttt = self.nb_used_features()
print("%d features used"%(ttt))
self.used_features=ttt
elif i%1000==0:
ttt = self.nb_used_features()
self.used_features=ttt
def predict(self,x):
ret=[]
for i in x:
ret.append(1 if np.dot(i,self.theta)>0 else -1)
return ret
def score(self,X,y):
cpt=0.0
allpred = self.predict(X)
for idx,i in enumerate(allpred):
cpt += 1 if i==y[idx] else 0
return cpt/len(X)
def nb_used_features(self):
cpt=0
for ii in self.theta:
if ii==0:
cpt+=1
return len(self.theta)-cpt
theta = copy.deepcopy(X[0])
lamb=500
eps=0.00001
gd = GradientDescent(theta,lamb,eps)
nbIterations = 5000
gd.fit(X,y,nbIterations,printevery=nbIterations/10)
scores = cross_validation.cross_val_score(gd, X, y, cv=5,scoring="accuracy")
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
eps=0.00001
la = []
cross_sc = []
used_features = []
for lamb in np.arange(0,4000,200):
theta = copy.deepcopy(X[0])
gd = GradientDescent(theta,lamb,eps)
nbIterations = 4000
gd.fit(X,y,nbIterations)
scoresSvm = cross_validation.cross_val_score(gd, X, y, cv=5,scoring="accuracy")
print("Lamda: %s | Cross val mean: %.03f | Features: %d"%(str(lamb).ljust(5),np.mean(scoresSvm),gd.used_features))
#print("Lamda: %.02f | Cross val mean: %.02f | Features: %d"%(lamb,gd.score(X,y),gd.used_features))
cross_sc.append(np.mean(scoresSvm))
la.append(lamb)
used_features.append(gd.used_features)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(la, cross_sc, '#6DC433')
ax2.plot(la, used_features, '#5AC8ED')
ax1.set_xlabel('lambda')
ax1.set_ylabel('Cross val score', color='#6DC433')
ax2.set_ylabel('Nb features used', color='#5AC8ED')
ax1.yaxis.grid(False)
ax2.grid(False)
plt.show()
class GradientDescentL2(base.BaseEstimator):
def __init__(self,theta,lamb,eps):
self.theta=theta
self.eps=eps
self.lamb=lamb
self.used_features=len(theta)
def fit(self,X,y,nbIt=1000,printevery=-1):
l=len(X)
xTrans = X.transpose()
for i in xrange(0,nbIt):
index = np.random.randint(l)
loss = np.dot(X, self.theta) - y
cost = np.sum(loss ** 2) * (1 / l) + (self.lamb*np.linalg.norm(self.theta))**2
gradient = np.dot(xTrans,(np.dot(self.theta,xTrans)-y))
if i%(nbIt/100)==0:
thetaprime = self.theta - self.eps * (np.sign(theta)*self.lamb)
else:
thetaprime = self.theta - self.eps * gradient
for k in xrange(0,len(theta)):
self.theta[k] = 0 if thetaprime[k]*theta[k]<0 else thetaprime[k]
if printevery!=-1 and i%printevery==0:
print("Iteration %s | Cost: %f | Score: %.03f" % (str(i).ljust(6), cost,self.score(X,y)))
ttt = self.nb_used_features()
print("%d features used"%(ttt))
self.used_features=ttt
elif i%1000==0:
ttt = self.nb_used_features()
self.used_features=ttt
def predict(self,x):
ret=[]
for i in x:
ret.append(1 if np.dot(i,self.theta)>0 else -1)
return ret
def score(self,X,y):
cpt=0.0
allpred = self.predict(X)
for idx,i in enumerate(allpred):
cpt += 1 if i==y[idx] else 0
return cpt/len(X)
def nb_used_features(self):
cpt=0
for ii in self.theta:
if ii==0:
cpt+=1
return len(self.theta)-cpt
ds = sklearn.datasets.make_classification(n_samples=200,
n_features=30, # 30 features
n_informative=5, # only 5 informatives ones
n_redundant=0,
n_repeated=3, # and 3 duplicate
n_classes=2,
n_clusters_per_class=1,
weights=None,
flip_y=0.01,
class_sep=0.8,
hypercube=True,
shift=0.0,
scale=1.0,
shuffle=True,
random_state=None)
X= ds[0]
y= ds[1]
# labels: [0,1] -> [-1,1]
for idx,i in enumerate(y):
if (i==0):
y[idx]=-1
theta = copy.deepcopy(X[0])
lamb=2000
eps=0.00001
gd = GradientDescentL2(theta,lamb,eps)
#gd.tmp
nbIterations = 5000
gd.fit(X,y,nbIterations,printevery=nbIterations/10)
scores = cross_validation.cross_val_score(gd, X, y, cv=5,scoring="accuracy")
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
eps=0.00001
la = []
cross_sc = []
used_features = []
for lamb in np.arange(0,4000,200):
theta = copy.deepcopy(X[0])
gd = GradientDescentL2(theta,lamb,eps)
nbIterations = 5000
gd.fit(X,y,nbIterations)
scoresSvm = cross_validation.cross_val_score(gd, X, y, cv=5,scoring="accuracy")
print("Lamda: %s | Cross val mean: %.03f | Features: %d"%(str(lamb).ljust(5),np.mean(scoresSvm),gd.used_features))
cross_sc.append(np.mean(scoresSvm))
la.append(lamb)
used_features.append(gd.used_features)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(la, cross_sc, '#6DC433')
ax2.plot(la, used_features, '#5AC8ED')
ax1.set_xlabel('lambda')
ax1.set_ylabel('Cross val score', color='#6DC433')
ax2.set_ylabel('Nb features used', color='#5AC8ED')
ax1.yaxis.grid(False)
ax2.grid(False)
plt.show()
#used to cross-val on lasso and elastic-net
def scorer(estimator, X, y):
pred = estimator.predict(X)
cpt=0.0
for idx,i in enumerate(pred):
if i<0:
cpt += 1 if y[idx]==-1 else 0
else:
cpt += 1 if y[idx]==1 else 0
return cpt/len(y)
lass = Lasso(alpha = 0.2)
lass.fit(X,y)
scores = cross_validation.cross_val_score(lass, X, y, cv=5,scoring=scorer)
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
print(lass.coef_)
print("Feature used: %d"%np.count_nonzero(lass.coef_))
eps=0.00001
la = []
cross_sc = []
used_features = []
for lamb in np.arange(0.05,1.05,0.05):
theta = copy.deepcopy(X[0])
gd = Lasso(alpha = lamb)
nbIterations = 4000
gd.fit(X,y)
scoresSvm = cross_validation.cross_val_score(gd, X, y, cv=5,scoring=scorer)
print("Lamda: %s | Cross val mean: %.03f | Features: %d"%(str(lamb).ljust(5),np.mean(scoresSvm),np.count_nonzero(gd.coef_)))
#print("Lamda: %.02f | Cross val mean: %.02f | Features: %d"%(lamb,gd.score(X,y),gd.used_features))
cross_sc.append(np.mean(scoresSvm))
la.append(lamb)
used_features.append(np.count_nonzero(gd.coef_))
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(la, cross_sc, '#6DC433')
ax2.plot(la, used_features, '#5AC8ED')
ax1.set_xlabel('lambda')
ax1.set_ylabel('Cross val score', color='#6DC433')
ax2.set_ylabel('Nb features used', color='#5AC8ED')
ax1.yaxis.grid(False)
ax2.grid(False)
plt.show()
lass = ElasticNet(alpha = 0.2, l1_ratio=0)
lass.fit(X,y)
scores = cross_validation.cross_val_score(lass, X, y, cv=5,scoring=scorer)
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
print("Feature used: %d"%np.count_nonzero(lass.coef_))
lass = ElasticNet(alpha = 0.2, l1_ratio=0.5)
lass.fit(X,y)
scores = cross_validation.cross_val_score(lass, X, y, cv=5,scoring=scorer)
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
print("Feature used: %d"%np.count_nonzero(lass.coef_))
lass = ElasticNet(alpha = 0.2, l1_ratio=1)
lass.fit(X,y)
scores = cross_validation.cross_val_score(lass, X, y, cv=5,scoring=scorer)
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
print("Feature used: %d"%np.count_nonzero(lass.coef_))
eps=0.00001
la = []
cross_sc = []
used_features = []
for lamb in np.arange(0.05,1.05,0.05):
theta = copy.deepcopy(X[0])
gd = ElasticNet(alpha = 0.2, l1_ratio=lamb)
nbIterations = 4000
gd.fit(X,y)
scoresSvm = cross_validation.cross_val_score(gd, X, y, cv=5,scoring=scorer)
print("Lamda: %s | Cross val mean: %.03f | Features: %d"%(str(lamb).ljust(5),np.mean(scoresSvm),np.count_nonzero(gd.coef_)))
#print("Lamda: %.02f | Cross val mean: %.02f | Features: %d"%(lamb,gd.score(X,y),gd.used_features))
cross_sc.append(np.mean(scoresSvm))
la.append(lamb)
used_features.append(np.count_nonzero(gd.coef_))
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(la, cross_sc, '#FF9900')
ax2.plot(la, used_features, '#9933FF')
ax1.set_xlabel('L1 L2 ratio')
ax1.set_ylabel('Cross val score', color='#FF9900')
ax2.set_ylabel('Nb features used', color='#9933FF')
ax1.yaxis.grid(False)
ax2.grid(False)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data generation
Step2: L1
Step3: Selecting lambda
Step4: L2
Step5: Test with only 200 samples
Step6: Selecting lambda
Step7: Evaluation using sklearn Lasso
Step8: Comparison of L1 and L2 using sklearn ElasticNet
Step9: We observe that, as expected, the more we take L1 into account, the fewer features are used.
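A brief reminder of why this happens (standard regularization theory, not something computed in this notebook): the L1 penalty adds $\lambda \sum_j |\theta_j|$ to the loss, and its subgradient $\lambda\,\mathrm{sign}(\theta_j)$ (the np.sign(theta) * self.lamb step used in the code above) can push small coefficients exactly to zero, while the L2 penalty $\lambda \sum_j \theta_j^2$ only shrinks coefficients proportionally, so they rarely become exactly zero.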
|
12,759
|
<ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
show_digit(13)
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 500, activation='ReLU')
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 100, activation='ReLU')
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 100, activation='ReLU')
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 50, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
# Define the neural network
def build_cnn_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 784])
net = tflearn.reshape(net, [-1, 28, 28, 1])
net = tflearn.conv_2d(net, 32, filter_size=5, strides=1, padding='same', activation='relu')
net = tflearn.max_pool_2d(net, 5)
net = tflearn.conv_2d(net, 64, filter_size=3, strides=1, padding='same', activation='relu')
net = tflearn.avg_pool_2d(net, 3)
net = tflearn.flatten(net)
net = tflearn.dropout(net, 0.8)
net = tflearn.fully_connected(net, 500, activation='ReLU')
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 100, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
# Build the model
#model = build_model()
model = build_cnn_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=500, n_epoch=5)
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving training and test data
Step2: Visualize the training data
Step3: Building the network
Step4: Training the network
Step5: Testing
|
12,760
|
<ASSISTANT_TASK:>
Python Code:
#@title Imports
!pip install jax_md
import jax.numpy as np
import numpy as onp
from jax import jit
from jax import random
from jax import lax
from jax.config import config
config.update('jax_enable_x64', True)
from jax_md import space
from jax_md import energy
from jax_md import simulate
from jax_md import quantity
from jax_md import partition
from jax_md.colab_tools import renderer
!nvidia-smi
lattice_constant = 1.37820
N_rep = 40
box_size = N_rep * lattice_constant
# Using float32 for positions / velocities, but float64 for reductions.
dtype = np.float32
# Specify the format of the neighbor list.
# Options are Dense, Sparse, or OrderedSparse.
format = partition.OrderedSparse
displacement, shift = space.periodic(box_size)
R = []
for i in range(N_rep):
for j in range(N_rep):
for k in range(N_rep):
R += [[i, j, k]]
R = np.array(R, dtype=dtype) * lattice_constant
N = R.shape[0]
phi = N / (lattice_constant * N_rep) ** 3
print(f'Created a system of {N} LJ particles with number density {phi:.3f}')
neighbor_fn, energy_fn = energy.lennard_jones_neighbor_list(displacement,
box_size,
r_cutoff=3.0,
dr_threshold=1.,
format=format)
init, apply = simulate.nvt_nose_hoover(energy_fn, shift, 5e-3, kT=1.2)
key = random.PRNGKey(0)
# We pick an "extra capacity" to ensure ahead of time that the neighbor
# list will have enough capacity. Since sparse neighbor lists are more
# robust to changes in the number of particles, in this case we only
# need to actually add more capacity for dense neighbor lists.
if format is partition.Dense:
nbrs = neighbor_fn.allocate(R, extra_capacity=55)
else:
nbrs = neighbor_fn.allocate(R)
state = init(key, R, neighbor=nbrs)
def step(i, state_and_nbrs):
state, nbrs = state_and_nbrs
nbrs = nbrs.update(state.position)
return apply(state, neighbor=nbrs), nbrs
# Run once to make sure the JIT cache is occupied.
new_state, new_nbrs = lax.fori_loop(0, 10000, step, (state, nbrs))
new_state.position.block_until_ready()
# Check to make sure the neighbor list didn't overflow.
new_nbrs.did_buffer_overflow
%%timeit
new_state, new_nbrs = lax.fori_loop(0, 10000, step, (state, nbrs))
new_state.position.block_until_ready()
renderer.render(
box_size,
{'particles': renderer.Sphere(new_state.position)}
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 64k Particle LJ System
Step2: Prepare the system
Step3: Benchmark using fixed size neighbor list.
Step4: On an A100 this comes out to 22.4 s / loop which is 2.24 ms / step.
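For scale, simple arithmetic from the numbers above: each timed loop runs 10000 steps over the 40^3 = 64000-particle system, so 22.4 s / 10000 ≈ 2.24 ms per step, or roughly 35 ns per particle per step.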
|
12,761
|
<ASSISTANT_TASK:>
Python Code:
import gensim, logging, os
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
class Corpus(object):
'''Corpus class that reads a directory of text documents sequentially'''
def __init__(self, directorio):
self.directory = directorio
def __iter__(self):
for fichero in os.listdir(self.directory):
for linea in open(os.path.join(self.directory, fichero)):
yield linea.split()
CORPUSDIR = 'PATH_TO_YOUR_CORPUS_DIRECTORY'
oraciones = Corpus(CORPUSDIR)
model = gensim.models.Word2Vec(oraciones, min_count=10, size=150, workers=2)
# the model can also be trained in two successive but separate steps
#model = gensim.models.Word2Vec() # empty model
#model.build_vocab(oraciones) # first pass to build the vocabulary list
#model.train(other_sentences) # second pass to compute the vectors
model.save('PATH_TO_YOUR_MODEL.w2v')
#model = gensim.models.Word2Vec.load('PATH_TO_YOUR_MODEL.w2v')
#model = gensim.models.Word2Vec.load('/data/w2v/eswiki-280.w2v')
model = gensim.models.Word2Vec.load('/data/w2v/efe.model.w2v')
print(model.corpus_count)
print(model['azul'], '\n')
print(model['verde'], '\n')
print(model['microsoft'])
print('hombre - mujer', model.similarity('hombre', 'mujer'))
print('madrid - parís', model.similarity('madrid', 'parís'))
print('perro - gato', model.similarity('perro', 'gato'))
print('gato - periódico', model.similarity('gato', 'periódico'))
lista1 = 'madrid barcelona gonzález washington'.split()
print('en la lista', ' '.join(lista1), 'sobra:', model.doesnt_match(lista1))
lista2 = 'psoe pp ciu epi'.split()
print('en la lista', ' '.join(lista2), 'sobra:', model.doesnt_match(lista2))
lista3 = 'publicaron declararon soy negaron'.split()
print('en la lista', ' '.join(lista3), 'sobra:', model.doesnt_match(lista3))
lista3 = 'homero saturno cervantes shakespeare cela'.split()
print('en la lista', ' '.join(lista3), 'sobra:', model.doesnt_match(lista3))
terminos = 'psoe chicago sevilla aznar podemos estuvieron'.split()
terminos = 'microsoft ibm iberia repsol'.split()
for t in terminos:
print(t, '==>', model.most_similar(t), '\n')
print('==> alcalde + mujer - hombre')
most_similar = model.most_similar(positive=['alcalde', 'mujer'], negative=['hombre'], topn=3)
for item in most_similar:
print(item)
print('==> madrid + filipinas - españa')
most_similar = model.most_similar(positive=['madrid', 'filipinas'], negative=['españa'], topn=3)
for item in most_similar:
print(item)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training a model
Step2: CORPUSDIR contains a collection of news articles in Spanish (previously normalized to lowercase and stripped of punctuation) with around 150 million words. We train a model in a single pass, ignoring tokens that appear fewer than 10 times in order to discard typos.
Step3: Once training is complete (after almost 30 minutes), we save the model to disk.
Step4: In the future, we will be able to reuse this model by loading it into memory with the instruction
Step5: Trying out our model
Step6: Every term in the vocabulary is represented as a vector with 150 dimensions
Step7: These vectors do not tell us much by themselves, other than that they contain very small numbers
Step8: We can pick out the term that does not fit in a given list of terms using the doesnt_match method
Step9: We can look up the most similar terms using the most_similar method of our model
Step10: With the same most_similar method we can combine word vectors, playing with the semantic features of each one in order to discover new relations.
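A rough sketch of the arithmetic behind these analogies (an illustration only; it assumes numpy is available and that a word such as 'alcaldesa' is actually in this model's vocabulary). most_similar essentially ranks every vocabulary word by this cosine similarity against the combined vector:
import numpy as np
v = model['alcalde'] + model['mujer'] - model['hombre']
candidate = model['alcaldesa']
print(np.dot(v, candidate) / (np.linalg.norm(v) * np.linalg.norm(candidate)))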
|
12,762
|
<ASSISTANT_TASK:>
Python Code:
# %matplotlib inline
# %config InlineBackend.figure_format='retina' # mac
# %load_ext autoreload
# %autoreload 2
import pandas as pd
import gseapy as gp
import matplotlib.pyplot as plt
gp.__version__
# read in an example gene list
gene_list = pd.read_csv("./tests/data/gene_list.txt",header=None, sep="\t")
gene_list.head()
# convert dataframe or series to list
glist = gene_list.squeeze().str.strip().tolist()
print(glist[:10])
names = gp.get_library_name() # default: Human
names[:10]
yeast = gp.get_library_name(organism='Yeast')
yeast[:10]
# run enrichr
# if you are only intrested in dataframe that enrichr returned, please set no_plot=True
# list, dataframe, series inputs are supported
enr = gp.enrichr(gene_list="./tests/data/gene_list.txt",
gene_sets=['KEGG_2016','KEGG_2013'],
organism='Human', # don't forget to set organism to the one you desired! e.g. Yeast
description='test_name',
outdir='test/enrichr_kegg',
# no_plot=True,
cutoff=0.5 # test dataset, use lower value from range(0,1)
)
# obj.results stores all results
enr.results.head(5)
enr2 = gp.enrichr(gene_list="./tests/data/gene_list.txt",
# or gene_list=glist
description='test_name',
gene_sets="./tests/data/genes.gmt",
background='hsapiens_gene_ensembl', # or the number of genes, e.g 20000
outdir='test/enrichr_kegg2',
cutoff=0.5, # only used for testing.
verbose=True)
enr2.results.head(5)
# simple plotting function
from gseapy.plot import barplot, dotplot
# to save your figure, make sure that ``ofname`` is not None
barplot(enr.res2d,title='KEGG_2013',)
# to save your figure, make sure that ``ofname`` is not None
dotplot(enr.res2d, title='KEGG_2013',cmap='viridis_r')
# !gseapy enrichr -i ./data/gene_list.txt \
# --ds BP2017 \
# -g GO_Biological_Process_2017 \
# -v -o test/enrichr_BP
rnk = pd.read_csv("./tests/data/edb/gsea_data.gsea_data.rnk", header=None, sep="\t")
rnk.head()
# run prerank
# enrichr libraries are supported by prerank module. Just provide the name
# use 4 process to acceralate the permutation speed
# note: multiprocessing may not work on windows
pre_res = gp.prerank(rnk=rnk, gene_sets='KEGG_2016',
processes=4,
permutation_num=100, # reduce number to speed up testing
outdir='test/prerank_report_kegg', format='png', seed=6)
#access results through obj.res2d attribute or obj.results
pre_res.res2d.sort_index().head()
# extract geneset terms in res2d
terms = pre_res.res2d.index
terms
## easy way
from gseapy.plot import gseaplot
# to save your figure, make sure that ofname is not None
gseaplot(rank_metric=pre_res.ranking, term=terms[0], **pre_res.results[terms[0]])
# save figure
# gseaplot(rank_metric=pre_res.ranking, term=terms[0], ofname='your.plot.pdf', **pre_res.results[terms[0]])
# ! gseapy prerank -r temp.rnk -g temp.gmt -o prerank_report_temp
phenoA, phenoB, class_vector = gp.parser.gsea_cls_parser("./tests/data/P53.cls")
#class_vector used to indicate group attributes for each sample
print(class_vector)
gene_exp = pd.read_csv("./tests/data/P53.txt", sep="\t")
gene_exp.head()
print("positively correlated: ", phenoA)
print("negtively correlated: ", phenoB)
# run gsea
# enrichr libraries are supported by gsea module. Just provide the name
gs_res = gp.gsea(data=gene_exp, # or data='./P53_resampling_data.txt'
gene_sets='KEGG_2016', # enrichr library names
cls= './tests/data/P53.cls', # cls=class_vector
# set permutation_type to phenotype if samples >=15
permutation_type='phenotype',
permutation_num=100, # reduce number to speed up test
outdir=None, # do not write output to disk
no_plot=True, # Skip plotting
method='signal_to_noise',
processes=4, seed= 7,
format='png')
#access the dataframe results throught res2d attribute
gs_res.res2d.sort_index().head()
from gseapy.plot import gseaplot, heatmap
terms = gs_res.res2d.index
# Make sure that ``ofname`` is not None, if you want to save your figure to disk
gseaplot(gs_res.ranking, term=terms[0], **gs_res.results[terms[0]])
# plotting heatmap
genes = gs_res.res2d.genes[0].split(";")
# Make sure that ``ofname`` is not None, if you want to save your figure to disk
heatmap(df = gs_res.heatmat.loc[genes], z_score=0, title=terms[0], figsize=(18,6))
# !gseapy gsea -d ./data/P53_resampling_data.txt \
# -g KEGG_2016 -c ./data/P53.cls \
# -o test/gsea_reprot_2 \
# -v --no-plot \
# -t phenotype
# txt, gct file input
ss = gp.ssgsea(data="./tests/data/testSet_rand1200.gct",
gene_sets="./tests/data/randomSets.gmt",
outdir='test/ssgsea_report',
sample_norm_method='rank', # choose 'custom' for your own rank list
permutation_num=0, # skip permutation procedure, because you don't need it
no_plot=True, # skip plotting, because you don't need these figures
processes=4, format='png', seed=9)
ss.res2d.sort_index().head()
# or assign a dataframe, or Series to ssgsea()
ssdf = pd.read_csv("./tests/data/temp.txt", header=None, sep="\t")
ssdf.head()
# dataframe with one column is also supported by ssGSEA or Prerank
# But you have to set gene_names as index
ssdf2 = ssdf.set_index(0)
ssdf2.head()
type(ssdf2)
ssSeries = ssdf2.squeeze()
type(ssSeries)
# reuse data
df = pd.read_csv("./tests/data/P53_resampling_data.txt", sep="\t")
df.head()
# Series, DataFrame Example
# supports dataframe and series
ssgs = []
for i, dat in enumerate([ssdf, ssdf2, ssSeries, df]):
sstemp = gp.ssgsea(data=dat,
gene_sets="./tests/data/genes.gmt",
outdir='test/ssgsea_report_'+str(i),
scale=False, # set scale to False to get real original ES
permutation_num=0, # skip permutation procedure, because you don't need it
no_plot=True, # skip plotting, because you don't need these figures
processes=4, seed=10,
format='png')
ssgs.append(sstemp)
# normalized es save to res2d attri
# one sample input
# NES
ssgs[0].res2d.sort_index().head()
# ES
# convert dict to DataFrame
es = pd.DataFrame(ssgs[-1].resultsOnSamples)
es.sort_index().head()
# if set scale to True, then
# Scaled ES equal to es/gene_numbers
ses = es/df.shape[0]
ses
# NES
# scale or no have no affects on final nes value
nes = ssgs[-1].res2d
nes.sort_index().head()
# set --no-scale to obtain the real original enrichment score
# !gseapy ssgsea -d ./data/testSet_rand1200.gct \
# -g data/temp.gmt \
# -o test/ssgsea_report2 \
# -p 4 --no-plot --no-scale
# run command inside python console
rep = gp.replot(indir="./tests/data", outdir="test/replot_test")
# !gseapy replot -i data -o test/replot_test
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check gseapy version
Step2: 1. (Optional) Convert IDs Using Biomart API
Step3: See all supported enrichr library names
Step4: 2.1 Assign enrichr with pd.Series, pd.DataFrame, or list object
Step5: 2.1.2 Local mode of GO analysis
Step6: 2.1.3 Plotting
Step7: 2.2 Command line usage
Step8: 3. Prerank example
Step9: Leading edge genes save to the final output results
Step10: 3.2 How to generate your GSEA plot inside python console
Step11: 3) Command line usage
Step12: 4. GSEA Example
Step13: 4.2 Show the gsea plots
Step14: 4.3 Command line usage
Step15: 5. Single Sample GSEA example
Step16: 5.2 Access Enrichment Score (ES) and NES
Step17: Note
Step18: 3) command line usage of single sample gsea
Step19: 6. Replot Example
Step20: 6.2 command line usage of replot
|
12,763
|
<ASSISTANT_TASK:>
Python Code:
# sphinx_gallery_thumbnail_number = 2
import os.path as op
import matplotlib.pyplot as plt
import mne
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evokeds = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evokeds)
evoked = mne.read_evokeds(fname, condition='Left Auditory')
evoked.apply_baseline((None, 0)).apply_proj()
print(evoked)
print(evoked.info)
print(evoked.times)
print(evoked.nave) # Number of averaged epochs.
print(evoked.first) # First time sample.
print(evoked.last) # Last time sample.
print(evoked.comment) # Comment on dataset. Usually the condition.
print(evoked.kind) # Type of data, either average or standard_error.
data = evoked.data
print(data.shape)
print('Data from channel {0}:'.format(evoked.ch_names[10]))
print(data[10])
gfp = evoked.copy().pick_types(eeg=True, meg=False).data.std(axis=0)
fig, ax = plt.subplots(1)
ax.plot(evoked.times, gfp * 1e6)  # scale from V to uV
ax.set(xlabel='Time (sec)', ylabel='GFP (uV)')
fig.tight_layout()
evoked = mne.EvokedArray(data, evoked.info, tmin=evoked.times[0])
evoked.plot(time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here for convenience we read the evoked dataset from a file.
Step2: Notice that the reader function returned a list of evoked instances. This is because the file can hold several conditions, so next we read in a single condition by name.
Step3: If you've gone through the tutorials on the raw and epochs data structures, you're already familiar with the info attribute that is printed here.
Step4: The evoked data structure also contains some new attributes easily accessible, such as nave, first, last, comment and kind.
Step5: The data is also easily accessible. Since the evoked data arrays are usually small (n_channels x n_times), they can be pulled out directly as a NumPy array.
Step6: The data is arranged in an array of shape (n_channels, n_times). Notice that, unlike epochs, there is no trial dimension because the trials have been averaged.
Step7: In the same vein, we can quickly extract (and, e.g., plot) the GFP as the standard deviation of the signal across channels.
Step8: If you want to import evoked data from some other system and you have it in a NumPy array, you can construct the evoked object with mne.EvokedArray, as in the last cell.
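A minimal sketch of that last step with made-up values (the channel names, sampling rate and data below are hypothetical, purely for illustration):
import numpy as np
import mne
sfreq = 1000.0  # sampling rate in Hz (hypothetical)
data = np.random.randn(2, 100) * 1e-6  # 2 channels x 100 samples (hypothetical)
info = mne.create_info(ch_names=['EEG 001', 'EEG 002'], sfreq=sfreq, ch_types='eeg')
evoked_from_array = mne.EvokedArray(data, info, tmin=-0.01)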
|
12,764
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import sys
import os
sys.path.insert(0, os.path.abspath('..'))
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
# data is expected under '../../assignment1/cs231n/datasets/cifar-10-batches-py'
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print 'Before batch normalization:'
print ' means: ', a.mean(axis=0)
print ' stds: ', a.std(axis=0)
# Means should be close to zero and stds close to one
print 'After batch normalization (gamma=1, beta=0)'
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print ' mean: ', a_norm.mean(axis=0)
print ' std: ', a_norm.std(axis=0)
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print 'After batch normalization (nontrivial gamma, beta)'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print 'dx difference: ', rel_error(dx1, dx2)
print 'dgamma difference: ', rel_error(dgamma1, dgamma2)
print 'dbeta difference: ', rel_error(dbeta1, dbeta2)
print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
if reg == 0: print
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gcf().set_size_inches(10, 15)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Batch Normalization
Step2: Batch normalization
Step3: Batch Normalization
Step4: Batch Normalization
Step5: Fully Connected Nets with Batch Normalization
Step6: Batchnorm for deep networks
Step7: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
Step8: Batch normalization and initialization. As shown in the sketch below, the layer normalizes each feature over the mini-batch before scaling and shifting, which reduces the sensitivity to the weight initialization scale.
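A minimal numpy sketch of the train-time transformation that batchnorm_forward is expected to apply (an illustration under that assumption only; the actual cs231n layer additionally keeps running statistics for test time and a cache for the backward pass):
import numpy as np
def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # normalize each feature over the mini-batch, then scale and shift
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta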
|
12,765
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import mmlspark
# load raw data from small-sized 30 MB CSV file (trimmed to contain just what we use)
dataFile = "On_Time_Performance_2012_9.csv"
import os, urllib
if not os.path.isfile(dataFile):
urllib.request.urlretrieve("https://mmlspark.azureedge.net/datasets/"+dataFile, dataFile)
flightDelay = spark.createDataFrame(
pd.read_csv(dataFile, dtype={"Month": np.float64, "Quarter": np.float64,
"DayofMonth": np.float64, "DayOfWeek": np.float64,
"OriginAirportID": np.float64, "DestAirportID": np.float64,
"CRSDepTime": np.float64, "CRSArrTime": np.float64}))
# Print information on the dataset we loaded
print("records read: " + str(flightDelay.count()))
print("Schema:")
flightDelay.printSchema()
flightDelay.limit(10).toPandas()
train,test = flightDelay.randomSplit([0.75, 0.25])
from mmlspark import TrainRegressor, TrainedRegressorModel
from pyspark.ml.regression import LinearRegression
from pyspark.ml.feature import StringIndexer
# Convert columns to categorical
catCols = ["Carrier", "DepTimeBlk", "ArrTimeBlk"]
trainCat = train
testCat = test
for catCol in catCols:
simodel = StringIndexer(inputCol=catCol, outputCol=catCol + "Tmp").fit(train)
trainCat = simodel.transform(trainCat).drop(catCol).withColumnRenamed(catCol + "Tmp", catCol)
testCat = simodel.transform(testCat).drop(catCol).withColumnRenamed(catCol + "Tmp", catCol)
lr = LinearRegression().setSolver("l-bfgs").setRegParam(0.1).setElasticNetParam(0.3)
model = TrainRegressor(model=lr, labelCol="ArrDelay").fit(trainCat)
model.write().overwrite().save("flightDelayModel.mml")
flightDelayModel = TrainedRegressorModel.load("flightDelayModel.mml")
scoredData = flightDelayModel.transform(testCat)
scoredData.limit(10).toPandas()
from mmlspark import ComputeModelStatistics
metrics = ComputeModelStatistics().transform(scoredData)
metrics.toPandas()
from mmlspark import ComputePerInstanceStatistics
evalPerInstance = ComputePerInstanceStatistics().transform(scoredData)
evalPerInstance.select("ArrDelay", "Scores", "L1_loss", "L2_loss").limit(10).toPandas()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, import the CSV dataset.
Step2: Split the dataset into train and test sets.
Step3: Train a regressor on dataset with l-bfgs.
Step4: Score the regressor on the test data.
Step5: Compute model metrics against the entire scored dataset
Step6: Finally, compute and show per-instance statistics, demonstrating the usage
|
12,766
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
from sklearn import linear_model
x = np.array([[0, 0], [1, 1], [2, 2]])
y = np.array([0, 1, 2])
print(x,y)
clf = linear_model.LinearRegression()
clf.fit(x, y)
print(clf.coef_)
x_missing = np.array([[0, 0], [1, np.nan], [2, 2]])
print(x_missing, y)
clf = linear_model.LinearRegression()
clf.fit(x_missing, y)
print(clf.coef_)
import pandas as pd
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
[4,1,7,9,0,2,np.nan]], ).T
x.columns =['A', 'B', 'C', 'D', 'E']
y = pd.Series([29.0,
31.2,
63.25,
57.27,
66.3,
26.21,
48.24])
print(x, y)
x.dropna()
x.fillna(value={'A':1000,'B':2000,'C':3000,'D':4000,'E':5000})
x.fillna(value=x.mean())
x_filled = x.fillna(value=x.mean())
print(x_filled)
x_norm = (x_filled - x_filled.min()) / (x_filled.max() - x_filled.min())
print(x_norm)
from sklearn import preprocessing
scaling = preprocessing.MinMaxScaler().fit(x_filled)
scaling.transform(x_filled)
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
['Green','Red','Blue','Blue','Green','Red','Green']], ).T
x.columns = ['A', 'B', 'C', 'D', 'E']
print(x)
x_cat = x.copy()
for val in x['E'].unique():
x_cat['E_{0}'.format(val)] = x_cat['E'] == val
x_cat
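# Note (added): pandas can build the same one-hot columns directly with
# pd.get_dummies(x, columns=['E']); the explicit loop above is kept to show the idea.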
x, x.isnull()
x['B_isnull'] = x['B'].isnull()
x
(x[['A', 'B', 'C', 'D', 'E']] - x[['A', 'B', 'C', 'D', 'E']].mean()) / \
x[['A', 'B', 'C', 'D', 'E']].std()
x_scaled = (x[['A', 'B', 'C', 'D', 'E']] - x[['A', 'B', 'C', 'D', 'E']].mean()) / \
x[['A', 'B', 'C', 'D', 'E']].std()
x_scaled.mean(), x_scaled.std()
x['C_cat'] = x['C'] > 0.125
x
# http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#example-color-exposure-plot-equalize-py
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from skimage import data, img_as_float
from skimage import exposure
matplotlib.rcParams['font.size'] = 8
def plot_img_and_hist(img, axes, bins=256):
"""Plot an image along with its histogram and cumulative histogram."""
img = img_as_float(img)
ax_img, ax_hist = axes
ax_cdf = ax_hist.twinx()
# Display image
ax_img.imshow(img, cmap=plt.cm.gray)
ax_img.set_axis_off()
ax_img.set_adjustable('box-forced')
# Display histogram
ax_hist.hist(img.ravel(), bins=bins, histtype='step', color='black')
ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0))
ax_hist.set_xlabel('Pixel intensity')
ax_hist.set_xlim(0, 1)
ax_hist.set_yticks([])
# Display cumulative distribution
img_cdf, bins = exposure.cumulative_distribution(img, bins)
ax_cdf.plot(bins, img_cdf, 'r')
ax_cdf.set_yticks([])
return ax_img, ax_hist, ax_cdf
# Load an example image
img = data.moon()
# Contrast stretching
p2, p98 = np.percentile(img, (2, 98))
img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
# Equalization
img_eq = exposure.equalize_hist(img)
# Adaptive Equalization
img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
# Display results
fig = plt.figure(figsize=(8, 5))
axes = np.zeros((2,4), dtype=np.object)
axes[0,0] = fig.add_subplot(2, 4, 1)
for i in range(1,4):
axes[0,i] = fig.add_subplot(2, 4, 1+i, sharex=axes[0,0], sharey=axes[0,0])
for i in range(0,4):
axes[1,i] = fig.add_subplot(2, 4, 5+i)
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0])
ax_img.set_title('Low contrast image')
y_min, y_max = ax_hist.get_ylim()
ax_hist.set_ylabel('Number of pixels')
ax_hist.set_yticks(np.linspace(0, y_max, 5))
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1])
ax_img.set_title('Contrast stretching')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2])
ax_img.set_title('Histogram equalization')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_adapteq, axes[:, 3])
ax_img.set_title('Adaptive equalization')
ax_cdf.set_ylabel('Fraction of total intensity')
ax_cdf.set_yticks(np.linspace(0, 1, 5))
# prevent overlap of y-axis labels
fig.tight_layout()
plt.show()
from sklearn.feature_extraction import image
img = data.page()
fig, ax = plt.subplots(1,1)
ax.imshow(img, cmap=plt.cm.gray)
ax.set_axis_off()
plt.show()
print(img.shape)
patches = image.extract_patches_2d(img, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
from sklearn import datasets
digits = datasets.load_digits()
#print(digits.DESCR)
fig, ax = plt.subplots(1,1, figsize=(1,1))
ax.imshow(digits.data[0].reshape((8,8)), cmap=plt.cm.gray, interpolation='nearest')
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train',
categories=['comp.graphics', 'sci.med'], shuffle=True, random_state=0)
print(twenty_train.target_names)
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(X_train_counts.shape)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
print(X_train_tfidf.shape, X_train_tfidf[:5,:15].toarray())
print(twenty_train.data[0])
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data[0:1])
print(X_train_counts[0].toarray())
print(count_vect.vocabulary_.keys())
from sklearn.feature_extraction import image
img = data.page()
fig, ax = plt.subplots(1,1)
ax.imshow(img, cmap=plt.cm.gray)
ax.set_axis_off()
plt.show()
print(img.shape)
from skimage import exposure
# Adaptive Equalization
img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
plt.imshow(img_adapteq, cmap=plt.cm.gray)
plt.show()
patches = image.extract_patches_2d(img, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
from skimage.transform import rescale
im_small = rescale(img, 0.5)
patches = image.extract_patches_2d(im_small, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
count_vect = CountVectorizer(stop_words='english', ngram_range=(1,2))
X_train_counts = count_vect.fit_transform(twenty_train.data[0:1])
print(X_train_counts[0].toarray())
print(count_vect.vocabulary_.keys())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tabular data
Step2: Normalization
Step3: Categorical data
Step4: Exercises
Step6: Image data
Step7: Text
Step8: Exercises
|
12,767
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
%load_ext line_profiler
from __future__ import division
import numpy as np
import glob
import matplotlib.pyplot as plt
import scipy.linalg as sl
import enterprise
from enterprise.pulsar import Pulsar
import enterprise.signals.parameter as parameter
from enterprise.signals import utils
from enterprise.signals import signal_base
from enterprise.signals import selections
from enterprise.signals.selections import Selection
from enterprise.signals import white_signals
from enterprise.signals import gp_signals
import corner
from PTMCMCSampler.PTMCMCSampler import PTSampler as ptmcmc
datadir = enterprise.__path__[0] + '/datafiles/mdc_open1/'
parfiles = sorted(glob.glob(datadir + '/*.par'))
timfiles = sorted(glob.glob(datadir + '/*.tim'))
psrs = []
for p, t in zip(parfiles, timfiles):
psr = Pulsar(p, t)
psrs.append(psr)
##### parameters and priors #####
# Uniform prior on EFAC
efac = parameter.Uniform(0.1, 5.0)
# red noise parameters
# Uniform in log10 Amplitude and in spectral index
log10_A = parameter.Uniform(-18,-12)
gamma = parameter.Uniform(0,7)
##### Set up signals #####
# white noise
ef = white_signals.MeasurementNoise(efac=efac)
# red noise (powerlaw with 30 frequencies)
pl = utils.powerlaw(log10_A=log10_A, gamma=gamma)
rn = gp_signals.FourierBasisGP(spectrum=pl, components=30)
# timing model
tm = gp_signals.TimingModel()
# full model is sum of components
model = ef + rn + tm
# initialize PTA
pta = signal_base.PTA([model(psrs[0])])
print(pta.params)
xs = {par.name: par.sample() for par in pta.params}
print(xs)
# dimension of parameter space
ndim = len(xs)
# initial jump covariance matrix
cov = np.diag(np.ones(ndim) * 0.01**2)
# set up jump groups by red noise groups
ndim = len(xs)
groups = [range(0, ndim)]
groups.extend([[1,2]])
# intialize sampler
sampler = ptmcmc(ndim, pta.get_lnlikelihood, pta.get_lnprior, cov, groups=groups,
outDir='chains/mdc/open1/')
# sampler for N steps
N = 100000
x0 = np.hstack([p.sample() for p in pta.params])
sampler.sample(x0, N, SCAMweight=30, AMweight=15, DEweight=50)
chain = np.loadtxt('chains/mdc/open1/chain_1.txt')
pars = sorted(xs.keys())
burn = int(0.25 * chain.shape[0])
truths = [1.0, 4.33, np.log10(5e-14)]
corner.corner(chain[burn:,:-4], 30, truths=truths, labels=pars);
# find the maximum time span to set GW frequency sampling
tmin = [p.toas.min() for p in psrs]
tmax = [p.toas.max() for p in psrs]
Tspan = np.max(tmax) - np.min(tmin)
##### parameters and priors #####
# white noise parameters
# in this case we just set the value here since all efacs = 1
# for the MDC data
efac = parameter.Constant(1.0)
# red noise parameters
log10_A = parameter.Uniform(-18,-12)
gamma = parameter.Uniform(0,7)
##### Set up signals #####
# white noise
ef = white_signals.MeasurementNoise(efac=efac)
# red noise (powerlaw with 30 frequencies)
pl = utils.powerlaw(log10_A=log10_A, gamma=gamma)
rn = gp_signals.FourierBasisGP(spectrum=pl, components=30, Tspan=Tspan)
# gwb
# We pass this signal the power-law spectrum as well as the standard
# Hellings and Downs ORF
orf = utils.hd_orf()
crn = gp_signals.FourierBasisCommonGP(pl, orf, components=30, name='gw', Tspan=Tspan)
# timing model
tm = gp_signals.TimingModel()
# full model is sum of components
model = ef + rn + tm + crn
# initialize PTA
pta = signal_base.PTA([model(psr) for psr in psrs])
# initial parameters
xs = {par.name: par.sample() for par in pta.params}
# dimension of parameter space
ndim = len(xs)
# initial jump covariance matrix
cov = np.diag(np.ones(ndim) * 0.01**2)
# set up jump groups by red noise groups
ndim = len(xs)
groups = [range(0, ndim)]
groups.extend(map(list, zip(range(0,ndim,2), range(1,ndim,2))))
sampler = ptmcmc(ndim, pta.get_lnlikelihood, pta.get_lnprior, cov, groups=groups,
outDir='chains/mdc/open1_gwb/')
# sampler for N steps
N = 100000
x0 = np.hstack([p.sample() for p in pta.params])
sampler.sample(x0, N, SCAMweight=30, AMweight=15, DEweight=50)
chain = np.loadtxt('chains/mdc/open1_gwb/chain_1.txt')
pars = sorted(xs.keys())
burn = int(0.25 * chain.shape[0])
corner.corner(chain[burn:,-6:-4], 40, labels=pars[-2:], smooth=True, truths=[4.33, np.log10(5e-14)]);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get par and tim files
Step2: Load pulsars into Pulsar objects
Step3: Setup and run a simple noise model on a single pulsar
Step4: We can see which parameters we are going to be searching over with
Step5: Get initial parameters
Step6: Note that the rest of the analysis here is dependent on the sampling method and not on enterprise itself.
Step7: Sample!
Step8: Examine chain output
Step9: Run full PTA GWB analysis
Step10: Set up sampler
Step11: Plot output
|
12,768
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras
img_rows, img_cols = 28, 28
num_classes = 10
def prep_data(raw):
y = raw[:, 0]
out_y = keras.utils.to_categorical(y, num_classes)
x = raw[:,1:]
num_images = raw.shape[0]
out_x = x.reshape(num_images, img_rows, img_cols, 1)
out_x = out_x / 255
return out_x, out_y
fashion_file = "../input/fashionmnist/fashion-mnist_train.csv"
fashion_data = np.loadtxt(fashion_file, skiprows=1, delimiter=',')
x, y = prep_data(fashion_data)
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning.exercise_7 import *
print("Setup Complete")
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D
# Your Code Here
____
# Check your answer
q_1.check()
#_COMMENT_IF(PROD)_
q_1.solution()
# Your code here
____
# Check your answer
q_2.check()
# q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
# Your code here
____
# Check your answer
q_3.check()
# q_3.solution()
# Your code to compile the model in this cell
____
# Check your answer
q_4.check()
# q_4.solution()
# Your code to fit the model here
____
# Check your answer
q_5.check()
#_COMMENT_IF(PROD)_
q_5.solution()
# Your code below
____
# Don't remove this line (ensures comptibility with tensorflow 2.0)
second_fashion_model.history.history['val_acc'] = second_fashion_model.history.history['val_accuracy']
# Check your answer
q_6.check()
#_COMMENT_IF(PROD)_
q_6.solution()
#%%RM_IF(PROD)%%
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D
fashion_model = Sequential()
q_1.assert_check_passed()
fashion_model.add(Conv2D(12,
activation='relu',
kernel_size=3,
input_shape = (img_rows, img_cols, 1)))
q_2.assert_check_passed()
fashion_model.add(Conv2D(20, activation='relu', kernel_size=3))
fashion_model.add(Conv2D(20, activation='relu', kernel_size=3))
fashion_model.add(Flatten())
fashion_model.add(Dense(100, activation='relu'))
fashion_model.add(Dense(10, activation='softmax'))
q_3.assert_check_passed()
fashion_model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
q_4.assert_check_passed()
# 1 epoch and high val_split to speed up testing
fashion_model.fit(x, y, batch_size=100, epochs=1, validation_split=0.5)
q_5.assert_check_passed()
second_fashion_model = Sequential()
second_fashion_model.add(Conv2D(12,
activation='relu',
kernel_size=3,
input_shape = (img_rows, img_cols, 1)))
# Changed kernel sizes to be 2
second_fashion_model.add(Conv2D(20, activation='relu', kernel_size=2))
second_fashion_model.add(Conv2D(20, activation='relu', kernel_size=2))
# added an addition Conv2D layer
second_fashion_model.add(Conv2D(20, activation='relu', kernel_size=2))
second_fashion_model.add(Flatten())
second_fashion_model.add(Dense(100, activation='relu'))
# It is important not to change the last layer. First argument matches number of classes. Softmax guarantees we get reasonable probabilities
second_fashion_model.add(Dense(10, activation='softmax'))
second_fashion_model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# 1 epoch to speed up testing
second_fashion_model.fit(x, y, batch_size=100, epochs=1, validation_split=0.2)
q_6.assert_check_passed()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) Start the model
Step2: 2) Add the first layer
Step3: 3) Add the remaining layers
Step4: 4) Compile Your Model
Step5: 5) Fit The Model
Step6: 6) Create A New Model
|
12,769
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
print(mnist.train.images.shape[1])
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, shape=[None, image_size], name="inputs")
targets_ = tf.placeholder(tf.float32, shape=[None, image_size], name="targets")
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name="output")
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step5: Checking out the results
|
12,770
|
<ASSISTANT_TASK:>
Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from gensim.summarization import summarize
text = "Thomas A. Anderson is a man living two lives. By day he is an " + \
"average computer programmer and by night a hacker known as " + \
"Neo. Neo has always questioned his reality, but the truth is " + \
"far beyond his imagination. Neo finds himself targeted by the " + \
"police when he is contacted by Morpheus, a legendary computer " + \
"hacker branded a terrorist by the government. Morpheus awakens " + \
"Neo to the real world, a ravaged wasteland where most of " + \
"humanity have been captured by a race of machines that live " + \
"off of the humans' body heat and electrochemical energy and " + \
"who imprison their minds within an artificial reality known as " + \
"the Matrix. As a rebel against the machines, Neo must return to " + \
"the Matrix and confront the agents: super-powerful computer " + \
"programs devoted to snuffing out Neo and the entire human " + \
"rebellion. "
print ('Input text:')
print (text)
print ('Summary:')
print (summarize(text))
print (summarize(text, split=True))
print ('Summary:')
print (summarize(text, ratio=0.5))
print ('Summary:')
print (summarize(text, word_count=50))
from gensim.summarization import keywords
print ('Keywords:')
print (keywords(text))
import requests
text = requests.get('http://rare-technologies.com/the_matrix_synopsis.txt').text
print ('Summary:')
print (summarize(text, ratio=0.01))
print ('\nKeywords:')
print (keywords(text, ratio=0.01))
import requests
text = requests.get('http://rare-technologies.com/the_big_lebowski_synopsis.txt').text
print ('Summary:')
print (summarize(text, ratio=0.01))
print ('\nKeywords:')
print (keywords(text, ratio=0.01))
import requests
from gensim.summarization import mz_keywords
text=requests.get("http://www.gutenberg.org/files/49679/49679-0.txt").text
mz_keywords(text,scores=True,threshold=0.001)
mz_keywords(text,scores=True,weighted=False,threshold=1.0)
mz_keywords(text,scores=True,weighted=False,threshold="auto")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will try summarizing a small toy example; later we will use a larger piece of text. In reality, the text is too small, but it suffices as an illustrative example.
Step2: To summarize this text, we pass the <b>raw string data</b> as input to the function "summarize", and it will return a summary.
Step3: Use the "split" option if you want a list of strings instead of a single string.
Step4: You can adjust how much text the summarizer outputs via the "ratio" parameter or the "word_count" parameter. Using the "ratio" parameter, you specify what fraction of sentences in the original text should be returned as output. Below we specify that we want 50% of the original text (the default is 20%).
Step5: Using the "word_count" parameter, we specify the maximum amount of words we want in the summary. Below we have specified that we want no more than 50 words.
Step6: As mentioned earlier, this module also supports <b>keyword</b> extraction. Keyword extraction works in the same way as summary generation (i.e. sentence extraction), in that the algorithm tries to find words that are important or seem representative of the entire text. The keywords are not always single words; in the case of multi-word keywords, they are typically all nouns.
Step7: <h2>Larger example</h2>
Step8: If you know this movie, you see that this summary is actually quite good. We also see that some of the most important characters (Neo, Morpheus, Trinity) were extracted as keywords.
Step9: This time around, the summary is not of high quality, as it does not tell us much about the movie. In a way, this might not be the algorithm's fault; rather, this text simply doesn't contain one or two sentences that capture the essence of the text as in "The Matrix" synopsis.
Step10: By default, the algorithm weights the entropy by the overall frequency of the word in the document. We can remove this weighting by setting weighted=False
Step11: When this option is used, it is possible to calculate a threshold automatically from the number of blocks
|
12,771
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/",one_hot=True)
type(mnist)
mnist.train.images
mnist.train.num_examples
mnist.test.num_examples
mnist.validation.num_examples
import matplotlib.pyplot as plt
%matplotlib inline
mnist.train.images[1].shape
plt.imshow(mnist.train.images[1].reshape(28,28))
plt.imshow(mnist.train.images[1].reshape(28,28),cmap='gist_gray')
mnist.train.images[1].max()
plt.imshow(mnist.train.images[1].reshape(784,1))
plt.imshow(mnist.train.images[1].reshape(784,1),cmap='gist_gray',aspect=0.02)
x = tf.placeholder(tf.float32,shape=[None,784])
# 10 because 0-9 possible numbers
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
# Create the Graph
y = tf.matmul(x,W) + b
y_true = tf.placeholder(tf.float32,[None,10])
# Cross Entropy
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_true, logits=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)
train = optimizer.minimize(cross_entropy)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
# Train the model for 1000 steps on the training set
# Using built in batch feeder from mnist for convenience
for step in range(1000):
batch_x , batch_y = mnist.train.next_batch(100)
sess.run(train,feed_dict={x:batch_x,y_true:batch_y})
# Test the Train Model
matches = tf.equal(tf.argmax(y,1),tf.argmax(y_true,1))
acc = tf.reduce_mean(tf.cast(matches,tf.float32))
print(sess.run(acc,feed_dict={x:mnist.test.images,y_true:mnist.test.labels}))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Alternative sources of the data just in case
Step2: Visualizing the Data
Step3: Create the Model
Step4: Loss and Optimizer
Step5: Create Session
|
12,772
|
<ASSISTANT_TASK:>
Python Code:
from poliastro.atmosphere import COESA62, COESA76
from astropy import units as u
import numpy as np
import matplotlib.pyplot as plt
# We build the atmospheric instances
coesa62 = COESA62()
coesa76 = COESA76()
# Create the figure
fig, ax = plt.subplots(figsize=(10,10))
ax.set_title("U.S Standard Atmospheres")
# Collect all atmospheric models and define their plotting properties
atm_models = {coesa62: ["--r", "r", "Coesa 1962"], coesa76: ["-b", "b", "Coesa 1976"]}
# Solve atmospheric temperature for each of the models
for atm in atm_models:
z_span = np.linspace(0, 86, 100) * u.km
T_span = np.array([]) * u.K
for z in z_span:
# We discard density and pressure
T = atm.temperature(z)
T_span = np.append(T_span, T)
# Temperature plot
ax.plot(T_span, z_span, atm_models[atm][0], label=atm_models[atm][-1])
ax.plot(atm.Tb_levels[:8], atm.zb_levels[:8], atm_models[atm][1] + "o")
ax.set_xlim(150, 300)
ax.set_ylim(0, 100)
ax.set_xlabel("Temperature $[K]$")
ax.set_ylabel("Altitude $[km]$")
ax.legend()
# Add some information on the plot
ax.annotate(
"Tropopause",
xy=(coesa76.Tb_levels[1].value, coesa76.zb_levels[1].value),
xytext=(coesa76.Tb_levels[1].value + 10, coesa76.zb_levels[1].value + 5),
arrowprops=dict(arrowstyle="simple", facecolor="black")
)
ax.annotate(
"Stratopause",
xy=(coesa76.Tb_levels[4].value, coesa76.zb_levels[4].value),
xytext=(coesa76.Tb_levels[4].value - 25, coesa76.zb_levels[4].value + 5),
arrowprops=dict(arrowstyle="simple", facecolor="black")
)
ax.annotate(
"Mesopause",
xy=(coesa76.Tb_levels[7].value, coesa76.zb_levels[7].value),
xytext=(coesa76.Tb_levels[7].value + 10, coesa76.zb_levels[7].value + 5),
arrowprops=dict(arrowstyle="simple", facecolor="black")
)
# Layers in the atmosphere
for h in [11.019, 47.350, 86]:
ax.axhline(h, color='k', linestyle='--', xmin=0.0, xmax=0.35)
ax.axhline(h, color='k', linestyle='-', xmin=0.0, xmax=0.15)
layer_names = {"TROPOSPHERE": 5, "STRATOSPHERE": 30, "MESOSPHERE": 65, "THERMOSPHERE": 90}
for name in layer_names:
ax.annotate(
name,
xy=(152, layer_names[name]),
xytext=(152, layer_names[name]),
)
# We create the basis for the figure
fig, axs = plt.subplots(1, 3, figsize=(12, 5))
fig.suptitle("State variables against altitude", fontweight="bold")
fig.text(0.04, 0.5, 'Altitude [km]', va='center', rotation='vertical')
# Complete altitude range and initialization of state variables sets
alt_span = np.linspace(0, 1000, 1001) * u.km
T_span = np.array([]) * u.K
p_span = np.array([]) * u.Pa
rho_span = np.array([]) * u.kg / u.m ** 3
# We solve for each property at given altitude
for alt in alt_span:
T, p, rho = coesa76.properties(alt)
T_span = np.append(T_span, T)
p_span = np.append(p_span, p.to(u.Pa))
rho_span = np.append(rho_span, rho)
# Temperature plot
axs[0].set_title("Temperature")
axs[0].set_xlabel("T [K]")
axs[0].set_xlabel("Altitude [K]")
axs[0].plot(T_span, alt_span)
# Pressure plot
axs[1].set_title("Pressure")
axs[1].set_xlabel("p [Pa]")
axs[1].plot(p_span, alt_span)
axs[1].set_xscale('log')
# Density plot
axs[2].set_title("Density")
axs[2].set_xlabel(r"$\rho$ [kg/m3]")
axs[2].plot(rho_span, alt_span)
axs[2].set_xscale('log')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Comparing coesa62 and coesa76
Step2: Temperature, pressure and density distributions
|
12,773
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import string
from sklearn.ensemble import GradientBoostingClassifier
def read_file(filename):
with open(filename) as f:
content = f.readlines()
y = [line[0] for line in content]
X = [line[2:].strip() for line in content]
return X,y
X_train,y_train = read_file('Names_data_train.txt')
X_test,y_test = read_file('Names_data_test.txt')
class Gradient_Boosting_Estimator():
'''
Class for training a gradient boosting, rule-based estimator on the letters
Parameter is the number of letters of the word to consider
'''
def __init__( self, letters ):
self.letters = letters
self.gbes = GradientBoostingClassifier()
# encode the names as fixed-length ordinal features and fit the gradient boosting classifier
def fit( self, X, y) :
# convert to numeric entries
ty = np.zeros( len(y) )
for k in range( len(y) ):
if y[k]=='+':
ty[k] = 1
tX = np.empty( (0, self.letters) )
for mys in X:
if len(mys) < self.letters:
# add spaces if string is too short
mys += ( ' ' * (self.letters-len(mys) ) )
tX = np.vstack( (tX, [ord(x) for x in mys[0:self.letters] ] ) )
# fit the classifier (taken from the SciKit-Learn library)
self.gbes.fit(tX, ty)
# perform the prediction by encoding each name in the same way
# and querying the trained gradient boosting classifier
def predict(self, X):
rety = ['+' for _ in X]
for idx, elem_X in enumerate(X):
# add spaces if string is too short
elem_X += ( ' ' * max(0,self.letters-len(elem_X) ) )
elem_numeric = np.array([ord(x) for x in elem_X[0:self.letters]])
rv = self.gbes.predict(elem_numeric.reshape(1,-1))
if rv == 0:
rety[idx] = '-'
return rety
clf = Gradient_Boosting_Estimator(10)
clf.fit(X_train,y_train)
y = clf.predict(X_test)
errors = 0
for idx,value in enumerate(y_test):
print(value,'predicted as:', y[idx], ' (',X_test[idx],')')
if value != y[idx]:
errors += 1
print('Prediction errors: %d (error rate %1.2f %%)' % (errors, errors/len(y)*100))
# find optimal number of errors
for letter in range(1,10):
clf = Gradient_Boosting_Estimator(letter)
clf.fit(X_train,y_train)
y = clf.predict(X_test)
errors = 0
for idx,k in enumerate(y_test):
if k != y[idx]:
errors += 1
print('%d letters: %d prediction errors (error rate %1.2f %%)' % (letter, errors,errors*100/len(y_test)))
# Train with 5 letters
clf = Gradient_Boosting_Estimator(5)
clf.fit(X_train,y_train)
print(clf.predict(['Xavier Jones']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load file from data and convert to training set and test set (reading from two distinct files)
Step2: A simple class that converts the string into numbers and then trains a simple classifier using the gradient boosting technique. The resulting gradient boosting classifier is essentially a rule-based system, where the prediction is derived from learned threshold rules on the encoded letters.
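A small illustration of the encoding step (the name below is hypothetical, not taken from the data files): each of the first `letters` characters is mapped to its ordinal code, and shorter names are padded with spaces.
name = 'Ada'
letters = 5
padded = name + ' ' * (letters - len(name))
print([ord(c) for c in padded])  # -> [65, 100, 97, 32, 32]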
|
12,774
|
<ASSISTANT_TASK:>
Python Code:
import notebook
from __future__ import print_function
from jupyter_core.paths import jupyter_data_dir, jupyter_path
print(jupyter_data_dir())
print(jupyter_path())
! sudo jupyter nbextension install sas_kernel/sas_kernel/nbextensions/showSASLog
if notebook.nbextensions.check_nbextension('showSASLog', user=False):
E = notebook.nbextensions.EnableNBExtensionApp()
E.enable_nbextension('showSASLog/main')
else:
print ("Extension not found")
! sudo jupyter nbextension install /root/sas_kernel/sas_kernel/nbextensions/theme
if notebook.nbextensions.check_nbextension('theme', user=False):
E = notebook.nbextensions.EnableNBExtensionApp()
E.enable_nbextension('theme/theme_selector')
else:
print ("Extension not found")
! jupyter nbextension install sas_kernel/sas_kernel/nbextensions/showSASLog --user
if notebook.nbextensions.check_nbextension('showSASLog', user=True):
E = notebook.nbextensions.EnableNBExtensionApp()
E.enable_nbextension('showSASLog/main')
else:
print ("Extension not found")
! jupyter nbextension install /root/sas_kernel/sas_kernel/nbextensions/theme --user
if notebook.nbextensions.check_nbextension('theme', user=True):
E = notebook.nbextensions.EnableNBExtensionApp()
E.enable_nbextension('theme/theme_selector')
else:
print ("Extension not found")
from notebook.services.config import ConfigManager
from IPython.display import HTML
ip = get_ipython()
cm = ConfigManager(parent=ip, profile_dir=ip.profile_dir.location)
extensions =cm.get('notebook')
table = ""
for ext in extensions['load_extensions']:
table += "<tr><td>%s</td>\n" % (ext)
top = """
<table border="1">
<tr>
<th>Extension name</th>
</tr>
"""
bottom = """
</table>
"""
HTML(top + table + bottom)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To Install Systemwide
Step2: This python code will check on the nbextension in systemwide folders (user=False is the flag for this)
Step3: To install for the Current User
Step4: This python code will check on the nbextension in user folders ~/ (user=True is the flag for this)
Step7: Check to see what NBExtensions are Installed
|
12,775
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

X = np.linspace(0, 20, 100)
def f(x):
if x < 7:
return 'a', 2. + np.random.random()
elif x < 14:
return 'b', 4 + np.random.random()
else:
return 'c', 6 + np.random.random()
K, Y = zip(*[f(x) for x in X])
colors = plt.get_cmap('Set1')
categories = ['a', 'b', 'c']
plt.scatter(X, Y, c=[colors(categories.index(k)*20) for k in K])
plt.show()
bycategory = [ [Y[i] for i in xrange(len(Y)) if K[i] == k] for k in categories ]
plt.figure(figsize=(10, 5))
plt.subplot(121)
plt.boxplot(bycategory)
plt.ylim(0, 8)
plt.title('ANOVA')
plt.subplot(122)
plt.boxplot(bycategory, 0, 'rs', 0)
plt.title('LDA')
plt.xlim(0, 8)
plt.show()
X = [np.linspace(0, 7, 50),
np.linspace(2, 10, 50),
np.linspace(7, 16, 50)]
plt.figure(figsize=(10, 4))
k = 1
for x in X:
mu_k = x.mean()
plt.plot(x, stats.norm.pdf(x, loc=mu_k))
plt.plot([mu_k, mu_k], [0, 0.5], c='k')
plt.text(mu_k + 0.2, 0.5, "$\mu_%i$" % k, size=18)
k += 1
plt.ylim(0, 0.75)
plt.xlabel('Predictor', size=18)
plt.ylabel('Probability', size=18)
plt.show()
class LDAModel_1D(object):
    """Linear Discriminant Analysis with one predictor.

    Parameters
    ----------
    X_bound : list
        Boundary points between categories in ``K_ordered``.
    K_ordered : list
        Categories, ordered by mean.
    """
def __init__(self, mu, sigma, K_labels):
assert len(mu) == len(sigma)
assert len(K_labels) == len(mu)
self.K = len(K_labels)
self.K_labels = K_labels
self.mu = mu
self.sigma = sigma
    def find_bounds(self):
        # Order the classes by mean and place each boundary halfway between neighbouring means.
        order = np.argsort(self.mu)
        self.K_ordered = [self.K_labels[i] for i in order]
        mu_ordered = [self.mu[i] for i in order]
        self.X_bound = []
        for i in xrange(1, len(self.K_ordered)):
            mu_0, mu_1 = mu_ordered[i-1], mu_ordered[i]
            self.X_bound.append(mu_0 + (mu_1 - mu_0)/2.)
def _predict(self, x):
for i in xrange(self.K):
if i == 0:
comp = lambda x: x <= self.X_bound[0]
elif i == self.K - 1:
comp = lambda x: x >= self.X_bound[-1]
else:
comp = lambda x: self.X_bound[i-1] < x < self.X_bound[i]
if comp(x):
return self.K_ordered[i]
def predict(self, x, criterion=None):
if criterion:
return self.K_labels[criterion(self.posterior(x))]
return self.K_labels[np.argmax(self.posterior(x))]
def posterior(self, x):
post_values = [stats.norm.pdf(x, loc=self.mu[i], scale=self.sigma[i])
for i in xrange(self.K)]
return [pv/sum(post_values) for pv in post_values]
def lda(K_x, X):
    """Calculate the boundary points between categories.

    Parameters
    ----------
    K_x : list
        Known category for each observation.
    X : list
        Observations of a continuous variable.

    Returns
    -------
    model : :class:`.LDAModel_1D`
    """
K = set(K_x)
X_grouped = {k:[] for k in list(K)}
for k, x in zip(K_x, X):
X_grouped[k].append(x)
    K_labels, mu = zip(*[(k, np.mean(v)) for k, v in X_grouped.iteritems()])
    sigma = [np.mean([np.var(v) for v in X_grouped.values()]) for i in xrange(len(K_labels))]
return LDAModel_1D(mu, sigma, K_labels)
X = np.linspace(0, 20, 100)
def f(x):
if x < 7:
return 'a', 2. + np.random.random()
elif x < 14:
return 'b', 4 + np.random.random()
else:
return 'c', 6 + np.random.random()
K, Y = zip(*[f(x) for x in X])
model = lda(K, X)
iris = pd.read_csv('data/iris.csv')
iris_training = pd.concat([iris[iris.Species == 'setosa'].sample(25, random_state=8675309),
iris[iris.Species == 'versicolor'].sample(25, random_state=8675309),
iris[iris.Species == 'virginica'].sample(25, random_state=8675309)])
iris_test = iris.loc[iris.index.difference(iris_training.index)]
iris_training.groupby('Species')['Sepal.Length'].hist()
plt.show()
model = lda(iris_training.Species, iris_training['Sepal.Length'])
predictions = np.array([model.predict(x) for x in iris_test['Sepal.Length']])
truth = iris_test['Species'].values
results = pd.DataFrame(np.array([predictions, truth]).T,
columns=['Prediction', 'Truth'])
vcounts = results.groupby('Prediction').Truth.value_counts()
vcounts_dense = np.zeros((3,3))
for i in xrange(model.K):
k_i = model.K_labels[i]
for j in xrange(model.K):
k_j = model.K_labels[j]
try:
vcounts_dense[i,j] = vcounts[k_i][k_j]
except KeyError:
pass
comparison = pd.DataFrame(vcounts_dense, columns=model.K_labels)
comparison['Truth'] = model.K_labels
comparison
x = stats.norm.rvs(loc=4, scale=1.3, size=200)
def qda(K_x, X):
K = set(K_x)
X_grouped = {k:[] for k in list(K)}
for k, x in zip(K_x, X):
X_grouped[k].append(x)
# Maximize f to find mu and sigma
params_k = {}
for k, x in X_grouped.iteritems():
guess = (np.mean(x), np.std(x))
# Variance must be greater than 0.
        constraints = {'type': 'ineq', 'fun': lambda params: params[1]}
f = lambda params: np.sum(((-1.*(x - params[0])**2)/(2.*params[1]**2)) - np.log(params[1]*np.sqrt(2.*np.pi)))
params_k[k] = optimize.minimize(lambda params: -1.*f(params), guess, constraints=constraints).x
K_ordered = np.array(params_k.keys())[np.argsort(np.array(zip(*params_k.values())[0]))]
X_bound = []
for i in xrange(1, len(K_ordered)):
k_0, k_1 = K_ordered[i-1], K_ordered[i]
mu_0, sigma2_0 = params_k[k_0]
mu_1, sigma2_1 = params_k[k_1]
delta_0 = lambda x: ((-1.*(x - mu_0)**2)/(2.*sigma2_0**2)) - np.log(sigma2_0*np.sqrt(2.*np.pi))
delta_1 = lambda x: ((-1.*(x - mu_1)**2)/(2.*sigma2_1**2)) - np.log(sigma2_1*np.sqrt(2.*np.pi))
bound = lambda x: np.abs(delta_0(x) - delta_1(x))
o = optimize.minimize(bound, mu_0 + (mu_1-mu_0))
X_bound.append(o.x)
mu, sigma = zip(*params_k.values())
return LDAModel_1D(mu, sigma, params_k.keys())
qmodel = qda(iris_training.Species, iris_training['Sepal.Length'])
qpredictions = np.array([qmodel.predict(x) for x in iris_test['Sepal.Length']])
plt.figure(figsize=(15, 5))
X_ = np.linspace(0, 20, 200)
iris_training.groupby('Species')['Sepal.Length'].hist()
# iris_test.groupby('Species')['Sepal.Length'].hist()
ax = plt.gca()
ax2 = ax.twinx()
for k in qmodel.K_labels:
i = qmodel.K_labels.index(k)
ax2.plot(X_, stats.norm.pdf(X_, loc=qmodel.mu[i], scale=qmodel.sigma[i]),
label='{0}, $\mu={1}$, $\sigma={2}$'.format(k, qmodel.mu[i], qmodel.sigma[i]), lw=4)
plt.legend(loc=2)
plt.xlim(2, 9)
plt.show()
results = pd.DataFrame(np.array([qpredictions, truth]).T,
columns=['Prediction', 'Truth'])
vcounts = results.groupby('Prediction').Truth.value_counts()
vcounts_dense = np.zeros((3,3))
for i in xrange(qmodel.K):
k_i = qmodel.K_labels[i]
for j in xrange(qmodel.K):
k_j = qmodel.K_labels[j]
try:
vcounts_dense[i,j] = vcounts[k_i][k_j]
except KeyError:
pass
comparison = pd.DataFrame(vcounts_dense, columns=qmodel.K_labels)
comparison['Truth'] = qmodel.K_labels
comparison
c = np.array(zip(qpredictions, truth)).T
float((c[0] == c[1]).sum())/c.shape[1]
Hemocrit = pd.read_csv('data/Hemocrit.csv')
model = lda(Hemocrit.status, Hemocrit.hemocrit)
# Histogram of hemocrit values for cheaters and non-cheaters.
Hemocrit[Hemocrit.status == 'Cheat'].hemocrit.hist(histtype='step')
Hemocrit[Hemocrit.status == 'Clean'].hemocrit.hist(histtype='step')
plt.ylim(0, 40)
plt.ylabel('N')
# Probability of being a cheater (or not) as a function of hemocrit.
ax = plt.gca()
ax2 = ax.twinx()
R = np.linspace(0, 100, 500)
post = np.array([model.posterior(r) for r in R])
ax2.plot(R, post[:, 0], label=model.K_labels[0])
ax2.plot(R, post[:, 1], label=model.K_labels[1])
plt.ylabel('P(Y=k)')
plt.xlabel('Hemocrit')
plt.legend()
plt.xlim(40, 60)
plt.title('Criterion: P > 0.5')
plt.show()
predictions = [model.predict(h) for h in Hemocrit.hemocrit]
truth = Hemocrit.status.values
confusion = pd.DataFrame(np.array([predictions, truth]).T, columns=('Prediction', 'Truth'))
confusion.groupby('Prediction').Truth.value_counts()
qmodel = qda(Hemocrit.status, Hemocrit.hemocrit)
qpredictions = np.array([qmodel.predict(h) for h in Hemocrit.hemocrit])
truth = Hemocrit.status.values
qconfusion = pd.DataFrame(np.array([qpredictions, truth]).T, columns=('Prediction', 'Truth'))
plt.figure(figsize=(5, 5))
plt.text(0.25, 0.75, 'TN', size=18)
plt.text(0.75, 0.75, 'FP', size=18)
plt.text(0.25, 0.25, 'FN', size=18)
plt.text(0.75, 0.25, 'TN', size=18)
plt.xticks([0.25, 0.75], ['Neg', 'Pos'], size=20)
plt.yticks([0.25, 0.75], ['Pos', 'Neg'], size=20)
plt.ylabel('Truth', size=24)
plt.xlabel('Prediction', size=24)
plt.title('Confusion Matrix', size=26)
plt.show()
plt.figure()
X = np.linspace(0., 0.5, 200)
f = lambda x: 0.001 if x < 0.01 else 0.8
plt.plot(X, map(f, X))
plt.ylabel('True positive rate (power)')
plt.xlabel('False positive rate (type 1 error)')
plt.show()
ROC = []
C = []
for p in np.arange(0.5, 1.0, 0.005):
criterion = lambda posterior: 0 if posterior[0] > p else 1
predictions = [model.predict(h, criterion) for h in Hemocrit.hemocrit]
truth = Hemocrit.status.values
confusion = pd.DataFrame(np.array([predictions, truth]).T, columns=('Prediction', 'Truth'))
FP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Clean'].shape[0]
N = confusion[confusion['Truth'] == 'Clean'].shape[0]
FP_rate = float(FP)/N
TP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Cheat'].shape[0]
P = confusion[confusion['Truth'] == 'Cheat'].shape[0]
TP_rate = float(TP)/P
ROC.append((FP_rate, TP_rate))
C.append(p)
plt.title('ROC curve for LDA')
FP_rate, TP_rate = zip(*ROC)
plt.plot(FP_rate, TP_rate)
for i in xrange(0, len(FP_rate), 10):
plt.plot(FP_rate[i], TP_rate[i], 'ro')
plt.text(FP_rate[i]+0.001, TP_rate[i]+0.01, C[i])
plt.xlim(-0.01, 0.14)
plt.ylim(0, .7)
plt.ylabel('True positive rate (power)')
plt.xlabel('False positive rate (type 1 error)')
plt.show()
QROC = []
C = []
for p in np.arange(0.5, 1.0, 0.005):
criterion = lambda posterior: 0 if posterior[0] > p else 1
predictions = [qmodel.predict(h, criterion) for h in Hemocrit.hemocrit]
truth = Hemocrit.status.values
confusion = pd.DataFrame(np.array([predictions, truth]).T, columns=('Prediction', 'Truth'))
FP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Clean'].shape[0]
N = confusion[confusion['Truth'] == 'Clean'].shape[0]
FP_rate = float(FP)/N
TP = confusion[confusion['Prediction'] == 'Cheat'][confusion['Truth'] == 'Cheat'].shape[0]
P = confusion[confusion['Truth'] == 'Cheat'].shape[0]
TP_rate = float(TP)/P
QROC.append((FP_rate, TP_rate))
C.append(p)
plt.title('ROC curve for QDA')
FP_rate, TP_rate = zip(*QROC)
plt.plot(FP_rate, TP_rate)
for i in xrange(0, len(FP_rate), 10):
plt.plot(FP_rate[i], TP_rate[i], 'ro')
plt.text(FP_rate[i]+0.001, TP_rate[i]+0.01, C[i])
plt.xlim(-0.01, 0.14)
plt.ylim(0, .7)
plt.ylabel('True positive rate (power)')
plt.xlabel('False positive rate (type 1 error)')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: LDA is like inverted ANOVA
Step2: LDA assumes that the variance in each group is the same, and that the predictor(s) are normally distributed for each group. In other words, different $\mu_k$, one shared $\sigma$.
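As a toy illustration of that shared-variance assumption (the numbers and variable names below are made up for this sketch), the single σ can be estimated by pooling the within-group variances:

import numpy as np
groups = {'a': [2.1, 2.7, 2.4], 'b': [4.2, 4.8, 4.5]}                  # toy data
mu_k = {k: np.mean(v) for k, v in groups.items()}                      # one mean per class
sigma_shared = np.sqrt(np.mean([np.var(v) for v in groups.values()]))  # one pooled sigma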
Step5: Recall Bayes Theorem
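In this notation, with class prior $\pi_k$ and per-class density $f_k(x)$, Bayes' theorem gives $P(Y=k \mid X=x) = \frac{\pi_k f_k(x)}{\sum_{l=1}^{K} \pi_l f_l(x)}$; the posterior method above implements the special case of equal priors by simply normalising the density values.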
Step6: Iris Example
Step7: $\log P(Y=k \mid X=x) \propto -\frac{(x-\mu_k)^2}{2\sigma_k^2} - \log(\sigma_k\sqrt{2\pi})$
Step8: The default approach was to predict 'Cheat' when $P(Cheater\big|X) > 0.5$.
Step9: Confusion matrix
Step10: Trying the same thing, but with QDA
Step11: Receiver Operating Characteristic (ROC) curve
Step12: The true positive rate, or Power (or Sensitivity) is $\frac{TP}{P}$ and the Type 1 error is $\frac{FP}{N}$. The ROC curve shows Power vs. Type 1 error. Ideally, we can achieve a high true positive rate at a very low false positive rate
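A minimal sketch of those two rates from raw counts (TP, FP, P, N defined as above):

def roc_point(TP, FP, P, N):
    # false positive rate (type 1 error) and true positive rate (power / sensitivity)
    return FP / float(N), TP / float(P)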
Step13: With the hemocrit example
|
12,776
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import, division, print_function, unicode_literals
try:
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.__version__
# To generate GIFs
!python3 -m pip install -q imageio
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
from IPython import display
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images)
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
#TODO 1
def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, 28, 28, 1)
return model
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
#TODO 1.
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(1))
assert model.output_shape == (None, 1)
return model
make_generator_model().summary()
make_discriminator_model().summary()
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print(decision)
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
#TODO 2
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
#TODO 2
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
checkpoint_dir = "./gan_training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# We will reuse this seed over time (so it's easier
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
# TODO 3
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(
gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(
disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(
zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(
zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as we go
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
seed)
# Save the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(
epoch + 1, time.time()-start))
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
seed)
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5,
cmap='gray')
plt.axis('off')
plt.savefig('./gan_images/image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
!test -d ./gan_images || mkdir ./gan_images/
# TODO 4
train(train_dataset, EPOCHS)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open('./gan_images/image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
anim_file = 'dcgan.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('./gan_images/image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import IPython
if IPython.version_info > (6,2,0,''):
display.Image(filename=anim_file)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the dataset
Step2: Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.
Step3: Create the generator and discriminator models
Step4: Let's use the (as yet untrained) generator to create an image.
Step5: The Discriminator
Step6: Using .summary() we can have a high-level summary of the generator and discriminator models.
Step7: Let's use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.
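For example, the raw logit printed above could be passed through a sigmoid to read it as a probability of "real" (an illustrative check, not part of the original notebook):

print(tf.sigmoid(decision))  # values near 1 mean "looks real", near 0 mean "looks fake"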
Step8: Define the loss and optimizers
Step9: Discriminator loss
Step10: Generator loss
Step11: Optimizers for the generator and discriminator
Step12: Save checkpoints
Step13: Define the training loop
Step14: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.
Step15: We use the train_step function above to define training of our GAN. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.
Step16: Generate and save images.
Step17: Train the model
Step18: Restore the latest checkpoint.
Step19: Create a GIF
Step20: Use imageio to create an animated gif using the images saved during training.
|
12,777
|
<ASSISTANT_TASK:>
Python Code:
n_colors = 5 # number of possible colors
n_bags = 3 # number of bags
n_trials = 20 # number of draws from each bag
from bayespy import nodes
import numpy as np
p_colors = nodes.Dirichlet(n_colors * [0.5], plates=(n_bags,)).random()
import bayespy.plot as bpplt
bpplt.hinton(p_colors)
bpplt.pyplot.title("Original probability distributions of colors in the bags");
marbles = nodes.Multinomial(n_trials, p_colors).random()
print(marbles)
%%tikz -f svg
\usetikzlibrary{bayesnet}
\node [latent] (theta) {$\theta$};
\node [below=of theta, obs] (y) {$y$};
\edge {theta} {y};
\plate {trials} {(y)} {trials};
\plate {bags} {(theta)(y)(trials)} {bags};
theta = nodes.Dirichlet(n_colors * [0.5], plates=(n_bags,))
y = nodes.Multinomial(n_trials, theta)
y.observe(marbles)
from bayespy.inference import VB
Q = VB(y, theta)
Q.update(repeat=1000)
import bayespy.plot as bpplt
bpplt.hinton(theta)
bpplt.pyplot.title("Learned distribution of colors")
bpplt.pyplot.show()
from bayespy import nodes
import numpy as np
#The marbles drawn based on the distribution for 10 trials
# Using same p_color distribution as in the above example
draw_marbles = nodes.Categorical(p_colors,
plates=(n_trials, n_bags)).random()
from bayespy import nodes
import numpy as np
p_theta = nodes.Dirichlet(np.ones(n_colors),
plates=(n_bags,),
name='p_theta')
bag_model = nodes.Categorical(p_theta,
plates=(n_trials, n_bags),
name='bag_model')
bag_model.observe(draw_marbles)
from bayespy.inference import VB
Q = VB(bag_model, p_theta)
Q.update(repeat=1000)
%matplotlib inline
import bayespy.plot as bpplt
bpplt.hinton(p_theta)
bpplt.pyplot.tight_layout()
bpplt.pyplot.title("Learned Distribution of colors using Categorical Distribution")
bpplt.pyplot.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate randomly a color distribution for each bag
Step2: The concentration parameter $\begin{bmatrix}0.5 & \ldots & 0.5\end{bmatrix}$ makes the distributions very non-uniform within each bag, that is, the amount of each color can be very different. We can visualize the probability distribution of the colors in each bag
Step3: As one can see, the color distributions aren't very uniform in any of the bags because of the small concentration parameter. Next, make the ball draws
Step4: Model
Step5: The model is constructed equivalently to the generative model (except we don't use the nodes to draw random samples)
Step6: Data is provided by using the observe method
Step7: Performing Inference
Step8: Using categorical Distribution
Step9: Model
Step10: Inference
|
12,778
|
<ASSISTANT_TASK:>
Python Code:
# A dictionary of movie critics and their ratings of a small
# set of movies
critics={'Lisa Rose': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.5,
'Just My Luck': 3.0, 'Superman Returns': 3.5, 'You, Me and Dupree': 2.5,
'The Night Listener': 3.0},
'Gene Seymour': {'Lady in the Water': 3.0, 'Snakes on a Plane': 3.5,
'Just My Luck': 1.5, 'Superman Returns': 5.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 3.5},
'Michael Phillips': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.0,
'Superman Returns': 3.5, 'The Night Listener': 4.0},
'Claudia Puig': {'Snakes on a Plane': 3.5, 'Just My Luck': 3.0,
'The Night Listener': 4.5, 'Superman Returns': 4.0,
'You, Me and Dupree': 2.5},
'Mick LaSalle': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'Just My Luck': 2.0, 'Superman Returns': 3.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 2.0},
'Jack Matthews': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'The Night Listener': 3.0, 'Superman Returns': 5.0, 'You, Me and Dupree': 3.5},
'Toby': {'Snakes on a Plane':4.5,'You, Me and Dupree':1.0,'Superman Returns':4.0}}
critics['Lisa Rose']['Lady in the Water']
critics['Toby']['Snakes on a Plane']
critics['Toby']
# 欧几里得距离
import numpy as np
np.sqrt(np.power(5-4, 2) + np.power(4-1, 2))
1.0 /(1 + np.sqrt(np.power(5-4, 2) + np.power(4-1, 2)) )
# Returns a distance-based similarity score for person1 and person2
def sim_distance(prefs,person1,person2):
# Get the list of shared_items
si={}
for item in prefs[person1]:
if item in prefs[person2]:
si[item]=1
# if they have no ratings in common, return 0
if len(si)==0: return 0
# Add up the squares of all the differences
sum_of_squares=np.sum([np.power(prefs[person1][item]-prefs[person2][item],2)
for item in prefs[person1] if item in prefs[person2]])
#for item in si.keys()])#
return 1/(1+np.sqrt(sum_of_squares) )
sim_distance(critics, 'Lisa Rose','Toby')
# Returns the Pearson correlation coefficient for p1 and p2
def sim_pearson(prefs,p1,p2):
# Get the list of mutually rated items
si={}
for item in prefs[p1]:
if item in prefs[p2]: si[item]=1
# Find the number of elements
n=len(si)
# if they are no ratings in common, return 0
if n==0: return 0
# Add up all the preferences
sum1=np.sum([prefs[p1][it] for it in si])
sum2=np.sum([prefs[p2][it] for it in si])
# Sum up the squares
sum1Sq=np.sum([np.power(prefs[p1][it],2) for it in si])
sum2Sq=np.sum([np.power(prefs[p2][it],2) for it in si])
# Sum up the products
pSum=np.sum([prefs[p1][it]*prefs[p2][it] for it in si])
# Calculate Pearson score
num=pSum-(sum1*sum2/n)
den=np.sqrt((sum1Sq-np.power(sum1,2)/n)*(sum2Sq-np.power(sum2,2)/n))
if den==0: return 0
return num/den
sim_pearson(critics, 'Lisa Rose','Toby')
# Returns the best matches for person from the prefs dictionary.
# Number of results and similarity function are optional params.
def topMatches(prefs,person,n=5,similarity=sim_pearson):
scores=[(similarity(prefs,person,other),other)
for other in prefs if other!=person]
# Sort the list so the highest scores appear at the top
scores.sort( )
scores.reverse( )
return scores[0:n]
topMatches(critics,'Toby',n=3) # topN
# Gets recommendations for a person by using a weighted average
# of every other user's rankings
def getRecommendations(prefs,person,similarity=sim_pearson):
totals={}
simSums={}
for other in prefs:
# don't compare me to myself
if other==person: continue
sim=similarity(prefs,person,other)
# ignore scores of zero or lower
if sim<=0: continue
for item in prefs[other]:
# only score movies I haven't seen yet
if item not in prefs[person]:# or prefs[person][item]==0:
# Similarity * Score
totals.setdefault(item,0)
totals[item]+=prefs[other][item]*sim
# Sum of similarities
simSums.setdefault(item,0)
simSums[item]+=sim
# Create the normalized list
rankings=[(total/simSums[item],item) for item,total in totals.items()]
# Return the sorted list
rankings.sort()
rankings.reverse()
return rankings
# Now you can find out what movies I should watch next:
getRecommendations(critics,'Toby')
# You’ll find that the results are only affected very slightly by the choice of similarity metric.
getRecommendations(critics,'Toby',similarity=sim_distance)
# you just need to swap the people and the items.
def transformPrefs(prefs):
result={}
for person in prefs:
for item in prefs[person]:
result.setdefault(item,{})
# Flip item and person
result[item][person]=prefs[person][item]
return result
movies = transformPrefs(critics)
topMatches(movies,'Superman Returns')
def calculateSimilarItems(prefs,n=10):
# Create a dictionary of items showing which other items they
# are most similar to.
result={}
# Invert the preference matrix to be item-centric
itemPrefs=transformPrefs(prefs)
c=0
for item in itemPrefs:
# Status updates for large datasets
c+=1
if c%100==0:
print("%d / %d" % (c,len(itemPrefs)))
# Find the most similar items to this one
scores=topMatches(itemPrefs,item,n=n,similarity=sim_distance)
result[item]=scores
return result
itemsim=calculateSimilarItems(critics)
itemsim['Superman Returns']
def getRecommendedItems(prefs,itemMatch,user):
userRatings=prefs[user]
scores={}
totalSim={}
# Loop over items rated by this user
for (item,rating) in userRatings.items( ):
# Loop over items similar to this one
for (similarity,item2) in itemMatch[item]:
# Ignore if this user has already rated this item
if item2 in userRatings: continue
# Weighted sum of rating times similarity
scores.setdefault(item2,0)
scores[item2]+=similarity*rating
# Sum of all the similarities
totalSim.setdefault(item2,0)
totalSim[item2]+=similarity
# Divide each total score by total weighting to get an average
rankings=[(score/totalSim[item],item) for item,score in scores.items( )]
# Return the rankings from highest to lowest
rankings.sort( )
rankings.reverse( )
return rankings
getRecommendedItems(critics,itemsim,'Toby')
getRecommendations(movies,'Just My Luck')
getRecommendations(movies, 'You, Me and Dupree')
# https://github.com/ParticleWave/RecommendationSystemStudy/blob/d1960056b96cfaad62afbfe39225ff680240d37e/PersonalRank.py
import os
import random
class Graph:
def __init__(self):
self.G = dict()
def addEdge(self, p, q):
if p not in self.G: self.G[p] = dict()
if q not in self.G: self.G[q] = dict()
self.G[p][q] = 1
self.G[q][p] = 1
def getGraphMatrix(self):
return self.G
graph = Graph()
graph.addEdge('A', 'a')
graph.addEdge('A', 'c')
graph.addEdge('B', 'a')
graph.addEdge('B', 'b')
graph.addEdge('B', 'c')
graph.addEdge('B', 'd')
graph.addEdge('C', 'c')
graph.addEdge('C', 'd')
G = graph.getGraphMatrix()
print(G.keys())
G
for i, ri in G.items():
for j, wij in ri.items():
print(i, j, wij)
def PersonalRank(G, alpha, root, max_step):
    # G is the bipartite graph of users' ratings on items
    # alpha is the probability of continuing the random walk at each step
    # root is the user being studied
    # max_step is the number of iterations
rank = dict()
rank = {x:0.0 for x in G.keys()}
rank[root] = 1.0
for k in range(max_step):
tmp = {x:0.0 for x in G.keys()}
for i,ri in G.items():
for j,wij in ri.items():
if j not in tmp: tmp[j] = 0.0 #
tmp[j] += alpha * rank[i] / (len(ri)*1.0)
if j == root: tmp[j] += 1.0 - alpha
rank = tmp
print(k, rank)
return rank
PersonalRank(G, 0.8, 'A', 20)
# print(PersonalRank(G, 0.8, 'B', 20))
# print(PersonalRank(G, 0.8, 'C', 20))
def loadMovieLens(path='/Users/datalab/bigdata/cjc/ml-1m/'):
# Get movie titles
movies={}
for line in open(path+'movies.dat', encoding = 'iso-8859-15'):
(id,title)=line.split('::')[0:2]
movies[id]=title
# Load data
prefs={}
for line in open(path+'/ratings.dat'):
(user,movieid,rating,ts)=line.split('::')
prefs.setdefault(user,{})
prefs[user][movies[movieid]]=float(rating)
return prefs
prefs=loadMovieLens()
prefs['87']
getRecommendations(prefs,'87')[0:30]
itemsim=calculateSimilarItems(prefs,n=50)
getRecommendedItems(prefs,itemsim,'87')[0:30]
%matplotlib inline
import turicreate as tc
import matplotlib.pyplot as plt
sf = tc.SFrame({'user_id': ["0", "0", "0", "1", "1", "2", "2", "2"],
'item_id': ["a", "b", "c", "a", "b", "b", "c", "d"],
'rating': [1, 3, 2, 5, 4, 1, 4, 3]})
sf
m = tc.recommender.create(sf, target='rating')
recs = m.recommend()
recs
#train_file = 'http://s3.amazonaws.com/dato-datasets/millionsong/10000.txt'
train_file = '../data/ratings.dat'
sf = tc.SFrame.read_csv(train_file, header=False,
delimiter='|', verbose=False)
sf = sf.rename({'X1':'user_id', 'X2':'course_id', 'X3':'rating'})
sf.show()
sf
train_set, test_set = sf.random_split(0.8, seed=1)
popularity_model = tc.popularity_recommender.create(train_set, 'user_id', 'course_id', target = 'rating')
item_sim_model = tc.item_similarity_recommender.create(
train_set, 'user_id', 'course_id', target = 'rating',
similarity_type='cosine')
factorization_machine_model = tc.recommender.factorization_recommender.create(
train_set, 'user_id', 'course_id',
target='rating')
result = tc.recommender.util.compare_models(
test_set, [popularity_model, item_sim_model, factorization_machine_model],
user_sample=.5, skip_set=train_set)
K = 10
users = tc.SArray(sf['user_id'].unique().head(100))
users
recs = item_sim_model.recommend(users=users, k=K)
recs.head()
# Get the meta data of the courses
courses = tc.SFrame.read_csv('../data/cursos.dat', header=False, delimiter='|', verbose=False)
courses =courses.rename({'X1':'course_id', 'X2':'title', 'X3':'avg_rating',
'X4':'workload', 'X5':'university', 'X6':'difficulty', 'X7':'provider'})
courses.show()
courses = courses[['course_id', 'title', 'provider']]
results = recs.join(courses, on='course_id', how='inner')
#Populate observed user-course data with course info
userset = frozenset(users)
ix = sf['user_id'].apply(lambda x: x in userset, int)
user_data = sf[ix]
user_data = user_data.join(courses, on='course_id')[['user_id', 'title', 'provider']]
# Print out some recommendations
for i in range(5):
user = list(users)[i]
print("User: " + str(i + 1))
user_obs = user_data[user_data['user_id'] == user].head(K)
del user_obs['user_id']
user_recs = results[results['user_id'] == str(user)][['title', 'provider']]
print("We were told that the user liked these courses: ")
print (user_obs.head(K))
print ("We recommend these other courses:")
print (user_recs.head(K))
print ("")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. User-based filtering
Step2: This formula calculates the distance, which will be smaller for people who are more similar.
Step3: Pearson correlation coefficient
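As a quick cross-check of the hand-rolled version above, the same coefficient can be computed with SciPy on the commonly rated movies (illustrative, not part of the original code):

from scipy.stats import pearsonr
shared = [m for m in critics['Lisa Rose'] if m in critics['Toby']]
r, _ = pearsonr([critics['Lisa Rose'][m] for m in shared],
                [critics['Toby'][m] for m in shared])
print(r)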
Step4: 1.1 Recommending Items
Step5: 2. Item-based filtering
Step6: Compute the item-to-item similarities
Step7: Recommend users for an item
Step8: <img src = './img/itemcf1.png' width=800px>
Step9: <img src = './img/itemcfNetwork.png' width = 700px>
Step10: 3. MovieLens Recommender
Step11: user-based filtering
Step12: Item-based filtering
Step13: Building a Recommendation System with Turicreate
Step14: The CourseTalk dataset
Step15: In order to evaluate the performance of our model, we randomly split the observations in our data set into two partitions
Step16: Popularity model
Step17: Item similarity Model
Step18: Factorization Recommender Model
Step19: Model Evaluation
Step20: Now let's ask the item similarity model for course recommendations for several users. We first create a list of users and create a subset of observations, users_ratings, that pertain to these users.
Step21: Next we use the recommend() function to query the model we created for recommendations. The returned object has four columns.
Step22: To learn which courses these ids refer to, we can merge in metadata about each course.
|
12,779
|
<ASSISTANT_TASK:>
Python Code:
# imports assumed by the snippets below
import numpy as np
import matplotlib.pyplot as plt
from numpy import pi
from scipy.constants import Boltzmann

# constants
k_B = Boltzmann
eta_air = 18.27e-6 # Pa # (J.T.R.Watson (1995)).
d_gas = 0.372e-9 #m #(Sone (2007)), ρSiO2
rho_SiO2 = 1800 # #kg/m^3 - Number told to us by
T0 = 300
R = 50e-9 # m
def mfp(P_gas):
mfp_val = k_B*T0/(2**0.5*pi*d_gas**2*P_gas)
return mfp_val
m_gas = 4.81e-26
def mfp_2(P_gas):
mfp_val = eta_air/P_gas * (pi*k_B*T0/(2*m_gas))**0.5
return mfp_val
s = mfp(300) # 3mbar = 300 Pascals
print(s)
s2 = mfp_2(300) # 3mbar = 300 Pascals
print(s2)
def Gamma_env(radius, Pressure_mbar):
mass = rho_SiO2 * 4/3*pi*radius**3
Pressure_pascals = 100*Pressure_mbar
s = mfp(Pressure_pascals)
K_n = s/radius
c_K = 0.31*K_n/(0.785 + 1.152*K_n + K_n**2)
Gamma_0 = 6*pi*eta_air*radius/mass * 0.619/(0.619 + K_n) * (1+c_K)
return Gamma_0
Gamma_env(R, 3)
def Gamma_env_simple(radius, Pressure_mbar):
Pressure_pascals = 100*Pressure_mbar
#Gamma_0 = 0.619*9*pi*eta_air*d_gas**2*Pressure_pascals/(2**0.5*rho_SiO2*k_B*T0*radius)
Gamma_0 = 0.619*9*pi*eta_air*d_gas**2*Pressure_pascals/(2**0.5*rho_SiO2*k_B*T0*radius)
return Gamma_0
Gamma_env_simple(R, 3)
def Gamma_alternative(radius, Pressure_mbar):
Pressure = 100*Pressure_mbar
ave_velocity = (8*k_B*T0/(pi*m_gas))**0.5
mass= rho_SiO2*4/3*pi*radius**3
Gamma0 = 64*radius**2*Pressure/(3*mass*ave_velocity)
return Gamma0
Gamma_alternative(R, 3)
ave_velocity = (8*k_B*T0/(pi*m_gas))**0.5
ave_velocity
def Gamma_chang(radius, Pressure_mbar):
Pressure = 100*Pressure_mbar
ave_velocity = (8*k_B*T0/(pi*m_gas))**0.5
Gamma0 = 8*Pressure/(pi*ave_velocity*radius*rho_SiO2)/2
return 2*Gamma0
Gamma_chang(R, 3)
def Gamma_Millen_imp(radius, Pressure_mbar):
Pressure = 100*Pressure_mbar
ave_velocity = (8*k_B*T0/(pi*m_gas))**0.5
mass = rho_SiO2*4/3*pi*radius**3
N = Pressure/(k_B*T0)
Gamma0 = 4*pi*m_gas*N*radius**2*ave_velocity/(3*mass)
return Gamma0
Gamma_Millen_imp(R, 3)
Gamma_chang(R, 3)
def Gamma_Millen_em(radius, Pressure_mbar, T_em):
Pressure = 100*Pressure_mbar
h_prime = m_gas/(k_B*T_em)
mass = rho_SiO2*4/3*pi*radius**3
N = Pressure/(k_B*T_em)
Gamma0 = (m_gas*N*radius**2*pi**(3/2))/(3*np.sqrt(h_prime)*mass)
return Gamma0
def calc_surface_temp_Millen(T_em, T_imp=300):
accomodation_coef = 0.777 # accomodation coefficient of silica (from Nanoscale temp measurement paper)
T_surf = T_imp + (T_em + T_imp)/accomodation_coef
return T_surf
P_exp = np.load("Pressure_mbar.npy")
Gamma_exp = np.load("Gamma_radians.npy")
P_G_Dict = dict(zip(P_exp, Gamma_exp))
r = np.linspace(5e-9, 1000e-9, 1000)
P = 3.6 # mbar
alpha=0.5
plt.figure(figsize=[10, 10])
plt.loglog(r, Gamma_env_simple(r, P), 'k', label="Rashid/Gieseler Full form", alpha=alpha)
#plt.semilogy(r, Gamma_env_simple(r, P), 'grey', label="Rashid/Gieseler simplfied form", alpha=alpha)
plt.loglog(r, Gamma_alternative(r, P), label="Gieseler Thermal Non-linearities form", alpha=alpha)
plt.loglog(r, Gamma_chang(r, P), label="Chang form", alpha=alpha)
plt.loglog(r, Gamma_Millen_imp(r, P), label="Millen (imp) form", alpha=alpha)
plt.xlabel("radius (nm)")
plt.ylabel("Γ (radians/s)")
plt.legend(loc='best')
plt.show()
r = 50e-9
P = np.linspace(1e-2, 1000, 1000)
plt.figure(figsize=[10, 10])
plt.loglog(P, Gamma_env_simple(r, P), 'k', label="Rashid/Gieseler Full form", alpha=alpha)
#plt.loglog(P, Gamma_env_simple(r, P), 'grey', label="Rashid/Gieseler simplfied form", alpha=alpha)
plt.loglog(P, Gamma_alternative(r, P), label="Gieseler Thermal Non-linearities form", alpha=alpha)
plt.loglog(P, Gamma_chang(r, P), label="Chang form", alpha=alpha)
plt.loglog(P, Gamma_Millen_imp(r, P), label="Millen (imp) form", alpha=alpha)
plt.loglog(P_exp, Gamma_exp, label="Experiment", alpha=alpha)
plt.xlabel("P (mbar)")
plt.ylabel("Γ (radians/s)")
plt.legend(loc='best')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Alternatively, one can use
Step2: Muddassar and Gieseler's simplified formula for the environmental damping is
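Written out, the expression implemented in Gamma_env_simple above corresponds to $\Gamma_0 = 0.619\,\frac{9\pi\,\eta_{\mathrm{air}}\, d_{\mathrm{gas}}^{2}\, P}{\sqrt{2}\,\rho_{\mathrm{SiO_2}}\, k_B T_0\, r}$ (read back from the code; the grouping of constants is an editorial assumption).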
Step3: Relation 2
Step4: Relation 3
Step5: Also relation 3 (different derivation by Millen et al.)
Step6: This agrees exactly with Chang's result
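A quick numerical check of that statement, using the two functions defined above (illustrative):

import numpy as np
print(np.isclose(Gamma_Millen_imp(R, 3), Gamma_chang(R, 3)))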
Step7: Relation 3+ (more damping due to considering emerging particles)
Step8: Plot of all 3 relations and measured data
|
12,780
|
<ASSISTANT_TASK:>
Python Code:
class User:
def __init__(self, user_id):
self.user_id = user_id
def __repr__(self):
return "User({})".format(self.user_id)
def sort_notcompare():
users = [User(23), User(3), User(99)]
print(users)
print(sorted(users, key = lambda u: u.user_id))
sort_notcompare()
from operator import attrgetter
users = [User(23), User(3), User(99)]
sorted(users, key = attrgetter("user_id"))
min(users, key = attrgetter("user_id"))
max(users, key = attrgetter("user_id"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Another way is to use operator.attrgetter() in place of the lambda function:
Step2: Discussion
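One point worth noting in the discussion: attrgetter() also accepts several attribute names at once, which is awkward to express with a single lambda. The Member class below is a hypothetical example, not part of the recipe above.

from operator import attrgetter

class Member:
    def __init__(self, user_id, group_id):
        self.user_id = user_id
        self.group_id = group_id

members = [Member(3, 2), Member(1, 2), Member(5, 1)]
ordered = sorted(members, key=attrgetter('group_id', 'user_id'))
print([(m.group_id, m.user_id) for m in ordered])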
|
12,781
|
<ASSISTANT_TASK:>
Python Code:
import torch
import torch.nn as nn
from torch.autograd import Variable
import torchvision
import torchvision.transforms as T
import PIL
import numpy as np
from scipy.misc import imread
from collections import namedtuple
import matplotlib.pyplot as plt
from cs231n.image_utils import SQUEEZENET_MEAN, SQUEEZENET_STD
%matplotlib inline
def preprocess(img, size=512):
transform = T.Compose([
T.Scale(size),
T.ToTensor(),
T.Normalize(mean=SQUEEZENET_MEAN.tolist(),
std=SQUEEZENET_STD.tolist()),
T.Lambda(lambda x: x[None]),
])
return transform(img)
def deprocess(img):
transform = T.Compose([
T.Lambda(lambda x: x[0]),
T.Normalize(mean=[0, 0, 0], std=[1.0 / s for s in SQUEEZENET_STD.tolist()]),
T.Normalize(mean=[-m for m in SQUEEZENET_MEAN.tolist()], std=[1, 1, 1]),
T.Lambda(rescale),
T.ToPILImage(),
])
return transform(img)
def rescale(x):
low, high = x.min(), x.max()
x_rescaled = (x - low) / (high - low)
return x_rescaled
def rel_error(x,y):
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def features_from_img(imgpath, imgsize):
img = preprocess(PIL.Image.open(imgpath), size=imgsize)
img_var = Variable(img.type(dtype))
return extract_features(img_var, cnn), img_var
# Older versions of scipy.misc.imresize yield different results
# from newer versions, so we check to make sure scipy is up to date.
def check_scipy():
import scipy
vnum = int(scipy.__version__.split('.')[1])
assert vnum >= 16, "You must install SciPy >= 0.16.0 to complete this notebook."
check_scipy()
answers = np.load('style-transfer-checks.npz')
dtype = torch.FloatTensor
# Uncomment out the following line if you're on a machine with a GPU set up for PyTorch!
# dtype = torch.cuda.FloatTensor
# Load the pre-trained SqueezeNet model.
cnn = torchvision.models.squeezenet1_1(pretrained=True).features
cnn.type(dtype)
# We don't want to train the model any further, so we don't want PyTorch to waste computation
# computing gradients on parameters we're never going to update.
for param in cnn.parameters():
param.requires_grad = False
# We provide this helper code which takes an image, a model (cnn), and returns a list of
# feature maps, one per layer.
def extract_features(x, cnn):
    """Use the CNN to extract features from the input image x.

    Inputs:
    - x: A PyTorch Variable of shape (N, C, H, W) holding a minibatch of images that
      will be fed to the CNN.
    - cnn: A PyTorch model that we will use to extract features.

    Returns:
    - features: A list of features for the input images x extracted using the cnn model.
      features[i] is a PyTorch Variable of shape (N, C_i, H_i, W_i); recall that features
      from different layers of the network may have different numbers of channels (C_i) and
      spatial dimensions (H_i, W_i).
    """
features = []
prev_feat = x
for i, module in enumerate(cnn._modules.values()):
next_feat = module(prev_feat)
features.append(next_feat)
prev_feat = next_feat
return features
def content_loss(content_weight, content_current, content_original):
    """Compute the content loss for style transfer.

    Inputs:
    - content_weight: Scalar giving the weighting for the content loss.
    - content_current: features of the current image; this is a PyTorch Tensor of shape
      (1, C_l, H_l, W_l).
    - content_target: features of the content image, Tensor with shape (1, C_l, H_l, W_l).

    Returns:
    - scalar content loss
    """
    # One possible implementation: weighted sum of squared feature differences.
    return content_weight * torch.sum((content_current - content_original) ** 2)
def content_loss_test(correct):
content_image = 'styles/tubingen.jpg'
image_size = 192
content_layer = 3
content_weight = 6e-2
c_feats, content_img_var = features_from_img(content_image, image_size)
bad_img = Variable(torch.zeros(*content_img_var.data.size()))
feats = extract_features(bad_img, cnn)
student_output = content_loss(content_weight, c_feats[content_layer], feats[content_layer]).data.numpy()
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
content_loss_test(answers['cl_out'])
def gram_matrix(features, normalize=True):
    """Compute the Gram matrix from features.

    Inputs:
    - features: PyTorch Variable of shape (N, C, H, W) giving features for
      a batch of N images.
    - normalize: optional, whether to normalize the Gram matrix
        If True, divide the Gram matrix by the number of neurons (H * W * C)

    Returns:
    - gram: PyTorch Variable of shape (N, C, C) giving the
      (optionally normalized) Gram matrices for the N input images.
    """
    # One possible implementation: batched product of the flattened feature maps.
    N, C, H, W = features.size()
    flat = features.view(N, C, H * W)
    gram = torch.bmm(flat, flat.transpose(1, 2))
    if normalize:
        gram = gram / (H * W * C)
    return gram
def gram_matrix_test(correct):
style_image = 'styles/starry_night.jpg'
style_size = 192
feats, _ = features_from_img(style_image, style_size)
student_output = gram_matrix(feats[5].clone()).data.numpy()
error = rel_error(correct, student_output)
print('Maximum error is {:.3f}'.format(error))
gram_matrix_test(answers['gm_out'])
# Now put it together in the style_loss function...
def style_loss(feats, style_layers, style_targets, style_weights):
    """Computes the style loss at a set of layers.

    Inputs:
    - feats: list of the features at every layer of the current image, as produced by
      the extract_features function.
    - style_layers: List of layer indices into feats giving the layers to include in the
      style loss.
    - style_targets: List of the same length as style_layers, where style_targets[i] is
      a PyTorch Variable giving the Gram matrix the source style image computed at
      layer style_layers[i].
    - style_weights: List of the same length as style_layers, where style_weights[i]
      is a scalar giving the weight for the style loss at layer style_layers[i].

    Returns:
    - style_loss: A PyTorch Variable holding a scalar giving the style loss.
    """
    # Hint: you can do this with one for loop over the style layers, and should
    # not be very much code (~5 lines). You will need to use your gram_matrix function.
    # One possible implementation: weighted squared Gram-matrix differences per layer.
    loss = 0
    for i, layer in enumerate(style_layers):
        gram = gram_matrix(feats[layer])
        loss = loss + style_weights[i] * torch.sum((gram - style_targets[i]) ** 2)
    return loss
def style_loss_test(correct):
content_image = 'styles/tubingen.jpg'
style_image = 'styles/starry_night.jpg'
image_size = 192
style_size = 192
style_layers = [1, 4, 6, 7]
style_weights = [300000, 1000, 15, 3]
c_feats, _ = features_from_img(content_image, image_size)
feats, _ = features_from_img(style_image, style_size)
style_targets = []
for idx in style_layers:
style_targets.append(gram_matrix(feats[idx].clone()))
student_output = style_loss(c_feats, style_layers, style_targets, style_weights).data.numpy()
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
style_loss_test(answers['sl_out'])
def tv_loss(img, tv_weight):
    """Compute total variation loss.

    Inputs:
    - img: PyTorch Variable of shape (1, 3, H, W) holding an input image.
    - tv_weight: Scalar giving the weight w_t to use for the TV loss.

    Returns:
    - loss: PyTorch Variable holding a scalar giving the total variation loss
      for img weighted by tv_weight.
    """
    # Your implementation should be vectorized and not require any loops!
    # One possible implementation: squared differences of neighbouring pixels.
    h_var = torch.sum((img[:, :, 1:, :] - img[:, :, :-1, :]) ** 2)
    w_var = torch.sum((img[:, :, :, 1:] - img[:, :, :, :-1]) ** 2)
    return tv_weight * (h_var + w_var)
def tv_loss_test(correct):
content_image = 'styles/tubingen.jpg'
image_size = 192
tv_weight = 2e-2
content_img = preprocess(PIL.Image.open(content_image), size=image_size)
content_img_var = Variable(content_img.type(dtype))
student_output = tv_loss(content_img_var, tv_weight).data.numpy()
error = rel_error(correct, student_output)
print('Error is {:.3f}'.format(error))
tv_loss_test(answers['tv_out'])
def style_transfer(content_image, style_image, image_size, style_size, content_layer, content_weight,
style_layers, style_weights, tv_weight, init_random = False):
    """Run style transfer!

    Inputs:
    - content_image: filename of content image
    - style_image: filename of style image
    - image_size: size of smallest image dimension (used for content loss and generated image)
    - style_size: size of smallest style image dimension
    - content_layer: layer to use for content loss
    - content_weight: weighting on content loss
    - style_layers: list of layers to use for style loss
    - style_weights: list of weights to use for each layer in style_layers
    - tv_weight: weight of total variation regularization term
    - init_random: initialize the starting image to uniform random noise
    """
# Extract features for the content image
content_img = preprocess(PIL.Image.open(content_image), size=image_size)
content_img_var = Variable(content_img.type(dtype))
feats = extract_features(content_img_var, cnn)
content_target = feats[content_layer].clone()
# Extract features for the style image
style_img = preprocess(PIL.Image.open(style_image), size=style_size)
style_img_var = Variable(style_img.type(dtype))
feats = extract_features(style_img_var, cnn)
style_targets = []
for idx in style_layers:
style_targets.append(gram_matrix(feats[idx].clone()))
    # Initialize output image to content image or noise
if init_random:
img = torch.Tensor(content_img.size()).uniform_(0, 1)
else:
img = content_img.clone().type(dtype)
# We do want the gradient computed on our image!
img_var = Variable(img, requires_grad=True)
# Set up optimization hyperparameters
initial_lr = 3.0
decayed_lr = 0.1
decay_lr_at = 180
# Note that we are optimizing the pixel values of the image by passing
# in the img_var Torch variable, whose requires_grad flag is set to True
optimizer = torch.optim.Adam([img_var], lr=initial_lr)
f, axarr = plt.subplots(1,2)
axarr[0].axis('off')
axarr[1].axis('off')
axarr[0].set_title('Content Source Img.')
axarr[1].set_title('Style Source Img.')
axarr[0].imshow(deprocess(content_img.cpu()))
axarr[1].imshow(deprocess(style_img.cpu()))
plt.show()
plt.figure()
for t in range(200):
if t < 190:
img.clamp_(-1.5, 1.5)
optimizer.zero_grad()
feats = extract_features(img_var, cnn)
# Compute loss
c_loss = content_loss(content_weight, feats[content_layer], content_target)
s_loss = style_loss(feats, style_layers, style_targets, style_weights)
t_loss = tv_loss(img_var, tv_weight)
loss = c_loss + s_loss + t_loss
loss.backward()
# Perform gradient descents on our image values
if t == decay_lr_at:
optimizer = torch.optim.Adam([img_var], lr=decayed_lr)
optimizer.step()
if t % 100 == 0:
print('Iteration {}'.format(t))
plt.axis('off')
plt.imshow(deprocess(img.cpu()))
plt.show()
print('Iteration {}'.format(t))
plt.axis('off')
plt.imshow(deprocess(img.cpu()))
plt.show()
# Composition VII + Tubingen
params1 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/composition_vii.jpg',
'image_size' : 192,
'style_size' : 512,
'content_layer' : 3,
'content_weight' : 5e-2,
'style_layers' : (1, 4, 6, 7),
'style_weights' : (20000, 500, 12, 1),
'tv_weight' : 5e-2
}
style_transfer(**params1)
# Scream + Tubingen
params2 = {
'content_image':'styles/tubingen.jpg',
'style_image':'styles/the_scream.jpg',
'image_size':192,
'style_size':224,
'content_layer':3,
'content_weight':3e-2,
'style_layers':[1, 4, 6, 7],
'style_weights':[200000, 800, 12, 1],
'tv_weight':2e-2
}
style_transfer(**params2)
# Starry Night + Tubingen
params3 = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7],
'style_weights' : [300000, 1000, 15, 3],
'tv_weight' : 2e-2
}
style_transfer(**params3)
# Feature Inversion -- Starry Night + Tubingen
params_inv = {
'content_image' : 'styles/tubingen.jpg',
'style_image' : 'styles/starry_night.jpg',
'image_size' : 192,
'style_size' : 192,
'content_layer' : 3,
'content_weight' : 6e-2,
'style_layers' : [1, 4, 6, 7],
'style_weights' : [0, 0, 0, 0], # we discard any contributions from style to the loss
'tv_weight' : 2e-2,
'init_random': True # we want to initialize our image to be random
}
style_transfer(**params_inv)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We provide you with some helper functions to deal with images, since for this part of the assignment we're dealing with real JPEGs, not CIFAR-10 data.
Step3: As in the last assignment, we need to set the dtype to select either the CPU or the GPU
Step5: Computing Loss
Step6: Test your content loss. You should see errors less than 0.001.
Step8: Style loss
Step9: Test your Gram matrix code. You should see errors less than 0.001.
Step11: Next, implement the style loss
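For reference, the quantity being implemented here is the usual weighted sum of squared Gram-matrix differences over the chosen layers, $L_s = \sum_{l} w_l \sum_{i,j} \left(G^{l}_{ij} - A^{l}_{ij}\right)^2$, where $G^l$ is the Gram matrix of the current image's features at layer $l$ and $A^l$ is the Gram matrix of the style source image.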
Step12: Test your style loss implementation. The error should be less than 0.001.
Step14: Total-variation regularization
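For reference, the total-variation penalty asked for here is typically written as $L_{tv} = w_t \sum_{c=1}^{3}\sum_{i,j}\left[(x_{i,j+1,c} - x_{i,j,c})^2 + (x_{i+1,j,c} - x_{i,j,c})^2\right]$, i.e. squared differences of neighbouring pixel values summed over both spatial directions and all channels.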
Step15: Test your TV loss implementation. Error should be less than 0.001.
Step17: Now we're ready to string it all together (you shouldn't have to modify this function)
Step18: Generate some pretty pictures!
Step19: Feature Inversion
|
12,782
|
<ASSISTANT_TASK:>
Python Code:
!mkdir cifar10
!curl -o cifar-10-python.tar.gz https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
!tar -xvzf cifar-10-python.tar.gz -C cifar10
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from cifar import load_CIFAR10
plt.rcParams['figure.figsize'] = (10.0, 8.0)
cifar10_dir = './cifar10/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8').transpose(1, 2, 0))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
import lasagne
import theano
from theano import tensor as T
from lasagne.nonlinearities import *
input_X = T.tensor4("X")
target_y = T.vector("target Y integer",dtype='int64')
net = lasagne.layers.InputLayer(shape=(None, 3, 32, 32), input_var=input_X)
# net = <convolutional network>
net = lasagne.layers.Conv2DLayer(net, 11, 5, pad='valid') # convolutional layer
net = lasagne.layers.Conv2DLayer(net, 7, 3, pad='valid') # convolutional layer
net = lasagne.layers.MaxPool2DLayer(net, 3)
net = lasagne.layers.Conv2DLayer(net, 3, 2, pad='valid') # convolutional layer
net = lasagne.layers.Conv2DLayer(net, 3, 2, pad='valid') # convolutional layer
net = lasagne.layers.Conv2DLayer(net, 3, 2, pad='valid') # convolutional layer
net = lasagne.layers.DenseLayer(net, num_units=300) # fully connected layer
net = lasagne.layers.DropoutLayer(net, 0.5) # dropout regularizer
net = lasagne.layers.DenseLayer(net, num_units=100) # fully connected layer
# net = lasagne.layers.DenseLayer(net, num_units=10, nonlinearity=lasagne.nonlinearities.softmax) # fully connected layer
net = lasagne.layers.DenseLayer(net,num_units = 10, nonlinearity=lasagne.nonlinearities.softmax)
y_predicted = lasagne.layers.get_output(net)
all_weights = lasagne.layers.get_all_params(net)
print all_weights
# loss = <loss function>
# accuracy = <accuracy computation>
loss = lasagne.objectives.categorical_crossentropy(y_predicted, target_y).mean()
accuracy = lasagne.objectives.categorical_accuracy(y_predicted, target_y).mean()
updates = lasagne.updates.momentum(loss, all_weights, learning_rate=0.1, momentum=0.9)
train_fun = theano.function([input_X,target_y],[loss, accuracy], updates=updates)
accuracy_fun = theano.function([input_X,target_y],accuracy)
def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
assert len(inputs) == len(targets)
if shuffle:
indices = np.arange(len(inputs))
np.random.shuffle(indices)
for start_idx in range(0, len(inputs) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield inputs[excerpt], targets[excerpt]
import time
num_epochs = 10 # number of passes over the data
batch_size = 50 # mini-batch size
for epoch in range(num_epochs):
# In each epoch, we do a full pass over the training data:
train_err = 0
train_acc = 0
train_batches = 0
start_time = time.time()
for batch in iterate_minibatches(X_train, y_train,batch_size):
inputs, targets = batch
train_err_batch, train_acc_batch= train_fun(inputs, targets)
train_err += train_err_batch
train_acc += train_acc_batch
train_batches += 1
# And a full pass over the validation data:
val_acc = 0
val_batches = 0
for batch in iterate_minibatches(X_train, y_train, batch_size):
inputs, targets = batch
val_acc += accuracy_fun(inputs, targets)
val_batches += 1
# Then we print the results for this epoch:
print("Epoch {} of {} took {:.3f}s".format(epoch + 1, num_epochs, time.time() - start_time))
print(" training loss (in-iteration):\t\t{:.6f}".format(train_err / train_batches))
print(" train accuracy:\t\t{:.2f} %".format(train_acc / train_batches * 100))
print(" validation accuracy:\t\t{:.2f} %".format(val_acc / val_batches * 100))
test_acc = 0
test_batches = 0
for batch in iterate_minibatches(X_test, y_test, 500):
inputs, targets = batch
acc = accuracy_fun(inputs, targets)
test_acc += acc
test_batches += 1
print("Final results:")
print(" test accuracy:\t\t{:.2f} %".format(
test_acc / test_batches * 100))
if test_acc / test_batches * 100 > 92.5:
print "Achievement unlocked: колдун 80 уровня"
else:
print "Нужно больше магии!"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h1 align="center">First of all -- Checking Questions</h1>
Step2: Assemble the neural network
Step3: That's it, now let's train it
Step4: The training process
|
12,783
|
<ASSISTANT_TASK:>
Python Code:
import cobra
from cobra.solvers import get_solver_name
from cobra import Model, Reaction, Metabolite
from cobra.flux_analysis import parsimonious
import pandas as pd
from utils import show_map, findBiomarkers
# set escher map
map_loc = './maps/escher_map_geenen_2012.json'
M = cobra.io.load_json_model('./models/Geenen_cobra_model.json')
M.reactions.EX_v_v18.lower_bound = -1 # glut
M.reactions.EX_v_v20.lower_bound = -1 # gly
M.reactions.EX_v_v39.lower_bound = -1 # met
M.reactions.EX_v_v41.lower_bound = -1 # bcys
M.reactions.EX_v_v32.lower_bound = 0 # OPA
M.reactions.EX_v_v38.lower_bound = 0 # OXO
M.reactions.EX_v_v37.lower_bound = 0 # cysASG
M.reactions.EX_v_v22.lower_bound = 0 # CH2THF
exchanges = [rxn.id for rxn in M.reactions if rxn.products == [] or rxn.reactants == []]
model = M.copy()
model = M.copy()
model.reactions.EX_para.lower_bound = -1000; model.reactions.EX_para.upper_bound = -20
model.reactions.EX_v_v41.lower_bound = -10; model.reactions.EX_v_v41.upper_bound = 1000 # cys
model.reactions.EX_v_v39.lower_bound = -10; model.reactions.EX_v_v39.upper_bound = 1000 # met
model.reactions.EX_v_v18.lower_bound = -10; model.reactions.EX_v_v18.upper_bound = 1000 # glut
model.reactions.EX_v_v20.lower_bound = -10; model.reactions.EX_v_v20.upper_bound = 1000 # gly
# model.reactions.EX_v_v38.lower_bound = -10; model.reactions.EX_v_v38.upper_bound = 100 # oxo
# model.reactions.EX_v_v32.lower_bound = -10; model.reactions.EX_v_v39.upper_bound = 100 # opa
sol = cobra.flux_analysis.parsimonious.optimize_minimal_flux(model)
b = show_map(sol,map_loc)
b.save_html('./predictions/FBA_glu-gly-met-cys_loop.html',overwrite=True)
b.display_in_notebook()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Analyze basic flux distributions
|
12,784
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from dcprogs.likelihood import QMatrix
tau = 0.2
qmatrix = QMatrix([[-1, 1, 0], [19, -29, 10], [0, 0.026, -0.026]], 1)
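# (Added sketch, not in the original notebook.) The second argument to QMatrix is the number
# of open states, so the open/shut partitions of the Q-matrix can be inspected directly.
# The attribute names below follow the DCProgs Python bindings; treat them as an assumption.
print(qmatrix.aa, qmatrix.af)
print(qmatrix.fa, qmatrix.ff)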
from dcprogs.likelihood._methods import exponential_pdfs
def plot_exponentials(qmatrix, tau, x0=None, x=None, ax=None, nmax=2, shut=False):
from dcprogs.likelihood import missed_events_pdf
from dcprogs.likelihood._methods import exponential_pdfs
if x is None: x = np.arange(0, 5*tau, tau/10)
if x0 is None: x0 = x
pdf = missed_events_pdf(qmatrix, tau, nmax=nmax, shut=shut)
graphb = [x0, pdf(x0+tau), '-k']
functions = exponential_pdfs(qmatrix, tau, shut=shut)
plots = ['.r', '.b', '.g']
together = None
for f, p in zip(functions[::-1], plots):
if together is None: together = f(x+tau)
else: together = together + f(x+tau)
graphb.extend([x, together, p])
if ax is None: plt.plot(*graphb)
else: ax.plot(*graphb)
from dcprogs.likelihood import missed_events_pdf
fig = plt.figure(figsize=(12, 10 ))
ax = fig.add_subplot(2, 2, 1)
x = np.arange(0, 10, tau/100)
pdf = missed_events_pdf(qmatrix, 0.2, nmax=2, shut=True)
ax.plot(x, pdf(x), '-k')
ax.set_xlabel('time $t$ (ms)')
ax.set_ylabel('Shut-time probability density $f_{\\bar{\\tau}=0.2}(t)$')
ax = fig.add_subplot(2, 2, 2)
ax.set_xlabel('time $t$ (ms)')
tau = 0.2
x, x0 = np.arange(0, 3*tau, tau/10.0), np.arange(0, 3*tau, tau/100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax, x=x, x0=x0)
ax.set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax.set_xlabel('time $t$ (ms)')
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")
ax = fig.add_subplot(2, 2, 3)
tau = 0.05
x, x0 = np.arange(0, 3*tau, tau/10.0), np.arange(0, 3*tau, tau/100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax, x=x, x0=x0)
ax.set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax.set_xlabel('time $t$ (ms)')
ax = fig.add_subplot(2, 2, 4)
tau = 0.5
x, x0 = np.arange(0, 3*tau, tau/10.0), np.arange(0, 3*tau, tau/100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax, x=x, x0=x0)
ax.set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax.set_xlabel('time $t$ (ms)')
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")
fig.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We then create a function to plot each exponential component in the asymptotic expression. An explanation on how to get to these plots can be found in the CH82 notebook.
Step2: For practical reasons, we plot the excess shut-time probability densities in the graph below. In all other particulars, it should reproduce Fig. 9 from Hawkes, Jalali, Colquhoun (1992)
|
12,785
|
<ASSISTANT_TASK:>
Python Code:
import zipfile
with zipfile.ZipFile(path + "glove.6B.zip","r") as zip_ref:
zip_ref.extractall(path)
%ls $path
import pickle
def get_glove(name):
with open(path+ 'glove.' + name + '.txt', 'r') as f: lines = [line.split() for line in f]
words = [d[0] for d in lines]
vecs = np.stack(np.array(d[1:], dtype=np.float32) for d in lines)
wordidx = {o:i for i,o in enumerate(words)}
save_array(res_path+name+'.dat', vecs)
pickle.dump(words, open(res_path+name+'_words.pkl','wb'))
pickle.dump(wordidx, open(res_path+name+'_idx.pkl','wb'))
get_glove('6B.50d')
get_glove('6B.100d')
get_glove('6B.200d')
get_glove('6B.300d')
def load_glove(loc):
return (load_array(loc+'.dat'),
pickle.load(open(loc+'_words.pkl','rb')),
pickle.load(open(loc+'_idx.pkl','rb')))
vecs, words, wordidx = load_glove(res_path+'6B.50d')
vecs.shape
' '.join(words[:25])
def w2v(w): return vecs[wordidx[w]]
w2v('of')
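# (Added sketch, not in the original notebook.) A small cosine-similarity lookup to sanity-check
# the embeddings; it only uses `vecs`, `words` and `wordidx` defined above and assumes numpy is
# already imported as np (as it is elsewhere in this notebook).
def nearest_words(word, n=5):
    query = w2v(word)
    sims = np.dot(vecs, query) / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(query) + 1e-8)
    return [words[i] for i in np.argsort(-sims)[:n]]
nearest_words('king')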
## MDR: none of this seems to be needed?!
#reload(sys)
#sys.setdefaultencoding('utf8')
tsne = TSNE(n_components=2, random_state=0)
Y = tsne.fit_transform(vecs[:500])
start=0; end=400
dat = Y[start:end]
plt.figure(figsize=(15,15))
plt.scatter(dat[:, 0], dat[:, 1])
for label, x, y in zip(words[start:end], dat[:, 0], dat[:, 1]):
plt.text(x,y,label, color=np.random.rand(3)*0.7,
fontsize=10)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Process the data
Step2: Takes just under 2 min, no output.
Step3: Looking at the vectors
Step4: Here's the first 25 "words" in glove.
Step5: This is how you can look up a word vector.
Step6: Just for fun, let's take a look at a 2d projection of the first 400 words, using T-SNE.
|
12,786
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
!pip install -q apache_beam
!pip install -q 'scikit_learn~=0.23.0' # For gaussian_random_matrix.
!pip install -q annoy
import os
import sys
import pathlib
import pickle
from collections import namedtuple
from datetime import datetime
import numpy as np
import apache_beam as beam
import annoy
from sklearn.random_projection import gaussian_random_matrix
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
# TFT needs to be installed afterwards
!pip install -q tensorflow_transform==0.24
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
print('TF version: {}'.format(tf.__version__))
print('TF-Hub version: {}'.format(hub.__version__))
print('TF-Transform version: {}'.format(tft.__version__))
print('Apache Beam version: {}'.format(beam.__version__))
!wget 'https://dataverse.harvard.edu/api/access/datafile/3450625?format=tab&gbrecs=true' -O raw.tsv
!wc -l raw.tsv
!head raw.tsv
!rm -r corpus
!mkdir corpus
with open('corpus/text.txt', 'w') as out_file:
with open('raw.tsv', 'r') as in_file:
for line in in_file:
headline = line.split('\t')[1].strip().strip('"')
out_file.write(headline+"\n")
!tail corpus/text.txt
def load_module(module_url):
embed_module = hub.Module(module_url)
placeholder = tf.placeholder(dtype=tf.string)
embed = embed_module(placeholder)
session = tf.Session()
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
print('TF-Hub module is loaded.')
def _embeddings_fn(sentences):
computed_embeddings = session.run(
embed, feed_dict={placeholder: sentences})
return computed_embeddings
return _embeddings_fn
encoder = None
def embed_text(text, module_url, random_projection_matrix):
# Beam will run this function in different processes that need to
# import hub and load embed_fn (if not previously loaded)
global encoder
if not encoder:
encoder = hub.Module(module_url)
embedding = encoder(text)
if random_projection_matrix is not None:
# Perform random projection for the embedding
embedding = tf.matmul(
embedding, tf.cast(random_projection_matrix, embedding.dtype))
return embedding
def make_preprocess_fn(module_url, random_projection_matrix=None):
'''Makes a tft preprocess_fn'''
def _preprocess_fn(input_features):
'''tft preprocess_fn'''
text = input_features['text']
# Generate the embedding for the input text
embedding = embed_text(text, module_url, random_projection_matrix)
output_features = {
'text': text,
'embedding': embedding
}
return output_features
return _preprocess_fn
def create_metadata():
'''Creates metadata for the raw data'''
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import schema_utils
feature_spec = {'text': tf.FixedLenFeature([], dtype=tf.string)}
schema = schema_utils.schema_from_feature_spec(feature_spec)
metadata = dataset_metadata.DatasetMetadata(schema)
return metadata
def run_hub2emb(args):
'''Runs the embedding generation pipeline'''
options = beam.options.pipeline_options.PipelineOptions(**args)
args = namedtuple("options", args.keys())(*args.values())
raw_metadata = create_metadata()
converter = tft.coders.CsvCoder(
column_names=['text'], schema=raw_metadata.schema)
with beam.Pipeline(args.runner, options=options) as pipeline:
with tft_beam.Context(args.temporary_dir):
# Read the sentences from the input file
sentences = (
pipeline
| 'Read sentences from files' >> beam.io.ReadFromText(
file_pattern=args.data_dir)
| 'Convert to dictionary' >> beam.Map(converter.decode)
)
sentences_dataset = (sentences, raw_metadata)
preprocess_fn = make_preprocess_fn(args.module_url, args.random_projection_matrix)
# Generate the embeddings for the sentence using the TF-Hub module
embeddings_dataset, _ = (
sentences_dataset
| 'Extract embeddings' >> tft_beam.AnalyzeAndTransformDataset(preprocess_fn)
)
embeddings, transformed_metadata = embeddings_dataset
# Write the embeddings to TFRecords files
embeddings | 'Write embeddings to TFRecords' >> beam.io.tfrecordio.WriteToTFRecord(
file_path_prefix='{}/emb'.format(args.output_dir),
file_name_suffix='.tfrecords',
coder=tft.coders.ExampleProtoCoder(transformed_metadata.schema))
def generate_random_projection_weights(original_dim, projected_dim):
random_projection_matrix = None
if projected_dim and original_dim > projected_dim:
random_projection_matrix = gaussian_random_matrix(
n_components=projected_dim, n_features=original_dim).T
print("A Gaussian random weight matrix was creates with shape of {}".format(random_projection_matrix.shape))
print('Storing random projection matrix to disk...')
with open('random_projection_matrix', 'wb') as handle:
pickle.dump(random_projection_matrix,
handle, protocol=pickle.HIGHEST_PROTOCOL)
return random_projection_matrix
module_url = 'https://tfhub.dev/google/universal-sentence-encoder/2' #@param {type:"string"}
projected_dim = 64 #@param {type:"number"}
import tempfile
output_dir = pathlib.Path(tempfile.mkdtemp())
temporary_dir = pathlib.Path(tempfile.mkdtemp())
g = tf.Graph()
with g.as_default():
original_dim = load_module(module_url)(['']).shape[1]
random_projection_matrix = None
if projected_dim:
random_projection_matrix = generate_random_projection_weights(
original_dim, projected_dim)
args = {
'job_name': 'hub2emb-{}'.format(datetime.utcnow().strftime('%y%m%d-%H%M%S')),
'runner': 'DirectRunner',
'batch_size': 1024,
'data_dir': 'corpus/*.txt',
'output_dir': output_dir,
'temporary_dir': temporary_dir,
'module_url': module_url,
'random_projection_matrix': random_projection_matrix,
}
print("Pipeline args are set.")
args
!rm -r {output_dir}
!rm -r {temporary_dir}
print("Running pipeline...")
%time run_hub2emb(args)
print("Pipeline is done.")
!ls {output_dir}
import itertools
embed_file = os.path.join(output_dir, 'emb-00000-of-00001.tfrecords')
sample = 5
record_iterator = tf.io.tf_record_iterator(path=embed_file)
for string_record in itertools.islice(record_iterator, sample):
example = tf.train.Example()
example.ParseFromString(string_record)
text = example.features.feature['text'].bytes_list.value
embedding = np.array(example.features.feature['embedding'].float_list.value)
print("Embedding dimensions: {}".format(embedding.shape[0]))
print("{}: {}".format(text, embedding[:10]))
def build_index(embedding_files_pattern, index_filename, vector_length,
metric='angular', num_trees=100):
'''Builds an ANNOY index'''
annoy_index = annoy.AnnoyIndex(vector_length, metric=metric)
# Mapping between the item and its identifier in the index
mapping = {}
embed_files = tf.gfile.Glob(embedding_files_pattern)
print('Found {} embedding file(s).'.format(len(embed_files)))
item_counter = 0
for f, embed_file in enumerate(embed_files):
print('Loading embeddings in file {} of {}...'.format(
f+1, len(embed_files)))
record_iterator = tf.io.tf_record_iterator(
path=embed_file)
for string_record in record_iterator:
example = tf.train.Example()
example.ParseFromString(string_record)
text = example.features.feature['text'].bytes_list.value[0].decode("utf-8")
mapping[item_counter] = text
embedding = np.array(
example.features.feature['embedding'].float_list.value)
annoy_index.add_item(item_counter, embedding)
item_counter += 1
if item_counter % 100000 == 0:
print('{} items loaded to the index'.format(item_counter))
print('A total of {} items added to the index'.format(item_counter))
print('Building the index with {} trees...'.format(num_trees))
annoy_index.build(n_trees=num_trees)
print('Index is successfully built.')
print('Saving index to disk...')
annoy_index.save(index_filename)
print('Index is saved to disk.')
print("Index file size: {} GB".format(
round(os.path.getsize(index_filename) / float(1024 ** 3), 2)))
annoy_index.unload()
print('Saving mapping to disk...')
with open(index_filename + '.mapping', 'wb') as handle:
pickle.dump(mapping, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('Mapping is saved to disk.')
print("Mapping file size: {} MB".format(
round(os.path.getsize(index_filename + '.mapping') / float(1024 ** 2), 2)))
embedding_files = "{}/emb-*.tfrecords".format(output_dir)
embedding_dimension = projected_dim
index_filename = "index"
!rm {index_filename}
!rm {index_filename}.mapping
%time build_index(embedding_files, index_filename, embedding_dimension)
!ls
index = annoy.AnnoyIndex(embedding_dimension)
index.load(index_filename, prefault=True)
print('Annoy index is loaded.')
with open(index_filename + '.mapping', 'rb') as handle:
mapping = pickle.load(handle)
print('Mapping file is loaded.')
def find_similar_items(embedding, num_matches=5):
'''Finds similar items to a given embedding in the ANN index'''
ids = index.get_nns_by_vector(
embedding, num_matches, search_k=-1, include_distances=False)
items = [mapping[i] for i in ids]
return items
# Load the TF-Hub module
print("Loading the TF-Hub module...")
g = tf.Graph()
with g.as_default():
embed_fn = load_module(module_url)
print("TF-Hub module is loaded.")
random_projection_matrix = None
if os.path.exists('random_projection_matrix'):
print("Loading random projection matrix...")
with open('random_projection_matrix', 'rb') as handle:
random_projection_matrix = pickle.load(handle)
print('random projection matrix is loaded.')
def extract_embeddings(query):
'''Generates the embedding for the query'''
query_embedding = embed_fn([query])[0]
if random_projection_matrix is not None:
query_embedding = query_embedding.dot(random_projection_matrix)
return query_embedding
extract_embeddings("Hello Machine Learning!")[:10]
#@title { run: "auto" }
query = "confronting global challenges" #@param {type:"string"}
print("Generating embedding for the query...")
%time query_embedding = extract_embeddings(query)
print("")
print("Finding relevant items in the index...")
%time items = find_similar_items(query_embedding, 10)
print("")
print("Results:")
print("=========")
for item in items:
print(item)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Semantic Search with Approximate Nearest Neighbors and Text Embeddings
Step2: Import the required libraries
Step3: 1. Download Sample Data
Step4: For simplicity, we only keep the headline text and remove the publication date
Step5: Helper function to load a TF-Hub module
Step6: 2. Generate Embeddings for the Data.
Step7: Make TFT preprocess_fn method
Step8: Create dataset metadata
Step9: Beam pipeline
Step10: Generating Random Projection Weight Matrix
Step11: Set parameters
Step12: Run pipeline
Step13: Read some of the generated embeddings...
Step14: 3. Build the ANN Index for the Embeddings
Step15: 4. Use the Index for Similarity Matching
Step16: Similarity matching method
Step17: Extract embedding from a given query
Step18: Enter a query to find the most similar items
|
12,787
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from spacy.symbols import pobj
site_scrape_dict = {
# the following represents html selector to retrieve the header + 2 first test paragraphs
'aol.com': '#article-wrapper h1, #article-wrapper > div.article-content > p:nth-child(2) , #article-wrapper > div.article-content > p:nth-child(3)',
'homepage.aol.com': '#article-wrapper h1, #article-wrapper > div.article-content > p:nth-child(2) , #article-wrapper > div.article-content > p:nth-child(3)',
'hp-desktop.aol.com': '#article-wrapper h1, #article-wrapper > div.article-content > p:nth-child(2) , #article-wrapper > div.article-content > p:nth-child(3)',
'help.aol.com': '#article-wrapper h1, #article-wrapper > div.article-content > p:nth-child(2) , #articlex-wrapper > div.article-content > p:nth-child(3)', # we might need to exclude it
'aol.co.uk': 'body > div.lo-container > div > section > article > header > div.show-article-title > h1, body > div.lo-container > div > section > article > section:nth-child(2) > div > div > p:nth-child(2), body > div.lo-container > div > section > article > section:nth-child(2) > div > div > p:nth-child(3), body > div.lo-container > div > section > article > section:nth-child(2) > div > div > p:nth-child(4)',
'build.aol.com': '#build-video-player > div.video-content-main > div.videoplayer-info > div > div.videotext > h1, #build-video-player > div.video-content-main > div.videoplayer-info > div > div.videotext > span.videodesc',
}
def extract_locales(url, site):
"""returns a set of gpe unicode strings"""
raw_text = _scrape_site(url, site)
# print(raw_text) #debugging
gpe_list = _get_gpes(raw_text)
return gpe_list
import spacy
nlp = spacy.load('en')
def _get_gpes(raw_text):
gpe_list = set()
if raw_text is None:
return gpe_list
raw_text = raw_text.strip().replace("\n", " ").replace("\r", " ")
doc = nlp(raw_text)
for chunk in list(doc.noun_chunks):
gpe = None
isPobj = False
for sub_chunk in list(chunk.subtree):
if(sub_chunk.ent_type_ == 'GPE'):
gpe = sub_chunk.string
if(sub_chunk.dep == pobj):
isPobj = True
if ((gpe != None) & isPobj):
# print(gpe) # same value can be added more then once - chunk.subtree may return the same phrase more then once
gpe_list.add(gpe)
return gpe_list
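# (Added sketch, not in the original notebook.) Quick sanity check of the GPE extraction on a toy
# sentence: only GPE entities sitting in a prepositional-object position inside a noun chunk are
# returned, so with a typical spaCy English model 'London' should be kept while 'Paris' (a subject)
# should not.
print(_get_gpes(u"The mayor of London met journalists while Paris voted."))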
# list(list(doc.noun_chunks)[6].subtree)[1].ent_type_
# list(list(doc.noun_chunks)[6].subtree)[2].dep_
import subprocess
def _scrape_site(url, site):
if site in site_scrape_dict:
html_selector = site_scrape_dict[site]
else:
html_selector = 'h1' # this might be dangerous - returning to many results ..
# return '' another option is to scrape only sites we know
command = "curl -s '" + url + "' |pup '" + html_selector + " text{}'"
# print("DEBUG scrape: {}".format(command))
p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if out:
return out.decode('utf-8')
if err:
print("failed to scrape {}".format(url))
return ''
df = pd.read_csv('/Users/ezer/dev/ml/factorization_matrix/baseline/data/memsql/memsql_test3.csv', skiprows=1000, header=1, nrows=5, parse_dates=['reporttime'], names=['ip','sid','vid','seq','site','r','pid','countrycode','stateprovince','city','devType','max_vpt','max_t','max_pct','reporttime'])
print("num of rows (before unique): {}".format(df.shape[0]))
df = df.filter(['r','site'], axis=1) # df['seq'] == 1
df = df.groupby(['r', 'site']).count() #.reset_index()
df = df.reset_index()
print("columns: {}".format(df.columns))
print("num of rows for scraping: {}".format(df.shape[0]))
if df.shape[0] > 10:
print("WARNING! executing large number of rows may take a long while: {}".format(df.shape[0]))
total = df.shape[0]
current = 0
OUTPUT_FILE = '/tmp/locals_of_urls.csv'
with open(OUTPUT_FILE,'w') as f:
f.write('url,locations\n')
for index, row in df.iterrows():
url, site = row['r'], row['site']
local_set = extract_locales(url, site)
csv_locals = '|'.join(str(s).strip() for s in local_set)
line = "{},{}\n".format(url, csv_locals)
f.write(line)
current+=1
if current%10 == 0: # print every 10 urls (reduce garbage..)
print("adding [{} of {}], url: {}".format(current, total, url))
print "*** Done! ***"
locations_df = pd.read_csv(OUTPUT_FILE, na_filter='')
print("locations_df num of rows: {}".format(locations_df.shape[0]))
if (locations_df.shape[0] != df.shape[0]):
print("there is a count mismatch between original: {} and location urls: {}")
locations_df.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: scraping video geo context
Step3: create a unique ['url', 'site']
Step4: create a new csv that will hold url to extracted locations (pipe delimited)
Step5: check the result of the new file
|
12,788
|
<ASSISTANT_TASK:>
Python Code:
# This imports the OpenContextAPI from the api.py file in the
# opencontext directory.
%run '../opencontext/api.py'
import matplotlib.pyplot as plt
import matplotlib.cm as cm
def make_group_markers_colors_for_df(df, group_col):
"""Makes group markers and colors for consistency in multiple plots"""
# Make a list of markers that we will associate with different
# grouping values for our scatter plots.
markers = [
'o',
'x',
'v',
'D',
'p',
'^',
's',
'*',
]
group_vals = df[group_col].unique().tolist()
group_vals.sort()
# Each value from the grouping column will get a color
# assigned.
colors = cm.rainbow(np.linspace(0, 1, len(group_vals)))
group_markers = {}
group_colors = {}
m_i = 0
for i, group_val in enumerate(group_vals):
group_markers[group_val] = markers[m_i]
group_colors[group_val] = colors[i].reshape(1,-1)
m_i += 1
if m_i >= len(markers):
# We ran out of markers, so restart
# the marker index.
m_i = 0
# Return a tuple of group markers and color dicts.
return (
group_markers,
group_colors,
)
def make_scatter_plot_from_oc_df(
df,
group_col,
x_col,
y_col,
group_markers=None,
group_colors=None,
):
"""Make a scatter plot from an Open Context dataframe"""
if not set([group_col, x_col, y_col]).issubset(set(df.columns.tolist())):
raise ValueError('Check for missing columns')
if not group_markers or not group_colors:
# These were't passed as arguments so make them.
group_markers, group_colors = make_group_markers_colors_for_df(
df,
group_col
)
group_vals = df[group_col].unique().tolist()
group_vals.sort()
ax = None
for group_val in group_vals:
act_index = (
(df[group_col] == group_val)
& ~df[x_col].isnull()
& ~df[y_col].isnull()
)
if df[act_index].empty:
# No data for this taxon
continue
label = '{} [n={}]'.format(group_val, len(df[act_index].index))
if not ax:
ax = df[act_index].plot.scatter(
x=x_col,
y=y_col,
marker=group_markers[group_val],
label=label,
color=group_colors[group_val],
)
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
else:
plot = df[act_index].plot.scatter(
x=x_col,
y=y_col,
marker=group_markers[group_val],
label=label,
ax=ax,
color=group_colors[group_val],
)
plot.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
import numpy as np
import pandas as pd
oc_api = OpenContextAPI()
# The cache file prefix defaults to today's date. This means that, by default,
# the cache expires after a day. To keep cached files indefinately, we can
# change the cache file prefix to something else that won't change from day
# to day.
oc_api.set_cache_file_prefix('plot-demo')
# Clear old cached records.
oc_api.clear_api_cache()
# This is a search/query url to Open Context.
url = 'https://opencontext.org/subjects-search/?prop=obo-foodon-00001303---gbif-1---gbif-44---gbif-359---gbif-731&prop=oc-zoo-anatomical-meas---oc-zoo-von-den-driesch-bone-meas&prop=oc-zoo-has-anat-id---obo-uberon-0013588#4/46.07/16.17/8/any/Google-Satellite'
# Fetch the 'standard' (linked data identified) attributes in use with
# data at the url.
stnd_attribs_tuples = oc_api.get_standard_attributes(
url,
# The optional argument below gets popular standard
# zooarchaeological (bone) measurements.
add_von_den_driesch_bone_measures=True
)
# Make a list of only the slugs from the list of slug, label tuples.
stnd_attribs = [slug for slug, _ in stnd_attribs_tuples]
# Make a dataframe by fetching result records from Open Context.
# This will be slow until we finish improvements to Open Context's API.
# However, the results get cached by saving as files locally. That
# makes iterating on this notebook much less painful.
df = oc_api.url_to_dataframe(url, stnd_attribs)
group_markers, group_colors = make_group_markers_colors_for_df(
df,
group_col='Has taxonomic identifier'
)
# Make a plot of Bd versus DD for different taxa
make_scatter_plot_from_oc_df(
df,
group_col='Has taxonomic identifier',
x_col='Bd',
y_col='DD',
group_markers=group_markers,
group_colors=group_colors,
)
# Make a plot of Bd versus DD for different taxa, limiting DD to reasonable values.
make_scatter_plot_from_oc_df(
df[(df['DD'] < 80)],
group_col='Has taxonomic identifier',
x_col='Bd',
y_col='DD',
group_markers=group_markers,
group_colors=group_colors,
)
# Make a plot of Bp versus Dp for different taxa
make_scatter_plot_from_oc_df(
df,
group_col='Has taxonomic identifier',
x_col='Bp',
y_col='Dp',
group_markers=group_markers,
group_colors=group_colors,
)
# Make a plot of Bp versus Dp for different taxa, excluding pigs
make_scatter_plot_from_oc_df(
df[~df['Has taxonomic identifier'].str.startswith('Sus')],
group_col='Has taxonomic identifier',
x_col='Bp',
y_col='Dp',
group_markers=group_markers,
group_colors=group_colors,
)
# Check some relationships in distal end measurements, also excluding pigs
make_scatter_plot_from_oc_df(
df[~df['Has taxonomic identifier'].str.startswith('Sus')],
group_col='Has taxonomic identifier',
x_col='Bd',
y_col='Dd',
group_markers=group_markers,
group_colors=group_colors,
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Below I define two little utility functions to make scatter plots from the data contained in a dataframe that was populated by the OpenContextAPI() class. The first function make_group_markers_colors_for_df makes dicts that associate markers and colors for different values in the group_col. The second function make_scatter_plot_from_oc_df makes scatter plots.
Step4: Making some Plots
Step5: Observing an outlier
Step6: Excluding the outlier
Step7: A more interesting plot, using proximal end measurements
Step8: Excluding suspect taxa
Step9: To further explore the data, we include this plot that illustrates some taxonomic patterning of distal end measurements.
|
12,789
|
<ASSISTANT_TASK:>
Python Code:
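# (Added for completeness, not part of the original snippet.) The code below uses reduce,
# vector_add, vector_subtract, math, numpy and the sample vectors v and w before defining
# them; they come from earlier in "Data Science from Scratch" and are reproduced here as a sketch.
from functools import reduce
import math
import numpy as np
def vector_add(v, w):
    return [v_i + w_i for v_i, w_i in zip(v, w)]
def vector_subtract(v, w):
    return [v_i - w_i for v_i, w_i in zip(v, w)]
v = [1, 2, 3, 4]
w = [-4, -3, -2, -1]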
# Original book version
def vector_sum(vectors):
return reduce(vector_add, vectors)
vectors = [v,w,v,w,v,w]
vector_sum(vectors)
# Modified version by sc82.choi at Gachon - the * operator unpacks the list of vectors into separate arguments for zip
def vector_sum_modified(vectors):
return [sum(value) for value in zip(*vectors)]
vectors = [v,w,v,w,v,w]
vector_sum_modified(vectors)
# Numpy operation
np.sum([v,w,v,w,v,w], axis=0)
# axis=0: treating [v,w,v,w,v,w] as a single matrix, sum over each column
# axis=1: treating [v,w,v,w,v,w] as a single matrix, sum over each row
# Original book version
def scalar_multiply(c, v):
return [c * v_i for v_i in v]
v = [5, 6, 7, 8]
scalar = 3
scalar_multiply(scalar, v)
# Numpy version: NumPy supports basic vector operations even when the array shapes differ; this is called broadcasting
scalar * np.array(v)
# Original book version
def vector_mean(vectors):
"""compute the vector whose i-th element is the mean of the
i-th elements of the input vectors"""
n = len(vectors)
return scalar_multiply(1/n, vector_sum(vectors))
v = [1,2,3,4]
w = [-4,-3,-2,-1]
vector_mean([v,v,v,v])
# Numpy version
np.mean([v,v,v,v], axis=0)
# axis=0: treating [v,v,v,v] as a single matrix, take the mean over each column
# axis=1: treating [v,v,v,v] as a single matrix, take the mean over each row
# Original book version
def dot(v, w):
"""v_1 * w_1 + ... + v_n * w_n"""
return sum(v_i * w_i for v_i, w_i in zip(v, w))
v = [1,2,3,4]
w = [-4,-3,-2,-1]
dot(v, w)
# Numpy version
np.dot(v,w)
# Original book version
def sum_of_squares(v):
"""v_1 * v_1 + ... + v_n * v_n"""
return dot(v, v)
v = [1,2,3,4]
sum_of_squares(v) # v * v = [1,4,9,16]
# Numpy version
np.dot(v,v) # or sum(np.square(v))
# Original book version
def magnitude(v):
return math.sqrt(sum_of_squares(v))
magnitude(v)
# Numpy version
np.linalg.norm(v)
#original version
def squared_distance(v, w):
return sum_of_squares(vector_subtract(v, w))
def distance(v, w):
return math.sqrt(squared_distance(v, w))
v = [1,2,3,4]
w = [-4,-3,-2,-1]
squared_distance(v,w)
distance(v,w)
# Numpy version
np.linalg.norm(np.subtract(v,w)) # or np.sqrt(np.sum(np.subtract(v,w)**2))
def shape(A):
num_rows = len(A)
num_cols = len(A[0]) if A else 0
return num_rows, num_cols
def get_row(A, i):
return A[i]
def get_column(A, j):
return [A_i[j] for A_i in A]
example_matrix = [[1,2,3,4,5], [11,12,13,14,15], [21,22,23,24,25]]
shape(example_matrix)
get_row(example_matrix, 0)
get_column(example_matrix,3)
# Numpy version
np.shape(example_matrix)
example_matrix = np.array(example_matrix)
example_matrix[0] #row slicing
example_matrix[:,3] #row slicing
def make_matrix(num_rows, num_cols, entry_fn):
"""returns a num_rows x num_cols matrix
whose (i,j)-th entry is entry_fn(i, j)"""
return [[entry_fn(i, j) for j in range(num_cols)]
for i in range(num_rows)]
def is_diagonal(i, j):
"""1's on the 'diagonal', 0's everywhere else"""
return 1 if i == j else 0
identity_matrix = make_matrix(5, 5, is_diagonal)
identity_matrix
# Numpy version
np.identity(5)
friendships = [[0, 1, 1, 0, 0, 0, 0, 0, 0, 0], # user 0
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0], # user 1
[1, 1, 0, 1, 0, 0, 0, 0, 0, 0], # user 2
[0, 1, 1, 0, 1, 0, 0, 0, 0, 0], # user 3
[0, 0, 0, 1, 0, 1, 0, 0, 0, 0], # user 4
[0, 0, 0, 0, 1, 0, 1, 1, 0, 0], # user 5
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 6
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 7
[0, 0, 0, 0, 0, 0, 1, 1, 0, 1], # user 8
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] # user 9
def matrix_add(A, B):
if shape(A) != shape(B):
raise ArithmeticError("cannot add matrices with different shapes")
num_rows, num_cols = shape(A)
def entry_fn(i, j): return A[i][j] + B[i][j]
return make_matrix(num_rows, num_cols, entry_fn)
A = [[ 1., 0., 0.], [ 0., 1., 2.]]
B = [[ 5., 4., 3.], [ 2., 2., 2.]]
matrix_add(A,B)
# Numpy version
np.add(A,B) # as with vectors, a same-shaped matrix given as a list of lists is converted automatically
def make_graph_dot_product_as_vector_projection(plt):
v = [2, 1]
w = [math.sqrt(.25), math.sqrt(.75)]
c = dot(v, w)
vonw = scalar_multiply(c, w)
o = [0,0]
plt.arrow(0, 0, v[0], v[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("v", v, xytext=[v[0] + 0.1, v[1]])
plt.arrow(0 ,0, w[0], w[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("w", w, xytext=[w[0] - 0.1, w[1]])
plt.arrow(0, 0, vonw[0], vonw[1], length_includes_head=True)
plt.annotate(u"(v•w)w", vonw, xytext=[vonw[0] - 0.1, vonw[1] + 0.1])
plt.arrow(v[0], v[1], vonw[0] - v[0], vonw[1] - v[1],
linestyle='dotted', length_includes_head=True)
plt.scatter(*zip(v,w,o),marker='.')
plt.axis([0,2,0,2]) # changed because part of the plot was being cut off
plt.show()
%pylab inline
make_graph_dot_product_as_vector_projection(plt)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scalar * Vector operations, e.g. 2 * [1,2,3,4] = [2,4,6,8]
Step3: Computing the mean of vectors
Step5: Vector dot product
Step7: Square each element of a vector, sum them, and return the value
Step8: magnitude
Step9: Computing the distance between vectors
Step10: Matrix indexing
Step13: Matrix operation
|
12,790
|
<ASSISTANT_TASK:>
Python Code:
from time import clock
from scipy.io import mmwrite
import matplotlib.pyplot as plt
from qutip import *
from qutip.piqs import *
nnn = 10
N = nnn
jj_mat = nnn/2
[jx_mat, jy_mat, jz_mat] = jmat(jj_mat)
jp_mat = jx_mat + 1j * jy_mat
jm_mat = jx_mat - 1j * jy_mat
w0 = 1
kappa = 2 * w0
gg = kappa/ jj_mat
ham = w0 * jx_mat
c_ops = [np.sqrt(gg) * jm_mat]
liouv_mat = liouvillian(ham, c_ops)
print(liouv_mat.shape)
eig_mat = liouv_mat.eigenenergies()
re_eigmat = np.real(eig_mat)
imag_eigmat = np.imag(eig_mat)
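# (Added sketch, not in the original notebook.) The Liouvillian gap -- the smallest nonzero
# decay rate |Re(lambda)| -- sets the slowest relaxation timescale and is useful for comparing
# the strong- and weak-dissipation regimes plotted below.
decaying = re_eigmat[re_eigmat < -1e-10]
print('Liouvillian gap / kappa:', -np.max(decaying) / kappa)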
fig6 = plt.figure(6)
plt.plot(re_eigmat/kappa, imag_eigmat/kappa, 'k.')
label_size = 15
label_size2 = 15
label_size3 = 15
plt.rc('text', usetex = True)
plt.title(r'BTC - $\mathcal{L}$ spectrum, strong dissipation limit QuTiP jmat',
fontsize = label_size2)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
plt.ylim([-20,15])
plt.xlim([-15,0])
plt.xlabel(r'$\mathrm{Re}(\lambda)$', fontsize = label_size3)
plt.ylabel(r'$\mathrm{Im}(\lambda)$', fontsize = label_size3)
fname = 'figures/btc_eig_N{}_strong_jmat.pdf'.format(N)
savefile = False
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
#Saving for Mathematica
liouvd_jmat =liouv_mat.full()
liouvd_re_jmat = np.real(liouvd_jmat)
liouvd_imag_jmat = np.imag(liouvd_jmat)
#saveto_file_name2 = str("re_liouv_N={}".format(N))
#liouvd_re.astype('float32').tofile('{}.dat'.format(saveto_file_name2))
#saveto_file_name3 = str("imag_liouv_N={}".format(N))
#liouvd_imag.astype('float32').tofile('{}.dat'.format(saveto_file_name3))
#mmwrite('data/liouvrejmat.mtx', liouvd_re_jmat/kappa)
#mmwrite('data/liouvimjmat.mtx', liouvd_imag_jmat/kappa)
fig7 = plt.figure(7)
plt.plot(re_eigmat/kappa, imag_eigmat/kappa, 'k.', re_eigmat/kappa, 0*imag_eigmat/kappa, '-', lw = 0.5)
label_size = 15
label_size2 = 15
label_size3 = 15
plt.title(r'BTC - $\mathcal{L}$ spectrum, strong dissipation limit, Jmat', fontsize = label_size2)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
plt.ylim([-1,1])
plt.xlim([-4,0])
plt.xlabel(r'$\mathrm{Re}(\lambda)$', fontsize = label_size3)
plt.ylabel(r'$\mathrm{Im}(\lambda)$', fontsize = label_size3)
fname = 'figures/btc_eig_inset_N{}_strong_jmat.pdf'.format(N)
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
nnn = 36
N = nnn
jj_mat = nnn/2
[jx_mat, jy_mat, jz_mat] = jmat(jj_mat)
jp_mat = jx_mat + 1j * jy_mat
jm_mat = jx_mat - 1j * jy_mat
w0 = 1
kappa = 2/3 * w0
gg = kappa/ jj_mat
ham = w0 * jx_mat
c_ops = [np.sqrt(gg) * jm_mat]
liouv_mat = liouvillian(ham, c_ops)
print(liouv_mat.shape)
eig_mat = liouv_mat.eigenenergies()
re_eigmat = np.real(eig_mat)
imag_eigmat = np.imag(eig_mat)
fig8 = plt.figure(8)
plt.plot(re_eigmat/kappa, imag_eigmat/kappa, 'k.')
label_size = 15
label_size2 = 15
label_size3 = 15
plt.rc('text', usetex = True)
plt.title(r'BTC - $\mathcal{L}$ spectrum, weak dissipation limit QuTiP jmat', fontsize = label_size2)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
plt.ylim([-50,35])
plt.xlim([-15,0])
plt.xlabel(r'$\mathrm{Re}(\lambda)$', fontsize = label_size3)
plt.ylabel(r'$\mathrm{Im}(\lambda)$', fontsize = label_size3)
fname = 'figures/btc_eig_N{}_weak_jmat.pdf'.format(N)
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
fig9 = plt.figure(9)
plt.plot(re_eigmat/kappa, imag_eigmat/kappa, 'k.', re_eigmat/kappa, 0*imag_eigmat/kappa, '-', lw = 0.5)
label_size = 15
label_size2 = 15
label_size3 = 15
plt.title(r'BTC - $\mathcal{L}$ spectrum, weak dissipation limit, Jmat', fontsize = label_size2)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
plt.ylim([-5,5])
plt.xlim([-0.4,0])
plt.xlabel(r'$\mathrm{Re}(\lambda)$', fontsize = label_size3)
plt.ylabel(r'$\mathrm{Im}(\lambda)$', fontsize = label_size3)
fname = 'figures/btc_eig_inset_N{}_weak_jmat.pdf'.format(N)
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
N = 20
ntls = N
nds = num_dicke_states(N)
print("System size: N = ", N, "| nds = ", nds, "| nds^2 = ", nds**2, "| 2^N = ", 2**N)
[jx, jy, jz] = jspin(N)
jp = jspin(N, "+")
jm = jp.dag()
jpjm = jp*jm
w0 = 1
kappa = 0.5 * w0
gCE = 2*kappa/N
gE = 0
gP = 0
gCD = 0
gCP = 0
h = w0 * jx
nt = 1001
td0 = kappa
tmax = 200 * td0
t = np.linspace(0, tmax, nt)
rho0 = dicke(N, N/2, N/2)
jzt_list = []
jpjmt_list = []
jz2t_list = []
gD_list = [0, 0.01, 0.1, 1]
for gD in gD_list:
print(gD)
system = Dicke(N=N)
system.collective_emission = gCE
system.emission = gE
system.dephasing = gD
system.pumping = gP
system.collective_pumping = gCP
system.collective_dephasing = gCD
# energy / dynamics numerical
system.hamiltonian = h
liouv = system.liouvillian()
result = mesolve(liouv, rho0, t, [], e_ops = [jz, jp*jm, jz*jz], options = Options(store_states=True))
rhot = result.states
jz_t = result.expect[0]
jpjm_t = result.expect[1]
jz2_t = result.expect[2]
jzt_list.append(jz_t)
jpjmt_list.append(jpjm_t)
jz2t_list.append(jz2_t)
# gD_list.append(gD)
plt.rc('text', usetex = True)
label_size = 20
label_size2 = 20
label_size3 = 20
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
lw = 1
i = 0
fig5 = plt.figure(figsize=(7,5))
for gD in gD_list:
plt.plot(w0*t, jzt_list[i]/(N/2), '-',
label = r"$\gamma_\phi/\omega_x={}$".format(gD), linewidth = 2*lw+0.4*i)
i = i+1
plt.ylim([-1,1])
#plt.title(r'Total inversion', fontsize = label_size2)
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J_z \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.8)
plt.show()
plt.close()
#cooperativity
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig8 = plt.figure(figsize=(7,5))
i=0
for gD in gD_list:
plt.plot(w0*t, (jz2t_list[i] -jzt_list[i] + jpjmt_list[i])/((N/2*(N/2+1))),
'-', label = r"$\gamma_\phi/\omega_x={}$".format(gD), linewidth = 2*lw+0.4*i)
i = i+1
plt.ylim([0,2.])
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J^2 \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.8)
plt.title(r'Cooperativity', fontsize = label_size2)
plt.show()
plt.close()
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig6 = plt.figure(figsize=(8,6))
i=0
for gD in gD_list:
plt.plot(w0*t, jpjmt_list[i]/(N/2)**2, label = r"$\gamma_\phi/\omega_x={}$".format(gD), linewidth = 2*lw+0.4*i)
i = i+1
#plt.ylim([-1,1])
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J_{+}J_{-} \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.7)
plt.title(r'Light emission', fontsize = label_size2)
plt.show()
plt.close()
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig7 = plt.figure(figsize=(7,5))
i=0
for gD in gD_list:
plt.plot(w0*t, jz2t_list[i]/(N/2), '-', label = r"$\gamma_\phi/\omega_x={}$".format(gD), linewidth = 2*lw+0.4*i)
i = i+1
#plt.ylim([-1,1])
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J_z^2 \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.7)
plt.title(r'Second moment', fontsize = label_size2)
plt.show()
plt.close()
# Study of local incoherent losses
N = 20
print(N)
w0 = 1
kappa = 0.5 * w0
gCE = 2*kappa /N
gE = 0
gP = 0
gD = 0
gCD = 0
gCP = 0
gD = 0
h = w0 * jx
nt = 1001
td0 = kappa
tmax = 200 * td0
t = np.linspace(0, tmax, nt)
rho0 = dicke(N, N/2, N/2)
jzt_list = []
jpjmt_list = []
jz2t_list = []
gE_list = [0, 0.01, 0.1, 1]
for gE in gE_list:
print(gE)
system = Dicke(N=N)
system.collective_emission = gCE
system.emission = gE
system.dephasing = gD
system.pumping = gP
system.collective_pumping = gCP
system.collective_dephasing = gCD
# energy / dynamics numerical
system.hamiltonian = h
liouv = system.liouvillian()
result = mesolve(liouv, rho0, t, [], e_ops = [jz, jp*jm, jz*jz], options = Options(store_states=True))
rhot = result.states
jz_t = result.expect[0]
jpjm_t = result.expect[1]
jz2_t = result.expect[2]
jzt_list.append(jz_t)
jpjmt_list.append(jpjm_t)
jz2t_list.append(jz2_t)
# gD_list.append(gD)
plt.rc('text', usetex = True)
label_size = 20
label_size2 = 20
label_size3 = 20
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
lw = 1
i = 0
fig5 = plt.figure(figsize=(7,5))
for gE in gE_list:
plt.plot(w0*t, jzt_list[i]/(N/2), '-', label = r"$\gamma_\downarrow/\omega_x={}$".format(gE), linewidth = 2*lw+0.4*i)
i = i+1
plt.ylim([-1,1])
#plt.title(r'Total inversion', fontsize = label_size2)
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J_z \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.8)
fname = 'figures/btc_jzt_N{}_gE.pdf'.format(N)
if savefile == True:
plt.savefig(fname, bbox_inches='tight')
plt.show()
plt.close()
#cooperativity
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig8 = plt.figure(figsize=(7,5))
i=0
for gE in gE_list:
plt.plot(w0*t, (jz2t_list[i] -jzt_list[i] + jpjmt_list[i])/((N/2*(N/2+1))),
'-', label = r"$\gamma_\downarrow/\omega_x={}$".format(gE), linewidth = 2*lw+0.4*i)
i = i+1
plt.ylim([0,2.])
plt.xlabel(r'$\omega_x t$', fontsize = label_size3)
plt.ylabel(r'$\langle J^2 \rangle (t)$', fontsize = label_size3)
plt.legend(fontsize = label_size3*0.8)
plt.title(r'Cooperativity', fontsize = label_size2)
plt.show()
plt.close()
qutip.about()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Spectrum of the Liouvillian - Strong dissipation limit $\omega_{0} = 0.5 \kappa $
Step2: The Figure above reproduces qualitatively the study performed in Ref. [4].
Step3: The Figure above reproduces qualitatively the study performed in Ref. [4].
Step4: The Figure above reproduces qualitatively the study performed in Ref. [4].
Step5: The plots above integrate the study on the effect of local dissipation performed in Ref. [1]. The boundary time crystals were introduced in Ref. [4]. A study of the effect of inhomogenous broadening (non-identical two level systems) is performed in Ref. [7] with regard to boundary time crystals and in Ref. [8] with regards to Dicke superradiance.
|
12,791
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.grid_search import GridSearchCV
from sklearn import datasets, svm
import matplotlib.pyplot as plt
# Load the digit data
digits = datasets.load_digits()
# View the features of the first observation
digits.data[0:1]
# View the target of the first observation
digits.target[0:1]
# Create dataset 1
data1_features = digits.data[:1000]
data1_target = digits.target[:1000]
# Create dataset 2
data2_features = digits.data[1000:]
data2_target = digits.target[1000:]
parameter_candidates = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
# Create a classifier object with the classifier and parameter candidates
clf = GridSearchCV(estimator=svm.SVC(), param_grid=parameter_candidates, n_jobs=-1)
# Train the classifier on data1's feature and target data
clf.fit(data1_features, data1_target)
# View the accuracy score
print('Best score for data1:', clf.best_score_)
# View the best parameters for the model found using grid search
print('Best C:',clf.best_estimator_.C)
print('Best Kernel:',clf.best_estimator_.kernel)
print('Best Gamma:',clf.best_estimator_.gamma)
# Apply the classifier trained using data1 to data2, and view the accuracy score
clf.score(data2_features, data2_target)
# Train a new classifier using the best parameters found by the grid search
svm.SVC(C=10, kernel='rbf', gamma=0.001).fit(data1_features, data1_target).score(data2_features, data2_target)
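# (Added sketch, not in the original tutorial.) To see how every parameter candidate fared,
# not just the best one, the fitted GridSearchCV object from the legacy sklearn.grid_search
# module imported above exposes grid_scores_:
for candidate in clf.grid_scores_:
    print(candidate)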
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Two Datasets
Step2: The target data is a vector containing the image's true digit. For example, the first observation is a handwritten digit for '0'.
Step3: To demonstrate cross validation and parameter tuning, first we are going to divide the digit data into two datasets called data1 and data2. data1 contains the first 1000 rows of the digits data, while data2 contains the remaining ~800 rows. Note that this split is separate from the cross validation we will conduct and is done purely to demonstrate something at the end of the tutorial. In other words, don't worry about data2 for now, we will come back to it.
Step4: Create Parameter Candidates
Step5: Conduct Grid Search To Find Parameters Producing Highest Score
Step6: Success! We have our results! First, let's look at the accuracy score when we apply the model to the data1's test data.
Step7: Which parameters are the best? We can tell scikit-learn to display them
Step8: This tells us that the most accurate model uses C=10, the rbf kernel, and gamma=0.001.
|
12,792
|
<ASSISTANT_TASK:>
Python Code:
import deepchem as dc
from deepchem.models.tensorgraph.models.graph_models import GraphConvModel
# Load Tox21 dataset
tox21_tasks, tox21_datasets, transformers = dc.molnet.load_tox21(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = tox21_datasets
model = GraphConvModel(
len(tox21_tasks), batch_size=50, mode='classification')
num_epochs = 10
losses = []
for i in range(num_epochs):
loss = model.fit(train_dataset, nb_epoch=1)
print("Epoch %d loss: %f" % (i, loss))
losses.append(loss)
import matplotlib.pyplot as plot
plot.ylabel("Loss")
plot.xlabel("Epoch")
x = range(num_epochs)
y = losses
plot.scatter(x, y)
plot
import numpy as np
metric = dc.metrics.Metric(
dc.metrics.roc_auc_score, np.mean, mode="classification")
print("Evaluating model")
train_scores = model.evaluate(train_dataset, [metric], transformers)
print("Training ROC-AUC Score: %f" % train_scores["mean-roc_auc_score"])
valid_scores = model.evaluate(valid_dataset, [metric], transformers)
print("Validation ROC-AUC Score: %f" % valid_scores["mean-roc_auc_score"])
from deepchem.models.tensorgraph.tensor_graph import TensorGraph
tg = TensorGraph(use_queue=False)
import tensorflow as tf
from deepchem.models.tensorgraph.layers import Feature
atom_features = Feature(shape=(None, 75))
degree_slice = Feature(shape=(None, 2), dtype=tf.int32)
membership = Feature(shape=(None,), dtype=tf.int32)
deg_adjs = []
for i in range(0, 10 + 1):
deg_adj = Feature(shape=(None, i + 1), dtype=tf.int32)
deg_adjs.append(deg_adj)
from deepchem.models.tensorgraph.layers import Dense, GraphConv, BatchNorm
from deepchem.models.tensorgraph.layers import GraphPool, GraphGather
batch_size = 50
gc1 = GraphConv(
64,
activation_fn=tf.nn.relu,
in_layers=[atom_features, degree_slice, membership] + deg_adjs)
batch_norm1 = BatchNorm(in_layers=[gc1])
gp1 = GraphPool(in_layers=[batch_norm1, degree_slice, membership] + deg_adjs)
gc2 = GraphConv(
64,
activation_fn=tf.nn.relu,
in_layers=[gp1, degree_slice, membership] + deg_adjs)
batch_norm2 = BatchNorm(in_layers=[gc2])
gp2 = GraphPool(in_layers=[batch_norm2, degree_slice, membership] + deg_adjs)
dense = Dense(out_channels=128, activation_fn=tf.nn.relu, in_layers=[gp2])
batch_norm3 = BatchNorm(in_layers=[dense])
readout = GraphGather(
batch_size=batch_size,
activation_fn=tf.nn.tanh,
in_layers=[batch_norm3, degree_slice, membership] + deg_adjs)
from deepchem.models.tensorgraph.layers import Dense, SoftMax, \
SoftMaxCrossEntropy, WeightedError, Stack
from deepchem.models.tensorgraph.layers import Label, Weights
costs = []
labels = []
for task in range(len(tox21_tasks)):
classification = Dense(
out_channels=2, activation_fn=None, in_layers=[readout])
softmax = SoftMax(in_layers=[classification])
tg.add_output(softmax)
label = Label(shape=(None, 2))
labels.append(label)
cost = SoftMaxCrossEntropy(in_layers=[label, classification])
costs.append(cost)
all_cost = Stack(in_layers=costs, axis=1)
weights = Weights(shape=(None, len(tox21_tasks)))
loss = WeightedError(in_layers=[all_cost, weights])
tg.set_loss(loss)
from deepchem.metrics import to_one_hot
from deepchem.feat.mol_graphs import ConvMol
def data_generator(dataset, epochs=1, predict=False, pad_batches=True):
for epoch in range(epochs):
if not predict:
print('Starting epoch %i' % epoch)
for ind, (X_b, y_b, w_b, ids_b) in enumerate(
dataset.iterbatches(
batch_size, pad_batches=pad_batches, deterministic=True)):
d = {}
for index, label in enumerate(labels):
d[label] = to_one_hot(y_b[:, index])
d[weights] = w_b
multiConvMol = ConvMol.agglomerate_mols(X_b)
d[atom_features] = multiConvMol.get_atom_features()
d[degree_slice] = multiConvMol.deg_slice
d[membership] = multiConvMol.membership
for i in range(1, len(multiConvMol.get_deg_adjacency_lists())):
d[deg_adjs[i - 1]] = multiConvMol.get_deg_adjacency_lists()[i]
yield d
# Reduce num_epochs to 1 to render the tutorial online quickly;
# num_epochs=10 (used below) gives better results.
num_epochs = 10
losses = []
for i in range(num_epochs):
loss = tg.fit_generator(data_generator(train_dataset, epochs=1))
print("Epoch %d loss: %f" % (i, loss))
losses.append(loss)
plot.title("TensorGraph Version")
plot.ylabel("Loss")
plot.xlabel("Epoch")
x = range(num_epochs)
y = losses
plot.scatter(x, y)
plot
metric = dc.metrics.Metric(
dc.metrics.roc_auc_score, np.mean, mode="classification")
def reshape_y_pred(y_true, y_pred):
"""TensorGraph.Predict returns a list of arrays, one for each output
We also have to remove the padding on the last batch
Metrics take results of shape (samples, n_task, prob_of_class)"""
n_samples = len(y_true)
retval = np.stack(y_pred, axis=1)
return retval[:n_samples]
print("Evaluating model")
train_predictions = tg.predict_on_generator(data_generator(train_dataset, predict=True))
train_predictions = reshape_y_pred(train_dataset.y, train_predictions)
train_scores = metric.compute_metric(train_dataset.y, train_predictions, train_dataset.w)
print("Training ROC-AUC Score: %f" % train_scores)
valid_predictions = tg.predict_on_generator(data_generator(valid_dataset, predict=True))
valid_predictions = reshape_y_pred(valid_dataset.y, valid_predictions)
valid_scores = metric.compute_metric(valid_dataset.y, valid_predictions, valid_dataset.w)
print("Valid ROC-AUC Score: %f" % valid_scores)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, let's use MoleculeNet to load the Tox21 dataset. We need to make sure to process the data in a way that graph convolutional networks can use. For that, we make sure to set the featurizer option to 'GraphConv'. The MoleculeNet call will return a training set, a validation set, and a test set for us to use. The call also returns transformers, a list of data transformations that were applied to preprocess the dataset. (Most deep networks are quite finicky and require a set of data transformations to ensure that training proceeds stably.)
Step2: Let's now train a graph convolutional network on this dataset. DeepChem has the class GraphConvModel that wraps a standard graph convolutional architecture underneath the hood for user convenience. Let's instantiate an object of this class and train it on our dataset.
Step3: Let's plot these losses so we can take a look at how the loss changes over the process of training.
Step4: We see that the losses fall nicely and give us stable learning.
Step5: What's going on under the hood? Could we build GraphConvModel ourselves? Of course! The first step is to create a TensorGraph object. This object will hold the "computational graph" that defines the computation that a graph convolutional network will perform.
Step6: Let's now define the inputs to our model. Conceptually, graph convolutions just require the structure of the molecule in question and a vector of features for every atom that describes the local chemical environment. However, in practice, due to TensorFlow's limitations as a general programming environment, we also need some auxiliary information to be preprocessed.
Step7: Let's now implement the body of the graph convolutional network. TensorGraph has a number of layers that encode various graph operations. Namely, the GraphConv, GraphPool and GraphGather layers. We will also apply standard neural network layers such as Dense and BatchNorm.
Step8: Let's now make predictions from the TensorGraph model. Tox21 is a multitask dataset. That is, there are 12 different datasets grouped together, which share many common molecules, but with different outputs for each. As a result, we have to add a separate output layer for each task. We will use a for loop over the tox21_tasks list to make this happen. We need to add labels and weights for each task.
Step9: Now that we've successfully defined our graph convolutional model in TensorGraph, we need to train it. We can call fit(), but we need to make sure that each minibatch of data populates all four Feature objects that we've created. For this, we need to create a Python generator that given a batch of data generates a dictionary whose keys are the Feature layers and whose values are Numpy arrays we'd like to use for this step of training.
Step10: Now, we can train the model using TensorGraph.fit_generator(generator) which will use the generator we've defined to train the model.
Step11: Let's now plot these losses and take a quick look.
Step13: Now that we have trained our graph convolutional method, let's evaluate its performance. We again have to use our defined generator to evaluate model performance.
|
12,793
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Install { display-mode: "form" }
TF_Installation = 'System' #@param ['TF Nightly', 'TF Stable', 'System']
if TF_Installation == 'TF Nightly':
!pip install -q --upgrade tf-nightly
print('Installation of `tf-nightly` complete.')
elif TF_Installation == 'TF Stable':
!pip install -q --upgrade tensorflow
print('Installation of `tensorflow` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "System" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import os
from six.moves import urllib
import matplotlib.pyplot as plt; plt.style.use('ggplot')
import numpy as np
import pandas as pd
import seaborn as sns; sns.set_context('notebook')
import tensorflow_datasets as tfds
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
if tf.test.gpu_device_name() != '/device:GPU:0':
print("We'll just use the CPU for this run.")
else:
print('Huzzah! Found GPU: {}'.format(tf.test.gpu_device_name()))
def load_and_preprocess_radon_dataset(state='MN'):
"""Load the Radon dataset from TensorFlow Datasets and preprocess it.
Following the examples in "Bayesian Data Analysis" (Gelman, 2007), we filter
to Minnesota data and preprocess to obtain the following features:
- `county`: Name of county in which the measurement was taken.
- `floor`: Floor of house (0 for basement, 1 for first floor) on which the
measurement was taken.
The target variable is `log_radon`, the log of the Radon measurement in the
house."""
ds = tfds.load('radon', split='train')
radon_data = tfds.as_dataframe(ds)
radon_data.rename(lambda s: s[9:] if s.startswith('feat') else s, axis=1, inplace=True)
df = radon_data[radon_data.state==state.encode()].copy()
df['radon'] = df.activity.apply(lambda x: x if x > 0. else 0.1)
# Make county names look nice.
df['county'] = df.county.apply(lambda s: s.decode()).str.strip().str.title()
# Remap categories to start from 0 and end at max(category).
df['county'] = df.county.astype(pd.api.types.CategoricalDtype())
df['county_code'] = df.county.cat.codes
# Radon levels are all positive, but log levels are unconstrained
df['log_radon'] = df['radon'].apply(np.log)
# Drop columns we won't use and tidy the index
columns_to_keep = ['log_radon', 'floor', 'county', 'county_code']
df = df[columns_to_keep].reset_index(drop=True)
return df
df = load_and_preprocess_radon_dataset()
df.head()
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 4))
df.groupby('floor')['log_radon'].plot(kind='density', ax=ax1);
ax1.set_xlabel('Measured log(radon)')
ax1.legend(title='Floor')
df['floor'].value_counts().plot(kind='bar', ax=ax2)
ax2.set_xlabel('Floor where radon was measured')
ax2.set_ylabel('Count')
fig.suptitle("Distribution of log radon and floors in the dataset");
fig, ax = plt.subplots(figsize=(22, 5));
county_freq = df['county'].value_counts()
county_freq.plot(kind='bar', ax=ax)
ax.set_xlabel('County')
ax.set_ylabel('Number of readings');
features = df[['county_code', 'floor']].astype(int)
labels = df[['log_radon']].astype(np.float32).values.flatten()
def make_joint_distribution_coroutine(floor, county, n_counties, n_floors):
def model():
county_scale = yield tfd.HalfNormal(scale=1., name='scale_prior')
intercept = yield tfd.Normal(loc=0., scale=1., name='intercept')
floor_weight = yield tfd.Normal(loc=0., scale=1., name='floor_weight')
county_prior = yield tfd.Normal(loc=tf.zeros(n_counties),
scale=county_scale,
name='county_prior')
random_effect = tf.gather(county_prior, county, axis=-1)
fixed_effect = intercept + floor_weight * floor
linear_response = fixed_effect + random_effect
yield tfd.Normal(loc=linear_response, scale=1., name='likelihood')
return tfd.JointDistributionCoroutineAutoBatched(model)
joint = make_joint_distribution_coroutine(
features.floor.values, features.county_code.values, df.county.nunique(),
df.floor.nunique())
# Define a closure over the joint distribution
# to condition on the observed labels.
def target_log_prob_fn(*args):
return joint.log_prob(*args, likelihood=labels)
# Initialize locations and scales randomly with `tf.Variable`s and
# `tfp.util.TransformedVariable`s.
_init_loc = lambda shape=(): tf.Variable(
tf.random.uniform(shape, minval=-2., maxval=2.))
_init_scale = lambda shape=(): tfp.util.TransformedVariable(
initial_value=tf.random.uniform(shape, minval=0.01, maxval=1.),
bijector=tfb.Softplus())
n_counties = df.county.nunique()
surrogate_posterior = tfd.JointDistributionSequentialAutoBatched([
tfb.Softplus()(tfd.Normal(_init_loc(), _init_scale())), # scale_prior
tfd.Normal(_init_loc(), _init_scale()), # intercept
tfd.Normal(_init_loc(), _init_scale()), # floor_weight
tfd.Normal(_init_loc([n_counties]), _init_scale([n_counties]))]) # county_prior
optimizer = tf.optimizers.Adam(learning_rate=1e-2)
losses = tfp.vi.fit_surrogate_posterior(
target_log_prob_fn,
surrogate_posterior,
optimizer=optimizer,
num_steps=3000,
seed=42,
sample_size=2)
(scale_prior_,
intercept_,
floor_weight_,
county_weights_), _ = surrogate_posterior.sample_distributions()
print(' intercept (mean): ', intercept_.mean())
print(' floor_weight (mean): ', floor_weight_.mean())
print(' scale_prior (approx. mean): ', tf.reduce_mean(scale_prior_.sample(10000)))
fig, ax = plt.subplots(figsize=(10, 3))
ax.plot(losses, 'k-')
ax.set(xlabel="Iteration",
ylabel="Loss (ELBO)",
title="Loss during training",
ylim=0);
county_counts = (df.groupby(by=['county', 'county_code'], observed=True)
.agg('size')
.sort_values(ascending=False)
.reset_index(name='count'))
means = county_weights_.mean()
stds = county_weights_.stddev()
fig, ax = plt.subplots(figsize=(20, 5))
for idx, row in county_counts.iterrows():
mid = means[row.county_code]
std = stds[row.county_code]
ax.vlines(idx, mid - std, mid + std, linewidth=3)
ax.plot(idx, means[row.county_code], 'ko', mfc='w', mew=2, ms=7)
ax.set(
xticks=np.arange(len(county_counts)),
xlim=(-1, len(county_counts)),
ylabel="County effect",
title=r"Estimates of county effects on log radon levels. (mean $\pm$ 1 std. dev.)",
)
ax.set_xticklabels(county_counts.county, rotation=90);
fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(np.log1p(county_counts['count']), stds.numpy()[county_counts.county_code], 'o')
ax.set(
ylabel='Posterior std. deviation',
xlabel='County log-count',
title='Having more observations generally\nlowers estimation uncertainty'
);
%%shell
exit # Trick to make this block not execute.
radon = read.csv('srrs2.dat', header = TRUE)
radon = radon[radon$state=='MN',]
radon$radon = ifelse(radon$activity==0., 0.1, radon$activity)
radon$log_radon = log(radon$radon)
# install.packages('lme4')
library(lme4)
fit <- lmer(log_radon ~ 1 + floor + (1 | county), data=radon)
fit
# Linear mixed model fit by REML ['lmerMod']
# Formula: log_radon ~ 1 + floor + (1 | county)
# Data: radon
# REML criterion at convergence: 2171.305
# Random effects:
# Groups Name Std.Dev.
# county (Intercept) 0.3282
# Residual 0.7556
# Number of obs: 919, groups: county, 85
# Fixed Effects:
# (Intercept) floor
# 1.462 -0.693
print(pd.DataFrame(data=dict(intercept=[1.462, tf.reduce_mean(intercept_.mean()).numpy()],
floor=[-0.693, tf.reduce_mean(floor_weight_.mean()).numpy()],
scale=[0.3282, tf.reduce_mean(scale_prior_.sample(10000)).numpy()]),
index=['lme4', 'vi']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fitting Generalized Linear Mixed-effects Models Using Variational Inference
Step2: Abstract
Step3: We will also do a quick check for availablility of a GPU
Step5: Obtain Dataset
Step6: Specializing the GLMM Family
Step7: To make the model a little more sophisticated, including something about geography is probably even better
Step8: If we fit this model, the county_effect vector would likely end up memorizing the results for counties which had only a few training samples, perhaps overfitting and leading to poor generalization.
Step9: Specify Model
Step10: Specify surrogate posterior
Step11: Note that this cell can be replaced with tfp.experimental.vi.build_factored_surrogate_posterior, as in the sketch given after this list of steps.
Step12: We can plot the estimated mean county effects, along with the uncertainty of that mean. We have ordered this by number of observations, with the largest on the left. Notice that the uncertainty is small for the counties with many observations, but is larger for the counties that have only one or two observations.
Step13: Indeed, we can see this more directly by plotting the log-number of observations against the estimated standard deviation, and see the relationship is approximately linear.
Step14: Comparing to lme4 in R
Step15: The following table summarizes the results.
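The build_factored_surrogate_posterior alternative mentioned above ("as in") might look roughly like the sketch below. It relies on the imports and the joint distribution defined in the code above, and the keyword for the constraining bijectors has changed across TFP releases (constraining_bijectors in older versions, bijector later), so treat the exact spelling as an assumption to check against the installed version.

# joint is the JointDistributionCoroutineAutoBatched defined earlier; the last
# event (the likelihood) is dropped because it is pinned to the observed labels.
surrogate_posterior = tfp.experimental.vi.build_factored_surrogate_posterior(
    event_shape=joint.event_shape_tensor()[:-1],
    bijector=[tfb.Softplus(), None, None, None])  # keep only scale_prior positive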
|
12,794
|
<ASSISTANT_TASK:>
Python Code:
!cat -n Pure.g4
!cat sum.sl
!cat -n Simple.g4
!cat sum.ast
!antlr4 -Dlanguage=Python3 Simple.g4
from SimpleLexer import SimpleLexer
from SimpleParser import SimpleParser
import antlr4
%run ../AST-2-Dot.ipynb
def main(file):
with open(file, 'r') as handle:
program_text = handle.read()
input_stream = antlr4.InputStream(program_text)
lexer = SimpleLexer(input_stream)
token_stream = antlr4.CommonTokenStream(lexer)
parser = SimpleParser(token_stream)
result = parser.program()
Statements = result.stmnt_list
ast = tuple2dot(Statements)
print(Statements)
display(ast)
ast.render('ast', view=True)
execute_tuple(Statements)
def execute_tuple(Statement_List, Values={}):
for stmnt in Statement_List:
execute(stmnt, Values)
L = [1,2,3,4,5]
a, b, *R = L
a, b, R
def execute(stmnt, Values):
op = stmnt[0]
if stmnt == 'program':
pass
elif op == ':=':
_, var, value = stmnt
Values[var] = evaluate(value, Values)
elif op == 'read':
_, var = stmnt
Values[var] = int(input())
elif op == 'print':
_, expr = stmnt
print(evaluate(expr, Values))
elif op == 'if':
_, test, *SL = stmnt
if evaluate(test, Values):
execute_tuple(SL, Values)
elif op == 'while':
_, test, *SL = stmnt
while evaluate(test, Values):
execute_tuple(SL, Values)
else:
assert False, f'{stmnt} unexpected'
def evaluate(expr, Values):
if isinstance(expr, int):
return expr
if isinstance(expr, str):
return Values[expr]
op = expr[0]
if op == '==':
_, lhs, rhs = expr
return evaluate(lhs, Values) == evaluate(rhs, Values)
if op == '<':
_, lhs, rhs = expr
return evaluate(lhs, Values) < evaluate(rhs, Values)
if op == '+':
_, lhs, rhs = expr
return evaluate(lhs, Values) + evaluate(rhs, Values)
if op == '-':
_, lhs, rhs = expr
return evaluate(lhs, Values) - evaluate(rhs, Values)
if op == '*':
_, lhs, rhs = expr
return evaluate(lhs, Values) * evaluate(rhs, Values)
if op == '/':
_, lhs, rhs = expr
return evaluate(lhs, Values) / evaluate(rhs, Values)
assert False, f'{stmnt} unexpected'
!cat sum.sl
main('sum.sl')
!cat factorial.sl
main('factorial.sl')
!rm *.py *.tokens *.interp
!rm -r __pycache__/
!rm *.pdf
!ls
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The grammar shown above contains only skip actions. The corresponding grammar enriched with actions is stored in the file Simple.g4.
Step2: The file Simple.g4 contains a parser for the language described by the grammar Pure.g4. This parser returns a result whose stmnt_list attribute holds the program as a nested tuple (the abstract syntax tree).
Step3: The parser shown above will transform the program sum.sl into the nested tuple stored in the file sum.ast.
Step4: The function main takes one parameter file. This parameter is a string specifying a program file.
Step5: The function execute_tuple takes two arguments: a list of statement tuples and a dictionary Values of variable bindings; it executes the statements in order.
Step6: The function execute takes two arguments: a single statement tuple and the Values dictionary; it dispatches on the first element of the tuple.
Step7: The function evaluate takes two arguments: an expression (an int, a variable name, or an operator tuple) and the Values dictionary; it returns the expression's value. A short usage example follows.
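To make the tuple-walking concrete, here is a small illustrative check (hypothetical inputs, using only the operators the interpreter above handles):

# evaluate walks an expression tuple against a dictionary of variable bindings
print(evaluate(('+', 'x', ('*', 2, 3)), {'x': 4}))    # 4 + 2*3 -> 10

# execute dispatches on the first element of a statement tuple
Values = {}
execute((':=', 'n', 6), Values)                        # n := 6
execute(('while', ('<', 0, 'n'),                       # while 0 < n: n := n - 1
         (':=', 'n', ('-', 'n', 1))), Values)
print(Values['n'])                                     # -> 0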
|
12,795
|
<ASSISTANT_TASK:>
Python Code:
import markovify
# Get raw text as string
with open("brown.txt") as f:
text = f.read()
# Build the model.
text_model = markovify.Text(text)
# Print three randomly-generated sentences of no more than 140 characters
for i in range(3):
print(text_model.make_short_sentence(140))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Corpus
Step2: Build Markov Chain
Step3: Generate One Tweet
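Two optional knobs worth knowing about when generating tweets this way; state_size and tries are standard markovify parameters, but check the installed version:

# Reuses the `text` corpus and the markovify import from the cells above.
# A larger state_size gives more coherent but less varied output.
text_model = markovify.Text(text, state_size=3)

# make_sentence retries up to `tries` times and may return None if it cannot
# produce a sufficiently novel sentence, so guard against that.
sentence = text_model.make_sentence(tries=100)
if sentence is not None:
    print(sentence)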
|
12,796
|
<ASSISTANT_TASK:>
Python Code:
# import libraries
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
from matplotlib import pyplot as plt
# load the data with pandas
dataset = pd.read_csv('dataset.csv', header=None)
dataset = np.array(dataset)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.show()
def calculate_initial_centers(dataset, k):
Inicializa os centróides iniciais de maneira arbitrária
Argumentos:
dataset -- Conjunto de dados - [m,n]
k -- Número de centróides desejados
Retornos:
centroids -- Lista com os centróides calculados - [k,n]
#### CODE HERE ####
m, n = dataset.shape
mins = [min(dataset[:,i]) for i in range(n)]
maxs = [max(dataset[:,i]) for i in range(n)]
centroid = []
for i in range(n):
centroid.append(np.random.uniform(mins[i],maxs[i],k))
centroid = list(zip(*centroid))
### END OF CODE ###
return np.array(centroid)
k = 3
centroids = calculate_initial_centers(dataset, k)
print(centroids)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100)
plt.show()
def euclidean_distance(a, b):
Calcula a distância euclidiana entre os pontos a e b
Argumentos:
a -- Um ponto no espaço - [1,n]
b -- Um ponto no espaço - [1,n]
Retornos:
distance -- Distância euclidiana entre os pontos
#### CODE HERE ####
size = len(a)
soma = []
for i in range(size):
soma.append(pow((a[i]-b[i]),2))
distance = np.sqrt(sum(soma))
### END OF CODE ###
return distance
a = np.array([1, 5, 9])
b = np.array([3, 7, 8])
if (euclidean_distance(a,b) == 3):
print("Distância calculada corretamente!")
else:
print("Função de distância incorreta")
def nearest_centroid(a, centroids):
Calcula o índice do centroid mais próximo ao ponto a
Argumentos:
a -- Um ponto no espaço - [1,n]
centroids -- Lista com os centróides - [k,n]
Retornos:
nearest_index -- Índice do centróide mais próximo
#### CODE HERE ####
nearest_index = 0
dist = np.inf
for i in range(len(centroids)):
aux = euclidean_distance(a,centroids[i])
if(aux < dist):
dist = aux
nearest_index = i
### END OF CODE ###
return nearest_index
# Seleciona um ponto aleatório no dataset
index = np.random.randint(dataset.shape[0])
a = dataset[index,:]
# Usa a função para descobrir o centroid mais próximo
idx_nearest_centroid = nearest_centroid(a, centroids)
# Plota os dados ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], s=10)
# Plota o ponto aleatório escolhido em uma cor diferente
plt.scatter(a[0], a[1], c='magenta', s=30)
# Plota os centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
# Plota o centroid mais próximo com uma cor diferente
plt.scatter(centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],
marker='^', c='springgreen', s=100)
# Cria uma linha do ponto escolhido para o centroid selecionado
plt.plot([a[0], centroids[idx_nearest_centroid,0]],
[a[1], centroids[idx_nearest_centroid,1]],c='orange')
plt.annotate('CENTROID', (centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],))
plt.show()
def all_nearest_centroids(dataset, centroids):
Calcula o índice do centroid mais próximo para cada
ponto do dataset
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
Retornos:
nearest_indexes -- Índices do centróides mais próximos - [m,1]
#### CODE HERE ####
nearest_indexes = []
for ponto in dataset:
nearest_indexes.append(nearest_centroid(ponto, centroids))
### END OF CODE ###
return nearest_indexes
nearest_indexes = all_nearest_centroids(dataset, centroids)
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
plt.show()
def inertia(dataset, centroids, nearest_indexes):
Soma das distâncias quadradas das amostras para o
centro do cluster mais próximo.
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
nearest_indexes -- Índices do centróides mais próximos - [m,1]
Retornos:
inertia -- Soma total do quadrado da distância entre
os dados de um cluster e seu centróide
#### CODE HERE ####
vec = []
for i in range(len(dataset)):
vec.append(pow(euclidean_distance(dataset[i], centroids[nearest_indexes[i]]), 2))
inertia = sum(vec)
print(inertia)
### END OF CODE ###
return inertia
tmp_data = np.array([[1,2,3],[3,4,5],[4,5,6]])
tmp_centroide = np.array([[2,3,4]])
tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)
if np.floor(inertia(tmp_data, tmp_centroide, tmp_nearest_indexes)) == 17:
print("Inertia calculada corretamente!")
else:
print("Função de inertia incorreta!")
# Use a função para verificar a inertia dos seus clusters
inertia(dataset, centroids, nearest_indexes)
def update_centroids(dataset, centroids, nearest_indexes):
Atualiza os centroids
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
nearest_indexes -- Índices do centróides mais próximos - [m,1]
Retornos:
centroids -- Lista com centróides atualizados - [k,n]
#### CODE HERE ####
### END OF CODE ###
return centroids
nearest_indexes = all_nearest_centroids(dataset, centroids)
# Plota os os cluster ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
# Plota os centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
for data in dataframe:
plt.plot([centroid[0], data[0]], [centroid[1], data[1]],
c='lightgray', alpha=0.3)
plt.show()
centroids = update_centroids(dataset, centroids, nearest_indexes)
class KMeans():
def __init__(self, n_clusters=8, max_iter=300):
self.n_clusters = n_clusters
self.max_iter = max_iter
def fit(self,X):
# Inicializa os centróides
self.cluster_centers_ = [None]
# Computa o cluster de cada amostra
self.labels_ = [None]
# Calcula a inércia inicial
old_inertia = [None]
for index in [None]:
#### CODE HERE ####
### END OF CODE ###
return self
def predict(self, X):
return [None]
kmeans = KMeans(n_clusters=3)
kmeans.fit(dataset)
print("Inércia = ", kmeans.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:,0],
kmeans.cluster_centers_[:,1], marker='^', c='red', s=100)
plt.show()
#### CODE HERE ####
#### CODE HERE ####
#### CODE HERE ####
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1. Implement the K-means algorithm
Step3: Test the function you created and visualize the computed centroids.
Step5: 1.2 Define the clusters
Step6: Test the function you created.
Step8: 1.2.2 Compute the nearest centroid
Step9: Test the function you created
Step11: 1.2.3 Compute the nearest centroid for every point in the dataset
Step12: Test the function you created by visualizing the resulting clusters.
Step14: 1.3 Evaluation metric
Step15: Test the function you wrote by running the code below.
Step17: 1.4 Update the clusters (a possible implementation is sketched after this list)
Step18: Visualize the resulting clusters
Step19: Run the update function and visualize the clusters again
Step20: 2. K-means
Step21: Check the result of the algorithm below!
Step22: 2.2 Compare with the Scikit-Learn implementation
Step23: 3. Elbow method
Step24: 4. Real dataset
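The update step left blank above (update_centroids) is normally the cluster-mean recomputation. The sketch below is one possible version, not the official exercise solution, and uses a different name so it does not overwrite the stub:

import numpy as np

def update_centroids_sketch(dataset, centroids, nearest_indexes):
    # Move each centroid to the mean of the points currently assigned to it;
    # a centroid with no assigned points stays where it is.
    nearest_indexes = np.asarray(nearest_indexes)
    new_centroids = np.array(centroids, dtype=float)
    for k in range(len(new_centroids)):
        assigned = dataset[nearest_indexes == k]
        if len(assigned) > 0:
            new_centroids[k] = assigned.mean(axis=0)
    return new_centroids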
|
12,797
|
<ASSISTANT_TASK:>
Python Code:
# ←此為Python的註解符號,在這之後的文字不會被當作程式碼執行
# Python不用宣告變數型態,在指定變數的值時即會動態決定其型態
n_solar_mass = 10 # 整數
MASS_SUN = 1.99 * 10 ** 30 # 浮點數
z = complex(3., -1.) # 複數
unit = "kg" #字串
yes = True #布林值
no = False #布林值
type(n_solar_mass)
print("伴星的質量為:", n_solar_mass * MASS_SUN, unit)
period = input("請輸入雙星軌道周期 (單位為秒) : ")
MASS_SUN + unit
str(MASS_SUN) + unit
s = "Hello," + " " + "Python!"
#s.upper()
s.split()
x = (1,2)
x
x[0] = 12 #無法更改內容
# 延續上面範例
constants = (3.14159, 1.99 * 10 ** 30 , 6.67 * 10 ** -11) # tuple用小括號
print(constants)
constants.append(-342)
x = (10,11)
y = list(x) #把tuple製作成串列
y[0] = 20
y
slist = []
slist = ['A','B','C','D','E']
slist
slist[0] = 'B'
slist
# 延續上面範例
radial_velocity = [140, 220, 314, 244, 'km/s'] # list用中括號
print(radial_velocity[0]) # 元素的index從零開始
radial_velocity.append(-342)
radial_velocity.insert(2, 592)
print(radial_velocity)
print(len(radial_velocity))
dlist = [['A','B','C','D','E'],[1,2,3,4,5]]
dlist[1]
dlist[1][0]
a = [1,2,3]
a
b = a
b
b[0] = 10
b
a # list a 中的數值也一起改變
# 如何解決?
c = a[:]
c
c[0] = 40
c
a
clist = ['A','B','C','D','E']
'B' in clist
'F' in clist
wlist = ['D','A','F','C','E','D']
sorted(wlist)
nlist = [4,23,1,3,2,98,3]
sorted(nlist)
dic = {'Jack':84,'Ben':63,'Cathy':76, 'Bob': 83}
dic
dic['Jack']
dic = {'Jack':[84,23,34],'Ben':[63,12,74],'Cathy':[12,43,76],'Bob': [83,81,90]} # key 不能放list value可以
dic
dic['Jack']
dic['Jack'] = [1,2,3]
dic
dic.update({'William':[23,43,84],'Eric':[93,31,32]})
dic
'Ben' in dic
'William' in dic
dic.keys() #取得key not list
dic.values() #取得value not list
dic.items() #取得所有的key, value not list
# 延續上面範例
binaries = {'name':["GX 339-4","GRS 1915+105"], 'constants': constants} # dictionary用大括號
print(binaries['name'])
print(binaries['constants'])
binaries['mass'] = [ ]
print(binaries)
print(radial_velocity[2:5]) # [i:j] 從i開始到j-1
print(radial_velocity[:-2:3]) # [i:j:k] 從i開始每隔k個到j-1
if "GRS 1915+105" or "GX 339-4" in binaries['name']:
print('They are microquasars!')
elif "XTE J1550-564" not in binaries['name']:
print('XTE J1550-564 is not in the list!')
else:
binaries['name'].append( input("請輸入下一個microquasars的名稱 : ") )
binaries['name']
# Basic example 1.
for i in range(10):
print(i)
# Basic example 2.
arr = [1,3,5,7,9,2,4,6,8,10]
for i in arr:
print(i)
microquasars = ["GRS 1915+105" , "GX 339-4"]
for m in microquasars:
if m in binaries['name']:
print(m, "is a microquasar!")
for i in range(2, 100, 20):
print(i, i ** 2)
# Plot a light curve example
# (Read a LMC X-4 archival data collecting by RXTE/PCA)
import numpy as np
data = np.loadtxt('../files4examples/Tcol_10135-01-01-000_gx0') # a list of X-ray photon arrival times
# elements of data
n = len(data)
binsize = 20.0 # unit: sec
n_col = (data[n-1]-data[0]) // binsize + 1
bintime = np.arange(0, n_col)*binsize + data[0] + 0.5*binsize
crate = np.arange(0, n_col)*0.0 # an empty array
## for-loop case a.
#for i in range(int(n_col)):
# for j in range(n):
# if (data[j] >= bintime[i]-0.5*binsize) and (data[j] < bintime[i]+0.5*binsize):
# crate[i] = crate[i]+ 1.0
#crate = crate/binsize
## The value used in the function of 'range' must be a integral number.
## Or typing 'range?' to check.
## for-loop case b.
#for i in bintime:
# for j in data:
# if (j >= i-0.5*binsize) and (j < i+0.5*binsize):
# crate[np.where(bintime == i)] = crate[np.where(bintime == i)] + 1.0
#crate = crate/binsize
## for-loop case c.
for i,val in enumerate(bintime):
nn = np.where((data >= val - 0.5*binsize) & (data < (val + 0.5*binsize)))
crate[i] = len(nn[0])
crate = crate/binsize
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(bintime,crate)
#plt.show()
# Basic example
x = 10
while x > 0:
print (x)
x = x - 1
input_value = "yes"
while input_value == 'yes' :
input_value = input("請輸入下一個microquasars的名稱,結束請輸入no : ")
input_value == "yes"
while True :
input_value = input("請輸入下一個microquasars的名稱,結束請輸入no : ")
if input_value == "no":
break
for i in range(2, 100, 20):
if i == 2 or i == 42:
continue
print(i, i ** 2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic output and input
Step2: Strong typing
Step3: Operations on variables
Step4: Data types (List, Tuple, Dictionary)
Step5: A tuple is ordered but immutable
Step6: A list is ordered and mutable
Step7: Nested lists
Step8: Watch out for aliasing when assigning lists (a deep-copy sketch follows this list)
Step9: Checking whether an item is in a list => in
Step10: Sorting
Step11: A dictionary is unordered and mutable
Step12: Adding entries and merging dictionaries
Step13: Checking whether a key exists
Step14: Getting keys and values
Step15: Slicing
Step16: Flow control: conditionals and loops
Step17: for loops
Step18: while loops
Step19: break and continue
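One point the aliasing examples above stop short of: a slice copy (a[:]) is shallow, so nested lists are still shared. A short sketch:

import copy

a = [[1, 2], [3, 4]]
b = a[:]             # shallow copy: the inner lists are still shared
b[0][0] = 99
print(a[0][0])       # 99 -> the change leaks back into a

c = copy.deepcopy(a)
c[0][0] = -1
print(a[0][0])       # still 99: deepcopy duplicated the inner lists too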
|
12,798
|
<ASSISTANT_TASK:>
Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
from IPython.display import Image
Image('./images/python_json_conversion_table.png')
dict_doe_family = {
"John": {
"first name": "John",
"last name": "Doe",
"gender": "male",
"age": 30,
"favorite_animal": "panda",
"married": True,
"children": ["James", "Jennifer"],
"hobbies": ["photography", "sky diving", "reading"]},
"Jane": {
"first name": "Jane",
"last name": "Doe",
"gender": "female",
"age": 27,
"favorite_animal": "zebra",
"married": False,
"children": None,
"hobbies": ["cooking", "gaming", "tennis"]}}
with open('../Data/json_data/fruits.json') as infile:
text = infile.read()
print(text)
import json
# open file just as you would open a txt file
with open("../Data/json_data/Doe.json", "r") as infile:
# read in file content as dict using the json module
dict_doe_family = json.load(infile)
print(type(dict_doe_family))
print(dict_doe_family)
str_doe_family =
{
"Jane": {
"age": 27,
"children": null,
"favorite_animal": "zebra",
"first name": "Jane",
"gender": "female",
"hobbies": [
"cooking",
"gaming",
"tennis"
],
"last name": "Doe",
"married": false
},
"John": {
"age": 30,
"children": [
"James",
"Jennifer"
],
"favorite_animal": "panda",
"first name": "John",
"gender": "male",
"hobbies": [
"photography",
"sky diving",
"reading"
],
"last name": "Doe",
"married": true
}
}
dict_doe_family = json.loads(str_doe_family)
print(type(dict_doe_family))
print(dict_doe_family)
dict_doe_family = {
"John": {
"first name": "John",
"last name": "Doe",
"gender": "male",
"age": 30,
"favorite_animal": "panda",
"married": True,
"children": ["James", "Jennifer"],
"hobbies": ["photography", "sky diving", "reading"]},
"Jane": {
"first name": "Jane",
"last name": "Doe",
"gender": "female",
"age": 27,
"favorite_animal": "zebra",
"married": False,
"children": None,
"hobbies": ["cooking", "gaming", "tennis"]}}
with open("../Data/json_data/Doe.json", "w") as outfile:
json.dump(dict_doe_family, outfile)
str_doe_family = json.dumps(dict_doe_family)
print(str_doe_family)
help(json.dumps)
#help(json.dump)
# Create the JSON file
with open("../Data/json_data/Doe.json", "w") as outfile:
json.dump(dict_doe_family,
outfile,
indent=4,
sort_keys=True)
# Read in the JSON file again
with open("../Data/json_data/Doe.json", "r") as infile:
json_string = infile.read()
print(json_string)
str_doe_family = json.dumps(dict_doe_family,
indent=4,
sort_keys=True)
print(str_doe_family)
dict_doe_family = {
"John": {
"first name": "John",
"last name": "Doe",
"gender": "male",
"age": 30,
"favorite_animal": "panda",
"married": True,
"children": ["James", "Jennifer"],
"hobbies": ["photography", "sky diving", "reading"]},
"Jane": {
"first name": "Jane",
"last name": "Doe",
"gender": "female",
"age": 27,
"favorite_animal": "zebra",
"married": False,
"children": None,
"hobbies": ["cooking", "gaming", "tennis"]}}
# access information about John
john_info = dict_doe_family['John']
print(john_info)
# access information about John's hobbies:
john_hobbies = john_info['hobbies']
print(john_hobbies)
# You can also do this in one go:
john_hobbies = dict_doe_family['John']['hobbies']
# iterate over family dict by accessing
#the family members (keys) and their information (values):
all_hobbies = []
for member, info_dict in dict_doe_family.items():
# check what we are accessing:
print(member, type(info_dict))
# access hobbies from info_dict
hobbies = info_dict['hobbies']
print(hobbies)
# your code here
dict_doe_family = {
"John": {
"first name": "John",
"last name": "Doe",
"gender": "male",
"age": 30,
"favorite_animal": "panda",
"married": True,
"children": ["James", "Jennifer"],
"hobbies": ["photography", "sky diving", "reading"]},
"Jane": {
"first name": "Jane",
"last name": "Doe",
"gender": "female",
"age": 27,
"favorite_animal": "zebra",
"married": False,
"children": None,
"hobbies": ["cooking", "gaming", "tennis"]}}
str_doe_family = json.dumps(dict_doe_family)
print(str_doe_family)
str_doe_family = str(dict_doe_family)
print(str_doe_family)
# Example: This will print the gender of "John"
print(dict_doe_family["John"]["gender"])
julia = {"Julia": {"first name": "Julia",
"last name": "Doe",
"age": 29,
"favorite_animal": "penguin",
"married": False,
"children": ["Jack"],
"hobbies": ["snowboarding", "hiking"]}}
# your code here
tv_show = # your code here
print(tv_show.keys())
print(tv_show["_embedded"].keys())
print(tv_show["_embedded"]["episodes"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chapter 17
Step2: We will show what JSON looks like and how to deal with JSON in Python, using the example dictionary shown below.
Step3: You can inspect any of the JSON files that we will generate or load below using a text editor (e.g. Atom, BBEdit or Notepad++).
Step4: However, since it is structured to correspond well to Python objects, we use an existing module, the json library, which provides an easy way to encode and decode data in JSON. Let's first import it
Step5: We will focus on the following methods
Step7: The loads() method is used to load a JSON-formatted string as a Python dictionary. This is useful if you want to create a JSON dictionary from a string. There are not many situations in which you will have to use it, but you may come across JSON structures stored as strings when working with corpora or collecting human annotations with annotation software.
Step8: 2.2 Writing JSON to file or string
Step9: The json.dump() method is used to write a Python dictionary to a JSON encoded file
Step10: The dumps() method is used to convert a Python dictionary to a JSON formatted string
Step11: Both dump() and dumps() use the same keyword parameters. You can check them out with help()
Step12: Two useful keyword arguments are for example indent and sort_keys. They are illustrated below
Step13: 3. Accessing data in a json dictionary
Step14: Often, we will need to extract information from such a nested structure. To do this, it is helpful to remember what you have learned in Block II about containers and looping.
Step15: Now let's extract the hobbies of all family members. Can you finish the code below?
Step16: Exercises
Step17: Exercise 2
Step18: Exercise 3
Step19: Exercise 4
Step20: To help you understand the structure a bit, first have a look at the following examples
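One possible way to fill in the tv_show cell above; the file path is hypothetical, and any JSON file with the TVmaze-style "_embedded"/"episodes" layout used in the examples would do:

import json

# Hypothetical path: substitute whichever JSON file accompanies the assignment.
with open("../Data/json_data/tvshow.json", "r") as infile:
    tv_show = json.load(infile)

print(tv_show.keys())
print(len(tv_show["_embedded"]["episodes"]))   # number of embedded episode dicts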
|
12,799
|
<ASSISTANT_TASK:>
Python Code:
fruit_season = {
'raspberry': 'May',
'apple' : 'September',
'peach' : 'July',
'grape' : 'August'
}
print(type(fruit_season))
print(fruit_season)
raspberry_season = fruit_season['raspberry']
print(raspberry_season)
print(fruit_season['mangos'])
fruit_season['strawberry'] = 'May'
print(fruit_season)
fruit_season['strawberry']
del fruit_season['strawberry']
print(fruit_season)
duplicate_fruit_season = {
'raspberry': 'May',
'raspberry': 'June',
}
print(duplicate_fruit_season)
mutable_key = {
['watermelon', 'cantaloupe', 'honeydew']: 'July'
}
# The solution is to use a tuple instead
immutable_key = {
('watermelon', 'cantelope', 'honeydew'): 'July'
}
vegetable_season = {
'Eggplant': 'July',
'Onion': 'May'
}
print(vegetable_season)
print('raspberry' in fruit_season)
print('mangos' in fruit_season)
if 'pineapple' in fruit_season:
print('Lets eat tropical fruit')
else:
print("Temperate fruit it is.")
if 'broccoli' in vegetable_season:
print('Yum, little trees!')
else:
print('No little trees.')
for fruit in fruit_season:
print ("{0} is best in {1} (at least in Virginia)".format(fruit.title(), fruit_season[fruit]))
print(fruit_season.keys())
print(fruit_season.values())
print(fruit_season.items())
for key, value in fruit_season.items():
print ("In {0} eat a {1}".format(value, key))
print (sorted(fruit_season.keys()))
for fruit in sorted(fruit_season.keys()):
print('In {0} {1} is in season.'.format(fruit_season[fruit], fruit))
my_complicated_dictionary = {
(1, 2, 3): 6,
'weevil': {
'e': 2,
'i': 1,
'l': 1,
'v': 1,
'w': 1,
},
9: [3, 3]
}
print (my_complicated_dictionary)
true_fruit_season = {
'raspberry': ['May', 'June'],
'apple': ['September', 'October', 'November', 'December'],
'peach': ['July', 'August'],
'grape': ['August', 'September', 'October']
}
print (true_fruit_season)
months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
for month in months:
print ('It is {0}'.format(month))
for fruit, season in true_fruit_season.items():
if month in season:
print ("\tEat {0}".format(fruit))
from random import randint #necessary for "challenge"
# step 1: create dictionary with one adjective per letter of the alphabet
myadjectives = {
'A':['admirable','aggressive','agile','agitated','agonizing','agreeable'],
'B':'biodegradable',
'C':['cloudy','creative'],
'D':'deserted',
'E':'everlasting',
'F':'flamboyant',
'G':'grotesque',
'H':'humming',
'I':'imperfect',
'J':'joyful',
'K':'kosher',
'L':'lively',
'M':'modest',
'N':'nervous',
'O':'ornate',
'P':'playful',
'Q':'quick',
'R':['restless','relieved','remarkable','remorseful', 'remote'],
'S':'strong',
'T':'tiny',
'U':'ugly',
'V':'vital',
'W':['wobbly','well-made'],
'X':'oops!',
'Y':'youthful',
'Z':'zesty'
}
type(myadjectives['C'])
# step 2: create funtion acrostic, takes name as an argument
def acrostic (name):
# step 3: capitalize name
capName = name.upper()
# step 4
for letter in capName:
current_adj_list = myadjectives[letter]
if type(current_adj_list) == list:
current_adj = current_adj_list[randint(0,len(current_adj_list)-1)]
else:
current_adj = current_adj_list
print("{0} - {1}".format(letter, current_adj))
acrostic('Lilly')
# If you have a list of adjectives
my_dict = {}
# Imaging this is the full alphabet
for i in ['A', 'B', 'C']:
my_dict[i] = []
for i in ['Adoreable', 'Acceptable', 'Bad', 'Cute', 'Basic', 'Dumb','Active']:
first_char = i[0]
if first_char in my_dict:
my_dict[first_char].append(i)
print (my_dict)
# Generating from a file
my_dict = {}
for i in ['A', 'B', 'C']:
my_dict[i] = []
# adjectives.txt has one adjective per line
with open('adjectives.txt') as fh:
for line in fh:
word = line.rstrip().title()
first_char = word[0]
if first_char in my_dict:
my_dict[first_char].append(word)
print (my_dict['A'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To access a value, you index into it similarly to a list using square brackets.
Step2: Trying to access a key that is not in the dictionary raises a KeyError (a .get() alternative that avoids this is sketched after this list)
Step3: To add an item to the dictionary set the value equal to the indexed keys
Step4: To delete a key, use the del keyword
Step5: Rules on keys
Step6: TRY IT
Step7: Dictionary Operators
Step8: You can use this in if statement
Step9: TRY IT
Step10: Dictionaries and Loops
Step11: Dictionary Methods
Step12: TRY IT
Step13: More complex dictionaries
Step14: Let's use this to create a more realistic fruit season dictionary
Step15: TRY IT
Step16: Bonus Material
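Related to the KeyError shown earlier: .get() returns a default instead of raising, and setdefault() inserts a default while reading. A small sketch using the fruit_season dictionary from above:

print(fruit_season.get('mangos'))              # None instead of a KeyError
print(fruit_season.get('mangos', 'unknown'))   # 'unknown'

fruit_season.setdefault('fig', 'August')       # inserts only if the key is missing
print(fruit_season['fig'])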
|