| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, 67–124k chars, nullable) | cleaned_prompt (string, 168–30.3k chars, nullable) |
|---|---|---|
3,300
|
<ASSISTANT_TASK:>
Python Code:
import hydrofunctions as hf
print("Hydrofunctions version: ", hf.__version__)
import numpy as np
print("Numpy version: ", np.__version__)
import pandas as pd
print("Pandas version: ", pd.__version__)
import requests
print("Requests version: ", requests.__version__)
import matplotlib as plt
%matplotlib inline
header = {'Accept-encoding': 'gzip','max-age': '120'}
JSON15min2day_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json%2C1.1&sites=03213700&parameterCd=00060&startDT=2016-09-01&endDT=2016-09-02", headers=header)
two_sites_two_params_iv_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01541000,01541200&period=P1D", headers=header)
nothing_avail_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&indent=on&stateCd=al&parameterCd=00001", headers=header)
mult_flags_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01542500&startDT=2019-01-24&endDT=2019-01-28&parameterCd=00060", headers=header)
diff_freq = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01570500,01541000&startDT=2018-06-01&endDT=2018-06-01", headers=header)
startDST_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01541000&startDT=2018-03-10&endDT=2018-03-12&parameterCd=00060", headers=header)
endDST_req = requests.get("https://waterservices.usgs.gov/nwis/iv/?format=json&sites=01541000&startDT=2018-11-03&endDT=2018-11-05&parameterCd=00060", headers=header)
JSON15min2day_req.json()
two_sites_two_params_iv_req.json()
nothing_avail_req.json()
mult_flags_req.json()
diff_freq.json()
startDST_req.json()
endDST_req.json()
# Import the 'JSON' dicts from our test_data module.
from tests.test_data import JSON15min2day, two_sites_two_params_iv, nothing_avail, mult_flags, diff_freq, startDST, endDST
json = hf.extract_nwis_df(JSON15min2day, interpolate=False)
print("json shape: ", json.shape)
two = hf.extract_nwis_df(two_sites_two_params_iv, interpolate=False)
print("two shape: ", two.shape)
try:
nothing = hf.extract_nwis_df(nothing_avail, interpolate=False)
print("nothing shape: ", nothing.shape)
except hf.HydroNoDataError as err:
print(err)
mult = hf.extract_nwis_df(mult_flags, interpolate=False)
# This has missing observations that get replaced.
print("mult shape: ", mult.shape, "orig json length series1: ", len(mult_flags['value']['timeSeries'][0]['values'][0]['value']))
# This version gets missing observations replaced with interpolated values.
mult_interp = hf.extract_nwis_df(mult_flags, interpolate=True)
print("mult_interp shape: ", mult_interp.shape, "orig json length series1: ", len(mult_flags['value']['timeSeries'][0]['values'][0]['value']))
diff = hf.extract_nwis_df(diff_freq, interpolate=False)
print("diff shape: ", diff.shape)
diff_interp = hf.extract_nwis_df(diff_freq, interpolate=True)
print("diff_interp shape: ", diff_interp.shape)
diff
mult['2019-01-24 16-05:00']
mult_interp['2019-01-24 16-05:00']
shapes = {'two':two.shape, 'json':json.shape, 'mult':mult.shape, 'diff':diff.shape}
shapes
json.index.is_unique
two.index.is_monotonic
mult.loc['2019-01-24T10:30', 'USGS:01542500:00060:00000_qualifiers']
mult.loc['2019-01-28T16:00:00.000-05:00', 'USGS:01542500:00060:00000_qualifiers']
mult.loc['2019-01-28T16:00:00.000-05:00', 'USGS:01542500:00060:00000']
type(mult.loc['2019-01-28T16:00:00.000-05:00', 'USGS:01542500:00060:00000'])
mult.plot()
mult_interp.plot()
start = mult.index.min()
stop = mult.index.max()
(stop-start)/pd.Timedelta('15 minutes')
# The length is the same as calculated above. No missing index values.
mult.shape
# Missing index values from Jan. 24th were filled in
mult['2019-01-24']
diff = hf.extract_nwis_df(diff_freq)
diff
diff['USGS:01570500:00060:00000'].plot()
startDSTdf = hf.extract_nwis_df(startDST, interpolate=False)
# Three days at the start of DST should be 3 * 24 * 4 = 288, minus an hour * 4 = 284.
print(startDSTdf.shape)
startDSTdf.plot()
endDSTdf = hf.extract_nwis_df(endDST, interpolate=False)
# Three days at the end of DST should be 3 * 24 * 4 = 288 long, plus an extra hour *4 = 292.
print(endDSTdf.shape)
endDSTdf.plot()
startDSTdf['2018-03-11 20':'2018-03-12 05']
startDSTdf['USGS:01541000:00060:00000_qualifiers'].describe()
endDSTdf['USGS:01541000:00060:00000_qualifiers'].describe()
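# A small illustration, not part of the hydrofunctions tests, and assuming the gauge reports
# in US/Eastern: building the expected tz-aware 15-minute index shows that the spring range
# spans one hour less of real time and the fall range one hour more, which is where the
# 284- and 292-row counts in the comments above come from.
spring = pd.date_range('2018-03-10 00:00', '2018-03-12 23:45', freq='15min', tz='US/Eastern')
fall = pd.date_range('2018-11-03 00:00', '2018-11-05 23:45', freq='15min', tz='US/Eastern')
print(len(spring), len(fall))  # expect 284 and 292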
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create sample NWIS responses to requests.
Step2: Check values of test fixtures
Step3: Check individual values within the mult_flags fixture
Step4: Check that missing index values are filled.
Step5: Check that data requests with different frequencies of observations raise a warning.
Step6: Check Daylight Savings Time
|
3,301
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import os
# below is used to print out pretty pandas dataframes
from IPython.display import display, HTML
%matplotlib inline
def execute_query_safely(sql, con):
cur = con.cursor()
# try to execute the query
try:
cur.execute(sql)
except:
# if an exception, rollback, rethrow the exception - finally closes the connection
cur.execute('rollback;')
raise
finally:
cur.close()
return
# location of the queries to generate aline specific materialized views
aline_path = './'
# location of the queries to generate materialized views from the MIMIC code repository
concepts_path = '../../concepts/'
# specify user/password/where the database is
sqluser = 'postgres'
sqlpass = 'postgres'
dbname = 'mimic'
schema_name = 'mimiciii'
host = 'localhost'
# connect to the database
con = psycopg2.connect(dbname=dbname, user=sqluser, password=sqlpass, host=host)
# all queries are prepended by this statement to ensure we use the correct schema
query_schema = 'SET SEARCH_PATH TO public,' + schema_name + ';'
# note that by placing 'public' first, we create materialized views on the public schema
# ... but can still access tables on the `schema_name` table (usually mimiciii)
# Load in the query from file
f = os.path.join(concepts_path,'sepsis/angus.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating materialized view using {} ...'.format(f),end=' ')
execute_query_safely(query_schema + query, con)
print('done.')
# Load in the query from file
f = os.path.join(concepts_path,'demographics/HeightWeightQuery.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating materialized view using {} ...'.format(f),end=' ')
execute_query_safely(query_schema + query, con)
print('done.')
# Load in the query from file
f = os.path.join(aline_path,'aline_vaso_flag.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating materialized view using {} ...'.format(f),end=' ')
execute_query_safely(query_schema + query, con)
print('done.')
# Load in the query from file
f = os.path.join(aline_path,'aline_cohort.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating materialized view using {} ...'.format(f),end=' ')
execute_query_safely(query_schema + query, con)
print('done.')
query = query_schema + """
select
icustay_id
, exclusion_readmission
, exclusion_shortstay
, exclusion_vasopressors
, exclusion_septic
, exclusion_aline_before_admission
, exclusion_not_ventilated_first24hr
, exclusion_service_surgical
from aline_cohort_all
"""
# Load the result of the query into a dataframe
df = pd.read_sql_query(query, con)
# print out exclusions
idxRem = df['icustay_id'].isnull()
for c in df.columns:
if 'exclusion_' in c:
print('{:5d} - {}'.format(df[c].sum(), c))
idxRem[df[c]==1] = True
# final exclusion (excl sepsis/something else)
print('Will remove {} of {} patients.'.format(np.sum(idxRem), df.shape[0]))
print('')
print('')
print('Reproducing the flow of the flowchart from Chest paper.')
# first stay
idxRem = (df['exclusion_readmission']==1) | (df['exclusion_shortstay']==1)
print('{:5d} - removing {:5d} ({:2.2f}%) patients - short stay // readmission.'.format(
df.shape[0], np.sum(idxRem), 100.0*np.mean(idxRem)))
df = df.loc[~idxRem,:]
idxRem = df['exclusion_not_ventilated_first24hr']==1
print('{:5d} - removing {:5d} ({:2.2f}%) patients - not ventilated in first 24 hours.'.format(
df.shape[0], np.sum(idxRem), 100.0*np.mean(idxRem)))
df = df.loc[df['exclusion_not_ventilated_first24hr']==0,:]
print('{:5d}'.format(df.shape[0]))
idxRem = df['icustay_id'].isnull()
for c in ['exclusion_septic', 'exclusion_vasopressors',
'exclusion_aline_before_admission', 'exclusion_service_surgical']:
print('{:5s} - removing {:5d} ({:2.2f}%) patients - additional {:5d} {:2.2f}% - {}'.format(
'', df[c].sum(), 100.0*df[c].mean(),
np.sum((idxRem==0)&(df[c]==1)), 100.0*np.mean((idxRem==0)&(df[c]==1)),
c))
idxRem = idxRem | (df[c]==1)
df = df.loc[~idxRem,:]
print('{} - final cohort.'.format(df.shape[0]))
# get a list of all files in the subfolder
aline_queries = [f for f in os.listdir(aline_path)
# only keep the filename if it is actually a file (and not a directory)
if os.path.isfile(os.path.join(aline_path,f))
# and only keep the filename if it is an SQL file
& f.endswith('.sql')
# and we do *not* want aline_cohort - it's generated above
& (f != 'aline_cohort.sql') & (f != 'aline_vaso_flag.sql')]
for f in aline_queries:
print('Executing {} ...'.format(f), end=' ')
with open(os.path.join(aline_path,f)) as fp:
query = ''.join(fp.readlines())
execute_query_safely(query_schema + query, con)
print('done.')
# Load in the query from file
query = query_schema + """
--FINAL QUERY
select
co.subject_id, co.hadm_id, co.icustay_id
-- static variables from patient tracking tables
, co.age
, co.gender
-- , co.gender_num -- gender, 0=F, 1=M
, co.intime as icustay_intime
, co.day_icu_intime -- day of week, text
--, co.day_icu_intime_num -- day of week, numeric (0=Sun, 6=Sat)
, co.hour_icu_intime -- hour of ICU admission (24 hour clock)
, case
when co.hour_icu_intime >= 7
and co.hour_icu_intime < 19
then 1
else 0
end as icu_hour_flag
, co.outtime as icustay_outtime
-- outcome variables
, co.icu_los_day
, co.hospital_los_day
, co.hosp_exp_flag -- 1/0 patient died within current hospital stay
, co.icu_exp_flag -- 1/0 patient died within current ICU stay
, co.mort_day -- days from ICU admission to mortality, if they died
, co.day_28_flag -- 1/0 whether the patient died 28 days after *ICU* admission
, co.mort_day_censored -- days until patient died *or* 150 days (150 days is our censor time)
, co.censor_flag -- 1/0 did this patient have 150 imputed in mort_day_censored
-- aline flags
-- , co.initial_aline_flag -- always 0, we remove patients admitted w/ aline
, co.aline_flag -- 1/0 did the patient receive an aline
, co.aline_time_day -- if the patient received aline, fractional days until aline put in
-- demographics extracted using regex + echos
, bmi.weight as weight_first
, bmi.height as height_first
, bmi.bmi
-- service patient was admitted to the ICU under
, co.service_unit
-- severity of illness just before ventilation
, so.sofa as sofa_first
-- vital sign value just preceding ventilation
, vi.map as map_first
, vi.heartrate as hr_first
, vi.temperature as temp_first
, vi.spo2 as spo2_first
-- labs!
, labs.bun_first
, labs.creatinine_first
, labs.chloride_first
, labs.hgb_first
, labs.platelet_first
, labs.potassium_first
, labs.sodium_first
, labs.tco2_first
, labs.wbc_first
-- comorbidities extracted using ICD-9 codes
, icd.chf as chf_flag
, icd.afib as afib_flag
, icd.renal as renal_flag
, icd.liver as liver_flag
, icd.copd as copd_flag
, icd.cad as cad_flag
, icd.stroke as stroke_flag
, icd.malignancy as malignancy_flag
, icd.respfail as respfail_flag
, icd.endocarditis as endocarditis_flag
, icd.ards as ards_flag
, icd.pneumonia as pneumonia_flag
-- sedative use
, sed.sedative_flag
, sed.midazolam_flag
, sed.fentanyl_flag
, sed.propofol_flag
from aline_cohort co
-- The following tables are generated by code within this repository
left join aline_sofa so
on co.icustay_id = so.icustay_id
left join aline_bmi bmi
on co.icustay_id = bmi.icustay_id
left join aline_icd icd
on co.hadm_id = icd.hadm_id
left join aline_vitals vi
on co.icustay_id = vi.icustay_id
left join aline_labs labs
on co.icustay_id = labs.icustay_id
left join aline_sedatives sed
on co.icustay_id = sed.icustay_id
order by co.icustay_id
"""
# Load the result of the query into a dataframe
df = pd.read_sql_query(query, con)
df.describe().T
# plot the rest of the distributions
for col in df.columns:
if df.dtypes[col] in ('int64','float64'):
plt.figure(figsize=[12,6])
plt.hist(df[col].dropna(), bins=50, normed=True)
plt.xlabel(col,fontsize=24)
plt.show()
# apply corrections
df.loc[df['age']>89, 'age'] = 91.4
df.to_csv('aline_data.csv',index=False)
con.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Generate materialized views
Step3: Now we generate the aline_cohort table using the aline_cohort.sql file.
Step4: The following codeblock loads in the SQL from each file in the aline subfolder and executes the query to generate the materialized view. We specifically exclude the aline_cohort.sql file as we have already executed it above. Again, the order of query execution does not matter for these queries. Note also that the filenames are the same as the created materialized view names for convenience.
Step6: Summarize the cohort exclusions before we pull all the data together.
Step7: Now we need to remove obvious outliers, including correcting ages > 200 to 91.4 (i.e. replace anonymized ages with 91.4, the median age of patients older than 89).
Step8: 3 - Write to file
Step9: 4 - Create a propensity score using this data
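A minimal sketch for Step9's propensity score, assuming the dataframe df produced by the final query above, aline_flag as the treatment indicator, and an illustrative covariate subset (the study's actual covariate list is not shown in this excerpt):
from sklearn.linear_model import LogisticRegression
# Illustrative covariates only; each of these columns appears in the final query above.
covariates = ['age', 'sofa_first', 'weight_first', 'map_first', 'hr_first']
X_ps = df[covariates].fillna(df[covariates].median())
ps_model = LogisticRegression(max_iter=1000).fit(X_ps, df['aline_flag'])
df['propensity_score'] = ps_model.predict_proba(X_ps)[:, 1]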
|
3,302
|
<ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
# Input layer
net = tflearn.input_data([None, trainX.shape[1]])
# Hidden layer
net = tflearn.fully_connected(net, 800, activation='ReLU')
net = tflearn.fully_connected(net, 200, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving training and test data
Step2: Visualize the training data
Step3: Building the network
Step4: Training the network
Step5: Testing
|
3,303
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
def np_fact(n):
"""Compute n! = n*(n-1)*...*1 using Numpy."""
# YOUR CODE HERE
a = np.arange(1, n+1, 1) # makes the array [1, 2, ..., n]
if n==0:
return 1 #If n is 1 or 0, returns value of 1.
elif n==1:
return 1
else:
return max(a.cumprod())#For all other n, takes max value of cumulative products
print np_fact(6)
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
def loop_fact(n):
"""Compute n! using a Python for loop."""
# YOUR CODE HERE
f = n
if n == 0:
return 1 #Same as above.
elif n == 1:
return 1
while n > 1:
f *= (n-1) #For n > 1, takes continuous product of n to right before n = 0, otherwise it would all equal 0.
n -= 1
return f
print loop_fact(10)
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
# YOUR CODE HERE
%timeit -n1 -r1 loop_fact(50)
%timeit -n1 -r1 np_fact(50)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Factorial
Step4: Write a function that computes the factorial of small numbers using a Python loop.
Step5: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is %timeit -n1 -r1 statement, as used in the code above.
|
3,304
|
<ASSISTANT_TASK:>
Python Code:
DEM_filepath = ""
sample_points_filepath = ""
import matplotlib.pylab as plt
%matplotlib inline
import rasterio
import fiona
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
with rasterio.drivers():
with rasterio.open(DEM_filepath) as source_dem:
array_dem = source_dem.read(1)
source_dem.close()
plt.imshow(array_dem)
plt.ylabel("pixels")
with fiona.open(sample_points_filepath, 'r') as source_points:
points = [f['geometry']['coordinates'] for f in source_points]
#plt.figure()
for f in source_points:
x, y = f['geometry']['coordinates']
plt.plot(x, y, 'ro')
plt.show()
source_points.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import statements
Step2: Examples
|
3,305
|
<ASSISTANT_TASK:>
Python Code:
%run db2.ipynb
%sql -sampledata
%%sql -q
DROP TABLE HC.PATIENTS;
CREATE TABLE HC.PATIENTS
(
SIN VARCHAR(11),
USERID VARCHAR(8),
NAME VARCHAR(8),
ADDRESS VARCHAR(12),
PHARMACY VARCHAR(12),
ACCT_BALANCE DEC(9,2),
PCP_ID VARCHAR(8)
);
INSERT INTO HC.PATIENTS VALUES
('123 551 234','MAX','Max','First St.','hypertension',89.7,'LEE'),
('123 589 812','MIKE','Mike','Long St.','diabetics',8.3,'JAMES'),
('123 119 856','SAM','Sam','Big St.','aspirin',12.5,'LEE'),
('123 191 454','DOUG','Doug','Good St.','influenza',7.68,'JAMES'),
('123 456 789','BOB','Bob','123 Some St.','hypertension',9,'LEE');
SELECT * FROM HC.PATIENTS;
%%sql
DROP TABLE HC.ROLES;
CREATE TABLE HC.ROLES
(
USERID VARCHAR(8),
ROLE VARCHAR(10)
);
%sql CREATE OR REPLACE VARIABLE HC.SESSION_USER VARCHAR(8);
%%sql -d
CREATE OR REPLACE FUNCTION
HC.VERIFY_ROLE_FOR_USER(UID VARCHAR(8), IN_ROLE VARCHAR(10))
SECURED NO EXTERNAL ACTION DETERMINISTIC
RETURNS INT
BEGIN ATOMIC
RETURN
SELECT COUNT(*) FROM HC.ROLES H
WHERE H.USERID = UID AND H.ROLE = IN_ROLE;
END@
%%sql
INSERT INTO HC.ROLES
VALUES
('LEE','PCP'),('JAMES','PCP'),
('MAX','PATIENT'),('MIKE','PATIENT'),('SAM','PATIENT'),
('DOUG','PATIENT'),('BOB','PATIENT'),
('JOHN','ACCOUNTING'),
('TOM','MEMBERSHIP'),
('JANE','RESEARCH'),
('FRED','DBA');
%%sql
SET HC.SESSION_USER = 'LEE';
VALUES
HC.VERIFY_ROLE_FOR_USER(HC.SESSION_USER,'PCP');
%%sql
CREATE OR REPLACE PERMISSION HC.ROW_ACCESS ON HC.PATIENTS
FOR ROWS WHERE
(
HC.VERIFY_ROLE_FOR_USER(HC.SESSION_USER,'PATIENT') = 1 AND
HC.SESSION_USER = USERID
)
OR
(
HC.VERIFY_ROLE_FOR_USER(HC.SESSION_USER,'PCP') = 1 AND
HC.SESSION_USER = PCP_ID
)
OR
(
HC.VERIFY_ROLE_FOR_USER(HC.SESSION_USER,'MEMBERSHIP') = 1 OR
HC.VERIFY_ROLE_FOR_USER(HC.SESSION_USER,'ACCOUNTING') = 1 OR
HC.VERIFY_ROLE_FOR_USER(HC.SESSION_USER,'RESEARCH') = 1
)
ENFORCED FOR ALL ACCESS
ENABLE;
%sql ALTER TABLE HC.PATIENTS ACTIVATE ROW ACCESS CONTROL;
%%sql
SET HC.SESSION_USER = 'LEE';
SELECT * FROM HC.PATIENTS WHERE NAME = 'Sam';
%%sql
UPDATE HC.PATIENTS SET PHARMACY = 'Codeine' WHERE NAME = 'Sam';
SELECT * FROM HC.PATIENTS WHERE NAME = 'Sam'
%%sql
SET HC.SESSION_USER = 'LEE';
UPDATE HC.PATIENTS SET PHARMACY = 'Codeine' WHERE NAME = 'Doug';
%%sql
SET HC.SESSION_USER = 'JOHN';
SELECT * FROM HC.PATIENTS WHERE NAME = 'Doug'
%%sql
SET HC.SESSION_USER = 'LEE';
SELECT * FROM HC.PATIENTS;
%%sql
SET HC.SESSION_USER = 'JAMES';
SELECT * FROM HC.PATIENTS;
%%sql
SET HC.SESSION_USER = 'JOHN';
SELECT * FROM HC.PATIENTS;
%%sql
SET HC.SESSION_USER = 'BOB';
SELECT * FROM HC.PATIENTS;
%%sql
SET HC.SESSION_USER = 'FRED';
SELECT * FROM HC.PATIENTS;
%%sql
CREATE OR REPLACE MASK HC.ACCT_BALANCE_MASK ON HC.PATIENTS FOR
COLUMN ACCT_BALANCE RETURN
CASE
WHEN HC.VERIFY_ROLE_FOR_USER(HC.SESSION_USER,'ACCOUNTING') = 1
THEN ACCT_BALANCE
ELSE 0.00
END
ENABLE;
%%sql
CREATE OR REPLACE MASK HC.SIN_MASK ON HC.PATIENTS FOR
COLUMN SIN RETURN
CASE
WHEN HC.VERIFY_ROLE_FOR_USER(HC.SESSION_USER,'PATIENT') = 1
THEN SIN
ELSE
'XXX XXX ' || SUBSTR(SIN,9,3)
END
ENABLE;
%sql ALTER TABLE HC.PATIENTS ACTIVATE COLUMN ACCESS CONTROL;
%%sql
SET HC.SESSION_USER = 'JOHN';
SELECT * FROM HC.PATIENTS;
%%sql
SET HC.SESSION_USER = 'JANE';
SELECT * FROM HC.PATIENTS;
%%sql
SET HC.SESSION_USER = 'LEE';
SELECT * FROM HC.PATIENTS;
%%sql
SET HC.SESSION_USER = 'BOB';
SELECT * FROM HC.PATIENTS;
%sql -a SELECT * FROM SYSCAT.CONTROLS;
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We populate the database with the EMPLOYEE and DEPARTMENT tables so that we can run the various examples.
Step2: Health Care Scenario
Step3: Setting Permissions for Access
Step4: We also need to create a SESSION_USER global variable that can be used to identify the "current" user. Normally you would just use the SESSION_USER variable in the rules, but since we don't want to use SECADM we need to fake the userid.
Step5: The HC.VERIFY_ROLE_FOR_USER function will mimic what the built-in VERIFY_ROLE_FOR_USER function does, returning 1 if the given user has been granted the given role and 0 otherwise.
Step6: Create some ROLES for people in the Healthcare scenario.
Step7: Now we can test to see if a user has a particular role. This first test checks to see the Dr. Lee is a PCP (Primary Care Provider).
Step8: At this point in time we can set up some rules on what the various groups can see.
Step9: The rules now need to be activated in order for them to be enforced.
Step10: Updating a Patient Record
Step11: Dr. Lee decides to give Sam some codeine for his pain. The update is successful and we can see the results.
Step12: Update Failure
Step13: The UPDATE completes, but no records are modified. To see all of the records, we need to change our userid to someone who can see all records (John in accounting). Note there is no way around this restriction - you must have the proper clearance to see the records.
Step14: Selecting Rows from a Table
Step15: Changing the current user to Dr. James will change the results that are displayed.
Step16: Changing the current user to one of the accounting, research, or fund raising users will result in all records being displayed.
Step17: Patients are able to see only their row.
Step18: A DBA (Fred) who is not part of any of these groups will not be able to see any of the records, even though they may performance maintenance on the table itself.
Step19: Column Masks
Step20: The second mask will return the entire SIN number for the PATIENT, but only the last three digits of the SIN for all others.
Step21: In order for the MASKS to be effective, they need to be enabled for the table.
Step22: When someone from accounting now views the records, they will only see the last three digits of the SIN field but they will see all of the accounting data.
Step23: When a researcher looks at the data, they will also only see the last three digits of the SIN field, but they will get a zero balance in the accounting field.
Step24: Dr. Lee will only see his patients (ROW CONTROL) and will see the last three digits of the SIN field and zero for the account balance (COLUMN MASK).
Step25: Finally, the patients will be able to see their own SIN field, but the account balance will show as zero (presumably so they don't get sick over the amount!).
Step26: Catalog Views
|
3,306
|
<ASSISTANT_TASK:>
Python Code::
model.score(x_test, y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
3,307
|
<ASSISTANT_TASK:>
Python Code:
import sympy
from sympy import Symbol, sqrt
x = Symbol('x', real=True)
a = Symbol('a', real=True)
y = Symbol('y', real=True)
b = Symbol('b', real=True)
r = sqrt(x ** 2 + y ** 2)
sympy.init_printing()
z = sympy.integrate(1/r, x, conds='none')
print z
import numpy as np
import math
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='ticks', palette='Set2')
sns.despine()
%matplotlib inline
a = -10
b = 20
n = 70
t = np.linspace(a, b, n)
h = t[1] - t[0]
w = np.ones(n) * h
w[0] = h/2
w[n-1] = h/2
w = w * np.exp(0.5 * t)/np.sqrt(math.pi)
ps = np.exp(t)
x = np.linspace(0.1, 1)
fun = 1.0/np.sqrt(x)
appr = 0 * x
for i in xrange(n):
appr = appr + w[i] * np.exp(-(x) * ps[i])
#plt.plot(x, fun)
ax = plt.subplot(1, 1, 1)
ax.plot(x, appr - fun)
#ax.set_title('Approximation error by %d Gaussians' % n)
ax.set_title('Error')
plt.tight_layout(.8)
import numpy as np
import matplotlib.pyplot as plt
n = 128
a = [[1.0/(abs(i - j) + 0.5) for i in xrange(n)] for j in xrange(n)]
a = np.array(a)
plt.plot(np.linalg.svd(a)[1])
#And for a block:
plt.semilogy(np.linalg.svd(a[:n/2, n/2:])[1])
from numba import jit
n = 32
t = np.linspace(0, 1, n)
h = t[1] - t[0]
x_src, y_src = np.meshgrid(t, t)
x_src, y_src = x_src.flatten(), y_src.flatten()
x_rec, y_rec = x_src + h * 0.5, y_src + 0.5 * h
N = n * n
mat = np.zeros((N, N))
@jit(nopython=True)
def compute_mat(mat, x_src, y_src, x_rec, y_rec):
for i in range(N):
for j in xrange(N):
r = (x_src[i] - x_rec[j]) ** 2 + (y_src[i] - y_rec[j]) ** 2
mat[i, j] = 1.0/np.sqrt(r)
%timeit compute_mat(mat, x_src, y_src, x_rec, y_rec)
#(x_rec - x_src)/h
plt.semilogy(np.linalg.svd(mat[:N/2, N/2:])[1])
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/alex.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nystrom method
Step2: A sidenote about fitting by a sum of exponentials (the integral representation being discretized is written out after this list)
Step3: Off-diagonal blocks correspond to "far" interaction
Step4: Plotting the singular values of the off-diagonal block shows that they do not decay that fast.
Step5: The general scheme
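For Step2's sidenote: the sum-of-Gaussians cell in the code above can be read (this is an interpretation of the code, not text from the original notebook) as a trapezoidal-rule discretization, after the substitution $p = e^{t}$, of
$$\frac{1}{\sqrt{x}} = \frac{1}{\sqrt{\pi}}\int_{0}^{\infty}\frac{e^{-px}}{\sqrt{p}}\,dp = \frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}e^{-x e^{t}}\,e^{t/2}\,dt \approx \sum_{i=1}^{n} w_{i}\,e^{-x p_{i}}, \qquad p_{i} = e^{t_{i}},\quad w_{i} = \frac{h\,e^{t_{i}/2}}{\sqrt{\pi}},$$
with the endpoint weights halved, which matches the arrays w and ps built in that cell.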
|
3,308
|
<ASSISTANT_TASK:>
Python Code:
from scipy.misc import imsave, toimage
from os import listdir
from os.path import basename, splitext
import glob
import numpy as np
npy_path = '../compressed-models/alexnet/npy/'
jpg_path = '../compressed-models/alexnet/jpegs/'
gif_path = '../compressed-models/alexnet/gifs/'
png_path = '../compressed-models/alexnet/pngs/'
txt_path = '../compressed-models/alexnet/txts/'
npy_list = glob.glob(npy_path + '*.npy')
min_max = {}
for file in npy_list:
f = np.load(file)
x = f.shape[0]
y = np.prod(f.shape[1:])
f_reshape = f.reshape(x, y)
#f_normalized = np.round((f_reshape + 1) / 2. * 255.)
filename = splitext(basename(file))[0]
#toimage(jpg_path + filename + '.jpg', f_resha)
min_max[filename] = (f_reshape.min(), f_reshape.max())
np.savetxt(txt_path + filename + '.txt', f_reshape)
#np.save(jpg_path + 'range.npy', min_max)
npy_list = glob.glob(npy_path + '*.npy')
min_max = {}
for file in npy_list:
f = np.load(file)
x = f.shape[0]
y = np.prod(f.shape[1:])
f_reshape = f.reshape(x, y)
#f_normalized = np.round((f_reshape + 1) / 2. * 255.)
filename = splitext(basename(file))[0]
#toimage(jpg_path + filename + '.jpg', f_resha)
min_max[filename] = (f_reshape.min(), f_reshape.max())
imsave(jpg_path + filename + '.jpg', f_reshape)
np.save(jpg_path + 'range.npy', min_max)
npy_list = glob.glob(npy_path + '*.npy')
min_max = {}
for file in npy_list:
f = np.load(file)
x = f.shape[0]
y = np.prod(f.shape[1:])
f_reshape = f.reshape(x, y)
#f_normalized = np.round((f_reshape + 1) / 2. * 255.)
filename = splitext(basename(file))[0]
#toimage(jpg_path + filename + '.jpg', f_resha)
min_max[filename] = (f_reshape.min(), f_reshape.max())
imsave(gif_path + filename + '.gif', f_reshape)
np.save(gif_path + 'range.npy', min_max)
npy_list = glob.glob(npy_path + '*.npy')
min_max = {}
for file in npy_list:
f = np.load(file)
x = f.shape[0]
y = np.prod(f.shape[1:])
f_reshape = f.reshape(x, y)
#f_normalized = np.round((f_reshape + 1) / 2. * 255.)
filename = splitext(basename(file))[0]
#toimage(jpg_path + filename + '.jpg', f_resha)
min_max[filename] = (f_reshape.min(), f_reshape.max())
imsave(png_path + filename + '.png', f_reshape)
np.save(png_path + 'range.npy', min_max)
from scipy.misc import imread
f = imread(jpg_path + 'conv1.jpg')
min_max = np.load('range.npy')
#f_normalized = (f / 255. * 2.) - 1
print f[0]
print min_max
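# Hedged sketch of mapping the 8-bit image back to weights. Assumptions: scipy.misc.imsave
# rescales each array so that its min maps to 0 and its max to 255, and np.load returns the
# saved dict as a 0-d object array (hence .item()). JPEG is lossy, so this is approximate.
mm = min_max.item() if hasattr(min_max, 'item') else min_max
vmin, vmax = mm['conv1']
w_approx = f.astype('float64') / 255.0 * (vmax - vmin) + vmin
w_approx.min(), w_approx.max()  # approximate weight range recovered from the 8-bit image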
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load all npys and convert them to JPEG
Step2: Load all npys and convert them to GIF
Step3: Load all npys and convert them to PNG
Step4: 2. Reference Sizes
|
3,309
|
<ASSISTANT_TASK:>
Python Code:
from rmtk.vulnerability.common import utils
import double_MSA_on_SDOF
import numpy
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF.read_pinching_parameters import read_parameters
import MSA_utils
%matplotlib inline
capacity_curves_file = '/Users/chiaracasotto/GitHub/rmtk_data/2MSA/capacity_curves.csv'
sdof_hysteresis = "/Users/chiaracasotto/GitHub/rmtk_data/pinching_parameters.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
gmrs_folder = '../../../../../rmtk_data/MSA_records'
number_models_in_DS = 1
no_bins = 2
no_rec_bin = 10
damping_ratio = 0.05
minT = 0.1
maxT = 2
filter_aftershocks = 'FALSE'
Mw_multiplier = 0.92
waveform_path = '../../../../../rmtk_data/2MSA/waveform.csv'
gmrs = utils.read_gmrs(gmrs_folder)
gmr_characteristics = MSA_utils.assign_Mw_Tg(waveform_path, gmrs, Mw_multiplier,
damping_ratio, filter_aftershocks)
#utils.plot_response_spectra(gmrs,minT,maxT)
damage_model_file = "/Users/chiaracasotto/GitHub/rmtk_data/2MSA/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
degradation = False
record_scaled_folder = "../../../../../rmtk_data/2MSA/Scaling_factors"
msa = MSA_utils.define_2MSA_parameters(no_bins,no_rec_bin,record_scaled_folder,filter_aftershocks)
PDM, Sds, gmr_info = double_MSA_on_SDOF.calculate_fragility(
capacity_curves, hysteresis, msa, gmrs, gmr_characteristics,
damage_model, damping_ratio,degradation, number_models_in_DS)
IMT = 'Sa'
T = 0.47
#T = numpy.arange(0.4,1.91,0.01)
regression_method = 'max likelihood'
fragility_model = MSA_utils.calculate_fragility_model_damaged( PDM,gmrs,gmr_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
minIML, maxIML = 0.01, 4
MSA_utils.plot_fragility_model(fragility_model,damage_model,minIML, maxIML)
output_type = "csv"
output_path = "../../../../../rmtk_data/2MSA/"
minIML, maxIML = 0.01, 4
tax = 'RC'
MSA_utils.save_mean_fragility(fragility_model,damage_model,tax,output_type,output_path,minIML, maxIML)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load capacity curves
Step2: Load ground motion records
Step3: Load damage state thresholds
Step4: Calculate fragility function
Step5: Fit lognormal CDF fragility curves (the functional form being fit is written out after this list)
Step6: Plot fragility functions
Step7: Save fragility functions
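For Step5, the lognormal CDF being fit is the standard fragility-function form (stated generally here; the rmtk internals are not reproduced in this excerpt):
$$P(DS \ge ds_{i} \mid IM = x) = \Phi\!\left(\frac{\ln x - \ln\theta_{i}}{\beta_{i}}\right),$$
where $\theta_{i}$ is the median intensity measure causing damage state $ds_{i}$, $\beta_{i}$ is the lognormal standard deviation, and the parameters are estimated here with regression_method = 'max likelihood' as set in the code above.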
|
3,310
|
<ASSISTANT_TASK:>
Python Code:
# your function
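# A minimal sketch of what such a function could look like; the signature is taken from the
# test call below, and the exact CoNLL-style columns written here are an assumption rather
# than the assignment's specification.
import os
def text_to_conll_simple(text, nlp, output_dir, basename,
                         start_with_index=False,
                         overwrite_existing_conll_file=True):
    os.makedirs(output_dir, exist_ok=True)
    path = os.path.join(output_dir, basename)
    if os.path.exists(path) and not overwrite_existing_conll_file:
        return path
    doc = nlp(text)
    with open(path, 'w', encoding='utf-8') as outfile:
        for i, token in enumerate(doc):
            columns = [token.text, token.lemma_, token.pos_, token.ent_type_]
            if start_with_index:
                columns.insert(0, str(i))
            outfile.write('\t'.join(columns) + '\n')
    return path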
# test your function
text = 'This is an example text. The text mentions a former president of the United States, Barack Obama.'
basename = 'test_text.tsv'
output_dir = 'test_dir'
text_to_conll_simple(text,
nlp,
output_dir,
basename,
start_with_index = False,
overwrite_existing_conll_file = True)
import spacy
nlp = spacy.load('en_core_web_sm')
test = 'This is a test.'
doc = nlp(test)
tok = doc[0]
tok.text
test = 'This is a test.'
doc = nlp(test)
tok = doc[0]
tok.text
dir(tok)
# Check if file exists
import os
a_path_to_a_file = '../Data/books/Macbeth.txt'
if os.path.isfile(a_path_to_a_file):
print('File exists:', a_path_to_a_file)
else:
print('File not found:', a_path_to_a_file)
another_path_to_a_file = '../Data/books/KingLear.txt'
if os.path.isfile(another_path_to_a_file):
print('File exists:', another_path_to_a_file)
else:
print('File not found:', another_path_to_a_file)
# check if directory exists
a_path_to_a_dir = '../Data/books/'
if os.path.isdir(a_path_to_a_dir):
print('Directory exists:', a_path_to_a_dir)
else:
print('Directory not found:', a_path_to_a_dir)
another_path_to_a_dir = '../Data/films/'
if os.path.isdir(another_path_to_a_dir):
print('Directory exists:', another_path_to_a_dir)
else:
print('Directory not found:', another_path_to_a_dir)
# Files in '../Data/Dreams':
%ls ../Data/Dreams/
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tip 0
Step2: Tip 1
Step3: Tip 2
Step4: Tip 3
Step5: 3. Building python modules to process files in a directory
|
3,311
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
symbol = 'Security 1'
symbol2 = 'Security 2'
price_data = pd.DataFrame(np.cumsum(np.random.randn(150, 2).dot([[0.5, 0.4], [0.4, 1.0]]), axis=0) + 100,
columns=[symbol, symbol2],
index=pd.date_range(start='01-01-2007', periods=150))
dates_actual = price_data.index.values
prices = price_data[symbol].values
from bqplot import *
from bqplot.interacts import (
FastIntervalSelector, IndexSelector, BrushIntervalSelector,
BrushSelector, MultiSelector, LassoSelector,
)
from ipywidgets import ToggleButtons, VBox, HTML
# Define scales for the rest of the notebook
scales = {'x': DateScale(), 'y': LinearScale()}
# The Mark we want to select subsamples of
scatter = Scatter(x=dates_actual, y=prices, scales=scales, colors=['orange'],
selected_style={'opacity': '1'}, unselected_style={'opacity': '0.2'})
# Create the brush selector, passing it its corresponding scale.
# Notice that we do not pass it any marks for now
brushintsel = BrushIntervalSelector(scale=scales['x'])
x_ax = Axis(label='Index', scale=scales['x'])
x_ay = Axis(label=(symbol + ' Price'), scale=scales['y'], orientation='vertical')
# Pass the Selector instance to the Figure
fig = Figure(marks=[scatter], axes=[x_ax, x_ay],
title='''Brush Interval Selector Example. Click and drag on the Figure to action.''',
interaction=brushintsel)
# The following text widgets are used to display the `selected` attributes
text_brush = HTML()
text_scatter = HTML()
# This function updates the text, triggered by a change in the selector
def update_brush_text(*args):
text_brush.value = "The Brush's selected attribute is {}".format(brushintsel.selected)
def update_scatter_text(*args):
text_scatter.value = "The scatter's selected indices are {}".format(scatter.selected)
brushintsel.observe(update_brush_text, 'selected')
scatter.observe(update_scatter_text, 'selected')
update_brush_text()
update_scatter_text()
# Display
VBox([fig, text_brush, text_scatter])
brushintsel.marks = [scatter]
def create_figure(selector, **selector_kwargs):
'''
Returns a Figure with a Scatter and a Selector.
Arguments
---------
selector: The type of Selector, one of
{'BrushIntervalSelector', 'BrushSelector', 'FastIntervalSelector', 'IndexSelector', 'LassoSelector'}
selector_kwargs: Arguments to be passed to the Selector
'''
scatter = Scatter(x=dates_actual, y=prices, scales=scales, colors=['orange'],
selected_style={'opacity': '1'}, unselected_style={'opacity': '0.2'})
sel = selector(marks=[scatter], **selector_kwargs)
text_brush = HTML()
if selector != LassoSelector:
def update_text(*args):
text_brush.value = '{}.selected = {}'.format(selector.__name__, sel.selected)
sel.observe(update_text, 'selected')
update_text()
x_ax = Axis(label='Index', scale=scales['x'])
x_ay = Axis(label=(symbol + ' Price'), scale=scales['y'], orientation='vertical')
fig = Figure(marks=[scatter], axes=[x_ax, x_ay], title='{} Example'.format(selector.__name__),
interaction=sel)
return VBox([fig, text_brush])
create_figure(BrushIntervalSelector, orientation='vertical', scale=scales['y'])
create_figure(BrushSelector, x_scale=scales['x'], y_scale=scales['y'])
create_figure(FastIntervalSelector, scale=scales['x'])
create_figure(LassoSelector)
create_figure(IndexSelector, scale=scales['x'])
create_figure(MultiSelector, scale=scales['x'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction <a class="anchor" id="introduction"></a>
Step2: Brush Selectors <a class="anchor" id="brushselectors"></a>
Step3: Linking the brush to the scatter
Step4: From now on we will stop printing out the selected indices, but rather use the selected_style and unselected_style attributes of the Marks to check which elements are selected.
Step5: BrushIntervalSelector on the y-axis
Step6: 2d BrushSelector
Step7: FastIntervalSelector <a class="anchor" id="fastintervalselector"></a>
Step8: As of the latest version, FastIntervalSelector is only supported for 1d interaction along the x-axis
Step9: IndexSelector <a class="anchor" id="indexselector"></a>
Step10: As of the latest version, IndexSelector is only supported for interaction along the x-axis.
|
3,312
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from qutip import *
Image(filename='images/optomechanical_setup.png', width=500, embed=True)
# System Parameters (in units of wm)
#-----------------------------------
Nc = 4 # Number of cavity states
Nm = 80 # Number of mech states
kappa = 0.3 # Cavity damping rate
E = 0.1 # Driving Amplitude
g0 = 2.4*kappa # Coupling strength
Qm = 1e4 # Mech quality factor
gamma = 1/Qm # Mech damping rate
n_th = 1 # Mech bath temperature
delta = -0.43 # Detuning
# Operators
#----------
a = tensor(destroy(Nc), qeye(Nm))
b = tensor(qeye(Nc), destroy(Nm))
num_b = b.dag()*b
num_a = a.dag()*a
# Hamiltonian
#------------
H = -delta*(num_a)+num_b+g0*(b.dag()+b)*num_a+E*(a.dag()+a)
# Collapse operators
#-------------------
cc = np.sqrt(kappa)*a
cm = np.sqrt(gamma*(1.0 + n_th))*b
cp = np.sqrt(gamma*n_th)*b.dag()
c_ops = [cc,cm,cp]
solvers = ['direct','eigen','power','iterative-gmres','iterative-bicgstab']
mech_dms = []
for ss in solvers:
if ss in ['iterative-gmres','iterative-bicgstab']:
use_rcm = True
else:
use_rcm = False
rho_ss, info = steadystate(H, c_ops, method=ss,use_precond=True,
use_rcm=use_rcm, tol=1e-15, return_info=True)
print(ss,'solution time =',info['solution_time'])
rho_mech = ptrace(rho_ss, 1)
mech_dms.append(rho_mech)
mech_dms = np.asarray(mech_dms)
for kk in range(len(mech_dms)):
print((mech_dms[kk]-mech_dms[0]).data.nnz)
for kk in range(len(mech_dms)):
print(any(abs((mech_dms[kk] - mech_dms[0]).data.data)>1e-11))
fig = plt.figure(figsize=(8,6))
plt.spy(rho_mech.data, ms=1);
diag = rho_mech.diag()
rho_mech2 = qdiags(diag, 0, dims=rho_mech.dims, shape=rho_mech.shape)
fig = plt.figure(figsize=(8,6))
plt.spy(rho_mech2.data, ms=1);
xvec = np.linspace(-20, 20, 256)
W = wigner(rho_mech2, xvec, xvec)
wmap = wigner_cmap(W, shift=-1e-5)
fig, ax = plt.subplots(figsize=(8,6))
c = ax.contourf(xvec, xvec, W, 256, cmap=wmap)
ax.set_xlim([-10, 10])
ax.set_ylim([-10, 10])
plt.colorbar(c, ax=ax);
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Optomechanical Hamiltonian
Step2: Assuming that $a^{+}$, $a$ and $b^{+}$, $b$ are the raising and lowering operators for the cavity and mechanical oscillator, respectively, the Hamiltonian for an optomechanical system driven by a classical monochromatic pump term can be written (in units of the mechanical frequency, in the frame rotating at the pump frequency) as $H = -\Delta a^{+}a + b^{+}b + g_{0}(b^{+}+b)a^{+}a + E(a^{+}+a)$, which is the operator constructed in the code above.
Step3: Build Hamiltonian and Collapse Operators
Step4: Run Steady State Solvers
Step5: Check Consistancy of Solutions
Step6: It seems that the eigensolver solution is not exactly the same. Let's check the magnitude of the elements to see if they are small.
Step7: Plot the Mechanical Oscillator Wigner Function
Step8: Therefore, to remove this error, let us explicitly take the diagonal elements and form a new operator out of them
Step9: Now lets compute the oscillator Wigner function and plot it to see if there are any regions of negativity.
Step10: Versions
|
3,313
|
<ASSISTANT_TASK:>
Python Code:
# Tensorflow
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected
# Common imports
import numpy as np
import numpy.random as rnd
import os
import sys
# to make this notebook's output stable across runs
rnd.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
rnd.seed(4)
m = 100
w1, w2 = 0.1, 0.3
noise = 0.1
angles = rnd.rand(m) * 3 * np.pi / 2 - 0.5
X_train = np.empty((m, 3))
X_train[:, 0] = np.cos(angles) + np.sin(angles)/2 + noise * rnd.randn(m) / 2
X_train[:, 1] = np.sin(angles) * 0.7 + noise * rnd.randn(m) / 2
X_train[:, 2] = X_train[:, 0] * w1 + X_train[:, 1] * w2 + noise * rnd.randn(m)
# Normalize the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter3D(X_train[:, 0], X_train[:, 1], X_train[:, 2])
plt.xlabel("$z_1$", fontsize=18)
plt.ylabel("$z_2$", fontsize=18, rotation=0)
plt.show()
##
tf.reset_default_graph()
n_inputs = 3
n_hidden = 2 # codings
n_outputs = n_inputs
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = fully_connected(X, n_hidden, activation_fn=None)
outputs = fully_connected(hidden, n_outputs, activation_fn=None)
mse = tf.reduce_mean(tf.square(outputs - X))
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
##
n_iterations = 10000
codings = hidden
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
training_op.run(feed_dict={X: X_train})
codings_val = codings.eval(feed_dict={X: X_train})
fig = plt.figure(figsize=(4,3))
plt.plot(codings_val[:,0], codings_val[:, 1], "b.")
plt.xlabel("$z_1$", fontsize=18)
plt.ylabel("$z_2$", fontsize=18, rotation=0)
plt.show()
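# Optional cross-check (an addition, not part of the original notebook): with linear
# activations and an MSE loss, the trained codings span the same 2-D subspace as PCA, so
# PCA's reconstruction error on X_train should be close to the autoencoder's after training
# (the codings themselves may differ by a rotation/scaling).
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_pca_recon = pca.inverse_transform(pca.fit_transform(X_train))
print("PCA reconstruction MSE:", np.mean(np.square(X_pca_recon - X_train)))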
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_inputs = 28*28
n_hidden1 = 300
n_hidden2 = 150 # codings
n_hidden3 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.0001
initializer = tf.contrib.layers.variance_scaling_initializer() # initialization
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
with tf.contrib.framework.arg_scope([fully_connected],
activation_fn=tf.nn.elu,
weights_initializer=initializer,
weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg)):
hidden1 = fully_connected(X, n_hidden1)
hidden2 = fully_connected(hidden1, n_hidden2)
hidden3 = fully_connected(hidden2, n_hidden3)
outputs = fully_connected(hidden3, n_outputs, activation_fn=None)
mse = tf.reduce_mean(tf.square(outputs - X))
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([mse] + reg_losses)
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 4
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
n_batches = mnist.train.num_examples // batch_size
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch})
mse_train = mse.eval(feed_dict={X: X_batch})
print("\r{}".format(epoch), "Train MSE:", mse_train)
saver.save(sess, "./my_model_all_layers.ckpt")
def plot_image(image, shape=[28, 28]):
plt.imshow(image.reshape(shape), cmap="Greys", interpolation="nearest")
plt.axis("off")
def show_reconstructed_digits(X, outputs, model_path = None, n_test_digits = 2):
with tf.Session() as sess:
if model_path:
saver.restore(sess, model_path)
X_test = mnist.test.images[:n_test_digits]
outputs_val = outputs.eval(feed_dict={X: X_test})
fig = plt.figure(figsize=(8, 3 * n_test_digits))
for digit_index in range(n_test_digits):
plt.subplot(n_test_digits, 2, digit_index * 2 + 1)
plot_image(X_test[digit_index])
plt.subplot(n_test_digits, 2, digit_index * 2 + 2)
plot_image(outputs_val[digit_index])
show_reconstructed_digits(X, outputs, "./my_model_all_layers.ckpt",n_test_digits = 2)
tf.reset_default_graph()
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150 # codings
n_hidden3 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.0001
activation = tf.nn.elu
regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
weights1_init = initializer([n_inputs, n_hidden1])
weights2_init = initializer([n_hidden1, n_hidden2])
weights1 = tf.Variable(weights1_init, dtype=tf.float32, name="weights1")
weights2 = tf.Variable(weights2_init, dtype=tf.float32, name="weights2")
weights3 = tf.transpose( weights2, name ="weights3") # tied weights
weights4 = tf.transpose( weights1, name ="weights4") # tied weights
biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_hidden2), name="biases2")
biases3 = tf.Variable(tf.zeros(n_hidden3), name="biases3")
biases4 = tf.Variable(tf.zeros(n_outputs), name="biases4")
hidden1 = activation(tf.matmul(X, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
hidden3 = activation(tf.matmul(hidden2, weights3) + biases3)
outputs = tf.matmul(hidden3, weights4) + biases4
reconstruction_loss = tf.reduce_mean( tf.square( outputs - X))
reg_loss = regularizer(weights1) + regularizer(weights2)
loss = reconstruction_loss + reg_loss
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 4
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
n_batches = mnist.train.num_examples // batch_size
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch})
mse_train = reconstruction_loss.eval(feed_dict={X: X_batch},session=sess)
print("\r{}".format(epoch), "Train MSE:", mse_train)
saver.save(sess, "./my_model_tied.ckpt")
show_reconstructed_digits(X, outputs, "./my_model_tied.ckpt")
def train_autoencoder(X_train, n_neurons, n_epochs, batch_size, learning_rate = 0.01, l2_reg = 0.0005, activation_fn=tf.nn.elu):
graph = tf.Graph()
with graph.as_default():
n_inputs = X_train.shape[1]
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
with tf.contrib.framework.arg_scope(
[fully_connected],
activation_fn=activation_fn,
weights_initializer=tf.contrib.layers.variance_scaling_initializer(),
weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg)):
hidden = fully_connected(X, n_neurons, scope="hidden")
outputs = fully_connected(hidden, n_inputs, activation_fn=None, scope="outputs")
mse = tf.reduce_mean(tf.square(outputs - X))
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([mse] + reg_losses)
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
with tf.Session(graph=graph) as sess:
init.run()
for epoch in range(n_epochs):
n_batches = len(X_train) // batch_size
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
indices = rnd.permutation(len(X_train))[:batch_size]
X_batch = X_train[indices]
sess.run(training_op, feed_dict={X: X_batch})
mse_train = mse.eval(feed_dict={X: X_batch})
print("\r{}".format(epoch), "Train MSE:", mse_train)
params = dict([(var.name, var.eval()) for var in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)])
hidden_val = hidden.eval(feed_dict={X: X_train})
return hidden_val, params["hidden/weights:0"], params["hidden/biases:0"], params["outputs/weights:0"], params["outputs/biases:0"]
hidden_output, W1, b1, W4, b4 = train_autoencoder(mnist.train.images, n_neurons=300, n_epochs=4, batch_size=150)
_, W2, b2, W3, b3 = train_autoencoder(hidden_output, n_neurons=150, n_epochs=4, batch_size=150)
tf.reset_default_graph()
n_inputs = 28*28
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden1 = tf.nn.elu(tf.matmul(X, W1) + b1)
hidden2 = tf.nn.elu(tf.matmul(hidden1, W2) + b2)
hidden3 = tf.nn.elu(tf.matmul(hidden2, W3) + b3)
outputs = tf.matmul(hidden3, W4) + b4
show_reconstructed_digits(X, outputs)
tf.reset_default_graph()
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150 # codings
n_hidden3 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.0001
activation = tf.nn.elu
regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
weights1_init = initializer([n_inputs, n_hidden1])
weights2_init = initializer([n_hidden1, n_hidden2])
weights3_init = initializer([n_hidden2, n_hidden3])
weights4_init = initializer([n_hidden3, n_outputs])
weights1 = tf.Variable(weights1_init, dtype=tf.float32, name="weights1")
weights2 = tf.Variable(weights2_init, dtype=tf.float32, name="weights2")
weights3 = tf.Variable(weights3_init, dtype=tf.float32, name="weights3")
weights4 = tf.Variable(weights4_init, dtype=tf.float32, name="weights4")
biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_hidden2), name="biases2")
biases3 = tf.Variable(tf.zeros(n_hidden3), name="biases3")
biases4 = tf.Variable(tf.zeros(n_outputs), name="biases4")
hidden1 = activation(tf.matmul(X, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
hidden3 = activation(tf.matmul(hidden2, weights3) + biases3)
outputs = tf.matmul(hidden3, weights4) + biases4
with tf.name_scope("phase1"):
optimizer = tf.train.AdamOptimizer(learning_rate)
phase1_outputs = tf.matmul(hidden1, weights4) + biases4 # bypass hidden2 and hidden3
phase1_mse = tf.reduce_mean(tf.square(phase1_outputs - X))
phase1_reg_loss = regularizer(weights1) + regularizer(weights4)
phase1_loss = phase1_mse + phase1_reg_loss
phase1_training_op = optimizer.minimize(phase1_loss)
with tf.name_scope("phase2"):
optimizer = tf.train.AdamOptimizer(learning_rate)
phase2_mse = tf.reduce_mean(tf.square(hidden3 - hidden1))
phase2_reg_loss = regularizer(weights2) + regularizer(weights3)
phase2_loss = phase2_mse + phase2_reg_loss
phase2_training_op = optimizer.minimize(phase2_loss,
var_list=[weights2, biases2, weights3, biases3]) # freeze hidden1
init = tf.global_variables_initializer()
saver = tf.train.Saver()
training_ops = [phase1_training_op, phase2_training_op]
mses = [phase1_mse, phase2_mse]
n_epochs = [4, 4]
batch_sizes = [150, 150]
with tf.Session() as sess:
init.run()
for phase in range(2):
print("Training phase #{}".format(phase + 1))
for epoch in range(n_epochs[phase]):
n_batches = mnist.train.num_examples // batch_sizes[phase]
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
X_batch, y_batch = mnist.train.next_batch(batch_sizes[phase])
sess.run(training_ops[phase], feed_dict={X: X_batch})
mse_train = mses[phase].eval(feed_dict={X: X_batch})
print("\r{}".format(epoch), "Train MSE:", mse_train)
saver.save(sess, "./my_model_one_at_a_time.ckpt")
mse_test = mses[phase].eval(feed_dict={X: mnist.test.images})
print("Test MSE:", mse_test)
show_reconstructed_digits(X, outputs, "./my_model_one_at_a_time.ckpt")
with tf.Session() as sess:
saver.restore(sess, "./my_model_one_at_a_time.ckpt")
weights1_val = weights1.eval()
for i in range(5):
plt.subplot(1, 5, i + 1)
plot_image(weights1_val.T[i])
plt.show()
training_ops = [phase1_training_op, phase2_training_op, training_op]
mses = [phase1_mse, phase2_mse, mse]
n_epochs = [4, 4]
batch_sizes = [150, 150]
with tf.Session() as sess:
init.run()
for phase in range(2):
print("Training phase #{}".format(phase + 1))
if phase == 1:
mnist_hidden1 = hidden1.eval(feed_dict={X: mnist.train.images})
for epoch in range(n_epochs[phase]):
n_batches = mnist.train.num_examples // batch_sizes[phase]
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
if phase == 1:
indices = rnd.permutation(len(mnist_hidden1))
hidden1_batch = mnist_hidden1[indices[:batch_sizes[phase]]]
feed_dict = {hidden1: hidden1_batch}
sess.run(training_ops[phase], feed_dict=feed_dict)
else:
X_batch, y_batch = mnist.train.next_batch(batch_sizes[phase])
feed_dict = {X: X_batch}
sess.run(training_ops[phase], feed_dict=feed_dict)
mse_train = mses[phase].eval(feed_dict=feed_dict)
print("\r{}".format(epoch), "Train MSE:", mse_train)
saver.save(sess, "./my_model_cache_frozen.ckpt")
mse_test = mses[phase].eval(feed_dict={X: mnist.test.images})
print("Test MSE:", mse_test)
show_reconstructed_digits(X, outputs, "./my_model_cache_frozen.ckpt")
tf.reset_default_graph()
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150
n_outputs = 10
learning_rate = 0.01
l2_reg = 0.0005
activation = tf.nn.elu
regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
y = tf.placeholder(tf.int32, shape=[None])
weights1_init = initializer([n_inputs, n_hidden1])
weights2_init = initializer([n_hidden1, n_hidden2])
weights3_init = initializer([n_hidden2, n_outputs])
weights1 = tf.Variable(weights1_init, dtype=tf.float32, name="weights1")
weights2 = tf.Variable(weights2_init, dtype=tf.float32, name="weights2")
weights3 = tf.Variable(weights3_init, dtype=tf.float32, name="weights3")
biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_hidden2), name="biases2")
biases3 = tf.Variable(tf.zeros(n_outputs), name="biases3")
hidden1 = activation(tf.matmul(X, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
logits = tf.matmul(hidden2, weights3) + biases3
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
reg_loss = regularizer(weights1) + regularizer(weights2) + regularizer(weights3)
loss = cross_entropy + reg_loss
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
pretrain_saver = tf.train.Saver([weights1, weights2, biases1, biases2])
saver = tf.train.Saver()
n_epochs = 4
batch_size = 150
n_labeled_instances = 20000
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
n_batches = n_labeled_instances // batch_size
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
indices = rnd.permutation(n_labeled_instances)[:batch_size]
X_batch, y_batch = mnist.train.images[indices], mnist.train.labels[indices]
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
print("\r{}".format(epoch), "Train accuracy:", accuracy_val, end=" ")
saver.save(sess, "./my_model_supervised.ckpt")
accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print("Test accuracy:", accuracy_val)
n_epochs = 4
batch_size = 150
n_labeled_instances = 20000
#training_op = optimizer.minimize(loss, var_list=[weights3, biases3]) # Freeze layers 1 and 2 (optional)
with tf.Session() as sess:
init.run()
pretrain_saver.restore(sess, "./my_model_cache_frozen.ckpt")
for epoch in range(n_epochs):
n_batches = n_labeled_instances // batch_size
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
indices = rnd.permutation(n_labeled_instances)[:batch_size]
X_batch, y_batch = mnist.train.images[indices], mnist.train.labels[indices]
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
print("\r{}".format(epoch), "Train accuracy:", accuracy_val, end="\t")
saver.save(sess, "./my_model_supervised_pretrained.ckpt")
accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print("Test accuracy:", accuracy_val)
import math
tf.reset_default_graph()
from tensorflow.contrib.layers import dropout
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150 # codings
n_hidden3 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.00001
keep_prob = 0.7
activation = tf.nn.elu
regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
is_training = tf.placeholder_with_default(False, shape=(), name='is_training')
X_noisy = tf.cond(is_training,
lambda: X + tf.random_normal(shape=tf.shape(X),mean=0,stddev=1/math.sqrt(n_hidden1)), ## Gaussian noise
lambda: X)
weights1_init = initializer([n_inputs, n_hidden1])
weights2_init = initializer([n_hidden1, n_hidden2])
weights1 = tf.Variable(weights1_init, dtype=tf.float32, name="weights1")
weights2 = tf.Variable(weights2_init, dtype=tf.float32, name="weights2")
weights3 = tf.transpose(weights2, name="weights3") # tied weights
weights4 = tf.transpose(weights1, name="weights4") # tied weights
biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_hidden2), name="biases2")
biases3 = tf.Variable(tf.zeros(n_hidden3), name="biases3")
biases4 = tf.Variable(tf.zeros(n_outputs), name="biases4")
hidden1 = activation(tf.matmul(X_noisy, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
hidden3 = activation(tf.matmul(hidden2, weights3) + biases3)
outputs = tf.matmul(hidden3, weights4) + biases4
optimizer = tf.train.AdamOptimizer(learning_rate)
mse = tf.reduce_mean(tf.square(outputs - X))
reg_loss = regularizer(weights1) + regularizer(weights2)
loss = mse + reg_loss
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
n_batches = mnist.train.num_examples // batch_size
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, is_training: True})
mse_train = mse.eval(feed_dict={X: X_batch, is_training: False})
print("\r{}".format(epoch), "Train MSE:", mse_train)
saver.save(sess, "./my_model_stacked_denoising.ckpt")
show_reconstructed_digits(X, outputs, "./my_model_stacked_denoising.ckpt")
tf.reset_default_graph()
from tensorflow.contrib.layers import dropout
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150 # codings
n_hidden3 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.00001
keep_prob = 0.7
activation = tf.nn.elu
regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
is_training = tf.placeholder_with_default(False, shape=(), name='is_training')
X_drop = dropout(X, keep_prob, is_training=is_training)
weights1_init = initializer([n_inputs, n_hidden1])
weights2_init = initializer([n_hidden1, n_hidden2])
weights1 = tf.Variable(weights1_init, dtype=tf.float32, name="weights1")
weights2 = tf.Variable(weights2_init, dtype=tf.float32, name="weights2")
weights3 = tf.transpose(weights2, name="weights3") # tied weights
weights4 = tf.transpose(weights1, name="weights4") # tied weights
biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_hidden2), name="biases2")
biases3 = tf.Variable(tf.zeros(n_hidden3), name="biases3")
biases4 = tf.Variable(tf.zeros(n_outputs), name="biases4")
hidden1 = activation(tf.matmul(X_drop, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
hidden3 = activation(tf.matmul(hidden2, weights3) + biases3)
outputs = tf.matmul(hidden3, weights4) + biases4
optimizer = tf.train.AdamOptimizer(learning_rate)
mse = tf.reduce_mean(tf.square(outputs - X))
reg_loss = regularizer(weights1) + regularizer(weights2)
loss = mse + reg_loss
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
n_batches = mnist.train.num_examples // batch_size
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, is_training: True})
mse_train = mse.eval(feed_dict={X: X_batch, is_training: False})
print("\r{}".format(epoch), "Train MSE:", mse_train)
saver.save(sess, "./my_model_stacked_denoising.ckpt")
show_reconstructed_digits(X, outputs, "./my_model_stacked_denoising.ckpt")
p = 0.1
q = np.linspace(0, 1, 500)
kl_div = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
mse = (p - q)**2
plt.plot([p, p], [0, 0.3], "k:")
plt.text(0.05, 0.32, "Target\nsparsity", fontsize=14)
plt.plot(q, kl_div, "b-", label="KL divergence")
plt.plot(q, mse, "r--", label="MSE")
plt.legend(loc="upper left")
plt.xlabel("Actual sparsity")
plt.ylabel("Cost", rotation=0)
plt.axis([0, 1, 0, 0.95])
def kl_divergence(p, q):
"""Kullback Leibler divergence"""
return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))
tf.reset_default_graph()
n_inputs = 28 * 28
n_hidden1 = 1000 # sparse codings
n_outputs = n_inputs
learning_rate = 0.01
sparsity_target = 0.1
sparsity_weight = 0.2
#activation = tf.nn.softplus # soft variant of ReLU
activation = tf.nn.sigmoid
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
weights1_init = initializer([n_inputs, n_hidden1])
weights2_init = initializer([n_hidden1, n_outputs])
weights1 = tf.Variable(weights1_init, dtype=tf.float32, name="weights1")
weights2 = tf.Variable(weights2_init, dtype=tf.float32, name="weights2")
biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_outputs), name="biases2")
hidden1 = activation(tf.matmul(X, weights1) + biases1)
outputs = tf.matmul(hidden1, weights2) + biases2
optimizer = tf.train.AdamOptimizer(learning_rate)
mse = tf.reduce_mean(tf.square(outputs - X))
hidden1_mean = tf.reduce_mean(hidden1, axis=0) # batch mean
sparsity_loss = tf.reduce_sum(kl_divergence(sparsity_target, hidden1_mean))
loss = mse + sparsity_weight * sparsity_loss
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 100
batch_size = 1000
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
n_batches = mnist.train.num_examples // batch_size
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch})
mse_val, sparsity_loss_val, loss_val = sess.run([mse, sparsity_loss, loss], feed_dict={X: X_batch})
print("\r{}".format(epoch), "Train MSE:", mse_val, "\tSparsity loss:", sparsity_loss_val, "\tTotal loss:", loss_val)
saver.save(sess, "./my_model_sparse.ckpt")
show_reconstructed_digits(X, outputs, "./my_model_sparse.ckpt")
tf.reset_default_graph()
n_inputs = 28*28
n_hidden1 = 500
n_hidden2 = 500
n_hidden3 = 20 # codings
n_hidden4 = n_hidden2
n_hidden5 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.001
activation = tf.nn.elu
initializer = tf.contrib.layers.variance_scaling_initializer(mode="FAN_AVG", uniform=True)
X = tf.placeholder(tf.float32, [None, n_inputs])
weights1 = tf.Variable(initializer([n_inputs, n_hidden1]))
weights2 = tf.Variable(initializer([n_hidden1, n_hidden2]))
weights3_mean = tf.Variable(initializer([n_hidden2, n_hidden3]))
weights3_log_sigma = tf.Variable(initializer([n_hidden2, n_hidden3]))
weights4 = tf.Variable(initializer([n_hidden3, n_hidden4]))
weights5 = tf.Variable(initializer([n_hidden4, n_hidden5]))
weights6 = tf.Variable(initializer([n_hidden5, n_inputs]))
biases1 = tf.Variable(tf.zeros([n_hidden1], dtype=tf.float32))
biases2 = tf.Variable(tf.zeros([n_hidden2], dtype=tf.float32))
biases3_mean = tf.Variable(tf.zeros([n_hidden3], dtype=tf.float32))
biases3_log_sigma = tf.Variable(tf.zeros([n_hidden3], dtype=tf.float32))
biases4 = tf.Variable(tf.zeros([n_hidden4], dtype=tf.float32))
biases5 = tf.Variable(tf.zeros([n_hidden5], dtype=tf.float32))
biases6 = tf.Variable(tf.zeros([n_inputs], dtype=tf.float32))
hidden1 = activation(tf.matmul(X, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
hidden3_mean = tf.matmul(hidden2, weights3_mean) + biases3_mean
hidden3_log_sigma = tf.matmul(hidden2, weights3_log_sigma) + biases3_log_sigma
noise = tf.random_normal(tf.shape(hidden3_log_sigma), dtype=tf.float32)
hidden3 = hidden3_mean + tf.sqrt(tf.exp(hidden3_log_sigma)) * noise
hidden4 = activation(tf.matmul(hidden3, weights4) + biases4)
hidden5 = activation(tf.matmul(hidden4, weights5) + biases5)
logits = tf.matmul(hidden5, weights6) + biases6
outputs = tf.sigmoid(logits)
reconstruction_loss = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=X, logits=logits))
latent_loss = 0.5 * tf.reduce_sum(tf.exp(hidden3_log_sigma) + tf.square(hidden3_mean) - 1 - hidden3_log_sigma)
cost = reconstruction_loss + latent_loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(cost)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
tf.reset_default_graph()
n_inputs = 28*28
n_hidden1 = 500
n_hidden2 = 500
n_hidden3 = 20 # codings
n_hidden4 = n_hidden2
n_hidden5 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.001
initializer = tf.contrib.layers.variance_scaling_initializer()
with tf.contrib.framework.arg_scope([fully_connected],
activation_fn=tf.nn.elu,
weights_initializer=initializer):
X = tf.placeholder(tf.float32, [None, n_inputs])
hidden1 = fully_connected(X, n_hidden1)
hidden2 = fully_connected(hidden1, n_hidden2)
hidden3_mean = fully_connected(hidden2, n_hidden3, activation_fn=None)
hidden3_gamma = fully_connected(hidden2, n_hidden3, activation_fn=None)
noise = tf.random_normal(tf.shape(hidden3_gamma), dtype=tf.float32)
hidden3 = hidden3_mean + tf.exp(0.5 * hidden3_gamma) * noise
hidden4 = fully_connected(hidden3, n_hidden4)
hidden5 = fully_connected(hidden4, n_hidden5)
logits = fully_connected(hidden5, n_outputs, activation_fn=None)
outputs = tf.sigmoid(logits)
reconstruction_loss = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=X, logits=logits))
latent_loss = 0.5 * tf.reduce_sum(tf.exp(hidden3_gamma) + tf.square(hidden3_mean) - 1 - hidden3_gamma)
cost = reconstruction_loss + latent_loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(cost)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 50
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
n_batches = mnist.train.num_examples // batch_size
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch})
cost_val, reconstruction_loss_val, latent_loss_val = sess.run([cost, reconstruction_loss, latent_loss], feed_dict={X: X_batch})
print("\r{}".format(epoch), "Train cost:", cost_val, "\tReconstruction loss:", reconstruction_loss_val, "\tLatent loss:", latent_loss_val)
saver.save(sess, "./my_model_variational.ckpt")
n_digits = 3
X_test, y_test = mnist.test.next_batch(batch_size)
codings = hidden3
with tf.Session() as sess:
saver.restore(sess, "./my_model_variational.ckpt")
codings_val = codings.eval(feed_dict={X: X_test})
with tf.Session() as sess:
saver.restore(sess, "./my_model_variational.ckpt")
outputs_val = outputs.eval(feed_dict={codings: codings_val})
fig = plt.figure(figsize=(8, 2.5 * n_digits))
for iteration in range(n_digits):
plt.subplot(n_digits, 2, 1 + 2 * iteration)
plot_image(X_test[iteration])
plt.subplot(n_digits, 2, 2 + 2 * iteration)
plot_image(outputs_val[iteration])
n_rows = 6
n_cols = 10
n_digits = n_rows * n_cols
codings_rnd = np.random.normal(size=[n_digits, n_hidden3])
with tf.Session() as sess:
saver.restore(sess, "./my_model_variational.ckpt")
outputs_val = outputs.eval(feed_dict={codings: codings_rnd})
def plot_multiple_images(images, n_rows, n_cols, pad=2):
images = images - images.min() # make the minimum == 0, so the padding looks white
w,h = images.shape[1:]
image = np.zeros(((w+pad)*n_rows+pad, (h+pad)*n_cols+pad))
for y in range(n_rows):
for x in range(n_cols):
image[(y*(h+pad)+pad):(y*(h+pad)+pad+h),(x*(w+pad)+pad):(x*(w+pad)+pad+w)] = images[y*n_cols+x]
plt.imshow(image, cmap="Greys", interpolation="nearest")
plt.axis("off")
plot_multiple_images(outputs_val.reshape(-1, 28, 28), n_rows, n_cols)
plt.show()
n_iterations = 3
n_digits = 6
codings_rnd = np.random.normal(size=[n_digits, n_hidden3])
with tf.Session() as sess:
saver.restore(sess, "./my_model_variational.ckpt")
target_codings = np.roll(codings_rnd, -1, axis=0)
for iteration in range(n_iterations + 1):
codings_interpolate = codings_rnd + (target_codings - codings_rnd) * iteration / n_iterations
outputs_val = outputs.eval(feed_dict={codings: codings_interpolate})
plt.figure(figsize=(11, 1.5*n_iterations))
for digit_index in range(n_digits):
plt.subplot(1, n_digits, digit_index + 1)
plot_image(outputs_val[digit_index])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Autoencoder
Step2: Stacked Autoencoders on MNIST
Step3: Train all layers at once
Step4: Now let's train it! Note that we don't feed target values (y_batch is not used). This is unsupervised training.
Step5: This function loads the model, evaluates it on the test set (it measures the reconstruction error), then it displays the original image and its reconstruction
Step6: Tying Weights
Step7: Highlights
Step8: Now let's train two Autoencoders. The first one is trained on the training data, and the second is trained on the previous Autoencoder's hidden layer output
Step9: Finally, we can create a Stacked Autoencoder by simply reusing the weights and biases from the Autoencoders we just trained
Step10: Training one Autoencoder at a time in a single graph
Step11: Visualizing Features
Step12: Cache the frozen layer outputs
Step13: Unsupervised Pretraining Using Stacked Autoencoders
Step14: Regular training (without pretraining)
Step15: Now reusing the first two layers of the autoencoder we pretrained
Step16: One of the triggers of the current Deep Learning tsunami is the discovery in (Hinton et al., 2006) that deep neural networks can be pretrained in an unsupervised fashion. They used restricted Boltzmann machines for that, but (Bengio et al., 2006) showed that autoencoders worked just as well. There is nothing special about the TensorFlow implementation
Step17: Note
Step19: Sparse Autoencoders (a short numeric check of the KL sparsity penalty follows this step list)
Step20: Variational Autoencoders
Step21: Encode
Step22: Decode
Step23: Let's plot the reconstructions
Step24: Generate digits
Step25: Interpolate digits
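As a small numeric aside to Step19 (not one of the notebook's original cells), the KL-divergence sparsity penalty used in the sparse autoencoder can be checked by hand for a few batch-mean activations; the values below are illustrative only.

import numpy as np

def kl_divergence_np(p, q):
    # KL divergence between Bernoulli distributions with means p (target) and q (actual)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

sparsity_target = 0.1
batch_means = np.array([0.05, 0.1, 0.3])  # example mean activations of three hidden units
print(kl_divergence_np(sparsity_target, batch_means))  # roughly [0.021, 0.0, 0.116]; zero only where q == p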
|
3,314
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import, division, print_function
import glob
import imageio
import os
import PIL
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers
from IPython import display
np.random.seed(1)
tf.random.set_seed(1)
BATCH_SIZE = 128
BUFFER_SIZE = 60000
EPOCHS = 60
LR = 1e-2
EMBED_DIM = 64 # intermediate_dim
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images)
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
train_dataset = train_dataset.prefetch(BATCH_SIZE*4)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32')
test_images = (test_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
#TODO 1.
def make_encoder(embed_dim):
model = tf.keras.Sequential(name="encoder")
# TODO: Your code goes here.
assert model.output_shape == (None, embed_dim)
return model
#TODO 1.
def make_decoder(embed_dim):
model = tf.keras.Sequential(name="decoder")
# TODO: Your code goes here.
assert model.output_shape == (None, 28, 28, 1)
return model
ae_model = tf.keras.models.Sequential([make_encoder(EMBED_DIM), make_decoder(EMBED_DIM)])
ae_model.summary()
make_encoder(EMBED_DIM).summary()
make_decoder(EMBED_DIM).summary()
#TODO 2.
def loss(model, original):
reconstruction_error = # TODO: Your code goes here.
return reconstruction_error
optimizer = tf.keras.optimizers.SGD(lr=LR)
checkpoint_dir = "./ae_training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=ae_model)
#TODO 3.
@tf.function
def train_step(images):
with tf.GradientTape() as tape:
ae_gradients = # TODO: Your code goes here.
gradient_variables = # TODO: Your code goes here.
# TODO: Your code goes here.
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as we go
display.clear_output(wait=True)
generate_and_save_images(ae_model,
epoch + 1,
test_images[:16, :, :, :])
# Save the model every 5 epochs
if (epoch + 1) % 5 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(
epoch + 1, time.time()-start))
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(ae_model,
epochs,
test_images[:16, :, :, :])
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
pixels = predictions[i, :, :] * 127.5 + 127.5
pixels = np.array(pixels, dtype='float')
pixels = pixels.reshape((28,28))
plt.imshow(pixels, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
generate_and_save_images(ae_model, 4, test_images[:16, :, :, :])
#TODO 4.
# TODO: Your code goes here.
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open('./ae_images/image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
anim_file = 'autoencoder.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('./ae_images/image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import IPython
if IPython.version_info > (6,2,0,''):
display.Image(filename=anim_file)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we'll define some of the environment variables we'll use in this notebook. Note that we are setting the EMBED_DIM to be 64. This is the dimension of the latent space for our autoencoder.
Step2: Load and prepare the dataset
Step3: Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.
Step4: Create the encoder and decoder models
Step5: The Decoder
Step6: Finally, we stitch the encoder and decoder models together to create our autoencoder.
Step7: Using .summary() we can have a high-level summary of the full autoencoder model as well as the individual encoder and decoder. Note how the shapes of the tensors mirror each other as data is passed through the encoder and then the decoder.
Step8: Next, we define the loss for our autoencoder model. The loss we will use is the reconstruction error. This loss is similar to the MSE loss we commonly use for regression. Here we are applying this error pixel-wise to compare the original MNIST image and the image reconstructed from the decoder. (One possible way to fill in the notebook's TODOs is sketched after this step list.)
Step9: Optimizer for the autoencoder
Step10: Save checkpoints
Step11: Define the training loop
Step12: We use the train_step function above to define training of our autoencoder. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.
Step13: Generate and save images.
Step14: Let's see how our model performs before any training. We'll take as input the first 16 digits of the MNIST test set. Right now they just look like random noise.
Step15: Train the model
Step16: Create a GIF
|
3,315
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated-nightly
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
federated_float_on_clients = tff.type_at_clients(tf.float32)
str(federated_float_on_clients.member)
str(federated_float_on_clients.placement)
str(federated_float_on_clients)
federated_float_on_clients.all_equal
str(tff.type_at_clients(tf.float32, all_equal=True))
simple_regression_model_type = (
tff.StructType([('a', tf.float32), ('b', tf.float32)]))
str(simple_regression_model_type)
str(tff.type_at_clients(
simple_regression_model_type, all_equal=True))
@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature(sensor_readings):
return tff.federated_mean(sensor_readings)
str(get_average_temperature.type_signature)
get_average_temperature([68.5, 70.3, 69.8])
@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature(sensor_readings):
print ('Getting traced, the argument is "{}".'.format(
type(sensor_readings).__name__))
return tff.federated_mean(sensor_readings)
@tff.tf_computation(tf.float32)
def add_half(x):
return tf.add(x, 0.5)
str(add_half.type_signature)
@tff.federated_computation(tff.type_at_clients(tf.float32))
def add_half_on_clients(x):
return tff.federated_map(add_half, x)
str(add_half_on_clients.type_signature)
add_half_on_clients([1.0, 3.0, 2.0])
try:
# Eager mode
constant_10 = tf.constant(10.)
@tff.tf_computation(tf.float32)
def add_ten(x):
return x + constant_10
except Exception as err:
print (err)
def get_constant_10():
return tf.constant(10.)
@tff.tf_computation(tf.float32)
def add_ten(x):
return x + get_constant_10()
add_ten(5.0)
float32_sequence = tff.SequenceType(tf.float32)
str(float32_sequence)
@tff.tf_computation(tff.SequenceType(tf.float32))
def get_local_temperature_average(local_temperatures):
sum_and_count = (
local_temperatures.reduce((0.0, 0), lambda x, y: (x[0] + y, x[1] + 1)))
return sum_and_count[0] / tf.cast(sum_and_count[1], tf.float32)
str(get_local_temperature_average.type_signature)
@tff.tf_computation(tff.SequenceType(tf.int32))
def foo(x):
return x.reduce(np.int32(0), lambda x, y: x + y)
foo([1, 2, 3])
get_local_temperature_average([68.5, 70.3, 69.8])
@tff.tf_computation(tff.SequenceType(collections.OrderedDict([('A', tf.int32), ('B', tf.int32)])))
def foo(ds):
print('element_structure = {}'.format(ds.element_spec))
return ds.reduce(np.int32(0), lambda total, x: total + x['A'] * x['B'])
str(foo.type_signature)
foo([{'A': 2, 'B': 3}, {'A': 4, 'B': 5}])
@tff.federated_computation(
tff.type_at_clients(tff.SequenceType(tf.float32)))
def get_global_temperature_average(sensor_readings):
return tff.federated_mean(
tff.federated_map(get_local_temperature_average, sensor_readings))
str(get_global_temperature_average.type_signature)
get_global_temperature_average([[68.0, 70.0], [71.0], [68.0, 72.0, 70.0]])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Custom Federated Algorithms, Part 1: Introduction to the Federated Core
Step2: Federated data
Step3: More generally, a federated type in TFF is defined by specifying the type T of its member constituents (the data items that reside on individual devices) and the group G of devices hosting federated values of this type (plus a third, optional piece of information we will mention shortly). We refer to the group G of devices hosting a federated value as the value's placement. Thus, tff.CLIENTS is an example of a placement.
Step4: A federated type with member constituents T and placement G can be represented compactly as {T}@G, as shown below.
Step5: The curly braces {} in this concise notation are a reminder that the member constituents (the data items on different devices) may differ, as you would expect of temperature sensor readings, so the clients as a group jointly host a multi-set of items of type T that together constitute the federated value.
Step6: A federated type with placement G in which all the T-typed member constituents are known to be equal can be compactly represented as T@G (as opposed to {T}@G; that is, the curly braces are dropped to indicate that the multi-set of member constituents consists of a single item).
Step7: One example of such a federated value that can arise in practice is a hyperparameter (such as a learning rate or a clipping norm) that the server has broadcast to a group of devices participating in federated training.
Step8: Note that although we only specified the dtype above, non-scalar types are also supported. In the code above, tf.float32 is shorthand for the more general tff.TensorType(dtype=tf.float32, shape=[]).
Step9: By symmetry with the federated float above, we refer to this type as a federated tuple. More generally, we will often use the term federated XYZ to refer to a federated value whose member constituents are XYZ-like. Thus, we will talk about federated tuples, federated sequences, federated models, and so on.
Step10: Looking at the code above, you may be asking: doesn't TensorFlow already have decorator constructs, such as tf.function, for defining composable units? If so, why introduce yet another one, and how does it differ?
Step11: The type signature tells us that the computation accepts a collection of different sensor readings on client devices and returns a single average on the server.
Step12: When running such a computation in simulation mode, you act as an external observer with a system-wide view, able to supply inputs and consume outputs anywhere in the network, which is exactly what happened here: you supplied the client values as input and consumed the server result.
Step13: You can think of the Python code that defines a federated computation as analogous to Python code that builds a TensorFlow graph in a non-eager context (if you are not familiar with non-eager TensorFlow, imagine your Python code defining a graph of operations to be executed later, without actually running them immediately). The non-eager graph-building code in TensorFlow is Python, but the TensorFlow graph constructed by that code is platform-independent and serializable.
Step14: Seeing this again, you may wonder why we define another decorator, tff.tf_computation, rather than simply using an existing mechanism such as tf.function. Unlike in the previous section, here we are dealing with an ordinary block of TensorFlow code.
Step15: Note that this type signature has no placements. TensorFlow computations cannot consume or return federated types.
Step16: Executing TensorFlow computations
Step17: It is also worth noting that invoking the add_half_on_clients computation in this way simulates a distributed process: the data is consumed on the clients and returned on the clients. Indeed, this computation has each client perform a local action. There is no tff.SERVER explicitly mentioned in this system (even though orchestrating such processing in practice might involve one). Think of a computation defined this way as conceptually analogous to the Map stage of MapReduce.
Step18: The code above fails because constant_10 is constructed outside of the graph that tff.tf_computation builds internally, inside the body of add_ten, during serialization.
Step19: Note that the serialization mechanisms in TensorFlow are still evolving, and we expect the details of how TFF serializes computations to evolve as well.
Step20: Suppose that in the temperature sensor example, each sensor holds not just one temperature reading but several. Here is how you can define a TFF computation in TensorFlow that computes the average of the temperatures in a single local dataset using the tf.data.Dataset.reduce operator.
Step21: In the body of a method decorated with tff.tf_computation, formal parameters of a TFF sequence type are simply represented as objects that behave like tf.data.Dataset (i.e., they support the same properties and methods; they are currently not implemented as subclasses of that type, and this may change as support for datasets in TensorFlow evolves).
Step22: Keep in mind that, unlike ordinary tf.data.Datasets, these dataset-like objects are placeholders. They do not contain any elements, since they represent abstract sequence-typed parameters that are bound to concrete data only when used in a concrete context. Support for abstractly defined placeholder datasets is still somewhat limited, and in the early days of TFF you may run into certain restrictions, but we will not need to worry about them in this tutorial (see the documentation pages for details).
Step23: Like other TFF types, the sequence defined above can use the tff.StructType constructor to define nested structures. For example, here is how to declare a computation that accepts a sequence of pairs A, B and returns the sum of their products. We include a tracing statement in the body of the computation so that you can see how the TFF type signature translates into the dataset's output_types and output_shapes.
Step24: Although using tf.data.Datasets as formal parameters works in simple scenarios such as those in this tutorial, support for them is still limited and evolving.
Step25: Note that this is not simply the average of all local temperature readings from all clients, since that would require weighting each client's contribution by the number of readings it maintains locally. We leave updating the code above as an exercise for the reader (a hedged sketch of one possible solution follows this step list); the tff.federated_mean operator accepts the weight as an optional second argument (expected to be a federated float).
Step26: Here is how we can execute it locally in Python on a sample of data. Note that the input is now supplied as a list of lists: the outer list iterates over the devices in the group represented by tff.CLIENTS, and the inner lists iterate over the elements in each device's local sequence.
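As referenced in Step25, a hedged sketch of the weighted-average exercise (one possible solution only, reusing get_local_temperature_average from the code above; get_local_count is a hypothetical helper introduced here):

import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

@tff.tf_computation(tff.SequenceType(tf.float32))
def get_local_count(local_temperatures):
    # Count the readings held by one client and cast to float for use as a weight.
    return tf.cast(local_temperatures.reduce(np.int32(0), lambda count, _: count + 1), tf.float32)

@tff.federated_computation(tff.type_at_clients(tff.SequenceType(tf.float32)))
def get_weighted_global_average(sensor_readings):
    # Weight each client's local average by its number of readings.
    return tff.federated_mean(
        tff.federated_map(get_local_temperature_average, sensor_readings),
        tff.federated_map(get_local_count, sensor_readings))

get_weighted_global_average([[68.0, 70.0], [71.0], [68.0, 72.0, 70.0]])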
|
3,316
|
<ASSISTANT_TASK:>
Python Code:
symbols = '$#%^&'
[ord(s) for s in symbols]
tuple(ord(s) for s in symbols)
(ord(s) for s in symbols)
for x in (ord(s) for s in symbols):
print(x)
import array
array.array('I', (ord(s) for s in symbols))
colors = ['black', 'white']
sizes = ['S', 'M', 'L']
for tshirt in ((c, s) for c in colors for s in sizes):
print(tshirt)
for tshirt in ('%s %s' % (c, s) for c in colors for s in sizes):
print(tshirt)
lax_coordinates = (33.9425, -118.408056)
city, year, pop, chg, area = ('Tokyo', 2003, 32450, 0.66, 8014)
traveler_ids = [('USA', '31195855'), ('BRA', 'CE342567'), ('ESP', 'XDA205856')]
for passport in sorted(traveler_ids):
print('%s/%s' % passport)
for country, _ in traveler_ids:
print(country)
import os
_, filename = os.path.split('/home/kyle/afile.txt')
print(filename)
a, b, *rest = range(5)
a, b, rest
a, b, *rest = range(3)
a, b, rest
a, b, *rest = range(2)
a, b, rest
a, *body, c, d = range(5)
a, body, c, d
*head, b, c, d = range(5)
head, b, c, d
metro_areas = [('Tokyo','JP',36.933,(35.689722,139.691667)),
('Delhi NCR', 'IN', 21.935, (28.613889, 77.208889)),
('Mexico City', 'MX', 20.142, (19.433333, -99.133333)),
('New York-Newark', 'US', 20.104, (40.808611, -74.020386)),
('Sao Paulo', 'BR', 19.649, (-23.547778, -46.635833)),
]
print('{:15} | {:^9} | {:^9}'.format('', 'lat.', 'long.'))
fmt = '{:15} | {:9.4f} | {:9.4f}'
fmt
for name, cc, pop, (latitude, longitude) in metro_areas:
if longitude <= 0:
print(fmt.format(name, latitude, longitude))
from collections import namedtuple
City = namedtuple('City', 'name country population coordinates')
tokyo = City('Tokyo', 'JP', 36.933, (35.689722, 139.691667))
tokyo
tokyo.population
tokyo.name
tokyo.coordinates
tokyo[1]
# a few useful methods on namedtuple
City._fields
LatLong = namedtuple('LatLong', 'lat long')
delhi_data = ('Delhi NCR', 'IN', 21.935, LatLong(28.613889, 77.208889))
delhi = City._make(delhi_data) # instantiate a named tuple from an iterable
delhi._asdict()
for key, value in delhi._asdict().items():
print(key + ':', value)
# why slices and range exclude the last item
l = [10,20,30,40,50,60]
l[:2]
l[2:]
# slice objects
s = 'bicycle'
s[::3]
s[::-1]
s[::-2]
invoice = """
0.....6.................................40........52...55........
1909  Pimoroni PiBrella                       $17.50  3    $52.50
1489  6mm Tactile Switch x20                   $4.95  2     $9.90
1510  Panavise Jr. - PV-201                   $28.00  1    $28.00
1601  PiTFT Mini Kit 320x240                  $34.95  1    $34.95
"""
SKU = slice(0,6)
DESCRIPTION = slice(6, 40)
UNIT_PRICE = slice(40, 52)
QUANTITY = slice(52, 55)
ITEM_TOTAL = slice(55, None)
line_items = invoice.split('\n')[2:]
for item in line_items:
print(item[UNIT_PRICE], item[DESCRIPTION])
l = list(range(10))
l
l[2:5] = [20, 30]
l
del l[5:7]
l
l[3::2] = [11, 22]
l
l[2:5] = 100
l
l[2:5] = [100]
l
l = [1, 2, 3]
l * 5
5 * 'abcd'
board = [['_'] *3 for i in range(3)]
board
board[1][2] = 'X'
board
l = [1, 2, 3]
id(l)
l *= 2
id(l) # same list
t=(1,2,3)
id(t)
t *= 2
id(t) # new tuple was created
import dis
dis.dis('s[a] += b')
fruits = ['grape', 'raspberry', 'apple', 'banana']
sorted(fruits)
fruits
sorted(fruits, reverse=True)
sorted(fruits, key=len)
sorted(fruits, key=len, reverse=True)
fruits
fruits.sort() # note that sort() returns None
fruits
import bisect
breakpoints=[60, 70, 80, 90]
grades='FDCBA'
bisect.bisect(breakpoints, 99)
bisect.bisect(breakpoints, 59)
bisect.bisect(breakpoints, 75)
def grade(score, breakpoints=[60, 70, 80, 90], grades='FDCBA'):
i = bisect.bisect(breakpoints, score)
return grades[i]
[grade(score) for score in [33, 99, 77, 70, 89, 90, 100]]
grade(4)
grade(93)
import bisect
import random
SIZE = 7
random.seed(1729)
my_list = []
for i in range(SIZE):
new_item = random.randrange(SIZE*2)
bisect.insort(my_list, new_item)
print('%2d ->' % new_item, my_list)
from array import array
from random import random
floats = array('d', (random() for i in range(10**7)))
floats[-1]
fp = open('floats.bin', 'wb')
floats.tofile(fp)
fp.close()
floats2 = array('d')
fp = open('floats.bin', 'rb')
floats2.fromfile(fp, 10**7)
fp.close()
floats2[-1]
floats2 == floats
# Changing the value of an array item by poking one of its bytes
import array
numbers = array.array('h', [-2, -1, 0, 1, 2])
memv = memoryview(numbers)
len(memv)
memv[0]
memv_oct = memv.cast('B') # ch type of array to unsigned char
memv_oct.tolist()
memv_oct[5] = 4
numbers
import numpy
a = numpy.arange(12)
a
type(a)
a.shape
a.shape = 3, 4 # turn a into three units of 4
a
a[2]
a[2, 1]
a[:, 1]
a.transpose()
from collections import deque
dq = deque(range(10), maxlen=10)
dq
dq.rotate(3)
dq
dq.rotate(-4)
dq
dq.appendleft(-1)
dq
dq.extend([11, 22, 33])
dq
dq.extendleft([10, 20, 30, 40])
dq
# but a workaround with `key`
l = [28, 14, '28', 5, '9', '1', 0, 6, '23', 19]
sorted(l, key=int)
sorted(l, key=str)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tuples as Records
Step2: Tuple Unpacking
Step3: Named tuples
Step5: Slicing
Step6: Assigning to Slices
Step7: Using + and * with Sequences
Step8: Building Lists of Lists
Step9: Augmented Assignment with Sequences
Step10: A += Assignment Puzzler (a short illustration follows this step list)
Step11: • Putting mutable items in tuples is not a good idea.
Step12: Next
Step13: Inserting with bisect.insort
Step14: Arrays
Step15: To sort an array, use a = array.array(a.typecode, sorted(a)). To keep it sorted while adding to it, use bisect.insort.
Step16: NumPy and SciPy
Step17: Loading, saving, and operating
Step18: a hidden cost
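As a small illustration of Step10 and Step11 (not one of the notebook's own cells, but the classic example behind the dis.dis('s[a] += b') call above): augmented assignment on a mutable item inside a tuple both raises and succeeds.

t = (1, 2, [30, 40])
try:
    t[2] += [50, 60]   # the in-place extend succeeds, but assigning back into the tuple fails
except TypeError as err:
    print(err)         # 'tuple' object does not support item assignment
print(t)               # (1, 2, [30, 40, 50, 60]) -- the list was mutated anyway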
|
3,317
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
import seaborn as sns
import numpy as np
from scipy import stats, integrate
import matplotlib.pyplot as plt
import sklearn
from sklearn.model_selection import train_test_split
from sklearn import linear_model
os.getcwd()
a = pd.read_csv('per_scholas_data.csv', skiprows=range(1,2))
b = pd.read_csv('per_scholas_data2.csv')
c = pd.read_csv('per_scholas_data3.csv', skiprows=range(1,2))
a.head()
b.head()
c.head()
a.describe()
b.describe()
c.describe()
a.shape
b.shape
c.shape
merged_df = a.merge(b,on='Record ID').merge(c,on='Record ID')
merged_df.head()
merged_df.describe()
merged_df
labels = a.columns.values
labels
sum(a['Placed'])
a_num = a[['First Post Training Wage', 'Retained (Months)', 'Current Wage']]
a_num_na = a_num.fillna(0)
first_wage = a_num_na['First Post Training Wage']
retained = a_num_na['Retained (Months)']
current_wage = a_num_na['Current Wage']
sns.jointplot(x=first_wage, y=current_wage)
plt.show()
sns.jointplot(x=first_wage, y=retained, kind="kde");
plt.show()
x = first_wage
y = current_wage
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
lr = linear_model.LinearRegression()
X_train, X_test, y_train, y_test = X_train.reshape(-1,1), X_test.reshape(-1, 1), y_train.reshape(-1, 1), y_test.reshape(-1, 1)
y_train.shape
lr.fit(X_train, y_train)
lr.score(X_test, y_test)
plt.scatter(X_test, y_test, color='black')
plt.plot(X_test, lr.predict(X_test), color='blue',
linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The merged dataframe only contains 9 records...it may not be very useful. Go back to the first data frame.
Step2: There seems to be a good correlation between first post-training wage and current wage (quantified just after this step list)
Step3: Not much of a correlation between first wage and retained duration
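As referenced in Step2, a short hedged follow-up (not part of the original notebook) that quantifies the visual impression with a Pearson coefficient; it assumes first_wage and current_wage from the cells above and the scipy.stats import already present.

r, p_value = stats.pearsonr(first_wage, current_wage)
print('Pearson r = {:.2f}, p = {:.3g}'.format(r, p_value))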
|
3,318
|
<ASSISTANT_TASK:>
Python Code:
%reset -f
%matplotlib notebook
%load_ext autoreload
%autoreload 1
%aimport functions
import numpy as np
import copy
import acoustics
from functions import *
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
mpl.rcParams['lines.linewidth']=0.5
# uncomment next line to connect a qtconsole to the same session
# %qtconsole
def defIntervals(tp):
Intervals = {
'full': (-np.inf,np.inf)
#'vorbei': (tp.min(),tp.max()),
}
t = tp.reshape(len(tp)//4,4).mean(1)
for n, (t1,t2) in enumerate(zip(t[:-1],t[1:])):
Intervals[int(n+1)] = (t1,t2)
return Intervals
%psource cut_third_oct_spectrum
%psource level_from_octBank
%%capture c
import json
passby = json.load(open('Tabellen\passby.json','r+'))
fill_passby_with_signals(passby)
passbyID = '5'
pb = copy.deepcopy({k:passby[passbyID][k] for k in ['Q1','Q4']})
#
for k, v in pb.items():
v['signals']['bandpass'] = v['signals']['MIC'].bandpass(20,20000)
v['signals']['A'] = v['signals']['MIC'].weigh()
v['tPeaks'] = detect_weel_times(v['signals']['LS'])
v.update( {k:v for k,v in zip(['vAv', 'dv', 'ti_vi'], train_speed(v['tPeaks'], axleDistance=2))} )
# Intervalle
v['intervals'] = defIntervals(v['tPeaks'])
f, ax = plt.subplots(nrows=2,sharey=True)
ax2 = []
for n,(k,v) in enumerate(pb.items()):
sn = v['signals']
axis = ax[n]
ax2.append(axis.twinx())
ax2[n].grid(False)
sn['A'].plot(ax = ax2[n], label = 'A', title='', alpha = 0.6, lw = 0.5 )
sn['A'].plot_levels(ax = axis,color = 'grey', label = 'LAF' ,lw = 2)
for tb in v['tPeaks']:
axis.axvline(tb, color = 'red',alpha = 0.8 )
for k,(t1,t2) in v['intervals'].items():
if isinstance(k,int):
axis.axvline(t1, color = 'blue', alpha = 1 )
axis.axvline(t2, color = 'blue', alpha = 1 )
axis.set_xbound(v['tPeaks'].min()-1,v['tPeaks'].max()+1)
axis.set_title('Abschnitt {}'.format(k))
axis.legend()
ax2[n].set_ybound(30,-30)
ax[0].set_xlabel('')
%%time
Bands = acoustics.signal.OctaveBand(fstart = 100,fstop=20000, fraction=3)
for absch ,v in pb.items():
# calc Octave
sn = v['signals']['bandpass']
f , octFilterBank = sn.third_octaves(frequencies = Bands.nominal)
# sel
spektrum, sel = cut_third_oct_spectrum( octFilterBank, v['intervals'],lType= 'leq')
v.setdefault('spektrum_sel',{}).update(spektrum)
v.setdefault('sel',{}).update(sel)
v['spektrum_sel']['f'] = f.nominal
# leq
spektrum, leq = cut_third_oct_spectrum( octFilterBank, v['intervals'], lType= 'leq')
v['spektrum'].update(spektrum)
v.setdefault('leq',{}).update(leq)
v['spektrum']['f'] = f.nominal
#leq
hexcol = ['#332288', '#88CCEE', '#44AA99', '#117733', '#999933', '#DDCC77',
'#CC6677', '#882255', '#AA4499', '#661100', '#6699CC', '#AA4466',
'#4477AA']
f, axes= plt.subplots(ncols = 2, sharey = True)
f.suptitle('Spektrum leq')
for n,(a,v) in enumerate(pb.items()):
ax = axes[n]
ax.set_xscale('log')
spektrum = v['spektrum']
level = v['leq']
for name in list(v['intervals'].keys()):
if name == 'full':
opt = {'ls':':', 'color':'b','lw' : 2 ,'alpha' : 0.8, 'label': str(name)}
l, = ax.plot(spektrum['f'], spektrum[name] , **opt )
ax.axhline(y = level[name], xmin = 100 , xmax = 2000, color = l.get_color(), lw= 1.1, alpha = 1 )
elif type(name)==int:
opt= {'ls':'-', 'color' : hexcol[int(name)],'lw' : 1.5,'label': 'int. {}'.format(name)}
l, = ax.plot(spektrum['f'], spektrum[name] , **opt )
ax.axhline(y = level[name], xmin = 0 , xmax = 0.1, color = l.get_color(), lw= 1.1, alpha = 1 )
ax.set_xbound(100,10000)
ax.set_xlabel('f Hz')
ax.set_title('Abschnitt {}'.format(a))
axes[0].set_ybound(65,105)
axes[1].legend(loc='upper center', bbox_to_anchor=(1.19, 1),
ncol=1, fancybox=True, shadow=True)
axes[0].set_ylabel('leq dB')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Intervals
Step2: Integration
Step3: Example
Step4: Selecting a pass-by and assembling the data
Step5: Plotting the microphone signals
Step6: Remark
Step7: Graphical representation of the spectra for the different evaluation intervals (an illustrative band-summation sketch follows this step list)
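As referenced in Step7, an illustrative aside (not taken from the notebook's local functions module): the overall level of a third-octave spectrum follows from energetic summation of the band levels.

import numpy as np

def overall_level(band_levels_db):
    # Energetic (power) summation of band levels: L_tot = 10*log10(sum(10^(L_i/10)))
    return 10 * np.log10(np.sum(10 ** (np.asarray(band_levels_db) / 10.0)))

print(overall_level([80, 80, 80]))  # three 80 dB bands combine to roughly 84.8 dB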
|
3,319
|
<ASSISTANT_TASK:>
Python Code:
# Import SPI Rack, D5a module and D4 module
from spirack import SPI_rack, D4_module, D5a_module
from time import sleep
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
COM_speed = 1e6 # Baud rate, doesn't matter much
timeout = 1 # In seconds
spi_rack = SPI_rack('COM4', COM_speed, timeout)
spi_rack.unlock() # Unlock the controller to be able to send data
D5a = D5a_module(spi_rack, module=2, reset_voltages=True)
D4 = D4_module(spi_rack, 6)
setting = 16
D4.set_filter(adc=0, filter_type='sinc3', filter_setting=setting)
D4.set_filter(adc=1, filter_type='sinc3', filter_setting=setting)
# First the offset, put 50 Ohm termination on the inputs
D4.offset_calibration(0)
D4.offset_calibration(1)
# Next the gain error, apply 4V from the D5a module
D5a.set_voltage(0, 4)
D5a.set_voltage(1, 4)
sleep(1)
D4.gain_calibration(0)
D4.gain_calibration(1)
no_points = 20
input_voltage = np.linspace(-4, 4, no_points)
data_ADC1 = np.zeros(no_points)
data_ADC2 = np.zeros(no_points)
for i, value in enumerate(input_voltage):
D5a.set_voltage(0, value)
D5a.set_voltage(1, -value)
D4.start_conversion(0)
D4.start_conversion(1)
data_ADC1[i] = D4.get_result(0)
data_ADC2[i] = D4.get_result(1)
plt.figure()
plt.plot(input_voltage, data_ADC1, '.-', label='ADC1')
plt.plot(input_voltage, data_ADC2, '.-', label='ADC2')
plt.xlabel('D5a Voltage (V)')
plt.ylabel('D4 Voltage (V)')
plt.legend()
plt.show()
no_points = 100
D5a.set_voltage(0, 1)
data_ADC1 = np.zeros(no_points)
for i in range(no_points):
data_ADC1[i] = D4.single_conversion(0)
plt.figure()
plt.plot(np.arange(no_points), data_ADC1/1e-6, '.-', label='ADC1')
plt.xlabel('Sample')
plt.ylabel('ADC1 Voltage (uV)')
plt.show()
spi_rack.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialisation
Step2: Create a new D5a module object at the correct (set) module address using the SPI object. Here we reset the voltages to zero Volt. For information on the D5a module, see the corresponding webpage and the D5a example notebook. We keep the D5a at the default span of +-4 Volt.
Step3: We now create a new D4 module object in the same way. Make sure that the module number corresponds to the address set in the hardware.
Step4: Next we set the filters inside the ADCs. These filters determine the analog bandwidth, the 50 Hz rejection, the resolution and the data rate. There are two filter types
Step5: Calibration
Step6: For the gain calibration we connect DAC output 1 to ADC input 1, and DAC output 2 to ADC input 2.
Step7: Measurements
Step8: Now we just plot the results to see the expected lines.
Step9: For the second measurement we only look at channel 1. We put the DAC channel at a fixed voltage and take a number of samples. This allows us to take a look at the noise performance. The ADC filter is in setting 16 out of 20 (higher values give better performance with longer conversion times). Setting 16 should give us a bandwith of 13 Hz, 100 dB 50 hz suppression, 23.5 bit resolution and a data rate of 16.67 samples per second.
Step10: Now we plot the results on uV scale.
Step11: When done with the measurement, it is recommended to close the SPI Rack connection. This will allow other measurement scripts to acces the device.
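As referenced in Step10, a small hedged follow-up (not part of the original notebook) that summarizes the captured noise samples in microvolts; it assumes the data_ADC1 array filled in the measurement loop above.

import numpy as np

mean_uV = np.mean(data_ADC1) / 1e-6
rms_noise_uV = np.std(data_ADC1) / 1e-6
peak_to_peak_uV = (np.max(data_ADC1) - np.min(data_ADC1)) / 1e-6
print('mean = {:.1f} uV, rms = {:.2f} uV, peak-to-peak = {:.2f} uV'.format(
    mean_uV, rms_noise_uV, peak_to_peak_uV))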
|
3,320
|
<ASSISTANT_TASK:>
Python Code:
'''
These variables MUST not be changed.
They represent the movements of the masterball.
'''
R_0 = "Right 0"
R_1 = "Right 1"
R_2 = "Right 2"
R_3 = "Right 3"
V_0 = "Vertical 0"
V_1 = "Vertical 1"
V_2 = "Vertical 2"
V_3 = "Vertical 3"
V_4 = "Vertical 4"
V_5 = "Vertical 5"
V_6 = "Vertical 6"
V_7 = "Vertical 7"
import search
class MasterballProblem(search.SearchProblem):
def __init__(self, startState):
'''
Store the initial state in the problem representation and any useful
data.
Here are some examples of initial states:
[[0, 1, 4, 5, 6, 2, 3, 7], [0, 1, 3, 4, 5, 6, 3, 7], [1, 2, 4, 5, 6, 2, 7, 0], [0, 1, 4, 5, 6, 2, 3, 7]]
[[0, 7, 4, 5, 1, 6, 2, 3], [0, 7, 4, 5, 0, 5, 2, 3], [7, 6, 3, 4, 1, 6, 1, 2], [0, 7, 4, 5, 1, 6, 2, 3]]
[[0, 1, 6, 4, 5, 2, 3, 7], [0, 2, 6, 5, 1, 3, 4, 7], [0, 2, 6, 5, 1, 3, 4, 7], [0, 5, 6, 4, 1, 2, 3, 7]]
'''
self.expanded = 0
### your code here ###
pass
def isGoalState(self, state):
'''
Define when a given state is a goal state (A correctly colored masterball)
'''
### your code here ###
pass
def getStartState(self):
'''
Implement a method that returns the start state according to the SearchProblem
contract.
'''
### your code here ###
pass
def getSuccessors(self, state):
'''
Implement a successor function: Given a state from the masterball
return a list of the successors and their corresponding actions.
This method *must* return a list where each element is a tuple of
three elements with the state of the masterball in the first position,
the action (according to the definition above) in the second position,
and the cost of the action in the last position.
Note that you should not modify the state.
'''
self.expanded += 1
### your code here ###
pass
def iterativeDeepeningSearch(problem):
return []
def aStarSearch(problem, heuristic):
return []
def myHeuristic(state):
return 0
def solveMasterBall(problem, search_function):
'''
This function receives a Masterball problem instance and a
search_function (IDS or A*S) and must return a list of actions that solve the problem.
'''
return []
problem = MasterballProblem([ [0, 4, 3, 2, 1, 5, 6, 7],
[0, 3, 2, 1, 0, 5, 6, 7],
[7, 4, 3, 2, 1, 4, 5, 6],
[0, 4, 3, 2, 1, 5, 6, 7]])
print solveMasterBall(problem, iterativeDeepeningSearch(problem))
print solveMasterBall(problem, aStarSearch(problem, myHeuristic))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: R_i moves the ith row to the right. For instance, R_2 applied to the solved state will produce a state with row 2 shifted to the right (one plausible interpretation of such a shift is sketched after this step list).
Step2: 2. Implement iterative deepening search
Step3: Evaluate it to see what is the maximum depth that it could explore in a reasonable time. Report the results.
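As referenced in Step1, a minimal sketch of what a single row rotation could look like, under the assumption that R_i shifts row i one wedge to the right; the exact convention must match the masterball state representation used in the class above, so treat this purely as an illustration.

def rotate_right(row, k=1):
    # Rotate a list of wedges k positions to the right.
    k %= len(row)
    return row[-k:] + row[:-k]

solved_row = [0, 1, 2, 3, 4, 5, 6, 7]
print(rotate_right(solved_row))  # [7, 0, 1, 2, 3, 4, 5, 6]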
|
3,321
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
"""
Example:
- Minimize Rosenbrock's Function with Nelder-Mead.
- Plot of parameter convergence to function minimum.
Demonstrates:
- standard models
- minimal solver interface
- parameter trajectories using retall
"""
# Nelder-Mead solver
from mystic.solvers import fmin
# Rosenbrock function
from mystic.models import rosen
# tools
import pylab
if __name__ == '__main__':
# initial guess
x0 = [0.8,1.2,0.7]
# use Nelder-Mead to minimize the Rosenbrock function
solution = fmin(rosen, x0, disp=0, retall=1)
allvecs = solution[-1]
# plot the parameter trajectories
pylab.plot([i[0] for i in allvecs])
pylab.plot([i[1] for i in allvecs])
pylab.plot([i[2] for i in allvecs])
# draw the plot
pylab.title("Rosenbrock parameter convergence")
pylab.xlabel("Nelder-Mead solver iterations")
pylab.ylabel("parameter value")
pylab.legend(["x", "y", "z"])
pylab.show()
"""
Example:
- Minimize Rosenbrock's Function with Nelder-Mead.
- Dynamic plot of parameter convergence to function minimum.
Demonstrates:
- standard models
- minimal solver interface
- parameter trajectories using callback
- solver interactivity
"""
# Nelder-Mead solver
from mystic.solvers import fmin
# Rosenbrock function
from mystic.models import rosen
# tools
from mystic.tools import getch
import pylab
pylab.ion()
# draw the plot
def plot_frame():
pylab.title("Rosenbrock parameter convergence")
pylab.xlabel("Nelder-Mead solver iterations")
pylab.ylabel("parameter value")
pylab.draw()
return
iter = 0
step, xval, yval, zval = [], [], [], []
# plot the parameter trajectories
def plot_params(params):
global iter, step, xval, yval, zval
step.append(iter)
xval.append(params[0])
yval.append(params[1])
zval.append(params[2])
pylab.plot(step,xval,'b-')
pylab.plot(step,yval,'g-')
pylab.plot(step,zval,'r-')
pylab.legend(["x", "y", "z"])
pylab.draw()
iter += 1
return
if __name__ == '__main__':
# initial guess
x0 = [0.8,1.2,0.7]
# suggest that the user interacts with the solver
print "NOTE: while solver is running, press 'Ctrl-C' in console window"
getch()
plot_frame()
# use Nelder-Mead to minimize the Rosenbrock function
solution = fmin(rosen, x0, disp=1, callback=plot_params, handler=True)
print solution
# don't exit until user is ready
getch()
"""
Example:
- Minimize Rosenbrock's Function with Powell's method.
- Dynamic print of parameter convergence to function minimum.
Demonstrates:
- standard models
- minimal solver interface
- parameter trajectories using callback
"""
# Powell's Directional solver
from mystic.solvers import fmin_powell
# Rosenbrock function
from mystic.models import rosen
iter = 0
# plot the parameter trajectories
def print_params(params):
global iter
from numpy import asarray
print "Generation %d has best fit parameters: %s" % (iter,asarray(params))
iter += 1
return
if __name__ == '__main__':
# initial guess
x0 = [0.8,1.2,0.7]
print_params(x0)
# use Powell's method to minimize the Rosenbrock function
solution = fmin_powell(rosen, x0, disp=1, callback=print_params, handler=False)
print solution
"""
Example:
- Minimize Rosenbrock's Function with Powell's method.
Demonstrates:
- standard models
- minimal solver interface
- customized monitors
"""
# Powell's Directional solver
from mystic.solvers import fmin_powell
# Rosenbrock function
from mystic.models import rosen
# tools
from mystic.monitors import VerboseLoggingMonitor
if __name__ == '__main__':
print "Powell's Method"
print "==============="
# initial guess
x0 = [1.5, 1.5, 0.7]
# configure monitor
stepmon = VerboseLoggingMonitor(1,1)
# use Powell's method to minimize the Rosenbrock function
solution = fmin_powell(rosen, x0, itermon=stepmon)
print solution
import mystic
mystic.log_reader('log.txt')
import mystic
mystic.model_plotter(mystic.models.rosen, 'log.txt', depth=True, scale=1, bounds="-2:2:.1, -2:2:.1, 1")
"""
Example:
- Solve 8th-order Chebyshev polynomial coefficients with DE.
- Callable plot of fitting to Chebyshev polynomial.
- Monitor Chi-Squared for Chebyshev polynomial.
Demonstrates:
- standard models
- expanded solver interface
- built-in random initial guess
- customized monitors and termination conditions
- customized DE mutation strategies
- use of monitor to retrieve results information
"""
# Differential Evolution solver
from mystic.solvers import DifferentialEvolutionSolver2
# Chebyshev polynomial and cost function
from mystic.models.poly import chebyshev8, chebyshev8cost
from mystic.models.poly import chebyshev8coeffs
# tools
from mystic.termination import VTR
from mystic.strategy import Best1Exp
from mystic.monitors import VerboseMonitor
from mystic.tools import getch, random_seed
from mystic.math import poly1d
import pylab
pylab.ion()
# draw the plot
def plot_exact():
pylab.title("fitting 8th-order Chebyshev polynomial coefficients")
pylab.xlabel("x")
pylab.ylabel("f(x)")
import numpy
x = numpy.arange(-1.2, 1.2001, 0.01)
exact = chebyshev8(x)
pylab.plot(x,exact,'b-')
pylab.legend(["Exact"])
pylab.axis([-1.4,1.4,-2,8],'k-')
pylab.draw()
return
# plot the polynomial
def plot_solution(params,style='y-'):
import numpy
x = numpy.arange(-1.2, 1.2001, 0.01)
f = poly1d(params)
y = f(x)
pylab.plot(x,y,style)
pylab.legend(["Exact","Fitted"])
pylab.axis([-1.4,1.4,-2,8],'k-')
pylab.draw()
return
if __name__ == '__main__':
print "Differential Evolution"
print "======================"
# set range for random initial guess
ndim = 9
x0 = [(-100,100)]*ndim
random_seed(123)
# draw frame and exact coefficients
plot_exact()
# configure monitor
stepmon = VerboseMonitor(50)
# use DE to solve 8th-order Chebyshev coefficients
npop = 10*ndim
solver = DifferentialEvolutionSolver2(ndim,npop)
solver.SetRandomInitialPoints(min=[-100]*ndim, max=[100]*ndim)
solver.SetGenerationMonitor(stepmon)
solver.enable_signal_handler()
solver.Solve(chebyshev8cost, termination=VTR(0.01), strategy=Best1Exp, \
CrossProbability=1.0, ScalingFactor=0.9, \
sigint_callback=plot_solution)
solution = solver.Solution()
# use monitor to retrieve results information
iterations = len(stepmon)
cost = stepmon.y[-1]
print "Generation %d has best Chi-Squared: %f" % (iterations, cost)
# use pretty print for polynomials
print poly1d(solution)
# compare solution with actual 8th-order Chebyshev coefficients
print "\nActual Coefficients:\n %s\n" % poly1d(chebyshev8coeffs)
# plot solution versus exact coefficients
plot_solution(solution)
from mystic.solvers import DifferentialEvolutionSolver
print "\n".join([i for i in dir(DifferentialEvolutionSolver) if not i.startswith('_')])
from mystic.termination import VTR, ChangeOverGeneration, And, Or
stop = Or(And(VTR(), ChangeOverGeneration()), VTR(1e-8))
from mystic.models import rosen
from mystic.monitors import VerboseMonitor
from mystic.solvers import DifferentialEvolutionSolver
solver = DifferentialEvolutionSolver(3,40)
solver.SetRandomInitialPoints([-10,-10,-10],[10,10,10])
solver.SetGenerationMonitor(VerboseMonitor(10))
solver.SetTermination(stop)
solver.SetObjective(rosen)
solver.SetStrictRanges([-10,-10,-10],[10,10,10])
solver.SetEvaluationLimits(generations=600)
solver.Solve()
print solver.bestSolution
from mystic.constraints import *
from mystic.penalty import quadratic_equality
from mystic.coupler import inner
from mystic.math import almostEqual
from mystic.tools import random_seed
random_seed(213)
def test_penalize():
from mystic.math.measures import mean, spread
def mean_constraint(x, target):
return mean(x) - target
def range_constraint(x, target):
return spread(x) - target
@quadratic_equality(condition=range_constraint, kwds={'target':5.0})
@quadratic_equality(condition=mean_constraint, kwds={'target':5.0})
def penalty(x):
return 0.0
def cost(x):
return abs(sum(x) - 5.0)
from mystic.solvers import fmin
from numpy import array
x = array([1,2,3,4,5])
y = fmin(cost, x, penalty=penalty, disp=False)
assert round(mean(y)) == 5.0
assert round(spread(y)) == 5.0
assert round(cost(y)) == 4*(5.0)
def test_solve():
from mystic.math.measures import mean
def mean_constraint(x, target):
return mean(x) - target
def parameter_constraint(x):
return x[-1] - x[0]
@quadratic_equality(condition=mean_constraint, kwds={'target':5.0})
@quadratic_equality(condition=parameter_constraint)
def penalty(x):
return 0.0
x = solve(penalty, guess=[2,3,1])
assert round(mean_constraint(x, 5.0)) == 0.0
assert round(parameter_constraint(x)) == 0.0
assert issolution(penalty, x)
def test_solve_constraint():
from mystic.math.measures import mean
@with_mean(1.0)
def constraint(x):
x[-1] = x[0]
return x
x = solve(constraint, guess=[2,3,1])
assert almostEqual(mean(x), 1.0, tol=1e-15)
assert x[-1] == x[0]
assert issolution(constraint, x)
def test_as_constraint():
from mystic.math.measures import mean, spread
def mean_constraint(x, target):
return mean(x) - target
def range_constraint(x, target):
return spread(x) - target
@quadratic_equality(condition=range_constraint, kwds={'target':5.0})
@quadratic_equality(condition=mean_constraint, kwds={'target':5.0})
def penalty(x):
return 0.0
ndim = 3
constraints = as_constraint(penalty, solver='fmin')
#XXX: this is expensive to evaluate, as there are nested optimizations
from numpy import arange
x = arange(ndim)
_x = constraints(x)
assert round(mean(_x)) == 5.0
assert round(spread(_x)) == 5.0
assert round(penalty(_x)) == 0.0
def cost(x):
return abs(sum(x) - 5.0)
npop = ndim*3
from mystic.solvers import diffev
y = diffev(cost, x, npop, constraints=constraints, disp=False, gtol=10)
assert round(mean(y)) == 5.0
assert round(spread(y)) == 5.0
assert round(cost(y)) == 5.0*(ndim-1)
def test_as_penalty():
from mystic.math.measures import mean, spread
@with_spread(5.0)
@with_mean(5.0)
def constraint(x):
return x
penalty = as_penalty(constraint)
from numpy import array
x = array([1,2,3,4,5])
def cost(x):
return abs(sum(x) - 5.0)
from mystic.solvers import fmin
y = fmin(cost, x, penalty=penalty, disp=False)
assert round(mean(y)) == 5.0
assert round(spread(y)) == 5.0
assert round(cost(y)) == 4*(5.0)
def test_with_penalty():
from mystic.math.measures import mean, spread
@with_penalty(quadratic_equality, kwds={'target':5.0})
def penalty(x, target):
return mean(x) - target
def cost(x):
return abs(sum(x) - 5.0)
from mystic.solvers import fmin
from numpy import array
x = array([1,2,3,4,5])
y = fmin(cost, x, penalty=penalty, disp=False)
assert round(mean(y)) == 5.0
assert round(cost(y)) == 4*(5.0)
def test_with_mean():
from mystic.math.measures import mean, impose_mean
@with_mean(5.0)
def mean_of_squared(x):
return [i**2 for i in x]
from numpy import array
x = array([1,2,3,4,5])
y = impose_mean(5, [i**2 for i in x])
assert mean(y) == 5.0
assert mean_of_squared(x) == y
def test_with_mean_spread():
from mystic.math.measures import mean, spread, impose_mean, impose_spread
@with_spread(50.0)
@with_mean(5.0)
def constrained_squared(x):
return [i**2 for i in x]
from numpy import array
x = array([1,2,3,4,5])
y = impose_spread(50.0, impose_mean(5.0,[i**2 for i in x]))
assert almostEqual(mean(y), 5.0, tol=1e-15)
assert almostEqual(spread(y), 50.0, tol=1e-15)
assert constrained_squared(x) == y
def test_constrained_solve():
from mystic.math.measures import mean, spread
@with_spread(5.0)
@with_mean(5.0)
def constraints(x):
return x
def cost(x):
return abs(sum(x) - 5.0)
from mystic.solvers import fmin_powell
from numpy import array
x = array([1,2,3,4,5])
y = fmin_powell(cost, x, constraints=constraints, disp=False)
assert almostEqual(mean(y), 5.0, tol=1e-15)
assert almostEqual(spread(y), 5.0, tol=1e-15)
assert almostEqual(cost(y), 4*(5.0), tol=1e-6)
if __name__ == '__main__':
test_penalize()
test_solve()
test_solve_constraint()
test_as_constraint()
test_as_penalty()
test_with_penalty()
test_with_mean()
test_with_mean_spread()
test_constrained_solve()
"""
Example:
    - Minimize Rosenbrock's Function with Powell's method.
Demonstrates:
    - standard models
    - minimal solver interface
    - parameter constraints solver and constraints factory decorator
    - statistical parameter constraints
    - customized monitors
"""
# Powell's Directional solver
from mystic.solvers import fmin_powell
# Rosenbrock function
from mystic.models import rosen
# tools
from mystic.monitors import VerboseMonitor
from mystic.math.measures import mean, impose_mean
from mystic.math import almostEqual
if __name__ == '__main__':
print("Powell's Method")
print("===============")
# initial guess
x0 = [0.8,1.2,0.7]
# use the mean constraints factory decorator
from mystic.constraints import with_mean
# define constraints function
@with_mean(1.0)
def constraints(x):
# constrain the last x_i to be the same value as the first x_i
x[-1] = x[0]
return x
# configure monitor
stepmon = VerboseMonitor(1)
# use Powell's method to minimize the Rosenbrock function
solution = fmin_powell(rosen, x0, constraints=constraints, itermon=stepmon)
print(solution)
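# Hedged check (added): the constrained solution should have a mean close to 1.0
# and its first and last entries should approximately coincide, per the
# constraints function defined above.
print(mean(solution))
print(solution[0], solution[-1])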
%%file spring.py
"a Tension-Compression String"
def objective(x):
x0,x1,x2 = x
return x0**2 * x1 * (x2 + 2)
bounds = [(0,100)]*3
# with penalty='penalty' applied, solution is:
xs = [0.05168906, 0.35671773, 11.28896619]
ys = 0.01266523
from mystic.symbolic import generate_constraint, generate_solvers, solve
from mystic.symbolic import generate_penalty, generate_conditions
equations = """
1.0 - (x1**3 * x2)/(71785*x0**4) <= 0.0
(4*x1**2 - x0*x1)/(12566*x0**3 * (x1 - x0)) + 1./(5108*x0**2) - 1.0 <= 0.0
1.0 - 140.45*x0/(x2 * x1**2) <= 0.0
(x0 + x1)/1.5 - 1.0 <= 0.0
"""
pf = generate_penalty(generate_conditions(equations), k=1e12)
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=pf, npop=40,
gtol=500, disp=True, full_output=True)
print(result[0])
equations = """
1.0 - (x1**3 * x2)/(71785*x0**4) <= 0.0
(4*x1**2 - x0*x1)/(12566*x0**3 * (x1 - x0)) + 1./(5108*x0**2) - 1.0 <= 0.0
1.0 - 140.45*x0/(x2 * x1**2) <= 0.0
(x0 + x1)/1.5 - 1.0 <= 0.0
"""
from mystic.symbolic import generate_constraint, generate_solvers, solve
from mystic.symbolic import generate_penalty, generate_conditions
ineql, eql = generate_conditions(equations)
print("CONVERTED SYMBOLIC TO SINGLE CONSTRAINTS FUNCTIONS")
print(ineql)
print(eql)
print("\nTHE INDIVIDUAL INEQUALITIES")
for f in ineql:
    print(f.__doc__)
print("\nGENERATED THE PENALTY FUNCTION FOR ALL CONSTRAINTS")
pf = generate_penalty((ineql, eql))
print(pf.__doc__)
x = [-0.1, 0.5, 11.0]
print("\nPENALTY FOR {}: {}".format(x, pf(x)))
"a Tension-Compression String"
from spring import objective, bounds, xs, ys
from mystic.constraints import as_constraint
from mystic.penalty import quadratic_inequality
def penalty1(x): # <= 0.0
return 1.0 - (x[1]**3 * x[2])/(71785*x[0]**4)
def penalty2(x): # <= 0.0
return (4*x[1]**2 - x[0]*x[1])/(12566*x[0]**3 * (x[1] - x[0])) + 1./(5108*x[0]**2) - 1.0
def penalty3(x): # <= 0.0
return 1.0 - 140.45*x[0]/(x[2] * x[1]**2)
def penalty4(x): # <= 0.0
return (x[0] + x[1])/1.5 - 1.0
@quadratic_inequality(penalty1, k=1e12)
@quadratic_inequality(penalty2, k=1e12)
@quadratic_inequality(penalty3, k=1e12)
@quadratic_inequality(penalty4, k=1e12)
def penalty(x):
return 0.0
solver = as_constraint(penalty)
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=penalty, npop=40,
gtol=500, disp=True, full_output=True)
print(result[0])
"""
Crypto problem in Google CP Solver.
Prolog benchmark problem
'''
Name : crypto.pl
Original Source: P. Van Hentenryck's book
Adapted by : Daniel Diaz - INRIA France
Date : September 1992
'''
"""
def objective(x):
return 0.0
nletters = 26
bounds = [(1,nletters)]*nletters
# with penalty='penalty' applied, solution is:
# A B C D E F G H I J K L M N O P Q
xs = [ 5, 13, 9, 16, 20, 4, 24, 21, 25, 17, 23, 2, 8, 12, 10, 19, 7, \
# R S T U V W X Y Z
11, 15, 3, 1, 26, 6, 22, 14, 18]
ys = 0.0
# constraints
equations = """
B + A + L + L + E + T - 45 == 0
C + E + L + L + O - 43 == 0
C + O + N + C + E + R + T - 74 == 0
F + L + U + T + E - 30 == 0
F + U + G + U + E - 50 == 0
G + L + E + E - 66 == 0
J + A + Z + Z - 58 == 0
L + Y + R + E - 47 == 0
O + B + O + E - 53 == 0
O + P + E + R + A - 65 == 0
P + O + L + K + A - 59 == 0
Q + U + A + R + T + E + T - 50 == 0
S + A + X + O + P + H + O + N + E - 134 == 0
S + C + A + L + E - 51 == 0
S + O + L + O - 37 == 0
S + O + N + G - 61 == 0
S + O + P + R + A + N + O - 82 == 0
T + H + E + M + E - 72 == 0
V + I + O + L + I + N - 100 == 0
W + A + L + T + Z - 34 == 0
"""
var = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
# Let's say we know the vowels.
bounds[0] = (5,5) # A
bounds[4] = (20,20) # E
bounds[8] = (25,25) # I
bounds[14] = (10,10) # O
bounds[20] = (1,1) # U
from mystic.constraints import unique, near_integers, has_unique
from mystic.symbolic import generate_penalty, generate_conditions
pf = generate_penalty(generate_conditions(equations,var),k=1)
from mystic.constraints import as_constraint
cf = as_constraint(pf)
from mystic.penalty import quadratic_equality
@quadratic_equality(near_integers)
@quadratic_equality(has_unique)
def penalty(x):
return pf(x)
from numpy import round, hstack, clip
def constraint(x):
x = round(x).astype(int) # force round and convert type to int
x = clip(x, 1,nletters) #XXX: hack to impose bounds
x = unique(x, range(1,nletters+1))
return x
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
from mystic.monitors import Monitor, VerboseMonitor
mon = VerboseMonitor(10)
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=pf,
constraints=constraint, npop=52, ftol=1e-8, gtol=1000,
disp=True, full_output=True, cross=0.1, scale=0.9, itermon=mon)
print(result[0])
"""
Eq 10 in Google CP Solver.
Standard benchmark problem.
"""
def objective(x):
return 0.0
bounds = [(0,10)]*7
# with penalty='penalty' applied, solution is:
xs = [6., 0., 8., 4., 9., 3., 9.]
ys = 0.0
# constraints
equations = """
98527*x0 + 34588*x1 + 5872*x2 + 59422*x4 + 65159*x6 - 1547604 - 30704*x3 - 29649*x5 == 0.0
98957*x1 + 83634*x2 + 69966*x3 + 62038*x4 + 37164*x5 + 85413*x6 - 1823553 - 93989*x0 == 0.0
900032 + 10949*x0 + 77761*x1 + 67052*x4 - 80197*x2 - 61944*x3 - 92964*x5 - 44550*x6 == 0.0
73947*x0 + 84391*x2 + 81310*x4 - 1164380 - 96253*x1 - 44247*x3 - 70582*x5 - 33054*x6 == 0.0
13057*x2 + 42253*x3 + 77527*x4 + 96552*x6 - 1185471 - 60152*x0 - 21103*x1 - 97932*x5 == 0.0
1394152 + 66920*x0 + 55679*x3 - 64234*x1 - 65337*x2 - 45581*x4 - 67707*x5 - 98038*x6 == 0.0
68550*x0 + 27886*x1 + 31716*x2 + 73597*x3 + 38835*x6 - 279091 - 88963*x4 - 76391*x5 == 0.0
76132*x1 + 71860*x2 + 22770*x3 + 68211*x4 + 78587*x5 - 480923 - 48224*x0 - 82817*x6 == 0.0
519878 + 94198*x1 + 87234*x2 + 37498*x3 - 71583*x0 - 25728*x4 - 25495*x5 - 70023*x6 == 0.0
361921 + 78693*x0 + 38592*x4 + 38478*x5 - 94129*x1 - 43188*x2 - 82528*x3 - 69025*x6 == 0.0
"""
from mystic.symbolic import generate_penalty, generate_conditions
pf = generate_penalty(generate_conditions(equations))
from numpy import round as npround
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=pf,
constraints=npround, npop=40, gtol=50, disp=True, full_output=True)
print(result[0])
"Pressure Vessel Design"
def objective(x):
x0,x1,x2,x3 = x
return 0.6224*x0*x2*x3 + 1.7781*x1*x2**2 + 3.1661*x0**2*x3 + 19.84*x0**2*x2
bounds = [(0,1e6)]*4
# with penalty='penalty' applied, solution is:
xs = [0.72759093, 0.35964857, 37.69901188, 240.0]
ys = 5804.3762083
from mystic.symbolic import generate_constraint, generate_solvers, solve
from mystic.symbolic import generate_penalty, generate_conditions
equations = """
-x0 + 0.0193*x2 <= 0.0
-x1 + 0.00954*x2 <= 0.0
-pi*x2**2*x3 - (4/3.)*pi*x2**3 + 1296000.0 <= 0.0
x3 - 240.0 <= 0.0
"""
pf = generate_penalty(generate_conditions(equations), k=1e12)
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=pf, npop=40, gtol=500,
disp=True, full_output=True)
print(result[0])
"""
Minimize: f = 2*x[0] + 1*x[1]
Subject to: -1*x[0] + 1*x[1] <= 1
1*x[0] + 1*x[1] >= 2
1*x[1] >= 0
1*x[0] - 2*x[1] <= 4
where: -inf <= x[0] <= inf
"""
def objective(x):
x0,x1 = x
return 2*x0 + x1
equations = """
-x0 + x1 - 1.0 <= 0.0
-x0 - x1 + 2.0 <= 0.0
x0 - 2*x1 - 4.0 <= 0.0
"""
bounds = [(None, None),(0.0, None)]
# with penalty='penalty' applied, solution is:
xs = [0.5, 1.5]
ys = 2.5
from mystic.symbolic import generate_conditions, generate_penalty
pf = generate_penalty(generate_conditions(equations), k=1e3)
if __name__ == '__main__':
from mystic.solvers import fmin_powell
from mystic.math import almostEqual
result = fmin_powell(objective, x0=[0.0,0.0], bounds=bounds,
penalty=pf, disp=True, full_output=True, gtol=3)
print(result[0])
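# Hedged cross-check (added): the same LP solved with scipy.optimize.linprog should
# recover the minimum of 2.5 at (0.5, 1.5); A_ub/b_ub encode the inequalities above
# rewritten in <= form.
from scipy.optimize import linprog
res = linprog(c=[2, 1], A_ub=[[-1, 1], [-1, -1], [1, -2]], b_ub=[1, -2, 4],
bounds=[(None, None), (0, None)])
print(res.x, res.fun)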
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: mystic
Step4: Diagnostic tools
Step6: NOTE IPython does not handle shell prompt interactive programs well, so the above should be run from a command prompt. An IPython-safe version is below.
Step8: Monitors
Step9: Solution trajectory and model plotting
Step11: Solver "tuning" and extension
Step12: Algorithm configurability
Step14: Solver population
Step17: Range (i.e. 'box') constraints
Step18: Penalty functions
Step21: "Operators" that directly constrain search space
Step24: Special cases
Step26: EXERCISE
Step29: Linear and quadratic constraints
|
3,322
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from fbprophet import Prophet
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
%matplotlib inline
plt.rcParams['figure.figsize']=(20,10)
plt.style.use('ggplot')
sales_df = pd.read_csv('../examples/retail_sales.csv', index_col='date', parse_dates=True)
sales_df.head()
df = sales_df.reset_index()
df.head()
df=df.rename(columns={'date':'ds', 'sales':'y'})
df.head()
df.set_index('ds').y.plot()
model = Prophet(weekly_seasonality=True)
model.fit(df);
future = model.make_future_dataframe(periods=24, freq = 'm')
future.tail()
forecast = model.predict(future)
forecast.tail()
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
model.plot(forecast);
model.plot_components(forecast);
metric_df = forecast.set_index('ds')[['yhat']].join(df.set_index('ds').y).reset_index()
metric_df.tail()
metric_df.dropna(inplace=True)
metric_df.tail()
r2_score(metric_df.y, metric_df.yhat)
mean_squared_error(metric_df.y, metric_df.yhat)
mean_absolute_error(metric_df.y, metric_df.yhat)
import ml_metrics as metrics
metrics.mae(metric_df.y, metric_df.yhat)
metrics.ae(metric_df.y, metric_df.yhat)
metrics.rmse(metric_df.y, metric_df.yhat)
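# Hedged cross-check (added): RMSE should equal the square root of sklearn's MSE
# computed earlier.
np.sqrt(mean_squared_error(metric_df.y, metric_df.yhat))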
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the data
Step2: Prepare for Prophet
Step3: Let's rename the columns as required by fbprophet. Additionally, fbprophet doesn't like the index to be a datetime...it wants to see 'ds' as a non-index column, so we won't set an index differently than the integer index.
Step4: Now's a good time to take a look at your data. Plot the data using pandas' plot function
Step5: Running Prophet
Step6: We've instantiated the model, now we need to build some future dates to forecast into.
Step7: To forecast this future data, we need to run it through Prophet's model.
Step8: The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe
Step9: We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with
Step10: Plotting Prophet results
Step11: Personally, I'm not a fan of this visualization but I'm not going to build my own...you can see how I do that here
Step12: Now that we have our model, let's take a look at how it compares to our actual values using a few different metrics - R-Squared and Mean Squared Error (MSE).
Step13: You can see from the above that the last part of the dataframe has "NaN" for 'y'...that's fine because we are only concerned about checking the forecast values versus the actual values, so we can drop these "NaN" values.
Step14: Now let's take a look at our R-Squared value
Step15: An r-squared value of 0.99 is amazing (and probably too good to be true, which tells me this data is most likely overfit).
Step16: That's a large MSE value...and confirms my suspicion that this data is overfit and won't likely hold up well into the future. Remember...for MSE, closer to zero is better.
Step17: Not good. Not good at all. BUT...the purpose of this particular post is to show some usage of R-Squared, MAE and MSE's as metrics and I think we've done that.
Step18: Same value for MAE as before...which is a good sign for this new metrics library. Let's take a look at a few more.
Step19: Let's look at Root Mean Square Error
|
3,323
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
print("Pandas version: {}".format(pd.__version__))
# опции отображения
pd.options.display.max_rows = 6
pd.options.display.max_columns = 6
pd.options.display.width = 100
import gzip
# датасет на 47 мегабайт, мы возьмем только 10
review_lines = gzip.open('data/reviews/reviews_Clothing_Shoes_and_Jewelry_5.json.gz', 'rt').readlines(10*1024*1024)
len(review_lines)
import json
df = pd.DataFrame(list(map(json.loads, review_lines)))
df
df.head()
df.tail()
df.describe()
# your code here; press tab to see the list of available functions
df.info()
df['unixReviewTime'] = pd.to_datetime(df['unixReviewTime'], unit='s')
pd.to_datetime?
df.info()
df.summary
df.summary.str.len()
# Your code here
df.summary.str.lower()
df.summary.str.upper()
pattern = 'durable'
df.summary.str.contains(pattern)
df.unixReviewTime.dt.dayofweek
df.unixReviewTime.dt.weekofyear
# your code
df.overall < 5
df[df.overall < 5]
df.loc[df.overall < 5, ['overall', 'reviewText']]
df.loc[((df.overall == 5) & (df.reviewText.str.contains('awesome'))) | ((df.overall == 1) & (df.reviewText.str.contains('terrible'))), ['overall', 'reviewText']]
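# Hedged equivalent (added): the same compound filter written with named boolean
# masks for readability; the masks combine with & and | exactly as above.
awesome_5 = (df.overall == 5) & (df.reviewText.str.contains('awesome'))
terrible_1 = (df.overall == 1) & (df.reviewText.str.contains('terrible'))
df.loc[awesome_5 | terrible_1, ['overall', 'reviewText']]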
# Your code here
# returns a column containing the counts of each unique asin value
products = df.asin.value_counts()
products
products[0:3].index
df[df.asin.isin(products[0:3].index)]
# df[df.asin.isin(['B0000C321X', 'B0001ZNZJM', 'B00012O12A'])] - gives the same result
# your code
days = df.unixReviewTime.value_counts()
days
df[df.unixReviewTime.isin(days[0:1].index)]
df.groupby('asin')['reviewText'].agg('count').sort_values()
# your code
# your code
df.groupby([pd.Grouper(key='unixReviewTime',freq='D')])['reviewerID'].count()
df.groupby([pd.Grouper(key='unixReviewTime',freq='M')])['reviewerID'].count()
%matplotlib inline
import seaborn as sns; sns.set()
df.groupby([pd.Grouper(key='unixReviewTime',freq='A')])['reviewerID'].count().plot(figsize=(6,6))
# Your code here
# Your code here
import matplotlib.pyplot as plt
by_weekday = df.groupby([df.unixReviewTime.dt.year,
df.unixReviewTime.dt.dayofweek]).mean()
by_weekday.columns.name = None # remove label for plot
fig, ax = plt.subplots(1, 2, figsize=(16, 6), sharey=True)
by_weekday.loc[2013].plot(title='Average Reviews Rating by Day of Week (2013)', ax=ax[0]);
by_weekday.loc[2014].plot(title='Average Reviews Rating by Day of Week (2014)', ax=ax[1]);
for axi in ax:
axi.set_xticklabels(['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'])
import matplotlib.pyplot as plt
by_month = df.groupby([df.unixReviewTime.dt.year,
df.unixReviewTime.dt.day])['reviewerID'].count()
fig, ax = plt.subplots(1, 2, figsize=(16, 6), sharey=True)
by_month.loc[2012].plot(title='Average Reviews by Month (2012)', ax=ax[0]);
by_month.loc[2013].plot(title='Average Reviews by Month (2013)', ax=ax[1]);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading and writing data
Step2: Now we have a list of text strings; we need to convert them to dicts and pass them to a DataFrame. <br/>
Step3: Now we can take a look at what our data represents. A DataFrame lets us display it in a clear, readable table like this.
Step4: The data at the start of our df
Step5: The data at the end of the df
Step6: Exercise
Step7: http
Step8: Now we can see that the column has been converted to the data type we need.
Step9: Working with text data.
Step10: With this simple call we get a new column with the length of the product description string, which can be a good indicator for your model.
Step11: Exercise
Step12: Lower case.
Step13: Upper case.
Step14: Finding rows that contain a given substring or regex
Step15: Working with time series
Step16: Week of the year
Step17: Exercise
Step18: Selecting data
Step19: By passing them as a key, we get the rows themselves.
Step20: We can pass the resulting indices to the loc method; as its second argument it takes the list of columns we want to see.
Step21: We can also pass more complex selection conditions; for example, here we select reviews with a rating of 5 that contain the word awesome and reviews with a rating of 1 that contain the word terrible.
Step22: Exercise
Step23: isin
Step24: Select the rows that contain the top 3 most popular products.
Step25: Exercise
Step26: Grouping
Step27: Exercise
Step28: Exercise
Step29: pd.Grouper
Step30: Plotting
Step31: EXERCISE
Step32: EXERCISE
Step33: EXERCISE
|
3,324
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
mtxFile = os.path.join(
os.environ["SERPENT_TOOLS_DATA"],
"depmtx_ref.m")
import serpentTools
reader = serpentTools.read(mtxFile)
reader
reader.n0
reader.zai
reader.sparse
reader.depmtx
reader.plotDensity()
reader.plotDensity(
what='n0', # plot initial value
markers='>', # marker for scatter plot
labels='$N_0$', # labels for each entry plotted
ylim=1E-30, # set the lower y-axis limit
)
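# Hedged addition: the parsed arrays can also be checked directly; the depletion
# matrix should be square with one row per isotope listed in zai (an assumption
# about how the reader stores these attributes).
print(len(reader.zai), reader.depmtx.shape)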
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Depletion Matrix
Step2: We have access to all the data present in the file directly on the reader.
Step3: This input file did not include fission yield libraries for depletion in order to reduce the size of the depletion matrix from ~1000 x 1000 to 74 x 74.
Step4: One can easily check if the depletion matrix is sparse by using the sparse attribute on the reader
Step5: A simple plot method can be used to plot initial concentrations, final concentrations, or both.
Step6: Some options can be passed to improve the look and feel of the plot
|
3,325
|
<ASSISTANT_TASK:>
Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'San Francisco', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Petaluma', 0);
%%sql
-- Select all
SELECT *
-- From the criminals table
FROM criminals
%%sql
-- Update the criminals table
UPDATE criminals
-- To say city: 'Palo Alto'
SET City='Palo Alto'
-- If the prisoner ID number is 412
WHERE pid=412;
%%sql
-- Update the criminals table
UPDATE criminals
-- To say minor: 'No'
SET minor = 'No'
-- If age is greater than 12
WHERE age > 12;
%%sql
-- Select all
SELECT *
-- From the criminals table
FROM criminals
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: View Table
Step3: Update One Row
Step4: Update Multiple Rows Using A Conditional
Step5: View Table Again
|
3,326
|
<ASSISTANT_TASK:>
Python Code:
import urllib.request
import wfdb
import psycopg2
from psycopg2.extensions import AsIs
target_url = "https://physionet.org/physiobank/database/mimic3wdb/matched/RECORDS-waveforms"
data = urllib.request.urlopen(target_url) # it's a file like object and works just like a file
lines = data.readlines();
line = str(lines[2])
line
line = line.replace('b\'','').replace('\'','').replace('\\n','')
splited = line.split("/")
carpeta,subCarpeta,onda = line.split("/")
carpeta = carpeta+"/"+subCarpeta
subject_id = subCarpeta.replace('p','')
recordDate = onda.replace(subCarpeta+"-","")
print("subject_id: ",subject_id)
print("recordDate: ",recordDate)
print("onda: ",onda)
print("carpeta: ",carpeta)
try:
sig, fields = wfdb.srdsamp(onda,pbdir='mimic3wdb/matched/'+carpeta, sampto=1)
print(fields)
except Exception as inst:
print("onda vacia")
fields['subject_id'] = subject_id
fields['recordDate'] = recordDate
columns = fields.keys()
values = [fields[column] for column in columns]
print(columns)
conn = psycopg2.connect("dbname=mimic")
cur = conn.cursor()
table = "waveformFields"
#cur.execute("DROP TABLE "+table)
cur.execute("CREATE TABLE IF NOT EXISTS "+table+
" (id serial PRIMARY KEY,"+
"comments character varying(255)[],"+
"fs integer, signame character varying(255)[],"+
"units character varying(255)[],"+
"subject_id integer,"+
"recordDate character varying(255));")
def track_not_exists(cur, subject_id,recordDate):
select_stament = 'select id from '+table+' where subject_id= %s and recorddate = %s'
cur.execute(select_stament,(int(subject_id),recordDate))
return cur.fetchone() is None
#print(cur.mogrify(select_stament,(int(subject_id),recordDate)))
notExist = False
if track_not_exists(cur,subject_id,recordDate):
notExist = True
print("not exist")
insert_statement = 'insert into '+table+' (%s) values %s'
print(cur.mogrify(insert_statement, (AsIs(','.join(columns)), tuple(values))))
if notExist:
cur.execute(insert_statement, (AsIs(','.join(columns)), tuple(values)))
conn.commit()
conn.close()
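# Hedged addition: reopen a connection to confirm the row was stored (assumes the
# same local 'mimic' database is still reachable).
conn = psycopg2.connect("dbname=mimic")
cur = conn.cursor()
cur.execute("SELECT count(*) FROM waveformFields WHERE subject_id = %s", (int(subject_id),))
print(cur.fetchone())
conn.close()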
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Read the file with the waveforms we are going to use
Step2: 3) Clean the stray characters and split the string pXXNNNN-YYYY-MM-DD-hh-mm, where XXNNNN is the patient's unique identifier SUBJECT_ID and YYYY-MM-DD-hh-mm is the date of the patient's stay
Step3: 4) Read the waveform header to obtain the patient information we will store
Step4: Add the subject_id and the record date to the fields
Step5: Convert the fields into a dictionary
Step6: Connect to the postgres database where we will store the data
Step7: Create the table where the data will be stored
Step8: Check whether the record already exists
Step9: Insert the data
Step10: Commit
Step11: Close the connection
|
3,327
|
<ASSISTANT_TASK:>
Python Code:
!pip install --pre deepchem
import deepchem
deepchem.__version__
import deepchem as dc
import numpy as np
tasks, datasets, transformers = dc.molnet.load_muv(split='stratified')
train_dataset, valid_dataset, test_dataset = datasets
n_tasks = len(tasks)
n_features = train_dataset.get_data_shape()[0]
model = dc.models.MultitaskClassifier(n_tasks, n_features)
model.fit(train_dataset)
y_true = test_dataset.y
y_pred = model.predict(test_dataset)
metric = dc.metrics.roc_auc_score
for i in range(n_tasks):
score = metric(dc.metrics.to_one_hot(y_true[:,i]), y_pred[:,i])
print(tasks[i], score)
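# Hedged addition: summarize the per-task results with a mean ROC AUC across all tasks.
scores = [metric(dc.metrics.to_one_hot(y_true[:, i]), y_pred[:, i]) for i in range(n_tasks)]
print("mean ROC AUC:", np.mean(scores))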
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The MUV dataset is a challenging benchmark in molecular design that consists of 17 different "targets" where there are only a few "active" compounds per target. There are 93,087 compounds in total, yet no task has more than 30 active compounds, and many have even less. Training a model with such a small number of positive examples is very challenging. Multitask models address this by training a single model that predicts all the different targets at once. If a feature is useful for predicting one task, it often is useful for predicting several other tasks as well. Each added task makes it easier to learn important features, which improves performance on other tasks [2].
Step2: Now let's train a model on it. We'll use a MultitaskClassifier, which is a simple stack of fully connected layers.
Step3: Let's see how well it does on the test set. We loop over the 17 tasks and compute the ROC AUC for each one.
|
3,328
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import matplotlib.pyplot as plt
import bayesian_changepoint_detection.generate_data as gd
import seaborn
%matplotlib inline
%load_ext autoreload
%autoreload 2
partition, data = gd.generate_xuan_motivating_example(200,500)
import numpy as np
changes = np.cumsum(partition)
fig, ax = plt.subplots(figsize=[16, 4])
for p in changes:
ax.plot([p,p],[np.min(data),np.max(data)],'r')
for d in range(2):
ax.plot(data[:,d])
from bayesian_changepoint_detection.priors import const_prior
from bayesian_changepoint_detection.offline_likelihoods import IndepentFeaturesLikelihood
from bayesian_changepoint_detection.bayesian_models import offline_changepoint_detection
from functools import partial
Q_ifm, P_ifm, Pcp_ifm = offline_changepoint_detection(
data, partial(const_prior, p=1/(len(data) + 1)), IndepentFeaturesLikelihood(), truncate=-20
)
fig, ax = plt.subplots(2, figsize=[18, 8])
for p in changes:
ax[0].plot([p,p],[np.min(data),np.max(data)],'r')
for d in range(2):
ax[0].plot(data[:,d])
plt.legend(['Raw data with Original Changepoints'])
ax[1].plot(np.exp(Pcp_ifm).sum(0))
plt.legend(['Independent Factor Model'])
plt.show()
from bayesian_changepoint_detection.offline_likelihoods import FullCovarianceLikelihood
Q_full, P_full, Pcp_full = offline_changepoint_detection(
data, partial(const_prior, p=1/(len(data)+1)), FullCovarianceLikelihood(), truncate=-20
)
fig, ax = plt.subplots(2, figsize=[18, 8])
for p in changes:
ax[0].plot([p,p],[np.min(data),np.max(data)],'r')
for d in range(2):
ax[0].plot(data[:,d])
plt.legend(['Raw data with Original Changepoints'])
ax[1].plot(np.exp(Pcp_full).sum(0))
plt.legend(['Full Covariance Model'])
plt.show()
%timeit Q_ifm, P_ifm, Pcp_ifm = offline_changepoint_detection(data, partial(const_prior, p=1/(len(data)+1)), IndepentFeaturesLikelihood(), truncate=-20)
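# Hedged addition: the most probable changepoint locations can be read off as the
# largest values of the posterior change probability from the full covariance model.
cp_prob = np.exp(Pcp_full).sum(0)
print(np.argsort(cp_prob)[-4:])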
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's plot this data
Step2: Let's try to detect the changes with independent features
Step3: Unfortunately, not very good... Now let's try the full covariance model (warning, it'll take a while)
Step4: Ahh, much better now!
|
3,329
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import fredpy as fp
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
# Export path: Set to empty string '' if you want to export data to current directory
export_path = '../Csv/'
# Load FRED API key
fp.api_key = fp.load_api_key('fred_api_key.txt')
# Download data
gdp = fp.series('GDP')
consumption = fp.series('PCEC')
investment = fp.series('GPDI')
government = fp.series('GCE')
exports = fp.series('EXPGS')
imports = fp.series('IMPGS')
net_exports = fp.series('NETEXP')
hours = fp.series('HOANBS')
deflator = fp.series('GDPDEF')
pce_deflator = fp.series('PCECTPI')
cpi = fp.series('CPIAUCSL')
m2 = fp.series('M2SL')
tbill_3mo = fp.series('TB3MS')
unemployment = fp.series('UNRATE')
# Base year for NIPA deflators
cpi_base_year = cpi.units.split(' ')[1].split('=')[0]
# Base year for CPI
nipa_base_year = deflator.units.split(' ')[1].split('=')[0]
# Convert monthly M2, 3-mo T-Bill, and unemployment to quarterly
m2 = m2.as_frequency('Q')
tbill_3mo = tbill_3mo.as_frequency('Q')
unemployment = unemployment.as_frequency('Q')
cpi = cpi.as_frequency('Q')
# Deflate GDP, consumption, investment, government expenditures, net exports, and m2 with the GDP deflator
def deflate(series,deflator):
deflator, series = fp.window_equalize([deflator, series])
series = series.divide(deflator).times(100)
return series
gdp = deflate(gdp,deflator)
consumption = deflate(consumption,deflator)
investment = deflate(investment,deflator)
government = deflate(government,deflator)
net_exports = deflate(net_exports,deflator)
exports = deflate(exports,deflator)
imports = deflate(imports,deflator)
m2 = deflate(m2,deflator)
# pce inflation as percent change over past year
pce_deflator = pce_deflator.apc()
# cpi inflation as percent change over past year
cpi = cpi.apc()
# GDP deflator inflation as percent change over past year
deflator = deflator.apc()
# Convert unemployment, 3-mo T-Bill, pce inflation, cpi inflation, GDP deflator inflation data to rates
unemployment = unemployment.divide(100)
tbill_3mo = tbill_3mo.divide(100)
pce_deflator = pce_deflator.divide(100)
cpi = cpi.divide(100)
deflator = deflator.divide(100)
# Make sure that the RBC data has the same data range
gdp,consumption,investment,government,exports,imports,net_exports,hours = fp.window_equalize([gdp,consumption,investment,government,exports,imports,net_exports,hours])
# T-Bill data doesn't neet to go all the way back to 1930s
tbill_3mo = tbill_3mo.window([gdp.data.index[0],'2222'])
metadata = pd.Series(dtype=str,name='Values')
metadata['nipa_base_year'] = nipa_base_year
metadata['cpi_base_year'] = cpi_base_year
metadata.to_csv(export_path+'/business_cycle_metadata.csv')
# Set the capital share of income
alpha = 0.35
# Average saving rate
s = np.mean(investment.data/gdp.data)
# Average quarterly labor hours growth rate
n = (hours.data[-1]/hours.data[0])**(1/(len(hours.data)-1)) - 1
# Average quarterly real GDP growth rate
g = ((gdp.data[-1]/gdp.data[0])**(1/(len(gdp.data)-1)) - 1) - n
# Compute annual depreciation rate
depA = fp.series('M1TTOTL1ES000')
gdpA = fp.series('gdpa')
gdpA = gdpA.window([gdp.data.index[0],gdp.data.index[-1]])
gdpA,depA = fp.window_equalize([gdpA,depA])
deltaKY = np.mean(depA.data/gdpA.data)
delta = (n+g)*deltaKY/(s-deltaKY)
# print calibrated values:
print('Avg saving rate: ',round(s,5))
print('Avg annual labor growth:',round(4*n,5))
print('Avg annual gdp growth: ',round(4*g,5))
print('Avg annual dep rate: ',round(4*delta,5))
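# Hedged addition: the implied steady-state quarterly capital-output ratio from the
# calibration, K/Y = s/(n+g+delta), which is used to initialize the capital series below.
print('Implied quarterly K/Y: ',round(s/(n+g+delta),5))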
# Construct the capital series. Note that the GPD and investment data are reported on an annualized basis
# so divide by 4 to get quarterly data.
capital = np.zeros(len(gdp.data))
capital[0] = gdp.data[0]/4*s/(n+g+delta)
for t in range(len(gdp.data)-1):
capital[t+1] = investment.data[t]/4 + (1-delta)*capital[t]
# Save in a fredpy series
capital = fp.to_fred_series(data = capital,dates =gdp.data.index,units = gdp.units,title='Capital stock of the US',frequency='Quarterly')
# Compute TFP
tfp = gdp.data/capital.data**alpha/hours.data**(1-alpha)
tfp = fp.to_fred_series(data = tfp,dates =gdp.data.index,units = gdp.units,title='TFP of the US',frequency='Quarterly')
# Convert real GDP, consumption, investment, government expenditures, net exports and M2
# into thousands of dollars per civilian 16 and over
gdp = gdp.per_capita(civ_pop=True).times(1000)
consumption = consumption.per_capita(civ_pop=True).times(1000)
investment = investment.per_capita(civ_pop=True).times(1000)
government = government.per_capita(civ_pop=True).times(1000)
exports = exports.per_capita(civ_pop=True).times(1000)
imports = imports.per_capita(civ_pop=True).times(1000)
net_exports = net_exports.per_capita(civ_pop=True).times(1000)
hours = hours.per_capita(civ_pop=True).times(1000)
capital = capital.per_capita(civ_pop=True).times(1000)
m2 = m2.per_capita(civ_pop=True).times(1000)
# Scale hours per person to equal 100 in October (fourth quarter) of the GDP deflator base year.
hours.data = hours.data/hours.data.loc[nipa_base_year+'-10-01']*100
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp.data)
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][1].plot(consumption.data)
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][2].plot(investment.data)
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][3].plot(government.data)
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][0].plot(capital.data)
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][1].plot(hours.data)
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ()'+nipa_base_year+'=100)')
axes[1][2].plot(tfp.data)
axes[1][2].set_title('TFP')
axes[1][3].plot(m2.data)
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[2][0].plot(tbill_3mo.data*100)
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator.data*100)
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi.data*100)
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment.data*100)
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent');
# HP filter to isolate trend and cyclical components
gdp_log_cycle,gdp_log_trend= gdp.log().hp_filter()
consumption_log_cycle,consumption_log_trend= consumption.log().hp_filter()
investment_log_cycle,investment_log_trend= investment.log().hp_filter()
government_log_cycle,government_log_trend= government.log().hp_filter()
exports_log_cycle,exports_log_trend= exports.log().hp_filter()
imports_log_cycle,imports_log_trend= imports.log().hp_filter()
# net_exports_log_cycle,net_exports_log_trend= net_exports.log().hp_filter()
capital_log_cycle,capital_log_trend= capital.log().hp_filter()
hours_log_cycle,hours_log_trend= hours.log().hp_filter()
tfp_log_cycle,tfp_log_trend= tfp.log().hp_filter()
deflator_cycle,deflator_trend= deflator.hp_filter()
pce_deflator_cycle,pce_deflator_trend= pce_deflator.hp_filter()
cpi_cycle,cpi_trend= cpi.hp_filter()
m2_log_cycle,m2_log_trend= m2.log().hp_filter()
tbill_3mo_cycle,tbill_3mo_trend= tbill_3mo.hp_filter()
unemployment_cycle,unemployment_trend= unemployment.hp_filter()
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp.data)
axes[0][0].plot(np.exp(gdp_log_trend.data),c='r')
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][1].plot(consumption.data)
axes[0][1].plot(np.exp(consumption_log_trend.data),c='r')
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][2].plot(investment.data)
axes[0][2].plot(np.exp(investment_log_trend.data),c='r')
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][3].plot(government.data)
axes[0][3].plot(np.exp(government_log_trend.data),c='r')
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][0].plot(capital.data)
axes[1][0].plot(np.exp(capital_log_trend.data),c='r')
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][1].plot(hours.data)
axes[1][1].plot(np.exp(hours_log_trend.data),c='r')
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ()'+nipa_base_year+'=100)')
axes[1][2].plot(tfp.data)
axes[1][2].plot(np.exp(tfp_log_trend.data),c='r')
axes[1][2].set_title('TFP')
axes[1][3].plot(m2.data)
axes[1][3].plot(np.exp(m2_log_trend.data),c='r')
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[2][0].plot(tbill_3mo.data*100)
axes[2][0].plot(tbill_3mo_trend.data*100,c='r')
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator.data*100)
axes[2][1].plot(pce_deflator_trend.data*100,c='r')
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi.data*100)
axes[2][2].plot(cpi_trend.data*100,c='r')
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment.data*100)
axes[2][3].plot(unemployment_trend.data*100,c='r')
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent')
ax = fig.add_subplot(1,1,1)
ax.axis('off')
ax.plot(0,0,label='Actual')
ax.plot(0,0,c='r',label='Trend')
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),ncol=2)
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp_log_cycle.data)
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][1].plot(consumption_log_cycle.data)
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][2].plot(investment_log_cycle.data)
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[0][3].plot(government_log_cycle.data)
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][0].plot(capital_log_cycle.data)
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[1][1].plot(hours_log_cycle.data)
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ()'+nipa_base_year+'=100)')
axes[1][2].plot(tfp_log_cycle.data)
axes[1][2].set_title('TFP')
axes[1][3].plot(m2_log_cycle.data)
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+nipa_base_year+' $')
axes[2][0].plot(tbill_3mo_cycle.data)
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator_cycle.data)
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi_cycle.data)
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment_cycle.data)
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent');
# Create a DataFrame with actual and trend data
data = pd.DataFrame({
'gdp':gdp.data,
'gdp_trend':np.exp(gdp_log_trend.data),
'gdp_cycle':gdp_log_cycle.data,
'consumption':consumption.data,
'consumption_trend':np.exp(consumption_log_trend.data),
'consumption_cycle':consumption_log_cycle.data,
'investment':investment.data,
'investment_trend':np.exp(investment_log_trend.data),
'investment_cycle':investment_log_cycle.data,
'government':government.data,
'government_trend':np.exp(government_log_trend.data),
'government_cycle':government_log_cycle.data,
'exports':exports.data,
'exports_trend':np.exp(exports_log_trend.data),
'exports_cycle':exports_log_cycle.data,
'imports':imports.data,
'imports_trend':np.exp(imports_log_trend.data),
'imports_cycle':imports_log_cycle.data,
'hours':hours.data,
'hours_trend':np.exp(hours_log_trend.data),
'hours_cycle':hours_log_cycle.data,
'capital':capital.data,
'capital_trend':np.exp(capital_log_trend.data),
'capital_cycle':capital_log_cycle.data,
'tfp':tfp.data,
'tfp_trend':np.exp(tfp_log_trend.data),
'tfp_cycle':tfp_log_cycle.data,
'real_m2':m2.data,
'real_m2_trend':np.exp(m2_log_trend.data),
'real_m2_cycle':m2_log_cycle.data,
't_bill_3mo':tbill_3mo.data,
't_bill_3mo_trend':tbill_3mo_trend.data,
't_bill_3mo_cycle':tbill_3mo_cycle.data,
'cpi_inflation':cpi.data,
'cpi_inflation_trend':cpi_trend.data,
'cpi_inflation_cycle':cpi_cycle.data,
'pce_inflation':pce_deflator.data,
'pce_inflation_trend':pce_deflator_trend.data,
'pce_inflation_cycle':pce_deflator_cycle.data,
'unemployment':unemployment.data,
'unemployment_trend':unemployment_trend.data,
'unemployment_cycle':unemployment_cycle.data,
})
# RBC data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].dropna().to_csv(export_path+'rbc_data_actual_trend.csv',index=True)
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].dropna().to_csv(export_path+'rbc_data_actual_trend_cycle.csv',index=True)
# More comprehensive Business Cycle Data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].dropna().to_csv(export_path+'business_cycle_data_actual_trend.csv',index=True)
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].dropna().to_csv(export_path+'business_cycle_data_actual_trend_cycle.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download and manage data
Step2: Compute capital stock for US using the perpetual inventory method
Step3: Compute total factor productivity
Step4: Additional data management
Step5: Plot aggregate data
Step6: Compute HP filter of data
Step7: Plot aggregate data with trends
Step8: Plot cyclical components of the data
Step9: Create data files
|
3,330
|
<ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 100, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
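# Hedged addition: spot-check a single test example by comparing its predicted digit
# with the actual label.
print("Predicted:", predictions[0], "Actual:", actual[0])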
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving training and test data
Step2: Visualize the training data
Step3: Building the network
Step4: Training the network
Step5: Testing
|
3,331
|
<ASSISTANT_TASK:>
Python Code:
import pymongo
from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017/')
db = client.airbnb
listings = db.Rawdata
reviews = db.reviews
import pandas as pd
listings_df = pd.DataFrame(list(listings.find()))
reviews_df = pd.DataFrame(list(reviews.find()))
listings_df.head()
reviews_df.head()
from afinn import Afinn
from textblob import TextBlob
afinn = Afinn()
reviews_df['afinn'] = ""
for i in range(0, len(reviews_df)):
if pd.isnull(reviews_df.eng_comments[i]):
reviews_df.loc[i, 'afinn'] = "NA"
else:
reviews_df.loc[i, 'afinn'] = afinn.score(reviews_df.loc[i, "eng_comments"])
reviews_df.head(10)
negative = reviews_df[reviews_df['afinn'] < 0]
positive = reviews_df[reviews_df['afinn'] > 0]
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
hosts = list(listings_df['host_name'])
hosts = [x.lower() for x in hosts if type(x) is str]
stopwords = ['a', 'able', 'about', 'above', 'abroad', 'according', 'accordingly', 'across', 'actually', 'adj', 'after', 'afterwards', 'again', 'against', 'ago', 'ahead', "ain't", 'all', 'allow', 'allows', 'almost', 'alone', 'along', 'alongside', 'already', 'also', 'although', 'always', 'am', 'amid', 'amidst', 'among', 'amongst', 'an', 'and', 'another', 'any', 'anybody', 'anyhow', 'anyone', 'anything', 'anyway', 'anyways', 'anywhere', 'apart', 'appear', 'appreciate', 'appropriate', 'are', "aren't", 'around', 'as', "a's", 'aside', 'ask', 'asking', 'associated', 'at', 'available', 'away', 'awfully', 'b', 'back', 'backward', 'backwards', 'be', 'became', 'because', 'become', 'becomes', 'becoming', 'been', 'before', 'beforehand', 'begin', 'behind', 'being', 'believe', 'below', 'beside', 'besides', 'best', 'better', 'between', 'beyond', 'both', 'brief', 'but', 'by', 'c', 'came', 'can', 'cannot', 'cant', "can't", 'caption', 'cause', 'causes', 'certain', 'certainly', 'changes', 'clearly', "c'mon", 'co', 'co.', 'com', 'come', 'comes', 'concerning', 'consequently', 'consider', 'considering', 'contain', 'containing', 'contains', 'corresponding', 'could', "couldn't", 'course', "c's", 'currently', 'd', 'dare', "daren't", 'definitely', 'described', 'despite', 'did', "didn't", 'different', 'directly', 'do', 'does', "doesn't", 'doing', 'done', "don't", 'down', 'downwards', 'during', 'e', 'each', 'edu', 'eg', 'eight', 'eighty', 'either', 'else', 'elsewhere', 'end', 'ending', 'enough', 'entirely', 'especially', 'et', 'etc', 'even', 'ever', 'evermore', 'every', 'everybody', 'everyone', 'everything', 'everywhere', 'ex', 'exactly', 'example', 'except', 'f', 'fairly', 'farther', 'few', 'fewer', 'fifth', 'first', 'five', 'followed', 'following', 'follows', 'for', 'forever', 'former', 'formerly', 'forth', 'forward', 'found', 'four', 'from', 'further', 'furthermore', 'g', 'get', 'gets', 'getting', 'given', 'gives', 'go', 'goes', 'going', 'gone', 'got', 'gotten', 'greetings', 'h', 'had', "hadn't", 'half', 'happens', 'hardly', 'has', "hasn't", 'have', "haven't", 'having', 'he', "he'd", "he'll", 'hello', 'help', 'hence', 'her', 'here', 'hereafter', 'hereby', 'herein', "here's", 'hereupon', 'hers', 'herself', "he's", 'hi', 'him', 'himself', 'his', 'hither', 'hopefully', 'how', 'howbeit', 'however', 'hundred', 'i', "i'd", 'ie', 'if', 'ignored', "i'll", "i'm", 'immediate', 'in', 'inasmuch', 'inc', 'inc.', 'indeed', 'indicate', 'indicated', 'indicates', 'inner', 'inside', 'insofar', 'instead', 'into', 'inward', 'is', "isn't", 'it', "it'd", "it'll", 'its', "it's", 'itself', "i've", 'j', 'just', 'k', 'keep', 'keeps', 'kept', 'know', 'known', 'knows', 'l', 'last', 'lately', 'later', 'latter', 'latterly', 'least', 'less', 'lest', 'let', "let's", 'like', 'liked', 'likely', 'likewise', 'little', 'look', 'looking', 'looks', 'low', 'lower', 'ltd', 'm', 'made', 'mainly', 'make', 'makes', 'many', 'may', 'maybe', "mayn't", 'me', 'mean', 'meantime', 'meanwhile', 'merely', 'might', "mightn't", 'mine', 'minus', 'miss', 'more', 'moreover', 'most', 'mostly', 'mr', 'mrs', 'much', 'must', "mustn't", 'my', 'myself', 'n', 'name', 'namely', 'nd', 'nearly', 'necessary', 'need', "needn't", 'needs', 'neither', 'never', 'neverf', 'neverless', 'nevertheless', 'next', 'nine', 'ninety', 'no', 'nobody', 'non', 'none', 'nonetheless', 'noone', 'no-one', 'nor', 'normally', 'not', 'nothing', 'notwithstanding', 'novel', 'now', 'nowhere', 'o', 'obviously', 'of', 'off', 'often', 'oh', 'ok', 'okay', 'old', 'on', 'once', 'one', 'ones', "one's", 'only', 
'onto', 'opposite', 'or', 'other', 'others', 'otherwise', 'ought', "oughtn't", 'our', 'ours', 'ourselves', 'out', 'outside', 'over', 'overall', 'own', 'p', 'particular', 'particularly', 'past', 'per', 'perhaps', 'placed', 'please', 'plus', 'possible', 'presumably', 'probably', 'provided', 'provides', 'q', 'que', 'quite', 'qv', 'r', 'rather', 'rd', 're', 'really', 'reasonably', 'recent', 'recently', 'regarding', 'regardless', 'regards', 'relatively', 'respectively', 'right', 'round', 's', 'said', 'same', 'saw', 'say', 'saying', 'says', 'second', 'secondly', 'see', 'seeing', 'seem', 'seemed', 'seeming', 'seems', 'seen', 'self', 'selves', 'sensible', 'sent', 'serious', 'seriously', 'seven', 'several', 'shall', "shan't", 'she', "she'd", "she'll", "she's", 'should', "shouldn't", 'since', 'six', 'so', 'some', 'somebody', 'someday', 'somehow', 'someone', 'something', 'sometime', 'sometimes', 'somewhat', 'somewhere', 'soon', 'sorry', 'specified', 'specify', 'specifying', 'still', 'sub', 'such', 'sup', 'sure', 't', 'take', 'taken', 'taking', 'tell', 'tends', 'th', 'than', 'thank', 'thanks', 'thanx', 'that', "that'll", 'thats', "that's", "that've", 'the', 'their', 'theirs', 'them', 'themselves', 'then', 'thence', 'there', 'thereafter', 'thereby', "there'd", 'therefore', 'therein', "there'll", "there're", 'theres', "there's", 'thereupon', "there've", 'these', 'they', "they'd", "they'll", "they're", "they've", 'thing', 'things', 'think', 'third', 'thirty', 'this', 'thorough', 'thoroughly', 'those', 'though', 'three', 'through', 'throughout', 'thru', 'thus', 'till', 'to', 'together', 'too', 'took', 'toward', 'towards', 'tried', 'tries', 'truly', 'try', 'trying', "t's", 'twice', 'two', 'u', 'un', 'under', 'underneath', 'undoing', 'unfortunately', 'unless', 'unlike', 'unlikely', 'until', 'unto', 'up', 'upon', 'upwards', 'us', 'use', 'used', 'useful', 'uses', 'using', 'usually', 'v', 'value', 'various', 'versus', 'very', 'via', 'viz', 'vs', 'w', 'want', 'wants', 'was', "wasn't", 'way', 'we', "we'd", 'welcome', 'well', "we'll", 'went', 'were', "we're", "weren't", "we've", 'what', 'whatever', "what'll", "what's", "what've", 'when', 'whence', 'whenever', 'where', 'whereafter', 'whereas', 'whereby', 'wherein', "where's", 'whereupon', 'wherever', 'whether', 'which', 'whichever', 'while', 'whilst', 'whither', 'who', "who'd", 'whoever', 'whole', "who'll", 'whom', 'whomever', "who's", 'whose', 'why', 'will', 'willing', 'wish', 'with', 'within', 'without', 'wonder', "won't", 'would', "wouldn't", 'x', 'y', 'yes', 'yet', 'you', "you'd", "you'll", 'your', "you're", 'yours', 'yourself', 'yourselves', "you've", 'z', 'zero']
stopwords += ['apartment', 'appartment', 'apartments', 'house', 'room', 'rooms', 'airbnb', 'barcelona', 'accommodation', 'accomodation', 'mention', 'flat', 'review', 'rental', 'brother', 'lover', 'date', 'visit', 'face']
stopwords += hosts
vect_neg = TfidfVectorizer(min_df=50, max_df=0.8, token_pattern=r'\b[A-Za-z]{3,}\b', stop_words=stopwords)
X_neg = vect_neg.fit_transform(negative['eng_comments'])
X_df_neg = pd.DataFrame(X_neg.toarray(), index=negative['listing_id'], columns=vect_neg.get_feature_names())
X_df_neg1 = pd.concat([X_df_neg, pd.DataFrame(negative.filter(['listing_id', 'date'])).set_index(negative['listing_id'])], axis = 1, join_axes=[X_df_neg.index])
negative_df = pd.DataFrame(X_df_neg1.set_index(['listing_id', 'date']).stack(), columns = ['tfidf'])
negative_df.index.names = ['listing_id', 'date', 'word']
negative_df = negative_df[negative_df.tfidf != 0]
negative_df.head(20)
#negative_df.to_csv("negative_tfidf.csv")
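# Hedged addition: aggregate tf-idf weight per word across all negative reviews to
# see which terms dominate the negative corpus.
negative_df.groupby(level='word')['tfidf'].sum().sort_values(ascending=False).head(20)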
test = positive['eng_comments'][0:50000]
vect_pos = TfidfVectorizer(min_df=50, max_df=0.8, token_pattern=r'\b[A-Za-z]{3,}\b', stop_words=stopwords)
X_pos = vect_pos.fit_transform(positive['eng_comments'])
X_df_pos = pd.DataFrame(X_pos.toarray(), index=positive['listing_id'], columns=vect_pos.get_feature_names())
X_df_pos1 = pd.concat([X_df_pos, pd.DataFrame(positive.filter(['listing_id', 'date'])).set_index(positive['listing_id'])], axis = 1, join_axes=[X_df_pos.index])
positive_df = pd.DataFrame(X_df_pos1.set_index(['listing_id', 'date']).stack(), columns = ['tfidf'])
positive_df.index.names = ['listing_id', 'date', 'word']
positive_df = positive_df[positive_df.tfidf != 0]
positive_df.head(20)
#positive_df.to_csv("positive_tfidf.csv")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect Python to MongoDB
Step2: Retrieve from Database
Step3: Retrieve Tables from Database
Step4: Store data in a pandas dataframe for further analysis
Step5: Sentiment Analysis
Step6: 1. Calculate AFINN Score
Step7: 2. Split into positive and negative reviews
Step8: 3. Create TFIDF Corpus
Step9: Negative Reviews
Step10: Positive Reviews
|
3,332
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from IPython.html import widgets
print(widgets.Button.on_click.__doc__)
from IPython.display import display
button = widgets.Button(description="Click Me!")
display(button)
def on_button_clicked(b):
print("Button clicked.")
button.on_click(on_button_clicked)
text = widgets.Text()
display(text)
def handle_submit(sender):
print(text.value)
text.on_submit(handle_submit)
print(widgets.Widget.on_trait_change.__doc__)
int_range = widgets.IntSlider()
display(int_range)
def on_value_change(name, value):
print(value)
int_range.on_trait_change(on_value_change, 'value')
from IPython.utils import traitlets
caption = widgets.Latex(value = 'The values of slider1, slider2 and slider3 are synchronized')
sliders1, slider2, slider3 = widgets.IntSlider(description='Slider 1'),\
widgets.IntSlider(description='Slider 2'),\
widgets.IntSlider(description='Slider 3')
l = traitlets.link((sliders1, 'value'), (slider2, 'value'), (slider3, 'value'))
display(caption, sliders1, slider2, slider3)
caption = widgets.Latex(value = 'Changes in source values are reflected in target1 and target2')
source, target1, target2 = widgets.IntSlider(description='Source'),\
widgets.IntSlider(description='Target 1'),\
widgets.IntSlider(description='Target 2')
traitlets.dlink((source, 'value'), (target1, 'value'), (target2, 'value'))
display(caption, source, target1, target2)
# l.unlink()
caption = widgets.Latex(value = 'The values of range1, range2 and range3 are synchronized')
range1, range2, range3 = widgets.IntSlider(description='Range 1'),\
widgets.IntSlider(description='Range 2'),\
widgets.IntSlider(description='Range 3')
l = widgets.jslink((range1, 'value'), (range2, 'value'), (range3, 'value'))
display(caption, range1, range2, range3)
caption = widgets.Latex(value = 'Changes in source_range values are reflected in target_range1 and target_range2')
source_range, target_range1, target_range2 = widgets.IntSlider(description='Source range'),\
widgets.IntSlider(description='Target range 1'),\
widgets.IntSlider(description='Target range 2')
widgets.jsdlink((source_range, 'value'), (target_range1, 'value'), (target_range2, 'value'))
display(caption, source_range, target_range1, target_range2)
# l.unlink()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Button is not used to represent a data type. Instead, the button widget is used to handle mouse clicks. The on_click method of the Button can be used to register a function to be called when the button is clicked. The docstring of on_click can be seen below.
Step2: Example
Step3: on_submit
Step4: Traitlet events
Step5: Signatures
Step6: Linking Widgets
Step7: Function traitlets.link returns a Link object. The link can be broken by calling the unlink method.
Step8: Linking widgets attributes from the client side
Step9: Function widgets.jslink returns a Link widget. The link can be broken by calling the unlink method.
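The on_trait_change API used above was later deprecated in favour of observe. A minimal sketch of the equivalent callback, assuming a newer ipywidgets/traitlets release; this is an addition, not part of the original notebook:
import ipywidgets as widgets
from IPython.display import display
int_range = widgets.IntSlider()
display(int_range)
def on_value_change(change):
    # `change` is a dict carrying the 'old' and 'new' values of the trait
    print(change['new'])
int_range.observe(on_value_change, names='value')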
|
3,333
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
import numpy as np
import scipy as sp
import seaborn as sns
import matplotlib.pyplot as plt
import json
from IPython.display import Image
from IPython.core.display import HTML
import tensorflow as tf
retval=os.chdir("..")
clean_data=pd.read_pickle('./clean_data/clean_data.pkl')
clean_data.head()
kept_cols=['helpful']
kept_cols.extend(clean_data.columns[9:])
my_rand_state=0
test_size=0.25
from sklearn.model_selection import train_test_split
X = (clean_data[kept_cols].iloc[:,1:]).as_matrix()
y = (clean_data[kept_cols].iloc[:,0]).tolist()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size,
random_state=my_rand_state)
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=len(X[0,:]))]
dnn_clf=tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
hidden_units=[200,100,50],
model_dir='./other_output/tf_model')
from sklearn.preprocessing import StandardScaler
std_scale=StandardScaler()
class PassData(object):
'''
Callable object that can be initialized and
used to pass data to tensorflow
'''
def __init__(self,X,y):
self.X=X
self.y=y
def scale(self):
self.X = std_scale.fit_transform(self.X, self.y)
def __call__(self):
return tf.constant(X), tf.constant(y)
train_data=PassData(X,y)
train_data.scale()
dnn_clf.fit(input_fn=train_data,steps=1000)
from sklearn.metrics import roc_curve, auc
nb_fpr, nb_tpr, _ = roc_curve(y_test,
nb_clf_est_b.predict_proba(X_test)[:,1])
nb_roc_auc = auc(nb_fpr, nb_tpr)
qda_fpr, qda_tpr, _ = roc_curve(y_test,
qda_clf_est_b.predict_proba(X_test)[:,1])
qda_roc_auc = auc(qda_fpr, qda_tpr)
log_fpr, log_tpr, _ = roc_curve(y_test,
log_clf_est_b.predict_proba(X_test)[:,1])
log_roc_auc = auc(log_fpr, log_tpr)
rf_fpr, rf_tpr, _ = roc_curve(y_test,
rf_clf_est_b.predict_proba(X_test)[:,1])
rf_roc_auc = auc(rf_fpr, rf_tpr)
plt.plot(nb_fpr, nb_tpr, color='cyan', linestyle='--',
label='NB (area = %0.2f)' % nb_roc_auc, lw=2)
plt.plot(qda_fpr, qda_tpr, color='indigo', linestyle='--',
label='QDA (area = %0.2f)' % qda_roc_auc, lw=2)
plt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',
label='LOG (area = %0.2f)' % log_roc_auc, lw=2)
plt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',
label='RF (area = %0.2f)' % rf_roc_auc, lw=2)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',
label='Luck')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of Basic Models Using BOW & Macro-Text Stats')
plt.legend(loc="lower right")
plt.savefig('./plots/ROC_Basic_BOW_MERGED.png', bbox_inches='tight')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training and Testing Split
Step2: Setting Up TensorFlow
Step3: Testing Estimators
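The ROC cell above references fitted estimators (nb_clf_est_b, qda_clf_est_b, log_clf_est_b, rf_clf_est_b) that are not defined in this excerpt. The sketch below shows how such baseline estimators might be fit on the split created earlier; the original notebook's hyperparameters are not shown here, so the defaults below are assumptions:
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
# fit simple baselines on the training split defined above (hyperparameters assumed)
nb_clf_est_b = GaussianNB().fit(X_train, y_train)
qda_clf_est_b = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
log_clf_est_b = LogisticRegression().fit(X_train, y_train)
rf_clf_est_b = RandomForestClassifier(random_state=my_rand_state).fit(X_train, y_train)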
|
3,334
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
plt.figure(figsize=(10,6))
plt.bar(x=range(0,len(X_train.columns)),
height=pca.explained_variance_ratio_,
tick_label=X_train.columns)
plt.title('Explained Variance Ratio')
plt.ylabel('Explained Variance Ratio')
plt.xlabel('Component')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
3,335
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.extend(['../'])
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
%matplotlib inline
import onsager.crystal as crystal
import onsager.OnsagerCalc as onsager
from scipy.constants import physical_constants
kB = physical_constants['Boltzmann constant in eV/K'][0]
a0 = 0.28553
Fe = crystal.Crystal.BCC(a0, "Fe")
print(Fe)
stressconv = 1e9*1e-27*Fe.volume/physical_constants['electron volt'][0]
c11, c12, c44 = 243*stressconv, 145*stressconv, 116*stressconv
s11, s12, s44 = (c11+c12)/((c11-c12)*(c11+2*c12)), -c12/((c11-c12)*(c11+2*c12)), 1/c44
print('S11 = {:.4f} S12 = {:.4f} S44 = {:.4f}'.format(s11, s12, s44))
stensor = np.zeros((3,3,3,3))
for a in range(3):
for b in range(3):
for c in range(3):
for d in range(3):
if a==b and b==c and c==d: stensor[a,b,c,d] = s11
elif a==b and c==d: stensor[a,b,c,d] = s12
elif (a==d and b==c) or (a==c and b==d): stensor[a,b,c,d] = s44/4
uoct = np.dot(Fe.invlatt, np.array([0, 0, 0.5*a0]))
FeC = Fe.addbasis(Fe.Wyckoffpos(uoct), ["C"])
print(FeC)
chem = 1 # 1 is the index corresponding to our C atom in the crystal
sitelist = FeC.sitelist(chem)
jumpnetwork = FeC.jumpnetwork(chem, 0.6*a0) # 0.6*a0 is the cutoff distance for finding jumps
FeCdiffuser = onsager.Interstitial(FeC, chem, sitelist, jumpnetwork)
print(FeCdiffuser)
Dconv = 1e-2
vu0 = 10*Dconv
Etrans = 0.816
dipoledict = {'Poctpara': 8.03, 'Poctperp': 3.40,
'Ptetpara': 4.87, 'Ptetperp': 6.66}
FeCthermodict = {'pre': np.ones(len(sitelist)), 'ene': np.zeros(len(sitelist)),
'preT': vu0*np.ones(len(jumpnetwork)),
'eneT': Etrans*np.ones(len(jumpnetwork))}
# now to construct the site and transition dipole tensors; we use a "direction"--either
# the site position or the jump direction--to determine the parallel and perpendicular
# directions.
for dipname, Pname, direction in zip(('dipole', 'dipoleT'), ('Poct', 'Ptet'),
(np.dot(FeC.lattice, FeC.basis[chem][sitelist[0][0]]),
jumpnetwork[0][0][1])):
# identify the non-zero index in our direction:
paraindex = [n for n in range(3) if not np.isclose(direction[n], 0)][0]
Ppara, Pperp = dipoledict[Pname + 'para'], dipoledict[Pname + 'perp']
FeCthermodict[dipname] = np.diag([Ppara if i==paraindex else Pperp
for i in range(3)])
for k,v in FeCthermodict.items():
print('{}: {}'.format(k, v))
Trange = np.linspace(300, 1200, 91)
Tlabels = Trange[0::30]
Dlist, dDlist, Vlist = [], [], []
for T in Trange:
beta = 1./(kB*T)
D, dD = FeCdiffuser.elastodiffusion(FeCthermodict['pre'],
beta*FeCthermodict['ene'],
[beta*FeCthermodict['dipole']],
FeCthermodict['preT'],
beta*FeCthermodict['eneT'],
[beta*FeCthermodict['dipoleT']])
Dlist.append(D[0,0])
dDlist.append([dD[0,0,0,0], dD[0,0,1,1], dD[0,1,0,1]])
Vtensor = (kB*T/(D[0,0]))*np.tensordot(dD, stensor, axes=((2,3),(0,1)))
Vlist.append([np.trace(np.trace(Vtensor))/3,
Vtensor[0,0,0,0], Vtensor[0,0,1,1], Vtensor[0,1,0,1]])
D0 = FeCdiffuser.diffusivity(FeCthermodict['pre'],
np.zeros_like(FeCthermodict['ene']),
FeCthermodict['preT'],
np.zeros_like(FeCthermodict['eneT']))
D, dbeta = FeCdiffuser.diffusivity(FeCthermodict['pre'],
FeCthermodict['ene'],
FeCthermodict['preT'],
FeCthermodict['eneT'],
CalcDeriv=True)
print('D0: {:.4e} cm^2/s\nEact: {:.3f} eV'.format(D0[0,0], dbeta[0,0]/D[0,0]))
D, dD = np.array(Dlist), np.array(dDlist)
d11_T = np.vstack((Trange, dD[:,0])).T
d11pos = np.array([(T,d) for T,d in d11_T if d>=0])
d11neg = np.array([(T,d) for T,d in d11_T if d<0])
fig, ax1 = plt.subplots()
ax1.plot(1./Trange, D, 'k', label='$D$')
# ax1.plot(1./Trange, dD[:,0], 'b', label='$d_{11}$')
ax1.plot(1./d11pos[:,0], d11pos[:,1], 'b', label='$d_{11}$')
ax1.plot(1./d11neg[:,0], -d11neg[:,1], 'b--')
ax1.plot(1./Trange, dD[:,1], 'r', label='$d_{12}$')
ax1.plot(1./Trange, dD[:,2], 'g-.', label='$d_{44} = D$')
ax1.set_yscale('log')
ax1.set_ylabel('$D$ [cm$^2$/s]', fontsize='x-large')
ax1.set_xlabel('$T^{-1}$ [K$^{-1}$]', fontsize='x-large')
ax1.legend(bbox_to_anchor=(0.15,0.15,0.2,0.4), ncol=1,
shadow=True, frameon=True, fontsize='x-large')
ax2 = ax1.twiny()
ax2.set_xlim(ax1.get_xlim())
ax2.set_xticks([1./t for t in Tlabels])
ax2.set_xticklabels(["{:.0f}K".format(t) for t in Tlabels])
ax2.set_xlabel('$T$ [K]', fontsize='x-large')
ax2.grid(False)
ax2.tick_params(axis='x', top='on', direction='in', length=6)
plt.show()
# plt.savefig('Fe-C-diffusivity.pdf', transparent=True, format='pdf')
d11pos[0,0], d11neg[-1,0]
V = np.array(Vlist)
fig, ax1 = plt.subplots()
ax1.plot(1./Trange, V[:,0], 'k', label='$V_{\\rm{total}}$')
ax1.plot(1./Trange, V[:,1], 'b', label='$V_{11}$')
ax1.plot(1./Trange, V[:,2], 'r', label='$V_{12}$')
ax1.plot(1./Trange, 2*V[:,3], 'g', label='$V_{44}$')
ax1.set_yscale('linear')
ax1.set_ylabel('$V$ [atomic volume]', fontsize='x-large')
ax1.set_xlabel('$T^{-1}$ [K$^{-1}$]', fontsize='x-large')
ax1.legend(bbox_to_anchor=(0.3,0.3,0.5,0.2), ncol=2,
shadow=True, frameon=True, fontsize='x-large')
ax2 = ax1.twiny()
ax2.set_xlim(ax1.get_xlim())
ax2.set_xticks([1./t for t in Tlabels])
ax2.set_xticklabels(["{:.0f}K".format(t) for t in Tlabels])
ax2.set_xlabel('$T$ [K]', fontsize='x-large')
ax2.grid(False)
ax2.tick_params(axis='x', top='on', direction='in', length=6)
plt.show()
# plt.savefig('Fe-C-activation-volume.pdf', transparent=True, format='pdf')
print('Total volume: {v[0]:.4f}, {V[0]:.4f}A^3\nV_xxxx: {v[1]:.4f}, {V[1]:.4f}A^3\nV_xxyy: {v[2]:.4f}, {V[2]:.4f}A^3\nV_xyxy: {v[3]:.4f}, {V[3]:.4f}A^3'.format(v=V[-1,:], V=V[-1,:]*1e3*Fe.volume))
Vsph = 0.2*(3*V[-1,1] + 2*V[-1,2] + 4*V[-1,3]) # (3V11 + 2V12 + 2V44)/5
print('Spherical average uniaxial activation volume: {:.4f} {:.4f}A^3'.format(Vsph, Vsph*1e3*Fe.volume))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create BCC lattice (lattice constant in nm).
Step2: Elastic constants converted from GPa ($10^9$ J/m$^3$) to eV/(atomic volume).
Step3: Add carbon interstitial sites at octahedral sites in the lattice. This code (1) gets the set of symmetric Wyckoff positions corresponding to the single site $[00\frac12]$ (first translated into unit cell coordinates), and then adds that new basis to our Fe crystal to generate a new crystal structure, that we name "FeC".
Step4: Next, we construct a diffuser based on our interstitial. We need to create a sitelist (which will be the Wyckoff positions) and a jumpnetwork for the transitions between the sites. There are tags that correspond to the unique states and transitions in the diffuser.
Step5: Next, we assemble our data
Step6: We look at the diffusivity $D$, the elastodiffusivity $d$, and the activation volume tensor $V$ over a range of temperatures from 300K to 1200K.
Step7: Activation volume. We plot the isotropic value (change in diffusivity with respect to pressure), but also the $V_{xxxx}$, $V_{xxyy}$, and $V_{xyxy}$ terms. Interestingly, the $V_{xxxx}$ term is negative---which indicates that diffusivity along the $[100]$ direction increases with compressive stress in the $[100]$ direction.
|
3,336
|
<ASSISTANT_TASK:>
Python Code:
import os
from gensim import utils
from gensim.models import translation_matrix
from gensim.models import KeyedVectors
train_file = "OPUS_en_it_europarl_train_5K.txt"
with utils.smart_open(train_file, "r") as f:
word_pair = [tuple(utils.to_unicode(line).strip().split()) for line in f]
print (word_pair[:10])
# Load the source language word vector
source_word_vec_file = "EN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
source_word_vec = KeyedVectors.load_word2vec_format(source_word_vec_file, binary=False)
# Load the target language word vector
target_word_vec_file = "IT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
target_word_vec = KeyedVectors.load_word2vec_format(target_word_vec_file, binary=False)
transmat = translation_matrix.TranslationMatrix(source_word_vec, target_word_vec, word_pair)
transmat.train(word_pair)
print ("the shape of translation matrix is: ", transmat.translation_matrix.shape)
# The pair is in the form of (English, Italian), we can see whether the translated word is correct
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5, )
for k, v in translated_word.iteritems():
print ("word ", k, " and translated word", v)
words = [("apple", "mela"), ("orange", "arancione"), ("grape", "acino"), ("banana", "banana"), ("mango", "mango")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.iteritems():
print ("word ", k, " and translated word", v)
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("fish", "cavallo"), ("birds", "uccelli")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.iteritems():
print ("word ", k, " and translated word", v)
import pickle
word_dict = "word_dict.pkl"
with utils.smart_open(word_dict, "r") as f:
word_pair = pickle.load(f)
print ("the length of word pair ", len(word_pair))
import time
test_case = 10
word_pair_length = len(word_pair)
step = word_pair_length / test_case
duration = []
sizeofword = []
for idx in range(0, test_case):
sub_pair = word_pair[: (idx + 1) * step]
startTime = time.time()
transmat = translation_matrix.TranslationMatrix(source_word_vec, target_word_vec, sub_pair)
transmat.train(sub_pair)
endTime = time.time()
sizeofword.append(len(sub_pair))
duration.append(endTime - startTime)
import plotly
from plotly.graph_objs import Scatter, Layout
plotly.offline.init_notebook_mode(connected=True)
plotly.offline.iplot({
"data": [Scatter(x=sizeofword, y=duration)],
"layout": Layout(title="time for creation"),
}, filename="tm_creation_time.html")
from sklearn.decomposition import PCA
import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
en_words, it_words = zip(*words)
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# remove the code, use the plotly for ploting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()
# you can also using plotly lib to plot in one figure
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition = 'top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_number.html')
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
# Translate the English word five to Italian word
translated_word = transmat.translate([en_words[4]], 3)
print "translation of five: ", translated_word
# the translated words of five
for item in translated_word[en_words[4]]:
it_words_vec.append(target_word_vec[item])
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# remove the code, use the plotly for ploting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
# color="red",
# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition = 'top'
)
layout = Layout(
showlegend = False,
annotations = [dict(
x = new_it_words_vec[5][0],
y = new_it_words_vec[5][1],
text = translated_word[en_words[4]][0],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[6][0],
y = new_it_words_vec[6][1],
text = translated_word[en_words[4]][1],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[7][0],
y = new_it_words_vec[7][1],
text = translated_word[en_words[4]][2],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
)]
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_numbers.html')
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
en_words, it_words = zip(*words)
# remove the code, use the plotly for ploting instead
# pca = PCA(n_components=2)
# new_en_words_vec = pca.fit_transform(en_words_vec)
# new_it_words_vec = pca.fit_transform(it_words_vec)
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
# Translate the English word birds to Italian word
translated_word = transmat.translate([en_words[4]], 3)
print "translation of birds: ", translated_word
# the translated words of birds
for item in translated_word[en_words[4]]:
it_words_vec.append(target_word_vec[item])
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# # remove the code, use the plotly for ploting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
# color="red",
# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:5, 0],
y = new_it_words_vec[:5, 1],
mode = 'markers+text',
text = it_words[:5],
textposition = 'top'
)
layout = Layout(
showlegend = False,
annotations = [dict(
x = new_it_words_vec[5][0],
y = new_it_words_vec[5][1],
text = translated_word[en_words[4]][0],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[6][0],
y = new_it_words_vec[6][1],
text = translated_word[en_words[4]][1],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[7][0],
y = new_it_words_vec[7][1],
text = translated_word[en_words[4]][2],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
)]
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')
import gensim
from gensim.models.doc2vec import TaggedDocument
from gensim.models import Doc2Vec
from collections import namedtuple
from gensim import utils
def read_sentimentDocs():
SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
alldocs = [] # will hold all docs in original order
with utils.smart_open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:
for line_no, line in enumerate(alldata):
tokens = gensim.utils.to_unicode(line).split()
words = tokens[1:]
tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost
split = ['train','test','extra','extra'][line_no // 25000] # 25k train, 25k test, 25k extra
sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no // 12500] # [12.5K pos, 12.5K neg]*2 then unknown
alldocs.append(SentimentDocument(words, tags, split, sentiment))
train_docs = [doc for doc in alldocs if doc.split == 'train']
test_docs = [doc for doc in alldocs if doc.split == 'test']
doc_list = alldocs[:] # for reshuffling per pass
print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
return train_docs, test_docs, doc_list
train_docs, test_docs, doc_list = read_sentimentDocs()
small_corpus = train_docs[:15000]
large_corpus = train_docs + test_docs
print len(train_docs), len(test_docs), len(doc_list), len(small_corpus), len(large_corpus)
# for the computer performance limited, didn't run on the notebook.
# You do can trained on the server and save the model to the disk.
import multiprocessing
from random import shuffle
cores = multiprocessing.cpu_count()
model1 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)
model2 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)
small_train_docs = train_docs[:15000]
# train for small corpus
model1.build_vocab(small_train_docs)
for epoch in range(50):
shuffle(small_train_docs)
model1.train(small_train_docs, total_examples=len(small_train_docs), epochs=1)
model1.save("small_doc_15000_iter50.bin")
large_train_docs = train_docs + test_docs
# train for large corpus
model2.build_vocab(large_train_docs)
for epoch in range(50):
shuffle(large_train_docs)
model2.train(large_train_docs, total_examples=len(large_train_docs), epochs=1)
# save the model
model2.save("large_doc_50000_iter50.bin")
import os
import numpy as np
from sklearn.linear_model import LogisticRegression
def test_classifier_error(train, train_label, test, test_label):
classifier = LogisticRegression()
classifier.fit(train, train_label)
score = classifier.score(test, test_label)
print "the classifier score :", score
return score
#you can change the data folder
basedir = "/home/robotcator/doc2vec"
model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))
m2 = []
for i in range(len(large_corpus)):
m2.append(model2.docvecs[large_corpus[i].tags])
train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))
for i in range(12500):
train_array[i] = m2[i]
train_label[i] = 1
train_array[i + 12500] = m2[i + 12500]
train_label[i + 12500] = 0
test_array[i] = m2[i + 25000]
test_label[i] = 1
test_array[i + 12500] = m2[i + 37500]
test_label[i + 12500] = 0
print "The vectors are learned by doc2vec method"
test_classifier_error(train_array, train_label, test_array, test_label)
from gensim.models import translation_matrix
# you can change the data folder
basedir = "/home/robotcator/doc2vec"
model1 = Doc2Vec.load(os.path.join(basedir, "small_doc_15000_iter50.bin"))
model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))
l = model1.docvecs.count
l2 = model2.docvecs.count
m1 = np.array([model1.docvecs[large_corpus[i].tags].flatten() for i in range(l)])
# learn the mapping bettween two model
model = translation_matrix.BackMappingTranslationMatrix(large_corpus[:15000], model1, model2)
model.train(large_corpus[:15000])
for i in range(l, l2):
infered_vec = model.infer_vector(model2.docvecs[large_corpus[i].tags])
m1 = np.vstack((m1, infered_vec.flatten()))
train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))
# because those document, 25k documents are postive label, 25k documents are negative label
for i in range(12500):
train_array[i] = m1[i]
train_label[i] = 1
train_array[i + 12500] = m1[i + 12500]
train_label[i + 12500] = 0
test_array[i] = m1[i + 25000]
test_label[i] = 1
test_array[i + 12500] = m1[i + 37500]
test_label[i + 12500] = 0
print "The vectors are learned by back-mapping method"
test_classifier_error(train_array, train_label, test_array, test_label)
from sklearn.decomposition import PCA
import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)
m1_part = m1[14995: 15000]
m2_part = m2[14995: 15000]
m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)
pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)
trace1 = Scatter(
x = reduced_vec1[:, 0],
y = reduced_vec1[:, 1],
mode = 'markers+text',
text = ['doc' + str(i) for i in range(len(reduced_vec1))],
textposition = 'top'
)
trace2 = Scatter(
x = reduced_vec2[:, 0],
y = reduced_vec2[:, 1],
mode = 'markers+text',
text = ['doc' + str(i) for i in range(len(reduced_vec1))],
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')
m1_part = m1[14995: 15002]
m2_part = m2[14995: 15002]
m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)
pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)
trace1 = Scatter(
x = reduced_vec1[:, 0],
y = reduced_vec1[:, 1],
mode = 'markers+text',
text = ['sdoc' + str(i) for i in range(len(reduced_vec1))],
textposition = 'top'
)
trace2 = Scatter(
x = reduced_vec2[:, 0],
y = reduced_vec2[:, 1],
mode = 'markers+text',
text = ['tdoc' + str(i) for i in range(len(reduced_vec1))],
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For this tutorial, we'll train our model using the English -> Italian word pairs from the OPUS collection. This corpus contains 5000 word pairs; each pair is an English word with its corresponding Italian word.
Step2: This tutorial uses 300-dimensional vectors of English words as the source and vectors of Italian words as the target. (Those vectors were trained with the word2vec toolkit using CBOW; the context window was set to 5 words on either side of the target,
Step3: Train the translation matrix
Step4: Prediction Time
Step5: Part two
Step6: Part three
Step7: The Creation Time for the Translation Matrix
Step8: You will see a two-dimensional plot whose horizontal axis is the size of the corpus and whose vertical axis is the time to train a translation matrix (in seconds). As the corpus size increases, the training time increases linearly.
Step9: The figure shows that the word vectors for the English numbers one to five and the corresponding Italian words uno to cinque have similar geometric arrangements, so the relationship between the vector spaces representing these two languages can be captured by a linear mapping.
Step10: You will probably see two kinds of differently colored nodes, one for English and the other for Italian. For the translation of the word five, the top 3 similar words returned are [u'cinque', u'quattro', u'tre']. We can easily see that the translation is convincing.
Step11: You will probably see two kinds of differently colored nodes, one for English and the other for Italian. For the translation of the word birds, the top 3 similar words returned are [u'uccelli', u'garzette', u'iguane']. The animal-word translations are as convincing as the number translations.
Step12: Here we train two Doc2Vec models; the parameters can be chosen freely. We trained model1 on 15k documents and model2 on 50k documents, but you should mix some documents from the 15k set used for model1 into model2, as discussed before.
Step13: For the IMDB dataset, we train a classifier on the training data, which has 25k documents with positive and negative labels, and then use this classifier to predict the test data, to see what accuracy the document vectors learned by the different methods can achieve.
Step14: For experiment one, we use the vectors learned by the Doc2Vec method. To evaluate those document vectors, we split the 50k documents into two parts, one for training and the other for testing.
Step15: For experiment two, the document vectors are learned by the back-mapping method, which fits a linear mapping between model1 and model2. Using this method like the translation matrix for word translation, if we provide the vectors for the additional 35k documents in model2, we can infer the corresponding vectors for model1.
Step16: As we can see, the vectors learned by the back-mapping method perform reasonably well but still need improvement.
|
3,337
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv("07-hw-animals.csv")
df
df.columns
df.head()
df['animal'].head(3)
df.sort_values('length', ascending=False).head(3)
df['animal'].value_counts()
dogs = df[df['animal'] == "dog"]
dogs
animal_larger_40 = df['length'] > 40
animal_larger_40
df[animal_larger_40]
df['length'].head()
inch = df['length'] * 0.393701
inch
inch = df['length'] / 2.54
inch
df['length_inch'] = inch
df.head()
dogs =df[df['animal'] =="dog"]
dogs
cats =df[df['animal'] =="cat"]
cats
cat = df['animal'] == "cat"
twelve_inch = df['length_inch'] > 12
df[cat & twelve_inch].head()
df[(df['animal'] == "cat") & (df['length_inch'] > 12)].head()
df[cat].describe()
dog = df['animal'] == "dog"
df[dog].describe()
df.groupby('animal').describe()
dogs.hist()
dogs.plot(kind='line')
# Horizontal bar graph of each animal's length, labelled by name
df.plot(kind='barh', x='name', y='length')
cats =df[df['animal'] =="cat"]
cats
sort_cat = cats.sort_values('length', ascending=False)
sort_cat
sort_cat.plot(kind='barh', x='name', y='length')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set all graphics from matplotlib to display inline
Step2: 3. Read the csv in (it should be UTF-8 already so you don't have to worry about encoding), save it with the proper boring name
Step3: 4. Display the names of the columns in the csv
Step4: 5. Display the first 3 animals
Step5: 6. Sort the animals to see the 3 longest animals.
Step6: 7. What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs.
Step7: 8. Only select the dogs
Step8: 9. Display all of the animals that are greater than 40 cm.
Step9: 10. 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
Step10: 11. Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
Step11: 13. Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
Step12: 13. What's the mean length of a cat?
Step13: 14. What's the mean length of a dog
Step14: 15. Use groupby to accomplish both of the above tasks at once.
Step15: 16. Make a histogram of the length of dogs.
Step16: 17 Change your graphing style to be something else (anything else!)
Step17: 18. Make a horizontal bar graph of the length of the animals, with their name as the label
Step18: 19. Make a sorted horizontal bar graph of the cats, with the larger cats on top.
|
3,338
|
<ASSISTANT_TASK:>
Python Code:
print(__doc__)
import sys
import numpy as np
np.random.seed(777)
import os
# The followings are hacks to allow sphinx-gallery to run the example.
sys.path.insert(0, os.getcwd())
main_dir = os.path.basename(sys.modules['__main__'].__file__)
IS_RUN_WITH_SPHINX_GALLERY = main_dir != os.getcwd()
from skopt import gp_minimize
from skopt import callbacks
from skopt.callbacks import CheckpointSaver
noise_level = 0.1
if IS_RUN_WITH_SPHINX_GALLERY:
# When this example is run with sphinx gallery, it breaks the pickling
# capacity for multiprocessing backend so we have to modify the way we
# define our functions. This has nothing to do with the example.
from utils import obj_fun
else:
def obj_fun(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level
checkpoint_saver = CheckpointSaver("./checkpoint.pkl", compress=9) # keyword arguments will be passed to `skopt.dump`
gp_minimize(obj_fun, # the function to minimize
[(-20.0, 20.0)], # the bounds on each dimension of x
x0=[-20.], # the starting point
acq_func="LCB", # the acquisition function (optional)
n_calls=10, # the number of evaluations of f including at x0
n_random_starts=0, # the number of random initialization points
callback=[checkpoint_saver], # a list of callbacks including the checkpoint saver
random_state=777);
from skopt import load
res = load('./checkpoint.pkl')
res.fun
x0 = res.x_iters
y0 = res.func_vals
gp_minimize(obj_fun, # the function to minimize
[(-20.0, 20.0)], # the bounds on each dimension of x
x0=x0, # already examined values for x
y0=y0, # observed values for x0
acq_func="LCB", # the acquisition function (optional)
n_calls=10, # the number of evaluations of f including at x0
n_random_starts=0, # the number of random initialization points
callback=[checkpoint_saver],
random_state=777);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple example
Step2: Now let's assume this did not finish at once but took a long time
Step3: Continue the search
|
3,339
|
<ASSISTANT_TASK:>
Python Code:
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName('2.1. Google Cloud Storage (CSV) & Spark DataFrames') \
.getOrCreate()
spark.conf.set("spark.sql.repl.eagerEval.enabled",True)
from google.cloud import storage
gcs_client = storage.Client()
bucket = gcs_client.bucket('solutions-public-assets')
list(bucket.list_blobs(prefix='time-series-master/'))
!hdfs dfs -ls 'gs://solutions-public-assets/time-series-master'
df1 = spark \
.read \
.option ( "inferSchema" , "true" ) \
.option ( "header" , "true" ) \
.csv ( "gs://solutions-public-assets/time-series-master/GBPUSD_*.csv" )
df1.printSchema()
df1
from pyspark.sql.types import StructType, StructField
from pyspark.sql.types import DoubleType, IntegerType, StringType, TimestampType, DateType
schema = StructType([
StructField("venue", StringType()),
StructField("currencies", StringType()),
StructField("time_stamp", TimestampType()),
StructField("bid", DoubleType()),
StructField("ask", DoubleType())
])
df2 = spark \
.read \
.schema(schema) \
.csv ( "gs://solutions-public-assets/time-series-master/GBPUSD_*.csv" )
df2.printSchema()
df2
print((df2.count(), len(df2.columns)))
import pyspark.sql.functions as F
df3 = df2.withColumn("hour", F.hour(F.col("time_stamp"))) \
.filter(df2['time_stamp'] >= F.lit('2014-01-01 00:00:00')) \
.filter(df2['time_stamp'] < F.lit('2014-01-02 00:00:10')).cache()
df3
print((df3.count(), len(df3.columns)))
import pyspark.sql.functions as F
df4 = df3 \
.groupBy("hour") \
.agg(F.sum('bid').alias('total_bids'))
df4.orderBy('total_bids', ascending=False)
# Update to your GCS bucket
gcs_bucket = 'dataproc-bucket-name'
gcs_filepath = 'gs://{}/currency/hourly_bids.csv'.format(gcs_bucket)
df4.coalesce(1).write \
.mode('overwrite') \
.csv(gcs_filepath)
!hdfs dfs -ls gs://dataproc-bucket-name/currency
df5 = spark.read \
.option ( "inferSchema" , "true" ) \
.option ( "header" , "true" ) \
.csv('gs://dataproc-bucket-name/currency/*')
df5
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Enable repl.eagerEval
Step2: List files in a GCS bucket
Step3: Alternatively use the hdfs cmd to list files in a directory which supports GCS buckets
Step4: Read CSV files from GCS into Spark Dataframe
Step5: If there is no header with column names, as is the case with this dataset, or the schema is not inferred correctly, then read the CSV files from GCS and define the schema explicitly
Step6: View the top 20 rows of the Spark dataframe
Step7: Print the shape of the dataframe (number of rows and number of columns)
Step8: Add hour column and filter the data to create a new dataframe with only 1 day of data
Step9: Group by hour and order by top_bids
Step10: Write Spark Dataframe to Google Cloud Storage in CSV format
Step11: Read the CSV file into a new DataFrame to check it was successfully saved
|
3,340
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import statsmodels.api as sm
import numpy as np
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
data = sm.datasets.longley.load()
data.exog = sm.add_constant(data.exog)
print(data.exog[:5])
ols_resid = sm.OLS(data.endog, data.exog).fit().resid
resid_fit = sm.OLS(ols_resid[1:], sm.add_constant(ols_resid[:-1])).fit()
print(resid_fit.tvalues[1])
print(resid_fit.pvalues[1])
rho = resid_fit.params[1]
from scipy.linalg import toeplitz
toeplitz(range(5))
order = toeplitz(range(len(ols_resid)))
sigma = rho**order
gls_model = sm.GLS(data.endog, data.exog, sigma=sigma)
gls_results = gls_model.fit()
glsar_model = sm.GLSAR(data.endog, data.exog, 1)
glsar_results = glsar_model.iterative_fit(1)
print(glsar_results.summary())
print(gls_results.params)
print(glsar_results.params)
print(gls_results.bse)
print(glsar_results.bse)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Longley dataset is a time series dataset
Step2: Let's assume that the data is heteroskedastic and that we know the nature of the heteroskedasticity. We can then define sigma and use it to give us a GLS model. First we will obtain the residuals from an OLS fit.
Step3: Assume that the error terms follow an AR(1) process with a trend
Step4: While we don't have strong evidence that the errors follow an AR(1) process, we continue.
Step5: As we know, an AR(1) process means that near-neighbors have a stronger relation, so we can give this structure by using a Toeplitz matrix
Step6: so that our error covariance structure is actually rho**order, which defines an autocorrelation structure
Step7: Of course, the exact rho in this instance is not known, so it might make more sense to use feasible GLS, which currently only has experimental support (a GLSAR sketch with more iterations follows this list).
Step8: Comparing the GLS and GLSAR results, we see that there are some small differences in the parameter estimates and the resulting standard errors.
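A minimal sketch of letting GLSAR iterate instead of taking a single step, which approximates feasible GLS by re-estimating rho on each pass. This is an addition, not part of the original notebook, and it assumes the statsmodels GLSAR.iterative_fit(maxiter=...) signature:
glsar_results_iter = glsar_model.iterative_fit(maxiter=6)  # re-estimate rho for several iterations
print(glsar_results_iter.params)
print(glsar_results_iter.bse)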
|
3,341
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Hari Bharadwaj <hari@nmr.mgh.harvard.edu>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne import io
from mne.time_frequency import tfr_multitaper
from mne.datasets import somato
print(__doc__)
data_path = somato.data_path()
raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif'
event_id, tmin, tmax = 1, -1., 3.
# Setup for reading the raw data
raw = io.Raw(raw_fname)
baseline = (None, 0)
events = mne.find_events(raw, stim_channel='STI 014')
# Pick a good channel for somatosensory responses.
picks = [raw.info['ch_names'].index('MEG 1142'), ]
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=baseline, reject=dict(grad=4000e-13))
freqs = np.arange(5., 50., 2.) # define frequencies of interest
n_cycles = freqs / 2. # 0.5 second time windows for all frequencies
# Choose time x (full) bandwidth product
time_bandwidth = 4.0 # With 0.5 s time windows, this gives 8 Hz smoothing
power, itc = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles,
use_fft=True, time_bandwidth=time_bandwidth,
return_itc=True, n_jobs=1)
# Plot results (with baseline correction only for power)
power.plot([0], baseline=(-0.5, 0), mode='mean', title='MEG 1142 - Power')
itc.plot([0], title='MEG 1142 - Intertrial Coherence')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load real somatosensory sample data.
Step2: Calculate power
|
3,342
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('V08g_lkKj6Q')
with open("primes_file.txt", "r") as primes:
output = []
for line in primes.readlines()[3:]: # skip the first 3 header lines
if line.strip() == 'end.':
break
for column in line.split():
num = int(column.strip())
output.append(num)
# verify we have the data we want
print(output[0:10]) # first 10
print(output[-10: -1]) # last 10
from math import sqrt as root2, floor
def eratosthenes(n):
the_max = floor(root2(n)) # upper limit of eliminator
sieve = list(range(0, n+1))
eliminator = 2
while True:
if eliminator > the_max:
break
print("Eliminating multiples of:", eliminator)
for k in range(2 * eliminator, n + 1, eliminator):
sieve[k] = 0
while sieve[eliminator + 1] == 0:
eliminator += 1
else:
eliminator = sieve[eliminator + 1]
# shrink me down (compact the sieve)
sieve = [n for n in sieve if n != 0][1:] # list comprehension!
return sieve
# apply fancy formatting to output
output = eratosthenes(1000)
for row in range(0, len(output), 10):
print(", ".join(map(lambda s: str.format("{:3d}", s), output[row:row+10])))
how_many = len(output)
check_list = output[:how_many] # get as many from the published pool as are in the sieve
print(output == check_list) # check for equality
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: output will be global since the above cell is run at top level, not inside the scope of a function. We check the last few entries below to confirm everything worked
Step2: So far I'm feeling pretty good about this one.
Step3: Now we're interested in checking our list against the elements obtained from primes_file.txt, up to and including our largest prime.
|
3,343
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import cm
# Import badlands grid generation toolbox
import pybadlands_companion.hydroGrid as hydr
# display plots in SVG format
%config InlineBackend.figure_format = 'svg'
#help(hydr.hydroGrid.__init__)
hydro1 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [40599,7656.65])
hydro2 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [33627.6,30672.9])
#help(hydro.getCatchment)
hydro1.getCatchment(timestep=200)
hydro2.getCatchment(timestep=200)
#help(hydro.viewNetwork)
hydro1.viewNetwork(markerPlot = False, linePlot = True, lineWidth = 2, markerSize = 15,
val = 'chi', width = 300, height = 500, colorMap = cm.viridis,
colorScale = 'Viridis', reverse = False,
title = '<br>Stream network graph 1')
hydro2.viewNetwork(markerPlot = False, linePlot = True, lineWidth = 2, markerSize = 15,
val = 'chi', width = 300, height = 500, colorMap = cm.viridis,
colorScale = 'Viridis', reverse = False,
title = '<br>Stream network graph 2')
hydro1.viewNetwork(markerPlot = True, linePlot = True, lineWidth = 3, markerSize = 3,
val = 'FA', width = 300, height = 500, colorMap = cm.Blues,
colorScale = 'Blues', reverse = True,
title = '<br>Stream network graph 1')
hydro2.viewNetwork(markerPlot = True, linePlot = True, lineWidth = 3, markerSize = 3,
val = 'FA', width = 300, height = 500, colorMap = cm.Blues,
colorScale = 'Blues', reverse = True,
title = '<br>Stream network graph 2')
#help(hydro.extractMainstream)
hydro1.extractMainstream()
hydro2.extractMainstream()
#help(hydro.viewStream)
hydro1.viewStream(linePlot = False, lineWidth = 1, markerSize = 7,
val = 'Z', width = 300, height = 500, colorMap = cm.jet,
colorScale = 'Jet', reverse = False,
title = '<br>Stream network graph 1')
hydro2.viewStream(linePlot = True, lineWidth = 1, markerSize = 7,
val = 'Z', width = 300, height = 500, colorMap = cm.jet,
colorScale = 'Jet', reverse = False,
title = '<br>Stream network graph 2')
hydro1.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=100)
hydro2.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=100)
#help(hydro1.viewPlot)
hydro1.viewPlot(lineWidth = 3, markerSize = 5, xval = 'dist', yval = 'Z',
width = 800, height = 500, colorLine = 'black', colorMarker = 'black',
opacity = 0.2, title = 'Chi vs distance to outlet')
hydro2.viewPlot(lineWidth = 3, markerSize = 5, xval = 'dist', yval = 'Z',
width = 800, height = 500, colorLine = 'orange', colorMarker = 'purple',
opacity = 0.2, title = 'Chi vs distance to outlet')
#help(hydro.timeProfiles)
hydro0 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [40599,7656.65])
timeStp = [20,40,60,80,100,120,140,160,180,200]
timeMA = map(lambda x: x * 0.25, timeStp)
print 'Profile time in Ma:',timeMA
dist = []
elev = []
for t in range(len(timeStp)):
hydro0.getCatchment(timestep=timeStp[t])
hydro0.extractMainstream()
hydro0.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=1000)
dist.append(hydro0.dist)
elev.append(hydro0.Zdata)
hydro0.timeProfiles(pData = elev, pDist = dist, width = 1000, height = 600, linesize = 3,
title = 'River profile through time')
hydro00 = hydr.hydroGrid(folder='output/h5', ncpus=1, \
ptXY = [33627.6,30672.9])
timeStp = [20,40,60,80,100,120,140,160,180,200]
timeMA = map(lambda x: x * 0.25, timeStp)
print 'Profile time in Ma:',timeMA
dist = []
elev = []
for t in range(len(timeStp)):
hydro00.getCatchment(timestep=timeStp[t])
hydro00.extractMainstream()
hydro00.computeParams(kd=8.e-1, kc=5.e-6, m=0.5, n=1., num=50)
dist.append(hydro00.dist)
elev.append(hydro00.Zdata)
hydro00.timeProfiles(pData = elev, pDist = dist, width = 1000, height = 600, linesize = 3,
title = 'River profile through time')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Load catchments parameters
Step2: 2. Extract particular catchment dataset
Step3: We can visualise the stream network using the viewNetwork function. The following paramters can be displayed
Step4: 3. Extract catchment main stream
Step5: As for the global stream network, you can use the viewStream function to visualise the main stream dataset.
Step6: 4. Compute main stream hydrometrics
Step7: The following combination of parameters can be visualised with the viewPlot function
Step8: 5. River profile through time
|
3,344
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
os.environ["SPARK_HOME"] = "/Users/projects/.pyenv/versions/3.7.10/envs/tatapower/lib/python3.7/site-packages/pyspark"
# os.environ["HADOOP_HOME"] = ""
# os.environ["PYSPARK_PYTHON"] = "/opt/cloudera/parcels/Anaconda/bin/python"
# os.environ["JAVA_HOME"] = "/usr/java/jdk1.8.0_161/jre"
# os.environ["PYLIB"] = os.environ["SPARK_HOME"] + "/python/lib"
# sys.path.insert(0, os.environ["PYLIB"] +"/py4j-0.10.6-src.zip")
# sys.path.insert(0, os.environ["PYLIB"] +"/pyspark.zip")
from pyspark import SparkFiles
from pyspark.sql import SparkSession
from pyspark.sql.window import Window
from pyspark.sql.types import *
from pyspark.sql.functions import *
# Create a SparkSession.
spark = SparkSession.builder\
.master("local[*]")\
.appName("ETL")\
.config("spark.executor.logs.rolling.time.interval", "daily")\
.getOrCreate()
# Social Network edgelist
# First, we set the filename
file = "HiggsTwitter/higgs-social_network.edgelist.gz"
# Second, Set the Schema where first column is follower and second is followed, both of types integer.
schema = StructType([StructField("follower", IntegerType()), StructField("followed", IntegerType())])
# Create the DataFrame
socialDF = spark.read.csv(path=file, sep=" ", schema=schema)
#Retweet Network
# First, we set the filename
file = "HiggsTwitter/higgs-retweet_network.edgelist.gz"
# Second, Set the Schema where first column is tweeter, second is tweeted, third is occur and all are of type integer.
schema = StructType([StructField("tweeter", IntegerType()), StructField("tweeted", IntegerType()), StructField("occur", IntegerType())])
# Create the DataFrame
retweetDF = spark.read.csv(path=file, sep=" ", schema=schema)
# Reply Network
# First, we set the filename
file = "HiggsTwitter/higgs-reply_network.edgelist.gz"
# Second, Set the Schema where first column is replier, second is replied, third is occur and all are of type integer.
schema = StructType([StructField("replier", IntegerType()), StructField("replied", IntegerType()), StructField("occur", IntegerType())])
# Create the DataFrame
replyDF = spark.read.csv(path=file, sep=" ", schema=schema)
# Mention Network
# First, we set the filename
file = "HiggsTwitter/higgs-mention_network.edgelist.gz"
# Second, Set the Schema where first column is mentioner, second is mentioned, third is occur and all are of type integer.
schema = StructType([StructField("mentioner", IntegerType()), StructField("mentioned", IntegerType()), StructField("occur", IntegerType())])
# Create the DataFrame
mentionDF = spark.read.csv(path=file, sep=" ", schema=schema)
# Activity Time
# First, we set the filename
file = "HiggsTwitter/higgs-activity_time.txt.gz"
# Second, Set the Schema where
# * first column is userA (integer)
# * second is userB (integer)
# * third is timestamp (integer)
# * fourth is interaction (string): Interaction can be: RT (retweet), MT (mention) or RE (reply)
schema = StructType([StructField("userA", IntegerType()), StructField("userB", IntegerType()), StructField("timestamp", IntegerType()), StructField("interaction", StringType())])
activityDF = spark.read.csv(path=file, sep=" ", schema=schema)
# Save all the five files to parquet format
# The output paths below are assumed local directories, not taken from the original notebook
socialDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/social_network")
retweetDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/retweet_network")
replyDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/reply_network")
mentionDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/mention_network")
activityDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/activity_time")
# Read all the five files from parquet format
socialDFpq = spark.read.parquet("HiggsTwitter/parquet/social_network")
retweetDFpq = spark.read.parquet("HiggsTwitter/parquet/retweet_network")
replyDFpq = spark.read.parquet("HiggsTwitter/parquet/reply_network")
mentionDFpq = spark.read.parquet("HiggsTwitter/parquet/mention_network")
activityDFpq = spark.read.parquet("HiggsTwitter/parquet/activity_time")
# Display the schema of the dataframes
socialDFpq.printSchema()
retweetDFpq.printSchema()
# Show the top 5 rows of each dataframe
socialDFpq.show(5)
retweetDFpq.show(5)
replyDFpq.show(5)
mentionDFpq.show(5)
activityDFpq.show(5)
# Users who have most followers
socialDFpq.groupBy("followed").agg(count("follower").alias("followers")).orderBy(desc("followers")).show(5)
# Users who have most mentions
mentionDFpq.groupBy("mentioned").agg(count("mentioner").alias("mentions")).orderBy(desc("mentions")).show(5)
# Of the top 5 followed users, how many mentions has each one?
# top_f contains "top 5 users who have most followers"
top_f = socialDFpq.groupBy("followed").agg(count("follower").alias("followers")).orderBy(desc("followers")).limit(5)
top_f.join(mentionDFpq, top_f.followed == mentionDFpq.mentioned, "left").groupBy("followed", "followers").agg(count("mentioner").alias("mentions")).show()
# Create temporary views so we can use SQL statements
# View names below ("social", "retweet", "reply", "mention", "activity") are assumed
socialDFpq.createOrReplaceTempView("social")
retweetDFpq.createOrReplaceTempView("retweet")
replyDFpq.createOrReplaceTempView("reply")
mentionDFpq.createOrReplaceTempView("mention")
activityDFpq.createOrReplaceTempView("activity")
# List all the tables in spark memory
spark.catalog.listTables()
# Users who have most followers using SQL
spark.sql("SELECT followed, COUNT(follower) AS followers FROM social GROUP BY followed ORDER BY followers DESC LIMIT 5").show()
# Users who have most mentions using SQL
spark.sql("SELECT mentioned, COUNT(mentioner) AS mentions FROM mention GROUP BY mentioned ORDER BY mentions DESC LIMIT 5").show()
# Of the top 5 followed users, how many mentions has each one? (using SQL)
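# One possible completion (a sketch, not from the original notebook); it relies on the
# temporary view names assumed above ("social", "mention").
spark.sql("""
    SELECT t.followed, t.followers, COUNT(m.mentioner) AS mentions
    FROM (SELECT followed, COUNT(follower) AS followers
          FROM social GROUP BY followed
          ORDER BY followers DESC LIMIT 5) t
    LEFT JOIN mention m ON t.followed = m.mentioned
    GROUP BY t.followed, t.followers
    ORDER BY t.followers DESC
""").show()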
%%time
# GZIP Compressed CSV
socialDF.groupBy("followed").agg(count("follower").alias("followers")).orderBy(desc("followers")).show(5)
%%time
# Parquet file
socialDFpq.groupBy("followed").agg(count("followed").alias("followers")).orderBy(desc("followers")).show(5)
# cache dataframes
socialDF.cache()
socialDFpq.cache()
# remove from cache
#socialDF.unpersist()
#socialDFpq.unpersist()
%%time
# GZIP Compressed CSV (dataframe cached)
socialDF.groupBy("followed").agg(count("followed").alias("followers")).orderBy(desc("followers")).show(5)
%%time
# Parquet file (dataframe cached)
socialDFpq.groupBy("followed").agg(count("followed").alias("followers")).orderBy(desc("followers")).show(5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing and creating SparkSession
Step2: Setting filesystem and files
Step3: Convert the CSV dataframes to Apache Parquet files
Step4: Load the parquet files into new dataframes
Step5: Working with dataframes
Step6: Spark SQL using DataFrames API
Step7: Spark SQL using SQL language
Step8: Performance testing
Step9: Cached DF vs not cached DF
Step10: Note
|
3,345
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('/Users/pradap/Documents/Research/Python-Package/anhaid/py_entitymatching/')
import py_entitymatching as em
import pandas as pd
import os
# Display the versions
print('python version: ' + sys.version )
print('pandas version: ' + pd.__version__ )
print('magellan version: ' + em.__version__ )
# Get the paths
path_A = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/fodors.csv'
path_B = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/zagats.csv'
# Load csv files as dataframes and set the key attribute in the dataframe
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
print('Number of tuples in A: ' + str(len(A)))
print('Number of tuples in B: ' + str(len(B)))
print('Number of tuples in A X B (i.e the cartesian product): ' + str(len(A)*len(B)))
A.head(2)
B.head(2)
# Display the keys of the input tables
em.get_key(A), em.get_key(B)
# If the tables are large we can downsample the tables like this
A1, B1 = em.down_sample(A, B, 200, 1, show_progress=False)
len(A1), len(B1)
# But for the purposes of this notebook, we will use the entire table A and B
# Blocking plan
# A, B -- attribute equiv. blocker [city] --------------------|---> candidate set
# Create attribute equivalence blocker
ab = em.AttrEquivalenceBlocker()
# Block using city attribute
C1 = ab.block_tables(A, B, 'city', 'city',
l_output_attrs=['name', 'addr', 'city', 'phone'],
r_output_attrs=['name', 'addr', 'city', 'phone']
)
len(C1)
# Debug blocker output
dbg = em.debug_blocker(C1, A, B, output_size=200)
# Display first few tuple pairs from the debug_blocker's output
dbg.head()
# Updated blocking sequence
# A, B ------ attribute equivalence [city] -----> C1--
# |----> C
# A, B ------ overlap blocker [name] --------> C2--
# Create overlap blocker
ob = em.OverlapBlocker()
# Block tables using 'name' attribute
C2 = ob.block_tables(A, B, 'name', 'name',
l_output_attrs=['name', 'addr', 'city', 'phone'],
r_output_attrs=['name', 'addr', 'city', 'phone'],
overlap_size=1,
show_progress=False
)
len(C2)
# Display first two rows from C2
C2.head(2)
# Combine blocker outputs
C = em.combine_blocker_outputs_via_union([C1, C2])
len(C)
# Debug again
dbg = em.debug_blocker(C, A, B)
# Display first few rows from the debugger output
dbg.head(3)
# Sample candidate set
S = em.sample_table(C, 450)
# Label S
G = em.label_table(S, 'gold')
# Load the pre-labeled data
path_G = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/lbl_restnt_wf1.csv'
G = em.read_csv_metadata(path_G,
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
len(G)
# Split S into development set (I) and evaluation set (J)
IJ = em.split_train_test(G, train_proportion=0.7, random_state=0)
I = IJ['train']
J = IJ['test']
# Create a set of ML-matchers
dt = em.DTMatcher(name='DecisionTree', random_state=0)
svm = em.SVMMatcher(name='SVM', random_state=0)
rf = em.RFMatcher(name='RF', random_state=0)
lg = em.LogRegMatcher(name='LogReg', random_state=0)
ln = em.LinRegMatcher(name='LinReg')
nb = em.NBMatcher(name='NaiveBayes')
# Generate features
feature_table = em.get_features_for_matching(A, B, validate_inferred_attr_types=False)
# List the names of the features generated
feature_table['feature_name']
# Convert the I into a set of feature vectors using F
H = em.extract_feature_vecs(I,
feature_table=feature_table,
attrs_after='gold',
show_progress=False)
# Display first few rows
H.head(3)
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
k=5,
target_attr='gold', metric_to_select_matcher='f1', random_state=0)
result['cv_stats']
# Split feature vectors into train and test
UV = em.split_train_test(H, train_proportion=0.5)
U = UV['train']
V = UV['test']
# Debug decision tree using GUI
em.vis_debug_rf(rf, U, V,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
target_attr='gold')
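# The helper below is a user-defined (blackbox) feature: it normalizes the phone-number
# format on both sides (replacing '/' with '-' and stripping spaces) and returns 1.0 only
# when the normalized strings match exactly.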
def phone_phone_feature(ltuple, rtuple):
p1 = ltuple.phone
p2 = rtuple.phone
p1 = p1.replace('/','-')
p1 = p1.replace(' ','')
p2 = p2.replace('/','-')
p2 = p2.replace(' ','')
if p1 == p2:
return 1.0
else:
return 0.0
feature_table = em.get_features_for_matching(A, B)
em.add_blackbox_feature(feature_table, 'phone_phone_feature', phone_phone_feature)
H = em.extract_feature_vecs(I, feature_table=feature_table, attrs_after='gold', show_progress=False)
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg, nb], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
k=5,
target_attr='gold', metric_to_select_matcher='f1', random_state=0)
result['cv_stats']
# Convert J into a set of feature vectors using feature table
L = em.extract_feature_vecs(J, feature_table=feature_table,
attrs_after='gold', show_progress=False)
# Train using feature vectors from I
rf.fit(table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
target_attr='gold')
# Predict on L
predictions = rf.predict(table=L, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'gold'],
append=True, target_attr='predicted', inplace=False)
# Evaluate the predictions
eval_result = em.eval_matches(predictions, 'gold', 'predicted')
em.print_eval_summary(eval_result)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matching two tables typically consists of the following three steps
Step2: Block tables to get candidate set
Step3: Debug blocker output
Step4: From the debug blocker's output we observe that the current blocker drops quite a few potential matches. We would want to update the blocking sequence to avoid dropping these potential matches.
Step5: We observe that the number of tuple pairs considered for matching is increased to 12530 (from 10165). Now let us debug the blocker output again to check if the current blocker sequence is dropping any potential matches.
Step6: We observe that the current blocker sequence does not drop obvious potential matches, and we can proceed with the matching step now. A subtle point to note here is that debugging the blocker output effectively provides a stopping criterion for modifying the blocker sequence.
Step7: Next, we label the sampled candidate set. Specifically, we would enter 1 for a match and 0 for a non-match.
Step8: For the purposes of this guide, we will load in a pre-labeled dataset (of 450 tuple pairs) included in this package.
Step9: Splitting the labeled data into development and evaluation set
Step10: Selecting the best learning-based matcher
Step11: Creating features
Step12: Converting the development set to feature vectors
Step13: Selecting the best matcher using cross-validation
Step14: Debugging matcher
Step15: Next, we debug the matcher using GUI. For the purposes of this guide, we use random forest matcher for debugging purposes.
Step16: From the GUI, we observe that phone numbers seem to be an important attribute, but they are stored in different formats. The current features do not capture this, and adding a feature that accounts for the difference in format can potentially improve the matching accuracy.
Step17: Now, we repeat extracting feature vectors (this time with updated feature table), imputing table and selecting the best matcher again using cross-validation.
Step18: Now, observe the best matcher is achieving a better F1. Let us stop here and proceed on to evaluating the best matcher on the unseen data (the evaluation set).
Step19: Training the selected matcher
Step20: Predicting the matches
Step21: Evaluating the predictions
|
3,346
|
<ASSISTANT_TASK:>
Python Code:
!pip install hanlp_restful -U
from hanlp_restful import HanLPClient
HanLP = HanLPClient('https://www.hanlp.com/api', auth=None, language='zh') # leave auth empty for anonymous access; language: 'zh' for Chinese, 'mul' for multilingual
HanLP.text_style_transfer(['国家对中石油抱有很大的期望.', '要用创新去推动高质量的发展。'],
target_style='gov_doc')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the client
Step2: Apply for an auth key
|
3,347
|
<ASSISTANT_TASK:>
Python Code:
# mode practice
## Practice here
def fibo(n): # Recursive Fibonacci sequence!
if n == 0:
return 0
elif n == 1:
return 1
return fibo(n-1) + fibo(n-2)
# below this cell
# Move this cell down
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Help with commands
Step2: Line numbers
|
3,348
|
<ASSISTANT_TASK:>
Python Code:
# Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import os
import time
import numpy as np
import matplotlib.pyplot as plt
import PIL
from IPython.display import clear_output
path_to_zip = tf.keras.utils.get_file('facades.tar.gz',
cache_subdir=os.path.abspath('.'),
origin='https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/facades.tar.gz',
extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'facades/')
BUFFER_SIZE = 400
BATCH_SIZE = 1
IMG_WIDTH = 256
IMG_HEIGHT = 256
def load_image(image_file, is_train):
image = tf.read_file(image_file)
image = tf.image.decode_jpeg(image)
w = tf.shape(image)[1]
w = w // 2
real_image = image[:, :w, :]
input_image = image[:, w:, :]
input_image = tf.cast(input_image, tf.float32)
real_image = tf.cast(real_image, tf.float32)
if is_train:
# random jittering
# resizing to 286 x 286 x 3
input_image = tf.image.resize_images(input_image, [286, 286],
align_corners=True,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
real_image = tf.image.resize_images(real_image, [286, 286],
align_corners=True,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# randomly cropping to 256 x 256 x 3
stacked_image = tf.stack([input_image, real_image], axis=0)
cropped_image = tf.random_crop(stacked_image, size=[2, IMG_HEIGHT, IMG_WIDTH, 3])
input_image, real_image = cropped_image[0], cropped_image[1]
if np.random.random() > 0.5:
# random mirroring
input_image = tf.image.flip_left_right(input_image)
real_image = tf.image.flip_left_right(real_image)
else:
input_image = tf.image.resize_images(input_image, size=[IMG_HEIGHT, IMG_WIDTH],
align_corners=True, method=2)
real_image = tf.image.resize_images(real_image, size=[IMG_HEIGHT, IMG_WIDTH],
align_corners=True, method=2)
# normalizing the images to [-1, 1]
input_image = (input_image / 127.5) - 1
real_image = (real_image / 127.5) - 1
return input_image, real_image
train_dataset = tf.data.Dataset.list_files(PATH+'train/*.jpg')
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.map(lambda x: load_image(x, True))
train_dataset = train_dataset.batch(1)
test_dataset = tf.data.Dataset.list_files(PATH+'test/*.jpg')
test_dataset = test_dataset.map(lambda x: load_image(x, False))
test_dataset = test_dataset.batch(1)
OUTPUT_CHANNELS = 3
class Downsample(tf.keras.Model):
def __init__(self, filters, size, apply_batchnorm=True):
super(Downsample, self).__init__()
self.apply_batchnorm = apply_batchnorm
initializer = tf.random_normal_initializer(0., 0.02)
self.conv1 = tf.keras.layers.Conv2D(filters,
(size, size),
strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False)
if self.apply_batchnorm:
self.batchnorm = tf.keras.layers.BatchNormalization()
def call(self, x, training):
x = self.conv1(x)
if self.apply_batchnorm:
x = self.batchnorm(x, training=training)
x = tf.nn.leaky_relu(x)
return x
class Upsample(tf.keras.Model):
def __init__(self, filters, size, apply_dropout=False):
super(Upsample, self).__init__()
self.apply_dropout = apply_dropout
initializer = tf.random_normal_initializer(0., 0.02)
self.up_conv = tf.keras.layers.Conv2DTranspose(filters,
(size, size),
strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False)
self.batchnorm = tf.keras.layers.BatchNormalization()
if self.apply_dropout:
self.dropout = tf.keras.layers.Dropout(0.5)
def call(self, x1, x2, training):
x = self.up_conv(x1)
x = self.batchnorm(x, training=training)
if self.apply_dropout:
x = self.dropout(x, training=training)
x = tf.nn.relu(x)
x = tf.concat([x, x2], axis=-1)
return x
class Generator(tf.keras.Model):
def __init__(self):
super(Generator, self).__init__()
initializer = tf.random_normal_initializer(0., 0.02)
self.down1 = Downsample(64, 4, apply_batchnorm=False)
self.down2 = Downsample(128, 4)
self.down3 = Downsample(256, 4)
self.down4 = Downsample(512, 4)
self.down5 = Downsample(512, 4)
self.down6 = Downsample(512, 4)
self.down7 = Downsample(512, 4)
self.down8 = Downsample(512, 4)
self.up1 = Upsample(512, 4, apply_dropout=True)
self.up2 = Upsample(512, 4, apply_dropout=True)
self.up3 = Upsample(512, 4, apply_dropout=True)
self.up4 = Upsample(512, 4)
self.up5 = Upsample(256, 4)
self.up6 = Upsample(128, 4)
self.up7 = Upsample(64, 4)
self.last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS,
(4, 4),
strides=2,
padding='same',
kernel_initializer=initializer)
@tf.contrib.eager.defun
def call(self, x, training):
# x shape == (bs, 256, 256, 3)
x1 = self.down1(x, training=training) # (bs, 128, 128, 64)
x2 = self.down2(x1, training=training) # (bs, 64, 64, 128)
x3 = self.down3(x2, training=training) # (bs, 32, 32, 256)
x4 = self.down4(x3, training=training) # (bs, 16, 16, 512)
x5 = self.down5(x4, training=training) # (bs, 8, 8, 512)
x6 = self.down6(x5, training=training) # (bs, 4, 4, 512)
x7 = self.down7(x6, training=training) # (bs, 2, 2, 512)
x8 = self.down8(x7, training=training) # (bs, 1, 1, 512)
x9 = self.up1(x8, x7, training=training) # (bs, 2, 2, 1024)
x10 = self.up2(x9, x6, training=training) # (bs, 4, 4, 1024)
x11 = self.up3(x10, x5, training=training) # (bs, 8, 8, 1024)
x12 = self.up4(x11, x4, training=training) # (bs, 16, 16, 1024)
x13 = self.up5(x12, x3, training=training) # (bs, 32, 32, 512)
x14 = self.up6(x13, x2, training=training) # (bs, 64, 64, 256)
x15 = self.up7(x14, x1, training=training) # (bs, 128, 128, 128)
x16 = self.last(x15) # (bs, 256, 256, 3)
x16 = tf.nn.tanh(x16)
return x16
class DiscDownsample(tf.keras.Model):
def __init__(self, filters, size, apply_batchnorm=True):
super(DiscDownsample, self).__init__()
self.apply_batchnorm = apply_batchnorm
initializer = tf.random_normal_initializer(0., 0.02)
self.conv1 = tf.keras.layers.Conv2D(filters,
(size, size),
strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False)
if self.apply_batchnorm:
self.batchnorm = tf.keras.layers.BatchNormalization()
def call(self, x, training):
x = self.conv1(x)
if self.apply_batchnorm:
x = self.batchnorm(x, training=training)
x = tf.nn.leaky_relu(x)
return x
class Discriminator(tf.keras.Model):
def __init__(self):
super(Discriminator, self).__init__()
initializer = tf.random_normal_initializer(0., 0.02)
self.down1 = DiscDownsample(64, 4, False)
self.down2 = DiscDownsample(128, 4)
self.down3 = DiscDownsample(256, 4)
# we are zero padding here with 1 because we need our shape to
# go from (batch_size, 32, 32, 256) to (batch_size, 31, 31, 512)
self.zero_pad1 = tf.keras.layers.ZeroPadding2D()
self.conv = tf.keras.layers.Conv2D(512,
(4, 4),
strides=1,
kernel_initializer=initializer,
use_bias=False)
self.batchnorm1 = tf.keras.layers.BatchNormalization()
# shape change from (batch_size, 31, 31, 512) to (batch_size, 30, 30, 1)
self.zero_pad2 = tf.keras.layers.ZeroPadding2D()
self.last = tf.keras.layers.Conv2D(1,
(4, 4),
strides=1,
kernel_initializer=initializer)
@tf.contrib.eager.defun
def call(self, inp, tar, training):
# concatenating the input and the target
x = tf.concat([inp, tar], axis=-1) # (bs, 256, 256, channels*2)
x = self.down1(x, training=training) # (bs, 128, 128, 64)
x = self.down2(x, training=training) # (bs, 64, 64, 128)
x = self.down3(x, training=training) # (bs, 32, 32, 256)
x = self.zero_pad1(x) # (bs, 34, 34, 256)
x = self.conv(x) # (bs, 31, 31, 512)
x = self.batchnorm1(x, training=training)
x = tf.nn.leaky_relu(x)
x = self.zero_pad2(x) # (bs, 33, 33, 512)
# don't add a sigmoid activation here since
# the loss function expects raw logits.
x = self.last(x) # (bs, 30, 30, 1)
return x
# The call function of Generator and Discriminator have been decorated
# with tf.contrib.eager.defun()
# We get a performance speedup if defun is used (~25 seconds per epoch)
generator = Generator()
discriminator = Discriminator()
LAMBDA = 100
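# LAMBDA weights the L1 reconstruction term relative to the adversarial term in the
# generator loss; 100 is the value used in the pix2pix paper.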
def discriminator_loss(disc_real_output, disc_generated_output):
real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels = tf.ones_like(disc_real_output),
logits = disc_real_output)
generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels = tf.zeros_like(disc_generated_output),
logits = disc_generated_output)
total_disc_loss = real_loss + generated_loss
return total_disc_loss
def generator_loss(disc_generated_output, gen_output, target):
gan_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels = tf.ones_like(disc_generated_output),
logits = disc_generated_output)
# mean absolute error
l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
total_gen_loss = gan_loss + (LAMBDA * l1_loss)
return total_gen_loss
generator_optimizer = tf.train.AdamOptimizer(2e-4, beta1=0.5)
discriminator_optimizer = tf.train.AdamOptimizer(2e-4, beta1=0.5)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
EPOCHS = 200
def generate_images(model, test_input, tar):
# the training=True is intentional here since
# we want the batch statistics while running the model
# on the test dataset. If we use training=False, we will get
# the accumulated statistics learned from the training dataset
# (which we don't want)
prediction = model(test_input, training=True)
plt.figure(figsize=(15,15))
display_list = [test_input[0], tar[0], prediction[0]]
title = ['Input Image', 'Ground Truth', 'Predicted Image']
for i in range(3):
plt.subplot(1, 3, i+1)
plt.title(title[i])
# getting the pixel values between [0, 1] to plot it.
plt.imshow(display_list[i] * 0.5 + 0.5)
plt.axis('off')
plt.show()
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for input_image, target in dataset:
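            # Two separate gradient tapes let the generator and discriminator gradients
            # be computed independently from the same forward pass.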
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
gen_output = generator(input_image, training=True)
disc_real_output = discriminator(input_image, target, training=True)
disc_generated_output = discriminator(input_image, gen_output, training=True)
gen_loss = generator_loss(disc_generated_output, gen_output, target)
disc_loss = discriminator_loss(disc_real_output, disc_generated_output)
generator_gradients = gen_tape.gradient(gen_loss,
generator.variables)
discriminator_gradients = disc_tape.gradient(disc_loss,
discriminator.variables)
generator_optimizer.apply_gradients(zip(generator_gradients,
generator.variables))
discriminator_optimizer.apply_gradients(zip(discriminator_gradients,
discriminator.variables))
if epoch % 1 == 0:
clear_output(wait=True)
for inp, tar in test_dataset.take(1):
generate_images(generator, inp, tar)
# saving (checkpoint) the model every 20 epochs
if (epoch + 1) % 20 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time taken for epoch {} is {} sec\n'.format(epoch + 1,
time.time()-start))
train(train_dataset, EPOCHS)
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Run the trained model on the entire test dataset
for inp, tar in test_dataset:
generate_images(generator, inp, tar)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the dataset
Step2: Use tf.data to create batches, map (apply the preprocessing) and shuffle the dataset
Step3: Write the generator and discriminator models
Step4: Define the loss functions and the optimizer
Step5: Checkpoints (Object-based saving)
Step6: Training
Step7: Restore the latest checkpoint and test
Step8: Testing on the entire test dataset
|
3,349
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
sdss = pd.read_csv('../datasets/Skyserver_SQL2_27_2018 6_51_39 PM.csv', skiprows=1)
sdss.head(2)
sdss['class'].value_counts()
sdss.info()
sdss.describe()
sdss.columns.values
sdss.drop(['objid', 'run', 'rerun', 'camcol', 'field', 'specobjid'], axis=1, inplace=True)
sdss.head(2)
# Extract output values
y = sdss['class'].copy() # copy “y” column values out
sdss.drop(['class'], axis=1, inplace=True) # then, drop y column
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(sdss, y, test_size=0.3)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
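# Note: the scaler is fit on the training split only and then applied to both splits,
# so no information from the test set leaks into the preprocessing.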
from sklearn.linear_model import LogisticRegression
# Create One-vs-rest logistic regression object
ovr = LogisticRegression(multi_class='ovr', solver='liblinear')
modelOvr = ovr.fit(X_train_scaled, y_train)
modelOvr.score(X_test_scaled, y_test)
# Create cross-entropy-loss logistic regression object
xe = LogisticRegression(multi_class='multinomial', solver='newton-cg')
# Train model
modelXE = xe.fit(X_train_scaled, y_train)
preds = modelXE.predict(X_test_scaled)
modelXE.score(X_test_scaled, y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The class column identifies an object as either a galaxy, a star, or a quasar.
Step2: The dataset has 10000 examples, 17 features and 1 target.
Step3: No missing values!
Step4: From the project website, we can see that objid and specobjid are just identifiers for accessing the rows in the original database. Therefore we will not need them for classification as they are not related to the outcome.
Step5: Training
Step6: Data scaling
Step7: fit one-versus-all model
Step8: A very simple classifier like this already achieves a high accuracy of around 90%
|
3,350
|
<ASSISTANT_TASK:>
Python Code:
import prody as pd # Note: if you're a Pandas user, it has the same conventional abbreviation "pd", so be careful
ubi = pd.parsePDB("1ubi")
print(ubi)
print(ubi.numAtoms())
print(pd.calcGyradius(ubi)) # This function calculates the radius of gyration of the atoms
pd.saveAtoms(ubi)
pd.writePDB("ubi.pdb", ubi) # Save to the file "ubi.pdb"
ubi2 = pd.parsePDB("ubi.pdb") # Now read from it, just to test that it worked!
print(ubi2)
type(ubi)
print(ubi.numAtoms())
ag = pd.parsePDB('1vrt')
print(ag.numAtoms())
names = ag.getNames()
print(names)
len(names)
type(names)
coords = ag.getCoords()
print(coords)
print(coords.shape)
a0 = ag[0]
print(a0)
print(a0.getName())
every_other_atom = ag[::2]
print(every_other_atom)
print(len(every_other_atom))
print(len(ag))
print(ag.numChains())
print(ag.numResidues())
# Printing out each chain and the number of residues each has.
for chain in ag.iterChains():
print(chain, chain.numResidues())
# Here, we'll print out each chain and their first 10 residues.
for chain in ag.iterChains():
print(chain)
residues = 0
for residue in chain: # We can also loop through residues on a chain!
print(' | - ', residue)
residues += 1
if residues >= 10: break
print("...")
# Select all the alpha Carbon atoms
sel = ag.select("protein and name CA")
print(sel)
print(sel.numAtoms())
# Shorthand
sel2 = ag.select("calpha")
sel3 = ag.select("ca")
sel2 == sel3
import numpy as np
origin = np.zeros(3)
sel = ag.select("within 5 of origin", origin = origin)
print(sel)
print(pd.calcDistance(sel, origin))
sel = ag.select("within 5 of center", center = pd.calcCenter(ag))
print(sel)
# You can even use dot-selection shorthand, instead of the "select" method!
ag.calpha
ag.name_CA_and_resname_ALA
p38 = pd.parsePDB("5uoj")
bound = pd.parsePDB("1zz2")
matches = pd.matchChains(p38, bound)
print(len(matches))
p38_ch, bnd_ch, seqid, seqov = matches[0]
print(bnd_ch)
print(seqid)
print(seqov)
print(pd.calcRMSD(p38_ch, bnd_ch))
bnd_ch, transformation = pd.superpose(bnd_ch, p38_ch)
print(pd.calcRMSD(bnd_ch, p38_ch))
ubi = pd.parsePDB('2lz3')
ubi_ensemble = pd.Ensemble(ubi.calpha) # Why calpha?
ubi_ensemble
print(ubi_ensemble.getRMSDs())
ubi_ensemble.iterpose() # This performs an iterative alignment.
print(ubi_ensemble.getRMSDs()) # Did that improve things any?
pca = pd.PCA()
pca.buildCovariance(ubi_ensemble)
cov = pca.getCovariance()
print(cov.shape)
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
plt.imshow(cov, cmap = 'Blues')
plt.colorbar()
pca.calcModes() # Performs the actual PCA analysis.
plt.plot(pca.getEigvals())
for mode in pca:
print(pd.calcFractVariance(mode).round(2))
pd.showSqFlucts(pca)
pd.showProjection(ubi_ensemble, pca[:3]) # The first 3 principal components
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Most ProDy functions follow a specific naming convention
Step2: How cool is that?!
Step3: File Handling
Step4: You can also save to and load from more standard file formats, like PDB
Step5: The key point
Step6: Why AtomGroup?
Step7: We can also ask for specific properties of the macromolecule.
Step8: What do you think these are?
Step9: Oh hey, we recognize that!
Step10: (Would you be able to compute the generalized coordinates of this macromolecule?)
Step11: Taking that same thinking further, we can even slice out subgroups of atoms from the macromolecule
Step12: The type is a Selection object, but we can see that we get what we'd expect
Step13: ProDy Hierarchy
Step14: We can set up a loop to iterate through the chains, using the iterChains method
Step15: Other methods for looping over structures in a macromolecule
Step16: You can also select atoms or residues by proximity
Step17: See the full documentation on selection grammar here
Step18: matchChains takes 2 arguments
Step19: Recall our discussion of RMSD (Root Mean Squared Deviation) from the previous lecture--it's basically the Euclidean distance between corresponding points in 3D space (a short numpy sketch of this computation follows this step list).
Step20: Much better!
Step21: WARNING
Step22: The next step is to minimize the differences between each conformer.
Step23: Initially, the conformers aren't aligned; their respective RMSDs to some reference (by default, the first one; hence its RMSD is 0) are decently high.
Step24: Once we've aligned the conformers, we can do some analysis. Remember PCA?
Step25: Initial results don't tell us a whole lot, except that there seem to be some parts of the ensemble that are positively correlated, and some negatively correlated. Let's dig in a bit more.
Step26: Now that we've calculated the modes--or principal components!--we can see everything that we discussed in the previous lecture.
Step27: Remember what we said about how PCA works
Step28: The first dimension alone contains over half the variance in the original data.
Step29: Of the nearly-60 atoms in the chains, the atoms around certain indices move quite a bit more than the others.
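As a quick illustration of Step19, here is a minimal numpy sketch of the RMSD between two sets of corresponding 3D coordinates; the arrays a and b below are made-up stand-ins, not coordinates taken from the structures above.
import numpy as np
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = np.array([[0.1, 0.0, 0.0], [1.0, 0.2, 0.0], [0.0, 1.0, 0.3]])
rmsd = np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))
print(rmsd)  # root mean square of the per-point Euclidean distances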
|
3,351
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import sklearn.metrics.pairwise
data = pd.read_csv('data/lastfm-matrix-germany.csv').set_index('user')
data.head()
data.shape
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# similarity_matrix = sklearn.metrics.pairwise.cosine_similarity( ? )
assert similarity_matrix.shape == (285, 285)
print(similarity_matrix.ndim)
similarity_matrix[:5, :5]
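# Optional sanity check (assuming the similarity is computed between artist columns): the
# cosine similarity of two artists is the dot product of their 0/1 listening vectors divided
# by the product of their norms, e.g.:
# u, v = data.iloc[:, 0].values, data.iloc[:, 1].values
# u @ v / (np.linalg.norm(u) * np.linalg.norm(v))  # should match similarity_matrix[0, 1]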
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# artist_similarities = pd.DataFrame( ? , index=data.columns, columns= ? )
assert np.array_equal(artist_similarities.columns, data.columns)
assert artist_similarities.shape == similarity_matrix.shape
artist_similarities.iloc[:5, :5]
slice_artists = ['ac/dc', 'madonna', 'metallica', 'rihanna', 'the white stripes']
artist_similarities.loc[slice_artists, slice_artists]
similarities = (
# start from untidy DataFrame
artist_similarities
# add a name to the index
.rename_axis(index='artist')
# artist needs to be a column for melt
.reset_index()
# create the tidy dataset
.melt(id_vars='artist', var_name='compared_with', value_name='cosine_similarity')
# artist compared with itself not needed, keep rows where artist and compared_with are not equal.
.query('artist != compared_with')
# set identifying observations to index
.set_index(['artist', 'compared_with'])
# sort the index
.sort_index()
)
similarities.head()
similarities.index
similarities.loc['the beatles', :].tail()
similarities.loc[('abba', 'madonna'), :]
print(slice_artists)
similarities.loc[('abba', slice_artists), :]
artist = 'a perfect circle'
n_artists = 10
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# top_n = similarities.loc[?, :].sort_values('cosine_similarity') ?
print(top_n)
assert len(top_n) == 10
assert type(top_n) == pd.DataFrame
def most_similar_artists(artist, n_artists=10):
    """Get the most similar artists for a given artist.

    Parameters
    ----------
    artist: str
        The artist for which to get similar artists
    n_artists: int, optional
        The number of similar artists to return

    Returns
    -------
    pandas.DataFrame
        A DataFrame with the similar artists and their cosine_similarity to
        the given artist
    """
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# return similarities.loc[ ? ].sort_values( ? ) ?
print(most_similar_artists('a perfect circle'))
assert top_n.equals(most_similar_artists('a perfect circle'))
assert most_similar_artists('abba', n_artists=15).shape == (15, 1)
help(most_similar_artists)
user_id = 42
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# user_history = data.loc[ ? , ?]
print(user_history)
assert user_history.name == user_id
assert len(user_history) == 285
artist = 'the beatles'
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# similar_labels = most_similar_artists( ? ). ?
print(similar_labels)
assert len(similar_labels) == 10
assert type(similar_labels) == pd.Index
user_id = 42
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# similar_history = data.loc[?, ?]
assert similar_history.name == user_id
print(similar_history)
def most_similar_artists_history(artist, user_id):
    """Get most similar artists and their listening history.

    Parameters
    ----------
    artist: str
        The artist for which to get the most similar bands
    user_id: int
        The user for which to get the listening history

    Returns
    -------
    pandas.DataFrame
        A DataFrame containing the most similar artists for the given artist,
        with their cosine similarities and their listening history status for
        the given user.
    """
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# artists = most_similar_artists( ? )
# history = data.loc[ ? , ? ].rename('listening_history')
return pd.concat([artists, history], axis=1)
example = most_similar_artists_history('abba', 42)
assert example.columns.to_list() == ['cosine_similarity', 'listening_history']
example
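# Toy illustration of the recommendation score: a similarity-weighted average of a 0/1
# listening-history vector, where only the second of three hypothetical artists was listened to.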
listening_history = np.array([0, 1, 0])
similarity_scores = np.array([0.3, 0.2, 0.1])
recommendation_score = sum(listening_history * similarity_scores) / sum(similarity_scores)
print(f'{recommendation_score:.3f}')
user_id = 42
artist = 'abba'
most_similar_artists_history(artist, user_id)
most_similar_artists_history(artist, user_id).product(axis=1)
most_similar_artists_history(artist, user_id).product(axis=1).sum()
def recommendation_score(artist, user_id):
    """Calculate recommendation score.

    Parameters
    ----------
    artist: str
        The artist for which to calculate the recommendation score.
    user_id: int
        The user for which to calculate the recommendation score.

    Returns
    -------
    float
        Recommendation score
    """
df = most_similar_artists_history(artist, user_id)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# return df.?(axis=1).?() / df.loc[:, ? ].sum()
assert np.allclose(recommendation_score('abba', 42), 0.08976655361839528)
assert np.allclose(recommendation_score('the white stripes', 1), 0.09492796371597861)
recommendation_score('abba', 42)
def unknown_artists(user_id):
    """Get artists the user hasn't listened to.

    Parameters
    ----------
    user_id: int
        User for which to get unknown artists

    Returns
    -------
    pandas.Index
        Collection of artists the user hasn't listened to.
    """
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# history = data.loc[ ? , :]
# return history.loc[ ? == 0].index
print(unknown_artists(42))
assert len(unknown_artists(42)) == 278
assert type(unknown_artists(42)) == pd.Index
def score_unknown_artists(user_id):
    """Score all unknown artists for a given user.

    Parameters
    ----------
    user_id: int
        User for which to get unknown artists

    Returns
    -------
    list of dict
        A list of dictionaries.
    """
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# artists = unknown_artists( ? )
# return [{'recommendation': artist, 'score': recommendation_score( ? , user_id)} for artist in ?]
assert np.allclose(score_unknown_artists(42)[1]['score'], 0.08976655361839528)
assert np.allclose(score_unknown_artists(313)[137]['score'], 0.20616395469219984)
score_unknown_artists(42)[:5]
def user_recommendations(user_id, n_rec=5):
    """Recommend new artists for a user.

    Parameters
    ----------
    user_id: int
        User for which to get recommended artists
    n_rec: int, optional
        Number of recommendations to make

    Returns
    -------
    pandas.DataFrame
        A DataFrame containing artist recommendations for the given user,
        with their recommendation score.
    """
scores = score_unknown_artists(user_id)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# return (
# pd.DataFrame( ? )
# .sort_values( ? , ascending=False)
# . ? (n_rec)
# .reset_index(drop=True)
# )
assert user_recommendations(313).loc[4, 'recommendation'] == 'jose gonzalez'
assert len(user_recommendations(1, n_rec=10)) == 10
user_recommendations(642)
recommendations = [user_recommendations(user).loc[:, 'recommendation'].rename(user) for user in data.index[:10]]
np.transpose(pd.concat(recommendations, axis=1))
g_s = most_similar_artists_history('gorillaz', 642).assign(sim2 = lambda x: x.product(axis=1))
r_1 = g_s.sim2.sum()
total = g_s.cosine_similarity.sum()
print(total)
r_1/total
g_s
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. The data
Step2: The resulting DataFrame contains a row for each user and each column represents an artist. The values indicate whether the user listened to a song by that artist (1) or not (0). Note that the number of times a person listened to a specific artist is not listed.
Step3: The cosine_similarity function returned a 2-dimensional numpy array. This array contains all the similarity values we need, but it is not labelled. Since the entire array will not fit the screen, we will use slicing to print a subset of the result.
Step4: The artist names are both the row and column labels for the similarity_matrix. We can add these labels by creating a new DataFrame based on the numpy array. By using the pandas.DataFrame.iloc integer-location based indexer, we get the same slice as above, but with added labels.
Step5: Pandas also provides a label based indexer, pandas.DataFrame.loc, which we can use to get a slice based on label values.
Step6: As you can see above, bands are 100% similar to themselves and The White Stripes are nothing like Abba.
Step7: To view the first n rows, we can use the pandas.DataFrame.head method, the default value for n is 5.
Step8: Note that we created a MultiIndex by specifying two columns in the set_index call.
Step9: The use of the MultiIndex enables flexible access to the data. If we index with a single artist name, we get all compared artists. To view the last n rows for this result, we can use the pandas.DataFrame.tail method.
Step10: We can index on multiple levels by providing a tuple of indexes
Step11: 4. Picking the best matches
Step13: We can transform the task of getting the most similar bands for a given band to a function.
Step14: Note that we also defined a docstring for this function, which we can view by using help() or shift + tab in a jupyter notebook.
Step15: 5. Get the listening history
Step16: We now have the complete listening history, but we only need the history for the similar artists. For this we can use the index labels from the DataFrame returned by the most_similar_artists function. Index labels for a DataFrame can be retrieved by using the pandas.DataFrame.index attribute.
Step17: We can combine the user id and similar labels in the .loc indexer to get the listening history for the most similar artists.
Step19: Let's make a function to get the most similar artists and their listening history for a given artist and user. The function creates two DataFrames with the same index, and then uses pandas.concat to create a single DataFrame from them.
Step20: 6. Calculate the recommendation score.
Step21: Remember what the DataFrame returned by the most_similar_artists_history function looks like
Step22: Pandas provides methods to do column or row aggregation, like e.g. pandas.DataFrame.product. This method will calculate all values in a column or row. The direction can be chosen with the axis parameter. As we need the product of the values in the rows (similarity * history), we will need to specify axis=1.
Step23: Then there's pandas.DataFrame.sum which does the same thing for summing the values. As we want the sum for all values in the column we would have to specify axis=0. Since 0 is the default value for the axis parameter we don't have to add it to the method call.
Step25: Knowing these methods, it is only a small step to define the scoring function based on the output of most_similar_artists_history.
Step27: Determine artists to recommend
Step29: The last requirement for our recommender engine is a function that can score all unknown artists for a given user. We will make this function return a list of dictionaries, which can be easily converted to a DataFrame later on. The list will be generated using a list comprehension.
Step31: From the scored artists we can easily derive the best recommendations for a given user.
Step32: With this final function, it is a small step to get recommendations for multiple users. As our code hasn't been optimized for performance, it is advised to limit the number of users somewhat.
Step33: We can now use the concat function again to get a nice overview of the recommended artists.
|
3,352
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d
from scipy.spatial.distance import euclidean
from metpy.gridding import polygons, triangles
from metpy.gridding.interpolation import nn_point
plt.rcParams['figure.figsize'] = (15, 10)
np.random.seed(100)
pts = np.random.randint(0, 100, (10, 2))
xp = pts[:, 0]
yp = pts[:, 1]
zp = (pts[:, 0] * pts[:, 0]) / 1000
tri = Delaunay(pts)
delaunay_plot_2d(tri)
for i, zval in enumerate(zp):
plt.annotate('{} F'.format(zval), xy=(pts[i, 0] + 2, pts[i, 1]))
sim_gridx = [30., 60.]
sim_gridy = [30., 60.]
plt.plot(sim_gridx, sim_gridy, '+', markersize=10)
plt.axes().set_aspect('equal', 'datalim')
plt.title('Triangulation of observations and test grid cell '
'natural neighbor interpolation values')
members, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))
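# members maps each grid point to the Delaunay triangles whose circumcircles contain it, and
# tri_info stores each triangle's circumcenter ('cc') and circumradius ('r'), as used in the
# plotting and distance checks below.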
val = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)
plt.annotate('grid 0: {:.3f}'.format(val), xy=(sim_gridx[0] + 2, sim_gridy[0]))
val = nn_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1], tri_info)
plt.annotate('grid 1: {:.3f}'.format(val), xy=(sim_gridx[1] + 2, sim_gridy[1]))
def draw_circle(x, y, r, m, label):
nx = x + r * np.cos(np.deg2rad(list(range(360))))
ny = y + r * np.sin(np.deg2rad(list(range(360))))
plt.plot(nx, ny, m, label=label)
members, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))
delaunay_plot_2d(tri)
plt.plot(sim_gridx, sim_gridy, 'ks', markersize=10)
for i, info in tri_info.items():
x_t = info['cc'][0]
y_t = info['cc'][1]
if i in members[1] and i in members[0]:
draw_circle(x_t, y_t, info['r'], 'm-', str(i) + ': grid 1 & 2')
plt.annotate(str(i), xy=(x_t, y_t), fontsize=15)
elif i in members[0]:
draw_circle(x_t, y_t, info['r'], 'r-', str(i) + ': grid 0')
plt.annotate(str(i), xy=(x_t, y_t), fontsize=15)
elif i in members[1]:
draw_circle(x_t, y_t, info['r'], 'b-', str(i) + ': grid 1')
plt.annotate(str(i), xy=(x_t, y_t), fontsize=15)
else:
draw_circle(x_t, y_t, info['r'], 'k:', str(i) + ': no match')
plt.annotate(str(i), xy=(x_t, y_t), fontsize=9)
plt.axes().set_aspect('equal', 'datalim')
plt.legend()
x_t, y_t = tri_info[8]['cc']
r = tri_info[8]['r']
print('Distance between grid0 and Triangle 8 circumcenter:',
euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]]))
print('Triangle 8 circumradius:', r)
cc = np.array([tri_info[m]['cc'] for m in members[0]])
r = np.array([tri_info[m]['r'] for m in members[0]])
print('circumcenters:\n', cc)
print('radii\n', r)
vor = Voronoi(list(zip(xp, yp)))
voronoi_plot_2d(vor)
nn_ind = np.array([0, 5, 7, 8])
z_0 = zp[nn_ind]
x_0 = xp[nn_ind]
y_0 = yp[nn_ind]
for x, y, z in zip(x_0, y_0, z_0):
plt.annotate('{}, {}: {:.3f} F'.format(x, y, z), xy=(x, y))
plt.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10)
plt.annotate('{}, {}'.format(sim_gridx[0], sim_gridy[0]), xy=(sim_gridx[0] + 2, sim_gridy[0]))
plt.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none',
label='natural neighbor\ncircumcenters')
for center in cc:
plt.annotate('{:.3f}, {:.3f}'.format(center[0], center[1]),
xy=(center[0] + 1, center[1] + 1))
tris = tri.points[tri.simplices[members[0]]]
for triangle in tris:
x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]]
y = [triangle[0, 1], triangle[1, 1], triangle[2, 1], triangle[0, 1]]
plt.plot(x, y, ':', linewidth=2)
plt.legend()
plt.axes().set_aspect('equal', 'datalim')
def draw_polygon_with_info(polygon, off_x=0, off_y=0):
    """Draw one of the natural neighbor polygons with some information."""
pts = np.array(polygon)[ConvexHull(polygon).vertices]
for i, pt in enumerate(pts):
plt.plot([pt[0], pts[(i + 1) % len(pts)][0]],
[pt[1], pts[(i + 1) % len(pts)][1]], 'k-')
avex, avey = np.mean(pts, axis=0)
plt.annotate('area: {:.3f}'.format(polygons.area(pts)), xy=(avex + off_x, avey + off_y),
fontsize=12)
cc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))
cc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))
draw_polygon_with_info([cc[0], cc1, cc2])
cc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))
cc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))
draw_polygon_with_info([cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3)
cc1 = triangles.circumcenter((8, 24), (34, 24), (30, 30))
cc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))
draw_polygon_with_info([cc[1], cc1, cc2], off_x=-15)
cc1 = triangles.circumcenter((8, 24), (34, 24), (30, 30))
cc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))
draw_polygon_with_info([cc[0], cc[1], cc1, cc2])
areas = np.array([60.434, 448.296, 25.916, 70.647])
values = np.array([0.064, 1.156, 2.809, 0.225])
total_area = np.sum(areas)
print(total_area)
proportions = areas / total_area
print(proportions)
contributions = proportions * values
print(contributions)
interpolation_value = np.sum(contributions)
function_output = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)
print(interpolation_value, function_output)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For a test case, we generate 10 random points and observations, where the
Step2: Using the circumcenter and circumcircle radius information from
Step3: What?....the circle from triangle 8 looks pretty darn close. Why isn't
Step4: Lets do a manual check of the above interpolation value for grid 0 (southernmost grid)
Step6: Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram
Step7: Put all of the generated polygon areas and their affiliated values in arrays.
Step8: For each polygon area, calculate its percent of total area.
Step9: Multiply the percent of total area by the respective values.
Step10: The sum of this array is the interpolation value!
|
3,353
|
<ASSISTANT_TASK:>
Python Code:
import sys #only needed to determine Python version number
# Handle table-like data and matrices
import numpy as np
import pandas as pd
# Visualisation
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import seaborn as sns
# Enable inline plotting
%matplotlib inline
# Modelo
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
print("ok")
# Read the file and add the column names to it
names_features = [
    'age',             # Age of the individual: [17 - 90], mean 38
    'type_employer',   # Type of employment: [ State-gov, Self-emp-not-inc, Private, Federal-gov, Local-gov, ?, Self-emp-inc, Without-pay, Never-worked ]
    'fnlwgt',          # Number of people surveyed
    'education',       # Highest level of education of the individual
    'education-num',   # Highest level of education in numeric form [1 - 16]
    'marital-status',  # Marital status of the person: [ Never-married, Married-civ-spouse, Divorced, Married-spouse-absent, Separated, Married-AF-spouse, Widowed ]
    'occupation',      # Occupation of the person: [ Adm-clerical, Exec-managerial, Handlers-cleaners, Prof-specialty, Other-service, Sales, Craft-repair, Transport-moving, Farming-fishing, Machine-op-inspct, Tech-support, ?, Protective-serv, Armed-Forces, Priv-house-serv ]
    'relationship',    # Family relationship: [ Not-in-family, Husband, Wife, Own-child, Unmarried, Other-Relative ]
    'race',            # Race of the individual: [ White, Black, Asian-Pac-Islander, Amer-Indian-Eskimo, Other ]
    'sex',             # Sex of the individual: [ Male, Female ]
    'capital_gain',    # Capital gains: [ 0 - 99999 ]
    'capital_loss',    # Capital losses: [ 0 - 4356 ]
    'hours-per-week',  # Hours worked per week: [ 1 - 99 ]
    'native-country',  # Country of origin of the person: [ ]
    'income'           # Indicates whether or not a person earns more than 50,000
]
df = pd.read_csv('adult.txt' , header=None, names=names_features)
df.head(3)
#print('ok')
columna_unica = df['native-country'].unique()
# Get the ages of the people
edad = df['age']
# Search for the ages that fall within a range
edad.where( edad < 30 ).where( edad > 18).count()
print('ok')
#",".join(ocupacion )
def isEEUU(x):
if x == ' United-States':
return 1
else :
return 0
# Apply a function to the column to convert the values into an enumeration
enum_isEEUU = df["native-country"].apply(isEEUU)
# Create the new enum_isEEUU column from the values above
df["enum_isEEUU"] = enum_isEEUU
print('ok')
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
# Map the unique values of race to enumerations
le.fit(df["race"].unique())
# Show how it was mapped
print(list(le.classes_))
# Apply a transform to the column to convert the values into enumerations
enum_razas = le.transform( df["race"])
# Create the new enum_race column from the values above
df["enum_race"] = enum_razas
# show the inverse mapping
#list(le.inverse_transform(df["enum_razas"].head()))
print('ok')
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
# Map the unique values of type_employer to enumerations
le.fit(df["type_employer"].unique())
# Transform the type_employer column into enumerations with the LabelEncoder
enum_type_employer = le.transform( df["type_employer"])
# Create the new enum_type_employer column from the values above
df["enum_type_employer"] = enum_type_employer
print('ok')
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(df["relationship"].unique())
enum_relationship = le.transform( df["relationship"])
df["enum_relationship"] = enum_relationship
#muestra la inversa
#list(le.inverse_transform(df["enum_rel"].head()))
print('ok')
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(df["occupation"].unique())
df["enum_occupation"] = le.transform( df["occupation"])
print('ok')
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(df["income"].unique())
# Muestra como lo ha mapeado
print(list(le.classes_))
df["enum_income"] = le.transform( df["income"])
print('ok')
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(df["sex"].unique())
# Muestra como lo ha mapeado
print(list(le.classes_))
df["enum_sex"] = le.transform( df["sex"])
print('ok')
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(df["marital-status"].unique())
# Muestra como lo ha mapeado
print(list(le.classes_))
df["enum_marital-status"] = le.transform( df["marital-status"])
print('ok')
plt.title('Personas que ganan mas de 50 o lo contrario', y=1.1, size=15)
sns.countplot('income', data=df)
plt.title('Numero de personas que ganan mas o menos de 50k diferenciadas por sexos', size=20, y=1.1)
sns.countplot(x = 'income', hue='sex', data=df)
plt.title('media de las que ganan mas o menos de 50k diferenciadas por sexos', size=20, y=1.1)
sns.barplot(x = 'income', y='enum_sex',data=df)
plt.title('Numero de personas que ganan mas o menos de 50k diferenciadas por el tipo de trabajador', size=20, y=1.1)
sns.countplot(x = 'income', hue='type_employer', data=df)
plt.title('Media de las ganancias diferenciadas por el tipo de trabajador', size=10, y=1.1)
sns.barplot(x = 'enum_type_employer', y='enum_income', data=df, hue="type_employer")
plt.title('Numero de personas que ganan mas o menos de 50k diferenciadas por la raza', size=20, y=1.1)
sns.countplot(x = 'income', hue='race', data=df)
plt.title('Media de las ganancias diferenciadas por la raza', size=20, y=1.1)
sns.barplot(x = 'enum_race', y='enum_income', data=df, hue="race")
plt.title('Numero de personas que ganan mas de 50k diferenciadas por la relacion familiar', size=20, y=1.1)
sns.countplot(x = 'income', hue='relationship', data=df)
plt.title('Media de las ganancias diferenciadas por la relacion familiar', size=20, y=1.1)
sns.barplot(x = 'enum_relationship', y='enum_income', data=df, hue="relationship")
plt.title('Numero de personas que ganan mas de 50k diferenciadas por la ocupacion', size=20, y=1.1)
sns.countplot(x = 'income', hue='occupation', data=df)
plt.title('Media de las ganancias diferenciadas por la relacion familiar', size=20, y=1.1)
sns.barplot(x = 'enum_occupation', y='enum_income', data=df, hue="occupation")
plt.title('Distribucion de las horas por semana', size=20, y=1.1)
sns.distplot(df['hours-per-week'])
df['hours-per-week'].describe()
colormap = plt.cm.viridis
plt.figure(figsize=(12, 12))
plt.title("Correlacion entre todas las features", y=1.05, size=20)
sns.heatmap(df.corr(), linewidths=0.1, square=True, vmax=1.0, annot=True, cmap=colormap)
features = ["hours-per-week","enum_sex","enum_type_employer","enum_race","enum_income","age","education-num","capital_gain","capital_loss","enum_marital-status","enum_occupation","enum_relationship"]
X = df[features] # features
var_correlations = {c: np.abs(X['enum_income'].corr(df[c])) for c in X.columns}
corr_dataframe = pd.DataFrame(var_correlations, index=['Correlation']).T.sort_values(by='Correlation')
plt.title('Correlacion entre las features y la ganancia', y=1.1, size=20)
plt.barh(range(corr_dataframe.shape[0]), corr_dataframe['Correlation'].values, tick_label=X.columns.values)
g = sns.FacetGrid(df, col='enum_income')
g.map(plt.hist, 'education-num', bins=20)
from sklearn import svm
from sklearn.model_selection import cross_val_score
import pandas as pd
import numpy as np
# modelo
modelo = RandomForestClassifier(n_estimators=9, min_samples_split=2 ,min_samples_leaf =1, n_jobs = 49)
# Seleccionamos las features con las que vamos a usar el algoritmo
features = ["education-num","capital_gain","capital_loss","enum_marital-status","enum_occupation","enum_relationship"]
X = df[features] # features
y = df["enum_income"] # targets
#SCORE
scores = cross_val_score(modelo, X, y, cv = 5 )
print(scores.mean())
from sklearn.model_selection import GridSearchCV  # GridSearchCV now lives in model_selection; sklearn.grid_search is deprecated
# define the parameter values that should be searched
N_estimators = list(range(1, 20))
Min_samples_split= list(range(2, 10))
Min_samples_leaf = list(range(1, 2))
N_jobs = list(range(1, 20))
print(Min_samples_leaf)
print(Min_samples_split)
# create a parameter grid: map the parameter names to the values that should be searched
param_grid = dict(n_estimators=N_estimators, min_samples_split=Min_samples_split ,min_samples_leaf =Min_samples_leaf, n_jobs = N_jobs)
# instantiate and fit the grid
grid = GridSearchCV(modelo, param_grid, cv=5, scoring='accuracy')
grid.fit(X, y)
# view the complete results
grid.cv_results_  # older scikit-learn versions exposed this as grid.grid_scores_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.1 Import the libraries we are going to need
Step2: • Group the countries into groups
Step3: • Map the races
Step4: • Map the type of employment
Step5: • Map the type of family relationship
Step6: • Map the occupation
Step7: • Map the income (>50k or <=50k)
Step8: • Map the sex
Step9: • Map each person's marital status
Step10: 2. Make visualizations of different values with seaborn.
Step11: In this chart we can see that there are fewer women who earn more than 50k.
Step12: This chart shows us that the highest earners are the workers holding the positions
Step13: Here we can see that white people are the most numerous.
Step14: In contrast, when we take the average it turns out that Asians are the ones who earn the most.
Step15: This chart shows that the people who earn more than 50k are mostly the 'husbands'
Step16: Here we can see that the 'wives' have the highest average earnings (> 50k)
Step17: We can observe that most people work 40 hours a week
Step18: 4. Use random forest with different parameters, different features, and cross-validation to get the best possible model.
|
3,354
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt # For graphics
%matplotlib inline
import numpy as np # linear algebra and math
import pandas as pd # data frames
from openfisca_core.model_api import *
from openfisca_senegal import SenegalTaxBenefitSystem # The Senegalese tax-benefits system
from openfisca_senegal.survey_scenarios import SenegalSurveyScenario
year = 2017
household_weight = 100
size = int(1.6e6 / household_weight)
print("Size of the sample: {}".format(size))
np.random.seed(seed = 42)
est_marie = np.random.binomial(1, .66, size = size)
est_celibataire = np.logical_not(est_marie)
nombre_enfants = np.maximum(
est_marie * np.round(np.random.normal(5, scale = 3, size = size)),
0,
)
mean_wage = 5e6
median_wage = .75 * mean_wage
est_salarie = np.random.binomial(1, .8, size = size)
mu = np.log(median_wage)
sigma = np.sqrt(2 * np.log(mean_wage / median_wage))
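# For a lognormal distribution, median = exp(mu) and mean = exp(mu + sigma**2 / 2),
# so mu = log(median) and sigma**2 = 2 * log(mean / median), which is what is computed above.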
salaire = (
est_salarie *
np.random.lognormal(mean = mu, sigma = sigma, size = int(size))
)
mean_pension = 2.5e6
median_pension = .9 * mean_pension
mu = np.log(median_pension)
sigma = np.sqrt(2 * np.log(mean_pension / median_pension))
pension_retraite = (
np.logical_not(est_salarie) *
np.random.lognormal(mean = mu, sigma = sigma, size = int(size))
)
input_data_frame = pd.DataFrame({
'est_marie': est_marie,
'est_celibataire': est_celibataire,
'nombre_enfants': nombre_enfants,
'pension_retraite': pension_retraite,
'salaire': salaire,
'id_menage': range(size),
'role_menage': 0,
})
input_data_frame.salaire.hist(bins=100)
input_data_frame.pension_retraite.hist(bins=100, range = [.0001, 1e8])
data = dict(input_data_frame = input_data_frame)
scenario = SenegalSurveyScenario(
year = year,
data = data,
)
scenario.simulation.calculate('impot_revenus', period = year)
df = pd.DataFrame({'impot_revenus': scenario.simulation.calculate('impot_revenus', period = year)})
df.hist(bins = 100)
scenario.compute_aggregate('impot_revenus', period = year) / 1e9
scenario.compute_pivot_table(
aggfunc = 'sum',
values = ['impot_revenus'],
columns = ['nombre_enfants'],
period = year,
).stack().reset_index().plot(x = 'nombre_enfants', kind = 'bar')
def modify_parameters(parameters):
parameters.bareme_impot_progressif[5].rate.update(period = period(year), value = .5)
return parameters
class tax_the_rich(Reform):
name = u"Tax last bracket at 50%"
def apply(self):
self.modify_parameters(modifier_function = modify_parameters)
senegal_tax_benefit_system = SenegalTaxBenefitSystem()
tax_the_rich_tax_benefit_system = tax_the_rich(senegal_tax_benefit_system)
scenario = SenegalSurveyScenario(
data = data,
tax_benefit_system = tax_the_rich_tax_benefit_system,
baseline_tax_benefit_system = senegal_tax_benefit_system,
year = year,
)
print('reform tax the rich: ', scenario.compute_aggregate('impot_revenus', period = year) / 1e9)
print('baseline: ', scenario.compute_aggregate('impot_revenus', use_baseline = True, period = year) / 1e9)
from openfisca_senegal.entities import Person
def build_ultimate_reform_tax_benefit_system(seuil = 0, marginal_tax_rate = .4):
senegal_tax_benefit_system = SenegalTaxBenefitSystem()
class impot_revenus(Variable):
def formula(individu, period):
impot_avant_reduction_famille = individu('impot_avant_reduction_famille', period)
reduction_impots_pour_charge_famille = individu('reduction_impots_pour_charge_famille', period)
impot_apres_reduction_famille = impot_avant_reduction_famille - reduction_impots_pour_charge_famille
impot_revenus = max_(0, impot_apres_reduction_famille)
return impot_revenus * (impot_revenus > seuil)
def modify_parameters(parameters):
parameters.bareme_impot_progressif[5].rate.update(period = period(year), value = marginal_tax_rate)
return parameters
class ultimate_reform(Reform):
name = u"Tax the rich and save the poor taxpayers (tax < {})".format(seuil)
def apply(self):
self.update_variable(impot_revenus)
self.modify_parameters(modifier_function = modify_parameters)
return ultimate_reform(senegal_tax_benefit_system)
reformed_tax_benefit_system = build_ultimate_reform_tax_benefit_system(seuil = 100000, marginal_tax_rate = .45)
scenario = SenegalSurveyScenario(
data = data,
tax_benefit_system = reformed_tax_benefit_system,
baseline_tax_benefit_system = SenegalTaxBenefitSystem(),
year = year,
)
print('reform: ', scenario.compute_aggregate('impot_revenus', period = year) / 1e9)
print('baseline: ', scenario.compute_aggregate('impot_revenus', use_baseline = True, period = year) / 1e9)
scenario = SenegalSurveyScenario(
data = data,
tax_benefit_system = reformed_tax_benefit_system,
baseline_tax_benefit_system = SenegalTaxBenefitSystem(),
year = year,
)
print(scenario.compute_aggregate('impot_revenus', period = year) / 1e9)
print(scenario.compute_aggregate('impot_revenus', use_baseline = True, period = year) / 1e9)
cost = - (
scenario.compute_aggregate('impot_revenus', period = year) -
scenario.compute_aggregate('impot_revenus', use_baseline = True, period = year)
) / 1e9
print(cost)
def compute_reform_cost(seuil):
reformed_tax_benefit_system = build_ultimate_reform_tax_benefit_system(
seuil = seuil,
marginal_tax_rate = .41
)
scenario = SenegalSurveyScenario(
data = data,
tax_benefit_system = reformed_tax_benefit_system,
baseline_tax_benefit_system = SenegalTaxBenefitSystem(),
year = year,
)
cost = - (
scenario.compute_aggregate('impot_revenus', period = year) -
scenario.compute_aggregate('impot_revenus', use_baseline = True, period = year)
) / 1e9
return cost
compute_reform_cost(seuil = 100000)
from scipy.optimize import fsolve
x = fsolve(compute_reform_cost, 100000)
x
compute_reform_cost(seuil = x)
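# Added illustration (not part of the original notebook): before relying on the
# fsolve root, one can scan a coarse grid of thresholds and look at the cost
# curve. This assumes `compute_reform_cost` as defined above; each call rebuilds
# a full scenario, so the grid is kept small.
seuils = np.linspace(0, 300000, 7)
costs = [compute_reform_cost(seuil=s) for s in seuils]
pd.Series(costs, index=seuils).plot(marker='o', title='Reform cost vs. threshold (seuil)')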
reformed_tax_benefit_system = build_ultimate_reform_tax_benefit_system(
seuil = x,
marginal_tax_rate = .41
)
scenario = SenegalSurveyScenario(
data = data,
tax_benefit_system = reformed_tax_benefit_system,
baseline_tax_benefit_system = SenegalTaxBenefitSystem(),
year = year,
)
scenario.compute_pivot_table(
aggfunc = 'sum',
values = ['impot_revenus'],
columns = ['nombre_enfants'],
period = year,
difference = True,
).stack().reset_index().plot(x = 'nombre_enfants', kind = 'bar')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building the artificial data
Step2: We assume that 2/3 of the household heads are married and that only married households have children. The mean number of children per household is 5 and is normally distributed
Step3: We assume that 80% of the population are wage earners.
Step4: We choose a mean pension of 2 500 000 CFA
Step5: Microsimulation
Step6: We can compute the value of any variable for the whole population and draw distributions
Step7: Special methods allow access to aggregates and pivot tables
Step8: Evaluate the financial impact of a reform
|
3,355
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
# Número de figuritas total en el álbum
n_total=640.0
# Figuritas en un sobre
n_total_sobre=5
from scipy.misc import comb
# Probabilidad de que en un sobre aparezcan i figuritas repetidas
def prob_repetidas(n_total,n_tengo,n_total_sobre,i):
prob_i_repetidas=comb(n_total_sobre,i)*(n_tengo/n_total)**i*((n_total-n_tengo)/n_total)**(n_total_sobre-i)
return prob_i_repetidas
print "Probablidad de una repetida:",prob_repetidas(n_total,150,n_total_sobre,1)
# Probabilidad de que en un sobre haya por lo menos una repetida
def prob_alguna_repetida(n_total,n_tengo,n_total_sobre):
a=np.array([prob_repetidas(n_total,n_tengo,n_total_sobre,x) for x in range(n_total_sobre+1)])
return sum(a[1:])
print "Probabilidad de alguna repetida:",prob_alguna_repetida(n_total,150,n_total_sobre)
# Grafico cantidad que tengo versus la probabilidad de sacar alguna repetida
p_repetidas=[prob_alguna_repetida(n_total,i,n_total_sobre) for i in np.arange(n_total+1)]
plt.title('Probabilidad de sacar alguna repetida')
plt.xlabel('Cantidad de figuritas que tengo')
plt.ylabel('Probabilidad')
pl=plt.plot(np.arange(n_total+1),p_repetidas)
from scipy.stats import binom
n=n_total_sobre
p=150/n_total
rv = binom(n, p)
print "Probabilidad de alguna repetida, versión distribución binomial:",rv.pmf(1)
print "Esperanza del número de repetidas:",rv.mean()
def esperanza_repetidas(n_total_sobre,n_tengo,n_total):
p=n_tengo/n_total
rv=binom(n_total_sobre,p)
return rv.mean()
#print esperanza_repetidas(640.0,0
p_esperanzas_repetidas=[esperanza_repetidas(n_total_sobre,i,n_total) for i in np.arange(n_total+1)]
plt.xlabel('Cantidad de figuritas que tengo')
plt.ylabel('Esperanza')
pl=plt.plot(np.arange(n_total+1),p_esperanzas_repetidas)
#plt.title('Esperanza de repetidas según las figuritas que tengo')
print "Número en el que la esperanza es 1:",p_esperanzas_repetidas.index(1)
p=p_esperanzas_repetidas.index(1)/n_total
rv=binom(n_total_sobre,p)
print "Intervalo que concentra el 95% de la probabilidad para ese caso:",rv.interval(0.95)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The album has 640 stickers, and each pack always brings 5
Step2: What I have to do is compute the probability that a pack contains a repeated sticker. That depends, first of all, on 3 parameters
Step3: Suppose I already have 150 stickers; let's look at the probability of getting exactly one repeat
Step4: But what I really want to know is the probability of getting at least one repeat, so I have to add up the probabilities of getting between 1 and 5 repeats
Step5: So if I have 150 stickers, the probability of getting at least one repeat is 73% (even though I own less than a quarter of the album). Note
Step6: I always wondered why probability and statistics books and courses insist on listing standard distributions and deriving a pile of summary measures. The answer
Step7: Much easier (and safer...). And since the books (and the libraries...) already provide certain statistics for certain distributions, using the expectation we can see how many repeated stickers I expect to get each time I buy a pack
Step8: Let's see how the expectation varies with the number of stickers I already have in the album
Step9: Look at that. It is linear. And it makes sense, because the expectation of a binomial is $np$, which in this case is $5 \cdot p$.
|
3,356
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
s = """date 0 : 14/9/2000
date 1 : 20/04/1971 date 2 : 14/09/1913 date 3 : 2/3/1978
date 4 : 1/7/1986 date 5 : 7/3/47 date 6 : 15/10/1914
date 7 : 08/03/1941 date 8 : 8/1/1980 date 9 : 30/6/1976"""
import re
# first step: build the pattern
expression = re.compile("([0-3]?[0-9]/[0-1]?[0-9]/([0-2][0-9])?[0-9][0-9])")
# second step: search
res = expression.findall(s)
print(res)
import re
s = "something\\support\\vba\\image/vbatd1_4.png"
print(re.compile("[\\\\/]image[\\\\/].*[.]png").search(s)) # positive match
print(re.compile(r"[\\/]image[\\/].*[.]png").search(s)) # same result, written with a raw string
"<h1>mot</h1>"
import re
s = "<h1>mot</h1>"
print(re.compile("(<.*>)").match(s).groups()) # ('<h1>mot</h1>',)
print(re.compile("(<.*?>)").match(s).groups()) # ('<h1>',)
print(re.compile("(<.+?>)").match(s).groups()) # ('<h1>',)
texte = """Je suis né le 28/12/1903 et je suis mort le 08/02/1957. Ma seconde femme est morte le 10/11/1963.
J'ai écrit un livre intitulé 'Comprendre les fractions : les exemples en page 12/46/83'"""
import re
expression = re.compile("[0-9]{2}/[0-9]{2}/[0-9]{4}")
cherche = expression.findall(texte)
print(cherche)
texte = """Je suis né le 28/12/1903 et je suis mort le 08/02/1957. Je me suis marié le 8/5/45.
J'ai écrit un livre intitulé 'Comprendre les fractions : les exemples en page 12/46/83'"""
expression = re.compile("[0-3]?[0-9]/[0-1]?[0-9]/[0-1]?[0-9]?[0-9]{2}")
cherche = expression.findall(texte)
print(cherche)
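# Added illustration (not part of the original text): re.finditer returns the same
# matches together with their positions in the string, which is useful when the
# dates have to be located in the text rather than only listed.
for m in expression.finditer(texte):
    print(m.group(0), m.span())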
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: When filling in a form, you often see the format "MM/DD/YYYY", which specifies the form in which a date is expected to be written. Regular expressions also make it possible to define this format and to search a text for all the character strings that conform to it.
Step3: The first digit of the day is 0, 1, 2, or 3; this translates to [0-3]. The second digit is between 0 and 9, i.e. [0-9]. The day format therefore translates to [0-3][0-9]. But the first digit of the day is optional, which is expressed with the symbol ?
Step4: The result is a list of tuples in which each element corresponds to the parts enclosed in parentheses, called groups. When using regular expressions, you must first ask how to define what you are looking for, and then which functions to use to obtain the results of that search. The two paragraphs that follow answer these questions.
Step5: Quantifiers
Step6: <.*> matches <h1>, </h1>, or even <h1>mot</h1>.
Step8: Exercise 1
Step10: Then in this one
|
3,357
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tensorflow-transform
# This cell is only necessary because packages were installed while python was
# running. It avoids the need to restart the runtime when running in Colab.
import pkg_resources
import importlib
importlib.reload(pkg_resources)
import math
import os
import pprint
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
print('TF: {}'.format(tf.__version__))
import apache_beam as beam
print('Beam: {}'.format(beam.__version__))
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
print('Transform: {}'.format(tft.__version__))
from tfx_bsl.public import tfxio
from tfx_bsl.coders.example_coder import RecordBatchToExamples
!wget https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.data
!wget https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.test
train_path = './adult.data'
test_path = './adult.test'
CATEGORICAL_FEATURE_KEYS = [
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country',
]
NUMERIC_FEATURE_KEYS = [
'age',
'capital-gain',
'capital-loss',
'hours-per-week',
'education-num'
]
ORDERED_CSV_COLUMNS = [
'age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'label'
]
LABEL_KEY = 'label'
pandas_train = pd.read_csv(train_path, header=None, names=ORDERED_CSV_COLUMNS)
pandas_train.head(5)
one_row = dict(pandas_train.loc[0])
COLUMN_DEFAULTS = [
'' if isinstance(v, str) else 0.0
for v in dict(pandas_train.loc[1]).values()]
pandas_test = pd.read_csv(test_path, header=1, names=ORDERED_CSV_COLUMNS)
pandas_test.head(5)
testing = os.getenv("WEB_TEST_BROWSER", False)
if testing:
pandas_train = pandas_train.loc[:1]
pandas_test = pandas_test.loc[:1]
RAW_DATA_FEATURE_SPEC = dict(
[(name, tf.io.FixedLenFeature([], tf.string))
for name in CATEGORICAL_FEATURE_KEYS] +
[(name, tf.io.FixedLenFeature([], tf.float32))
for name in NUMERIC_FEATURE_KEYS] +
[(LABEL_KEY, tf.io.FixedLenFeature([], tf.string))]
)
SCHEMA = tft.tf_metadata.dataset_metadata.DatasetMetadata(
tft.tf_metadata.schema_utils.schema_from_feature_spec(RAW_DATA_FEATURE_SPEC)).schema
#@title
def encode_example(input_features):
input_features = dict(input_features)
output_features = {}
for key in CATEGORICAL_FEATURE_KEYS:
value = input_features[key]
feature = tf.train.Feature(
bytes_list=tf.train.BytesList(value=[value.strip().encode()]))
output_features[key] = feature
for key in NUMERIC_FEATURE_KEYS:
value = input_features[key]
feature = tf.train.Feature(
float_list=tf.train.FloatList(value=[value]))
output_features[key] = feature
label_value = input_features.get(LABEL_KEY, None)
if label_value is not None:
output_features[LABEL_KEY] = tf.train.Feature(
bytes_list = tf.train.BytesList(value=[label_value.strip().encode()]))
example = tf.train.Example(
features = tf.train.Features(feature=output_features)
)
return example
tf_example = encode_example(pandas_train.loc[0])
tf_example.features.feature['age']
serialized_example_batch = tf.constant([
encode_example(pandas_train.loc[i]).SerializeToString()
for i in range(3)
])
serialized_example_batch
decoded_tensors = tf.io.parse_example(
serialized_example_batch,
features=RAW_DATA_FEATURE_SPEC
)
features_dict = dict(pandas_train.loc[0])
features_dict.pop(LABEL_KEY)
LABEL_KEY in features_dict
no_label_example = encode_example(features_dict)
LABEL_KEY in no_label_example.features.feature.keys()
NUM_OOV_BUCKETS = 1
EPOCH_SPLITS = 10
TRAIN_NUM_EPOCHS = 2*EPOCH_SPLITS
NUM_TRAIN_INSTANCES = len(pandas_train)
NUM_TEST_INSTANCES = len(pandas_test)
BATCH_SIZE = 128
STEPS_PER_TRAIN_EPOCH = tf.math.ceil(NUM_TRAIN_INSTANCES/BATCH_SIZE/EPOCH_SPLITS)
EVALUATION_STEPS = tf.math.ceil(NUM_TEST_INSTANCES/BATCH_SIZE)
# Names of temp files
TRANSFORMED_TRAIN_DATA_FILEBASE = 'train_transformed'
TRANSFORMED_TEST_DATA_FILEBASE = 'test_transformed'
EXPORTED_MODEL_DIR = 'exported_model_dir'
if testing:
TRAIN_NUM_EPOCHS = 1
def preprocessing_fn(inputs):
"""Preprocess input columns into transformed columns."""
# Since we are modifying some features and leaving others unchanged, we
# start by setting `outputs` to a copy of `inputs.
outputs = inputs.copy()
# Scale numeric columns to have range [0, 1].
for key in NUMERIC_FEATURE_KEYS:
outputs[key] = tft.scale_to_0_1(inputs[key])
# For all categorical columns except the label column, we generate a
# vocabulary but do not modify the feature. This vocabulary is instead
# used in the trainer, by means of a feature column, to convert the feature
# from a string to an integer id.
for key in CATEGORICAL_FEATURE_KEYS:
outputs[key] = tft.compute_and_apply_vocabulary(
tf.strings.strip(inputs[key]),
num_oov_buckets=NUM_OOV_BUCKETS,
vocab_filename=key)
# For the label column we provide the mapping from string to index.
table_keys = ['>50K', '<=50K']
with tf.init_scope():
initializer = tf.lookup.KeyValueTensorInitializer(
keys=table_keys,
values=tf.cast(tf.range(len(table_keys)), tf.int64),
key_dtype=tf.string,
value_dtype=tf.int64)
table = tf.lookup.StaticHashTable(initializer, default_value=-1)
# Remove trailing periods for test data when the data is read with tf.data.
# label_str = tf.sparse.to_dense(inputs[LABEL_KEY])
label_str = inputs[LABEL_KEY]
label_str = tf.strings.regex_replace(label_str, r'\.$', '')
label_str = tf.strings.strip(label_str)
data_labels = table.lookup(label_str)
transformed_label = tf.one_hot(
indices=data_labels, depth=len(table_keys), on_value=1.0, off_value=0.0)
outputs[LABEL_KEY] = tf.reshape(transformed_label, [-1, len(table_keys)])
return outputs
def transform_data(train_data_file, test_data_file, working_dir):
"""Transform the data and write out as a TFRecord of Example protos.
Read in the data using the CSV reader, and transform it using a
preprocessing pipeline that scales numeric data and converts categorical data
from strings to int64 values indices, by creating a vocabulary for each
category.
Args:
train_data_file: File containing training data
test_data_file: File containing test data
working_dir: Directory to write transformed data and metadata to
"""
# The "with" block will create a pipeline, and run that pipeline at the exit
# of the block.
with beam.Pipeline() as pipeline:
with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
# Create a TFXIO to read the census data with the schema. To do this we
# need to list all columns in order since the schema doesn't specify the
# order of columns in the csv.
# We first read CSV files and use BeamRecordCsvTFXIO whose .BeamSource()
# accepts a PCollection[bytes] because we need to patch the records first
# (see "FixCommasTrainData" below). Otherwise, tfxio.CsvTFXIO can be used
# to both read the CSV files and parse them to TFT inputs:
# csv_tfxio = tfxio.CsvTFXIO(...)
# raw_data = (pipeline | 'ToRecordBatches' >> csv_tfxio.BeamSource())
train_csv_tfxio = tfxio.CsvTFXIO(
file_pattern=train_data_file,
telemetry_descriptors=[],
column_names=ORDERED_CSV_COLUMNS,
schema=SCHEMA)
# Read in raw data and convert using CSV TFXIO.
raw_data = (
pipeline |
'ReadTrainCsv' >> train_csv_tfxio.BeamSource())
# Combine data and schema into a dataset tuple. Note that we already used
# the schema to read the CSV data, but we also need it to interpret
# raw_data.
cfg = train_csv_tfxio.TensorAdapterConfig()
raw_dataset = (raw_data, cfg)
# The TFXIO output format is chosen for improved performance.
transformed_dataset, transform_fn = (
raw_dataset | tft_beam.AnalyzeAndTransformDataset(
preprocessing_fn, output_record_batches=True))
# Transformed metadata is not necessary for encoding.
transformed_data, _ = transformed_dataset
# Extract transformed RecordBatches, encode and write them to the given
# directory.
# TODO(b/223384488): Switch to `RecordBatchToExamplesEncoder`.
_ = (
transformed_data
| 'EncodeTrainData' >>
beam.FlatMapTuple(lambda batch, _: RecordBatchToExamples(batch))
| 'WriteTrainData' >> beam.io.WriteToTFRecord(
os.path.join(working_dir, TRANSFORMED_TRAIN_DATA_FILEBASE)))
# Now apply transform function to test data. In this case we remove the
# trailing period at the end of each line, and also ignore the header line
# that is present in the test data file.
test_csv_tfxio = tfxio.CsvTFXIO(
file_pattern=test_data_file,
skip_header_lines=1,
telemetry_descriptors=[],
column_names=ORDERED_CSV_COLUMNS,
schema=SCHEMA)
raw_test_data = (
pipeline
| 'ReadTestCsv' >> test_csv_tfxio.BeamSource())
raw_test_dataset = (raw_test_data, test_csv_tfxio.TensorAdapterConfig())
# The TFXIO output format is chosen for improved performance.
transformed_test_dataset = (
(raw_test_dataset, transform_fn)
| tft_beam.TransformDataset(output_record_batches=True))
# Transformed metadata is not necessary for encoding.
transformed_test_data, _ = transformed_test_dataset
# Extract transformed RecordBatches, encode and write them to the given
# directory.
_ = (
transformed_test_data
| 'EncodeTestData' >>
beam.FlatMapTuple(lambda batch, _: RecordBatchToExamples(batch))
| 'WriteTestData' >> beam.io.WriteToTFRecord(
os.path.join(working_dir, TRANSFORMED_TEST_DATA_FILEBASE)))
# Will write a SavedModel and metadata to working_dir, which can then
# be read by the tft.TFTransformOutput class.
_ = (
transform_fn
| 'WriteTransformFn' >> tft_beam.WriteTransformFn(working_dir))
import tempfile
import pathlib
output_dir = os.path.join(tempfile.mkdtemp(), 'keras')
transform_data(train_path, test_path, output_dir)
tf_transform_output = tft.TFTransformOutput(output_dir)
tf_transform_output.transformed_feature_spec()
!ls -l {output_dir}
def _make_training_input_fn(tf_transform_output, train_file_pattern,
batch_size):
"""An input function reading from transformed data, converting to model input.
Args:
tf_transform_output: Wrapper around output of tf.Transform.
transformed_examples: Base filename of examples.
batch_size: Batch size.
Returns:
The input data for training or eval, in the form of k.
"""
def input_fn():
return tf.data.experimental.make_batched_features_dataset(
file_pattern=train_file_pattern,
batch_size=batch_size,
features=tf_transform_output.transformed_feature_spec(),
reader=tf.data.TFRecordDataset,
label_key=LABEL_KEY,
shuffle=True)
return input_fn
train_file_pattern = pathlib.Path(output_dir)/f'{TRANSFORMED_TRAIN_DATA_FILEBASE}*'
input_fn = _make_training_input_fn(
tf_transform_output=tf_transform_output,
train_file_pattern = str(train_file_pattern),
batch_size = 10
)
for example, label in input_fn().take(1):
break
pd.DataFrame(example)
label
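# Added illustration (not part of the original tutorial): inspect the vocabulary
# that `compute_and_apply_vocabulary` generated for one feature. Both lookups
# below should be available on tft.TFTransformOutput (vocabulary_size_by_name is
# also used later in this notebook); 'education' is the vocab_filename used above.
print(tf_transform_output.vocabulary_size_by_name('education'))
education_vocab_path = tf_transform_output.vocabulary_file_by_name('education')
with tf.io.gfile.GFile(education_vocab_path) as f:
    print(f.read().splitlines()[:5])  # a few of the most frequent education levels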
def build_keras_model(working_dir):
inputs = build_keras_inputs(working_dir)
encoded_inputs = encode_inputs(inputs)
stacked_inputs = tf.concat(tf.nest.flatten(encoded_inputs), axis=1)
output = tf.keras.layers.Dense(100, activation='relu')(stacked_inputs)
output = tf.keras.layers.Dense(50, activation='relu')(output)
output = tf.keras.layers.Dense(2)(output)
model = tf.keras.Model(inputs=inputs, outputs=output)
return model
def build_keras_inputs(working_dir):
tf_transform_output = tft.TFTransformOutput(working_dir)
feature_spec = tf_transform_output.transformed_feature_spec().copy()
feature_spec.pop(LABEL_KEY)
# Build the `keras.Input` objects.
inputs = {}
for key, spec in feature_spec.items():
if isinstance(spec, tf.io.VarLenFeature):
inputs[key] = tf.keras.layers.Input(
shape=[None], name=key, dtype=spec.dtype, sparse=True)
elif isinstance(spec, tf.io.FixedLenFeature):
inputs[key] = tf.keras.layers.Input(
shape=spec.shape, name=key, dtype=spec.dtype)
else:
raise ValueError('Spec type is not supported: ', key, spec)
return inputs
def encode_inputs(inputs):
encoded_inputs = {}
for key in inputs:
feature = tf.expand_dims(inputs[key], -1)
if key in CATEGORICAL_FEATURE_KEYS:
num_buckets = tf_transform_output.num_buckets_for_transformed_feature(key)
encoding_layer = (
tf.keras.layers.CategoryEncoding(
num_tokens=num_buckets, output_mode='binary', sparse=False))
encoded_inputs[key] = encoding_layer(feature)
else:
encoded_inputs[key] = feature
return encoded_inputs
model = build_keras_model(output_dir)
tf.keras.utils.plot_model(model,rankdir='LR', show_shapes=True)
def get_dataset(working_dir, filebase):
tf_transform_output = tft.TFTransformOutput(working_dir)
data_path_pattern = os.path.join(
working_dir,
filebase + '*')
input_fn = _make_training_input_fn(
tf_transform_output,
data_path_pattern,
batch_size=BATCH_SIZE)
dataset = input_fn()
return dataset
def train_and_evaluate(
model,
working_dir):
"""Train the model on training data and evaluate on test data.
Args:
working_dir: The location of the Transform output.
num_train_instances: Number of instances in train set
num_test_instances: Number of instances in test set
Returns:
The results from the estimator's 'evaluate' method
"""
train_dataset = get_dataset(working_dir, TRANSFORMED_TRAIN_DATA_FILEBASE)
validation_dataset = get_dataset(working_dir, TRANSFORMED_TEST_DATA_FILEBASE)
model = build_keras_model(working_dir)
history = train_model(model, train_dataset, validation_dataset)
metric_values = model.evaluate(validation_dataset,
steps=EVALUATION_STEPS,
return_dict=True)
return model, history, metric_values
def train_model(model, train_dataset, validation_dataset):
model.compile(optimizer='adam',
loss=tf.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_dataset, validation_data=validation_dataset,
epochs=TRAIN_NUM_EPOCHS,
steps_per_epoch=STEPS_PER_TRAIN_EPOCH,
validation_steps=EVALUATION_STEPS)
return history
model, history, metric_values = train_and_evaluate(model, output_dir)
plt.plot(history.history['loss'], label='Train')
plt.plot(history.history['val_loss'], label='Eval')
plt.ylim(0,max(plt.ylim()))
plt.legend()
plt.title('Loss');
def read_csv(file_name, batch_size):
return tf.data.experimental.make_csv_dataset(
file_pattern=file_name,
batch_size=batch_size,
column_names=ORDERED_CSV_COLUMNS,
column_defaults=COLUMN_DEFAULTS,
prefetch_buffer_size=0,
ignore_errors=True)
for ex in read_csv(test_path, batch_size=5):
break
pd.DataFrame(ex)
ex2 = ex.copy()
ex2.pop('fnlwgt')
tft_layer = tf_transform_output.transform_features_layer()
t_ex = tft_layer(ex2)
label = t_ex.pop(LABEL_KEY)
pd.DataFrame(t_ex)
ex2 = pd.DataFrame(ex)[['education', 'hours-per-week']]
ex2
pd.DataFrame(tft_layer(dict(ex2)))
class Transform(tf.Module):
def __init__(self, working_dir):
self.working_dir = working_dir
self.tf_transform_output = tft.TFTransformOutput(working_dir)
self.tft_layer = self.tf_transform_output.transform_features_layer()
@tf.function
def __call__(self, features):
raw_features = {}
for key, val in features.items():
# Skip unused keys
if key not in RAW_DATA_FEATURE_SPEC:
continue
raw_features[key] = val
# Apply the `preprocessing_fn`.
transformed_features = self.tft_layer(raw_features)
if LABEL_KEY in transformed_features:
# Pop the label and return a (features, labels) pair.
data_labels = transformed_features.pop(LABEL_KEY)
return (transformed_features, data_labels)
else:
return transformed_features
transform = Transform(output_dir)
t_ex, t_label = transform(ex)
pd.DataFrame(t_ex)
model.evaluate(
read_csv(test_path, batch_size=5).map(transform),
steps=EVALUATION_STEPS,
return_dict=True
)
class ServingModel(tf.Module):
def __init__(self, model, working_dir):
self.model = model
self.working_dir = working_dir
self.transform = Transform(working_dir)
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def __call__(self, serialized_tf_examples):
# parse the tf.train.Example
feature_spec = RAW_DATA_FEATURE_SPEC.copy()
feature_spec.pop(LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
# Apply the `preprocessing_fn`
transformed_features = self.transform(parsed_features)
# Run the model
outputs = self.model(transformed_features)
# Format the output
classes_names = tf.constant([['0', '1']])
classes = tf.tile(classes_names, [tf.shape(outputs)[0], 1])
return {'classes': classes, 'scores': outputs}
def export(self, output_dir):
# Increment the directory number. This is required in order to make this
# model servable with model_server.
save_model_dir = pathlib.Path(output_dir)/'model'
number_dirs = [int(p.name) for p in save_model_dir.glob('*')
if p.name.isdigit()]
id = max([0] + number_dirs)+1
save_model_dir = save_model_dir/str(id)
# Set the signature to make it visible for serving.
concrete_serving_fn = self.__call__.get_concrete_function()
signatures = {'serving_default': concrete_serving_fn}
# Export the model.
tf.saved_model.save(
self,
str(save_model_dir),
signatures=signatures)
return save_model_dir
serving_model = ServingModel(model, output_dir)
serving_model(serialized_example_batch)
saved_model_dir = serving_model.export(output_dir)
saved_model_dir
reloaded = tf.saved_model.load(str(saved_model_dir))
run_model = reloaded.signatures['serving_default']
run_model(serialized_example_batch)
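# Optional inspection (an added sketch, not in the original tutorial): the reloaded
# serving signature is a TF concrete function, so its expected inputs and outputs
# can be printed before wiring the SavedModel into a serving client.
print(run_model.structured_input_signature)
print(run_model.structured_outputs)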
def _make_training_input_fn(tf_transform_output, transformed_examples,
batch_size):
"""Creates an input function reading from transformed data.
Args:
tf_transform_output: Wrapper around output of tf.Transform.
transformed_examples: Base filename of examples.
batch_size: Batch size.
Returns:
The input function for training or eval.
"""
def input_fn():
"""Input function for training and eval."""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=transformed_examples,
batch_size=batch_size,
features=tf_transform_output.transformed_feature_spec(),
reader=tf.data.TFRecordDataset,
shuffle=True)
transformed_features = tf.compat.v1.data.make_one_shot_iterator(
dataset).get_next()
# Extract features and label from the transformed tensors.
transformed_labels = tf.where(
tf.equal(transformed_features.pop(LABEL_KEY), 1))
return transformed_features, transformed_labels[:,1]
return input_fn
def _make_serving_input_fn(tf_transform_output):
"""Creates an input function reading from raw data.
Args:
tf_transform_output: Wrapper around output of tf.Transform.
Returns:
The serving input function.
"""
raw_feature_spec = RAW_DATA_FEATURE_SPEC.copy()
# Remove label since it is not available during serving.
raw_feature_spec.pop(LABEL_KEY)
def serving_input_fn():
"""Input function for serving."""
# Get raw features by generating the basic serving input_fn and calling it.
# Here we generate an input_fn that expects a parsed Example proto to be fed
# to the model at serving time. See also
# tf.estimator.export.build_raw_serving_input_receiver_fn.
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = raw_input_fn()
# Apply the transform function that was used to generate the materialized
# data.
raw_features = serving_input_receiver.features
transformed_features = tf_transform_output.transform_raw_features(
raw_features)
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
return serving_input_fn
def get_feature_columns(tf_transform_output):
"""Returns the FeatureColumns for the model.
Args:
tf_transform_output: A `TFTransformOutput` object.
Returns:
A list of FeatureColumns.
"""
# Wrap scalars as real valued columns.
real_valued_columns = [tf.feature_column.numeric_column(key, shape=())
for key in NUMERIC_FEATURE_KEYS]
# Wrap categorical columns.
one_hot_columns = [
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
key=key,
num_buckets=(NUM_OOV_BUCKETS +
tf_transform_output.vocabulary_size_by_name(
vocab_filename=key))))
for key in CATEGORICAL_FEATURE_KEYS]
return real_valued_columns + one_hot_columns
def train_and_evaluate(working_dir, num_train_instances=NUM_TRAIN_INSTANCES,
num_test_instances=NUM_TEST_INSTANCES):
"""Train the model on training data and evaluate on test data.
Args:
working_dir: Directory to read transformed data and metadata from and to
write exported model to.
num_train_instances: Number of instances in train set
num_test_instances: Number of instances in test set
Returns:
The results from the estimator's 'evaluate' method
"""
tf_transform_output = tft.TFTransformOutput(working_dir)
run_config = tf.estimator.RunConfig()
estimator = tf.estimator.LinearClassifier(
feature_columns=get_feature_columns(tf_transform_output),
config=run_config,
loss_reduction=tf.losses.Reduction.SUM)
# Fit the model using the default optimizer.
train_input_fn = _make_training_input_fn(
tf_transform_output,
os.path.join(working_dir, TRANSFORMED_TRAIN_DATA_FILEBASE + '*'),
batch_size=BATCH_SIZE)
estimator.train(
input_fn=train_input_fn,
max_steps=TRAIN_NUM_EPOCHS * num_train_instances / BATCH_SIZE)
# Evaluate model on test dataset.
eval_input_fn = _make_training_input_fn(
tf_transform_output,
os.path.join(working_dir, TRANSFORMED_TEST_DATA_FILEBASE + '*'),
batch_size=1)
# Export the model.
serving_input_fn = _make_serving_input_fn(tf_transform_output)
exported_model_dir = os.path.join(working_dir, EXPORTED_MODEL_DIR)
estimator.export_saved_model(exported_model_dir, serving_input_fn)
return estimator.evaluate(input_fn=eval_input_fn, steps=num_test_instances)
import tempfile
temp = os.path.join(tempfile.mkdtemp(), 'estimator')
transform_data(train_path, test_path, temp)
results = train_and_evaluate(temp)
pprint.pprint(results)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preprocessing data with TensorFlow Transform
Step2: Imports and globals
Step3: Next download the data files
Step4: Name our columns
Step5: Here's a quick preview of the data
Step6: The test data has 1 header line that needs to be skipped, and a trailing "." at the end of each line.
Step7: Define our features and schema
Step8: [Optional] Encode and decode tf.train.Example protos
Step9: Now you can convert dataset examples into Example protos
Step10: You can also convert batches of serialized Example protos back into a dictionary of tensors
Step11: In some cases the label will not be passed in, so the encode function is written so that the label is optional
Step12: When creating an Example proto it will simply not contain the label key.
Step13: Setting hyperparameters and basic housekeeping
Step15: Preprocessing with tf.Transform
Step17: Syntax
Step18: Run the pipeline
Step19: Wrap up the output directory as a tft.TFTransformOutput
Step20: If you look in the directory you'll see it contains three things
Step22: Using our preprocessed data to train a model using tf.keras
Step23: Below you can see a transformed sample of the data. Note how the numeric columns like education-num and hours-per-week are converted to floats with a range of [0,1], and the string columns have been converted to IDs
Step24: Train, Evaluate the model
Step25: Build the datasets
Step27: Train and evaluate the model
Step28: Transform new data
Step29: Load the tft.TransformFeaturesLayer to transform this data with the preprocessing_fn
Step30: The tft_layer is smart enough to still execute the transformation if only a subset of features are passed in. For example, if you only pass in two features, you'll get just the transformed versions of those features back
Step31: Here's a more robust version that drops features that are not in the feature-spec, and returns a (features, label) pair if the label is in the provided features
Step32: Now you can use Dataset.map to apply that transformation, on the fly to new data
Step33: Export the model
Step34: Build the model and test-run it on the batch of serialized examples
Step35: Export the model as a SavedModel
Step36: Reload the the model and test it on the same batch of examples
Step39: What we did
Step42: Create an input function for serving
Step44: Wrap our input data in FeatureColumns
Step46: Train, Evaluate, and Export our model
Step47: Put it all together
|
3,358
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
from IPython.display import HTML
%%html
<!-- make tables display a little nicer in markdown cells -->
<style>table {float:left;}</style>
import os, sys
import pandas as pd
PARTIAL_PATH = os.path.join('tests', 'incomplete.tsv')
# ensure numbers with potentially leading zeros are read as strings
PARTIAL_DTYPES = {'zipcode': str}
partial = pd.read_csv(PARTIAL_PATH, sep='\t', encoding='utf-8', dtype=PARTIAL_DTYPES)
# drop some un-needed testing columns for this notebook
partial.drop(['ID', 'email', 'change'], axis=1, inplace=True)
partial.head()
from mergepurge import clean
PART_LOC_COLS = ['address', 'city', 'state', 'zipcode']
PART_CONTACT_COLS = ['first', 'last']
PART_COMPANY_COLS = ['company']
partial = clean.build_matching_cols(partial,
PART_LOC_COLS,
PART_CONTACT_COLS,
PART_COMPANY_COLS)
partial.info()
HTML( partial.head(3).to_html() )
COMP_PATH = os.path.join('tests', 'complete_parsed.tsv')
COMP_DTYPES = {'aa_streetnum': str, 'aa_zip': str, 'zipcode': str}
complete = pd.read_csv(COMP_PATH, sep='\t', encoding='utf-8', dtype=COMP_DTYPES)
HTML(complete.tail(3).to_html())
COMP_LOC_COLS = ['address', 'city', 'state', 'zipcode']
COMP_CONTACT_COLS = ['first', 'last']
COMP_COMPANY_COLS = ['company']
from mergepurge import match
matches_found = match.find_related(partial, complete)
matches_found[0]
output = match.merge_lists(partial, complete,
matching_indices=matches_found,
wanted_cols=['email'])
output[['email']].info()
output[['first','last','company','email']].head()
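# Quick follow-up (an added illustration, not from the original walkthrough):
# measure how many of the incomplete records actually picked up an email address.
match_rate = output['email'].notna().mean()
print('Share of records enriched with an email: {:.0%}'.format(match_rate))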
built_cols = [col for col in output.columns if col.startswith('aa_')]
output.drop(built_cols, axis=1, inplace=True)
output.head()
try:
%load_ext watermark
except ImportError as e:
%install_ext https://raw.githubusercontent.com/rasbt/python_reference/master/ipython_magic/watermark.py
%load_ext watermark
%watermark
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Some Partial Contact Data
Step2: Make Standardized Columns to Match Contacts With
Step3: The dataframe info shows that a whole bunch of columns were appended to the end of the dataframe, all of them prefixed with aa_.
Step4: Load Additional Contact Data to Match with the Incomplete Data
Step5: These data have already been run through build_matching_cols(), but if we were going to run it again this is how we would describe which columns contain what info
Step6: Search for Each Incomplete Contact in the DataFrame of Complete Contacts
Step7: So we found a matching record in the dataframe of complete records for all but one of our incomplete records.
Step8: Add Columns from Complete DataFrame to Matching Records of the Incomplete DataFrame
Step9: We see some summary output above that tells us how many of the incomplete records were matched by each match type.
Step10: Cleanup
Step11: System Info
|
3,359
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
table_alg = pd.ExcelFile("results_cec2015.pdf.xlsx")
print(table_alg.sheet_names)
df=table_alg.parse(table_alg.sheet_names[1])
print(df)
def get_best_pos(function, accuracy_level=0):
"""This function gets the final position from the function number and the accuracy_level required (0 = 1.2e5, 1 = 6e5, 2 = 3e6).
Keyword arguments:
function -- function number.
accuracy_level -- level of accuracy (0 to 2)
"""
f = function - 1
r = (f % 5)
c = (f // 5)*16 + accuracy_level*5
return r+1, c
for f in range(1, 16):
print(f, get_best_pos(f))
def parse_table_orig(df):
accuracies = ['1.2e5', '6e5', '3e6']
best = pd.DataFrame(columns=accuracies)
for acc_index, acc in enumerate(accuracies):
val = []
for f in range(1,16):
r, c = get_best_pos(f, acc_index)
val.append(df['f {}'.format(r)][c])
best[acc] = val
best.index = ['f{:02d}'.format(i+1) for i in range(15)]
return best
df=table_alg.parse(table_alg.sheet_names[1])
df = {}
for alg in table_alg.sheet_names:
df[alg]=table_alg.parse(alg)
# if "_orig" in alg:
# df[alg] = parse_table_orig(df[alg])
def calculate_points(fun, dfs, algs, acc=0):
"""Returns the ranking in positions for the desired function and algorithms.
Keyword parameters:
- fun -- function to compare.
- dfs -- dict with the dataframes for each algorithm.
- algs -- algorithms to compare (must be keys of dfs).
- acc -- accuracy level column name (e.g. '1.2e5').
"""
values = pd.DataFrame(columns=algs)
for alg in algs:
df = dfs[alg]
values[alg] = [df[acc][fun-1]]
ranks = values.rank(1, method='min')
return np.array(ranks, dtype=np.int).reshape(len(algs))
def get_f1_score(num_algs):
    """Return a np.array with the Formula 1 scoring criterion by position, in which
    only the first 10 positions receive points.
    The array has num_algs positions:
    - If num_algs is lower than 10, it is shortened.
    - If num_algs is greater than 10, it is padded with 0s.
    """
    f1 = np.array([25, 18, 15, 12, 10, 8, 6, 4, 2, 1])
    if num_algs < len(f1):
        f1 = f1[:num_algs]
    else:
        f1 = np.concatenate([f1, np.zeros(num_algs - len(f1), dtype=f1.dtype)])
    return f1
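# Small illustration (added, not in the original notebook): with 4 algorithms the
# Formula 1 scores reduce to the first four positions.
print(get_f1_score(4))  # expected: [25 18 15 12]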
def get_scores(df, algs, funs, accuracies):
"""This function returns the scores for the algs 'algs', functions 'funs' and accuracies 'accuracies'.
Keyword parameters:
df -- dataframe with the data.
algs -- algorithms to compare (must be included in df).
funs -- functions to compare.
accuracies -- accuracy levels to compare (in string).
"""
size = len(algs)
f1 = get_f1_score(size)
result = np.zeros(size)
for acc in accuracies:
for i in funs:
result += f1[calculate_points(i, df, algs, acc)-1]
results_alg = {alg: res for alg, res in zip(algs, result)}
return results_alg
algs = filter(lambda x: not "_orig" in x, df.keys())
algs = filter(lambda x: not "_old" in x, algs)
algs = [alg for alg in algs]
print(algs)
#algs = ['IHDELS', 'MOS']
accuracies = ['1.2e5', '6e5', '3e6']
funs_group = [range(1, 4), range(4, 8), range(8, 12), range(12, 15), [15]]
funs_group_names = ['Fully Separable', 'Partially Separable I', 'Partially Separable II', 'Overlapping', 'Non-separable']
from matplotlib import pyplot as plt
import seaborn as sns
# Increase font
sns.set(font_scale=1.5)
# Put white grid
sns.set_style("whitegrid")
for fid, funs in enumerate(funs_group):
title = funs_group_names[fid]
results = get_scores(df, algs, funs, accuracies)
results_df = pd.Series(results)
plt.figure()
results_df.plot(kind='bar', title=title)
locs, labels = plt.xticks()
plt.setp(labels, rotation=90)
funs_categories = dict(zip(funs_group_names, funs_group))
excel = pd.ExcelWriter("results.xls")
def print_results(df, algs, title, funs, accuracies, style='b'):
results = get_scores(df, algs, funs, accuracies)
results_df = pd.Series(results)
plt.figure()
pd.DataFrame(results_df, columns=['Results']).to_excel(excel, title)
results_df.plot(kind='bar', title=title, color=style)
locs, labels = plt.xticks()
plt.setp(labels, rotation=90)
fig_names = ['cat1', 'cat2', 'cat3', 'cat4', 'cat5']
styles = ['blue', 'orange', 'yellow', 'green', 'brown']
for id, title in enumerate(funs_group_names):
fig_name = fig_names[id]
print_results(df, algs, title, funs_categories[title], accuracies, styles[id])
plt.savefig(fig_name, bbox_inches='tight')
def results_by_accuracy(acc):
styles = ['blue', 'orange', 'yellow', 'green', 'brown']
funs = range(15)
results_cat = pd.DataFrame(columns=funs_group_names)
for id, fun in enumerate(funs_group_names):
fig_name = 'fe{}'.format(acc)
results = get_scores(df, algs, funs_categories[fun], [acc])
results_cat[fun] = pd.Series(results)
title = 'Results after {} Fitness Evaluations'.format(acc)
results_cat.plot(kind='bar', title=title, stacked=True)
results_cat.to_excel(excel, acc)
for acc in accuracies:
plt.figure()
results_by_accuracy(acc)
fname = '{}.png'.format(acc.replace('.',''))
lgd = plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
locs, labels = plt.xticks()
plt.setp(labels, rotation=90)
plt.savefig(fname, bbox_extra_artists=(lgd,), bbox_inches='tight')
def results_by_all():
styles = ['blue', 'orange', 'yellow', 'green', 'brown']
funs = range(15)
results_cat = pd.DataFrame(columns=funs_group_names)
for id, fun in enumerate(funs_group_names):
fig_name = 'fall'
results = get_scores(df, algs, funs_categories[fun], accuracies)
results_cat[fun] = pd.Series(results)
title = 'Overall score'
results_cat.plot(kind='bar', title=title, stacked=True)
results_cat.to_excel(excel, 'all')
plt.figure()
results_by_all()
fname = 'all.png'
lgd = plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5))
locs, labels = plt.xticks()
plt.setp(labels, rotation=90)
plt.savefig(fname, bbox_extra_artists=(lgd,), bbox_inches='tight')
excel.save()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, we read the right sheet (the 2 in the example)
Step3: I need a function that gets the right position in the DataFrame, given the function number and the accuracy level.
Step4: Get all the data for the comparisons
Step6: Define a function that calculates the points for each function
Step8: The following function returns the scores for each position following the Formula 1 criterion.
Step10: Finally, the function get_scores combines the previous two functions.
Step11: Putting it all together
Step12: Now, we prepare the plotting library.
Step13: First, comparing by group of variables
Step14: Now, we are going to write the results for each group to the Excel file.
Step15: By Groups
Step16: By Category
|
3,360
|
<ASSISTANT_TASK:>
Python Code:
# Install tweepy
# !pip install tweepy
# Import the libraries we need
import tweepy
import json
import time
import networkx
import os
import matplotlib.pyplot as plt
from collections import Counter
# Authenticate!
auth = tweepy.OAuthHandler("Consumer Key", "Consumer Secret")
auth.set_access_token("Access Token", "Access Token Secret")
api = tweepy.API(auth)
# Check working directory
os.getcwd()
# Set working directory
os.chdir('FOLDER FOR SAVING FILES')
# Check working directory
os.getcwd()
# Keep track of API calls
# User timeline
callsUT = 0
# Retweeters
callsRT = 0
# Number of tweets to be pulled
# Ego
E = 10
# Alter
A = 10
# Existing user with tweets
ego = api.user_timeline(screen_name = "CUBoulder", count = E, include_rts = False, exclude_replies = True)
callsUT += 1
len(ego)
# Existing user with no tweets
ego = api.user_timeline(screen_name = "DeveloperYotam", count = E, include_rts = False, exclude_replies = True)
callsUT += 1
len(ego)
# Non-existing user
ego = api.user_timeline(screen_name = "fakeuserq4587937045", count = E, include_rts = False, exclude_replies = True)
callsUT += 1
# Handling errors
ego = []
egosn = "CUBoulder"
try:
ego_raw = api.user_timeline(screen_name = egosn, count = E, include_rts = False, exclude_replies = True)
except tweepy.TweepError:
print("fail!")
callsUT += 1
# Converting results to a list of json objects
ego = [egotweet._json for egotweet in ego_raw]
# Writing ego tweets to a json file
with open('egotweet.json', 'w') as file:
json.dump(ego, file)
callsUT
# Looking at a json object
ego[0]
# Accessing an element of ego tweets
ego[0]["id_str"]
# Storing one of ego's tweet id
egoid = ego[0]["id_str"]
# Storing and printing ego tweet ids and retweet counts
tweetids = []
retweets = []
if len(ego) != 0:
for egotweet in ego:
tweetids.append(egotweet["id_str"])
retweets.append(egotweet["retweet_count"])
print(egotweet["id_str"],egotweet["retweet_count"])
# Collecting Retweets
egort = api.retweets(ego[0]["id_str"])
callsRT += 1
len(egort)
callsRT
# Non-existing tweet
egort = api.retweets("garblegarble")
callsRT += 1
# Note: callsRT did not increase in the last command
callsRT
callsRT += 1
# Sleep for 10 seconds
time.sleep(10)
# Collecting retweeters of ego tweets
allretweeters = []
self = []
check = []
for egotweet in ego:
retweeters = []
try:
selftweet = 0
if callsRT >= 75:
time.sleep(900)
egort_raw = api.retweets(egotweet["id_str"])
egort = [egoretweet._json for egoretweet in egort_raw]
for retweet in egort:
if retweet["user"]["id_str"]!=egoid:
allretweeters.append((egoid,retweet["user"]["id_str"]))
retweeters.append(retweet["user"]["id_str"])
else:
selftweet = 1
check.append(len(retweeters))
self.append(selftweet)
except tweepy.TweepError:
check.append(0)
self.append(0)
callsRT += 1
# Writing results to files
with open('check.json', 'w') as file:
json.dump(check, file)
with open('self.json', 'w') as file:
json.dump(self, file)
with open('allretweeters.json', 'w') as file:
json.dump(allretweeters, file)
# Printing tweet ids, retweet counts,
# retweeters obtained, and whether a self tweet is included
for a, b, c, d in zip(tweetids,retweets,check,self):
print(a, b, c, d)
len(allretweeters)
allretweeters
# Assigning edge weight to be number of tweets retweeted
weight = Counter()
for (i, j) in allretweeters:
weight[(i, j)] +=1
weight
# Defining weighted edges
weighted_edges = list(weight.items())
weighted_edges
# Defining the network object
G = networkx.Graph()
G.add_edges_from([x[0] for x in weighted_edges])
# Visualizing the network
networkx.draw(G, width=[x[1] for x in weighted_edges])
# Defining the set of unique retweeters
unique = [x[0][1] for x in weighted_edges]
len(unique)
unique
callsUT
# Collecting and storing the tweets of retweeters
alter = []
alters = []
for retweeter in unique:
try:
if callsUT >= 900:
time.sleep(900)
alter_raw = api.user_timeline(retweeter, count = A, include_rts = False, exclude_replies = True)
alter = [altertweet._json for altertweet in alter_raw]
alters.append(alter)
except tweepy.TweepError:
print("fail!")
callsUT += 1
with open('alters.json', 'w') as file:
json.dump(alters, file)
callsUT
len(alters)
# Printing the number of tweets pulled for each retweeter
for alt in alters:
print(len(alt))
# Storing and printing alter ids, tweet ids, and retweet counts
altids = []
alttweetids = []
altretweets = []
for alt in alters:
for alttweet in alt:
altids.append(alttweet["user"]["id_str"])
alttweetids.append(alttweet["id_str"])
altretweets.append(alttweet["retweet_count"])
print(alttweet["user"]["id_str"],alttweet["id_str"],alttweet["retweet_count"])
# Collecting retweeters of alter tweets
allalt = []
altself = []
altcheck = []
for alt in alters:
for alttweet in alt:
altid = alttweet["user"]["id_str"]
altretweeters = []
try:
selftweet = 0
if callsRT >= 75:
time.sleep(900)
altrt_raw = api.retweets(alttweet["id_str"])
altrt = [altretweet._json for altretweet in altrt_raw]
for retweet in altrt:
if retweet["user"]["id_str"]!=altid:
allalt.append((altid,retweet["user"]["id_str"]))
altretweeters.append(retweet["user"]["id_str"])
else:
selftweet = 1
altcheck.append(len(altretweeters))
altself.append(selftweet)
except tweepy.TweepError:
altcheck.append(0)
altself.append(0)
callsRT += 1
# Writing results to files
with open('altcheck.json', 'w') as file:
json.dump(altcheck, file)
with open('altself.json', 'w') as file:
json.dump(altself, file)
with open('altretweeters.json', 'w') as file:
json.dump(altretweeters, file)
with open('allalt.json', 'w') as file:
json.dump(allalt, file)
# Printing alter user ids, tweet ids, retweet counts,
# retweeters obtained, and whether a self tweet is included
for a, b, c, d, e in zip(altids,alttweetids,altretweets,altcheck,altself):
print(a, b, c, d, e)
len(allalt)
allalt
weight = Counter()
for (i, j) in allalt:
weight[(i, j)] +=1
weight
all_edges = weighted_edges + list(weight.items())
all_edges
# Defining the full network object
G = networkx.Graph()
G.add_edges_from([x[0] for x in all_edges])
# Visualizing the full network
networkx.draw(G, width=[x[1] for x in all_edges])
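# Added sketch (not part of the original walkthrough): a few descriptive measures
# of the full retweeter graph G built above.
print(G.number_of_nodes(), 'nodes,', G.number_of_edges(), 'edges')
central = sorted(networkx.degree_centrality(G).items(), key=lambda kv: kv[1], reverse=True)
print(central[:5])  # five most central users in the retweet network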
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2>2. Pulling ego tweets</h2>
Step2: <h2>3. Pulling retweeters</h2>
Step3: <h2>4. Visualizing the network of retweeters</h2>
Step4: <h2>5. Pulling retweeter tweets</h2>
Step5: <h2>6. Pulling retweeters of retweeters</h2>
Step6: <h2>7. Visualizing the full network of retweeters</h2>
|
3,361
|
<ASSISTANT_TASK:>
Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
scores = net.loss(X)
print('Your scores:')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()
# The difference should be very small. We get < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.05)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.05)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=5e-6,
num_iters=100, verbose=False)
print('Final training loss: ', stats['loss_history'][-1])
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.25, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Clasification accuracy')
plt.legend()
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
best_val = -1
best_stats = None
learning_rates = [1e-2, 1e-3]
regularization_strengths = [0.4, 0.5, 0.6]
results = {}
iters = 2000
for lr in learning_rates:
for rs in regularization_strengths:
net = TwoLayerNet(input_size, hidden_size, num_classes)
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=iters, batch_size=200, learning_rate=lr,
learning_rate_decay=0.95, reg=rs)
y_train_pred = net.predict(X_train)
acc_train = np.mean(y_train == y_train_pred)
y_val_pred = net.predict(X_val)
acc_val = np.mean(y_val == y_val_pred)
results[(lr, rs)] = (acc_train, acc_val)
if best_val < acc_val:
best_stats = stats
best_val = acc_val
best_net = net
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print("lr, ", lr, "reg ", reg, "train_accuracy", train_accuracy, "val_accuracy", val_accuracy)
print("best validation accuracy achieved during cross-validation:", best_val)
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
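# Added sketch (not part of the assignment): it can also help to look at the
# training curve of the best model kept from the hyperparameter sweep above.
plt.plot(best_stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Best model: training loss history')
plt.show()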
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params, where keys are string parameter names and values are numpy arrays (a small sketch of this layout follows the step list). Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Step6: Train the network
Step8: Load the data
Step9: Train a network
Step10: Debug the training
Step11: Tune your hyperparameters
Step12: Run on the test set
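For reference, a minimal sketch of the self.params layout described in Step2 above; the shapes and the 1e-4 scale are illustrative assumptions, and the authoritative definition lives in cs231n/classifiers/neural_net.py:
import numpy as np
# Hypothetical parameter dictionary mirroring TwoLayerNet.params
params = {
    'W1': 1e-4 * np.random.randn(32 * 32 * 3, 50),  # first-layer weights
    'b1': np.zeros(50),                              # first-layer biases
    'W2': 1e-4 * np.random.randn(50, 10),            # second-layer weights
    'b2': np.zeros(10),                              # second-layer biases
}
print({k: v.shape for k, v in params.items()})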
|
3,362
|
<ASSISTANT_TASK:>
Python Code:
with open("erty.txt", "w") as f:
f.write("try.try.try")
import pymmails
server = pymmails.create_smtp_server("gmail", "xavier.somebody@gmail.com", "pwd")
pymmails.send_email(server, "xavier.somebody@gmail.com", "xavier.somebodyelse@else.com",
"results", "body", attachements = [ "erty.txt" ])
server.quit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the function create_smtp_server, the string "gmail" is replaced by "smtp.gmail.com" (an illustrative sketch of this lookup follows).
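A rough illustration of that alias lookup; the mapping table and helper below are hypothetical and not pymmails' actual implementation:
SMTP_ALIASES = {"gmail": ("smtp.gmail.com", 587)}  # hypothetical alias table
def resolve_smtp_host(name, default_port=587):
    # Pass unknown names through unchanged; only known aliases are rewritten.
    return SMTP_ALIASES.get(name, (name, default_port))
print(resolve_smtp_host("gmail"))  # ('smtp.gmail.com', 587)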
|
3,363
|
<ASSISTANT_TASK:>
Python Code:
!pip install keras==2.0.8
from keras.datasets import mnist
from keras.layers import *
from keras.layers import Dense, Input, Flatten
from keras.models import Model
from keras.layers.merge import concatenate
from keras.utils import np_utils
from keras import backend as K  # needed below for image_data_format()
import numpy as np  # needed below for seeding and array manipulation
img_rows, img_cols = 28, 28
if K.image_data_format() == 'channels_first':
shape_ord = (1, img_rows, img_cols)
else: # channel_last
shape_ord = (img_rows, img_cols, 1)
inputs = Input(shape=(28, 28, 1), name='left_input')
random_layer_name = Flatten()(inputs)
random_layer_name = Dense(32)(random_layer_name)
predictions = Dense(2, activation='softmax')(random_layer_name)
model = Model(inputs=[inputs], outputs=predictions)
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape((X_train.shape[0],) + shape_ord)
X_test = X_test.reshape((X_test.shape[0],) + shape_ord)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
np.random.seed(1338) # for reproducibility!!
# Test datafit
X_test = X_test.copy()
Y = y_test.copy()
# Converting the output to binary classification(Six=1,Not Six=0)
Y_test = Y == 6
Y_test = Y_test.astype(int)
# Selecting the 5918 examples where the output is 6
X_six = X_train[y_train == 6].copy()
Y_six = y_train[y_train == 6].copy()
# Selecting the examples where the output is not 6
X_not_six = X_train[y_train != 6].copy()
Y_not_six = y_train[y_train != 6].copy()
# Selecting 6000 random examples from the data that
# only contains the data where the output is not 6
random_rows = np.random.randint(0, X_not_six.shape[0], 6000)
X_not_six = X_not_six[random_rows]
Y_not_six = Y_not_six[random_rows]
# Appending the data with output as 6 and data with output as <> 6
X_train = np.append(X_six,X_not_six)
# Reshaping the appended data to appropriate form
X_train = X_train.reshape((X_six.shape[0] + X_not_six.shape[0],) + shape_ord)
# Appending the labels and converting the labels to
# binary classification(Six=1,Not Six=0)
Y_labels = np.append(Y_six,Y_not_six)
Y_train = Y_labels == 6
Y_train = Y_train.astype(int)
# Converting the classes to its binary categorical form
nb_classes = 2
Y_train = np_utils.to_categorical(Y_train, nb_classes)
Y_test = np_utils.to_categorical(Y_test, nb_classes)
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
hist = model.fit(X_train, Y_train, batch_size=32,
verbose=1,
validation_data=(X_test, Y_test))
# %load ../solutions/sol_821.py
## try yourself
## `evaluate` the model on test data
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
# Headline input: meant to receive sequences of 100 integers, between 1 and 10000.
# Note that we can name any layer by passing it a "name" argument.
main_input = Input(shape=(100,), dtype='int32', name='main_input')
# This embedding layer will encode the input sequence
# into a sequence of dense 512-dimensional vectors.
x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)
# A LSTM will transform the vector sequence into a single vector,
# containing information about the entire sequence
lstm_out = LSTM(32)(x)
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)
from keras.layers import concatenate
auxiliary_input = Input(shape=(5,), name='aux_input')
x = concatenate([lstm_out, auxiliary_input])
# We stack a deep densely-connected network on top
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
# And finally we add the main logistic regression layer
main_output = Dense(1, activation='sigmoid', name='main_output')(x)
model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output])
model.compile(optimizer='rmsprop',
loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'},
loss_weights={'main_output': 1., 'aux_output': 0.2})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Keras supports different Merge strategies
Step3: Here we insert the auxiliary loss, allowing the LSTM and Embedding layer to be trained smoothly even though the main loss will be much higher in the model.
Step4: At this point, we feed into the model our auxiliary input data by concatenating it with the LSTM output
Step5: Model Definition
Step6: We compile the model and assign a weight of 0.2 to the auxiliary loss.
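As a usage sketch (not part of the original notebook), a model with named inputs and outputs like the one above can be fed with dictionaries keyed by those layer names; the dummy arrays below are placeholders with matching shapes:
import numpy as np
n = 32  # illustrative batch of samples
headline_data = np.random.randint(1, 10000, size=(n, 100))
additional_data = np.random.random((n, 5))
labels = np.random.randint(0, 2, size=(n, 1))
model.fit({'main_input': headline_data, 'aux_input': additional_data},
          {'main_output': labels, 'aux_output': labels},
          epochs=1, batch_size=8)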
|
3,364
|
<ASSISTANT_TASK:>
Python Code:
from binary_tools.binary.kicks import*
from binary_tools.binary.tests.test_kicks import*
from binary_tools.binary.orbits import*
from binary_tools.binary.tests.run_tests import*
phi = rand_phi()
theta = rand_theta()
velocity = rand_velocity(100)
post_explosion_params_circular(133, 5.5, 55, 1.4, theta, phi, velocity)
sample_e = 0.2
true_anomaly = rand_true_anomaly(sample_e)
rand_separation(133,sample_e)
post_explosion_params_general( 133, 5.5, 55, 1.4, sample_e, theta, phi, velocity, true_anomaly)
test_rand_phi(num_sample=10000, nbins=20, tolerance = 1e-3, seed="Jean", plot=True, save=False)
test_rand_theta(num_sample=10000, nbins=20, tolerance = 1e-3, seed="Jubilee", plot=True, save=False)
test_rand_velocity(100, num_sample=10000, nbins=20, tolerance=1e-3, seed="Dimitris", plot=True, save=False)
test_rand_true_anomaly(sample_e,num_sample=10000, nbins=20, tolerance = 1e-3, seed="Rhysand",\
plot=True, save=False)
testing_circular_function_graph(test_sigma=100,test_m1=5.5,test_m2=55,test_ai=133,\
test_m1f=1.4,seed="Flay",sample_velocity=100,npoints=10000,plot=True,save=False)
testing_circular_function_momentum(ai=133, m1=5.5, m2=55, m1f=1.4, test_sigma=100,\
num_sample=1000,seed = "Lela", tolerance=1e-3)
testing_eccentric_kick(Ai=133, M1=5.5, M2=55, Mns=1.4, num_sample=100, seed = "Guarnaschelli")
testing_inverse_kick(Ai=133, M1=5.5, M2=55, Mns=1.4, test_sigma=1000, num_sample=100,\
seed="Tamlin",tolerance=1e-4)
testing_momentum_full_eccentric(Ai=133, M1=5.5, M2=55, Mns=1.4, test_sigma=15,num_sample=100,\
seed="Lucien",tolerance=1e-4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: kicks
Step2: $\theta$ follows the distribution
Step3: the velocity follows the Maxwellian distribution (a rough sampling sketch is given after this step list)
Step4: The following function created is a function that accepts initial conditions on a circular binary system and returns the final semi-major axis, final eccentricity, the angle between the pre and post explosion orbital planes, and a boolean describing if the orbit is unbound
Step5: To consider a non circular system, a series of functions were created to sample an eccentric system with a kick. The first function finds a random true anomaly of the system, then the second uses that true anomaly to find the separation of the system.
Step6: the true anomaly follows the distribution
Step7: the separation is calculated from the following equation
Step8: The last part of the eccentric series is a function that accepts the initial conditions of an eccentric system, then returns the final semi-major axis, final eccentricity, and a boolean describing if the orbit is unbound
Step9: test kicks
Step10: the testing_circular_function_graph test is different from the previous tests: no comparison is calculated; it merely creates a graph that must be checked by a person against a known-correct graph.
Step11: In order to fully test the post_explosion_params_circular function, a second function was created in addition to testing_circular_function_graph, which works by comparing the momentum calculated from that function to a momentum calculated from other known values.
Step12: A second version of the momentum test and the graph test were needed after the post_explosion_params_general function was created, to test that function. These slightly modified functions call post_explosion_params_general with an initial eccentricity of zero. The results they produce are very similar to the original circular tests.
Step13: An additional test was created for the post_explosion_params_general function that kicks a circular system with mass loss, then reverses that with mass gain back into a circular orbit.
Step14: A full version of the momentum test was also created to test post_explosion_params_general in the case of an initially eccentric orbit. The momentum is calculated by a slightly different method from the previous momentum functions.
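The distributions mentioned in Steps 2, 3, 6 and 7 are not reproduced in this description. As a rough, self-contained sketch of the standard choices (an isotropic kick direction, a Maxwellian kick speed with 1-D dispersion sigma, and Kepler's relation for the separation); the package's own rand_phi, rand_theta, rand_velocity, rand_true_anomaly and rand_separation functions remain the authoritative implementations, and the uniform true anomaly below is a simplification:
import numpy as np
rng = np.random.default_rng(42)
# Isotropic direction: phi uniform on [0, 2*pi); theta with p(theta) = sin(theta)/2,
# sampled by inverse transform as theta = arccos(1 - 2u).
phi = rng.uniform(0.0, 2.0 * np.pi)
theta = np.arccos(1.0 - 2.0 * rng.uniform())
# Maxwellian speed: magnitude of a 3-D isotropic Gaussian with per-axis dispersion sigma.
sigma = 100.0  # km/s, illustrative
velocity = np.linalg.norm(rng.normal(0.0, sigma, size=3))
# Separation of an eccentric orbit at true anomaly nu: r = a * (1 - e**2) / (1 + e * cos(nu)).
a, e = 133.0, 0.2
nu = rng.uniform(0.0, 2.0 * np.pi)  # simplified; not the time-weighted distribution
r = a * (1.0 - e**2) / (1.0 + e * np.cos(nu))
print(phi, theta, velocity, r)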
|
3,365
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units, is_training):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
def conv_layer(prev_layer, layer_depth, is_training):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
    # Initialize weights and bias for the dense layer; prev_layer is (batch, features),
    # so the weight matrix uses the feature dimension, not the batch dimension.
    weights = tf.Variable(
        tf.truncated_normal([prev_layer.get_shape().as_list()[1], num_units], stddev=0.05))
    bias = tf.Variable(tf.zeros(num_units))
    layer = tf.nn.relu(tf.matmul(prev_layer, weights) + bias)
    return layer
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Batch Normalization using tf.layers.batch_normalization (a plain-numpy sketch of the underlying computation follows this step list)
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO
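A plain-numpy sketch of what batch normalization computes, simplified to a fully connected (batch, features) activation; tf.layers.batch_normalization additionally handles convolutional axes and ties the running-statistics updates to tf.GraphKeys.UPDATE_OPS, as in the code above:
import numpy as np
def batch_norm_train(x, gamma, beta, running_mean, running_var, momentum=0.99, eps=1e-5):
    # Training: normalize with the batch statistics, then scale and shift,
    # and update the running statistics used later at inference time.
    mu, var = x.mean(axis=0), x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return gamma * x_hat + beta, running_mean, running_var
def batch_norm_infer(x, gamma, beta, running_mean, running_var, eps=1e-5):
    # Inference: use the running statistics instead of the batch statistics.
    return gamma * (x - running_mean) / np.sqrt(running_var + eps) + beta
x = np.random.randn(64, 10)
gamma, beta = np.ones(10), np.zeros(10)
out, rm, rv = batch_norm_train(x, gamma, beta, np.zeros(10), np.ones(10))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))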
|
3,366
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
import matplotlib.image as mpimg
from IPython.display import Image
from astropy.io import fits
import aplpy
#Disable astropy/aplpy logging
import logging
logger0 = logging.getLogger('astropy')
logger0.setLevel(logging.CRITICAL)
logger1 = logging.getLogger('aplpy')
logger1.setLevel(logging.CRITICAL)
from IPython.display import HTML
HTML('../style/code_toggle.html')
def generalGauss2d(x0, y0, sigmax, sigmay, amp=1., theta=0.):
Return a normalized general 2-D Gaussian function
x0,y0: centre position
sigmax, sigmay: standard deviation
amp: amplitude
theta: rotation angle (deg)
#norm = amp * (1./(2.*np.pi*(sigmax*sigmay))) #normalization factor
norm = amp
    rtheta = theta * np.pi / 180. # convert degrees to radians
#general function parameters (https://en.wikipedia.org/wiki/Gaussian_function)
a = (np.cos(rtheta)**2.)/(2.*(sigmax**2.)) + (np.sin(rtheta)**2.)/(2.*(sigmay**2.))
b = -1.*(np.sin(2.*rtheta))/(4.*(sigmax**2.)) + (np.sin(2.*rtheta))/(4.*(sigmay**2.))
c = (np.sin(rtheta)**2.)/(2.*(sigmax**2.)) + (np.cos(rtheta)**2.)/(2.*(sigmay**2.))
return lambda x,y: norm * np.exp(-1. * (a * ((x - x0)**2.) - 2.*b*(x-x0)*(y-y0) + c * ((y-y0)**2.)))
def genRstoredBeamImg(fitsImg):
Generate an image of the restored PSF beam based on the FITS header and image size
fh = fits.open(fitsImg)
#get the restoring beam information from the FITS header
bmin = fh[0].header['BMIN'] #restored beam minor axis (deg)
bmaj = fh[0].header['BMAJ'] #restored beam major axis (deg)
bpa = fh[0].header['BPA'] #restored beam angle (deg)
dRA = fh[0].header['CDELT1'] #pixel size in RA direction (deg)
ra0 = fh[0].header['CRPIX1'] #centre RA pixel
dDec = fh[0].header['CDELT2'] #pixel size in Dec direction (deg)
dec0 = fh[0].header['CRPIX2'] #centre Dec pixel
#construct 2-D ellipitcal Gaussian function
gFunc = generalGauss2d(0., 0., bmin/2., bmaj/2., theta=bpa)
#produce an restored PSF beam image
imgSize = 2.*(ra0-1) #assumes a square image
xpos, ypos = np.mgrid[0:imgSize, 0:imgSize].astype(float) #make a grid of pixel indicies
xpos -= ra0 #recentre
ypos -= dec0 #recentre
xpos *= dRA #convert pixel number to degrees
ypos *= dDec #convert pixel number to degrees
return gFunc(xpos, ypos) #restored PSF beam image
def convolveBeamSky(beamImg, skyModel):
Convolve a beam (PSF or restored) image with a sky model image, images must be the same shape
sampFunc = np.fft.fft2(beamImg) #sampling function
skyModelVis = np.fft.fft2(skyModel[0,0]) #sky model visibilities
sampModelVis = sampFunc * skyModelVis #sampled sky model visibilities
return np.abs(np.fft.fftshift(np.fft.ifft2(sampModelVis))) #sky model convolved with restored beam
fig = plt.figure(figsize=(16, 7))
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-residual.fits')
residualImg = fh[0].data
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-model.fits')
skyModel = fh[0].data
#generate a retored PSF beam image
restBeam = genRstoredBeamImg(
'../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-residual.fits')
#convolve restored beam image with skymodel
convImg = convolveBeamSky(restBeam, skyModel)
gc1 = aplpy.FITSFigure(residualImg, figure=fig, subplot=[0.1,0.1,0.35,0.8])
gc1.show_colorscale(vmin=-1.5, vmax=2, cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Residual Image (niter=1000)')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure(convImg, figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(vmin=0., vmax=2.5, cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('Sky Model')
gc2.add_colorbar()
fig.canvas.draw()
fig = plt.figure(figsize=(16, 7))
gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-dirty.fits', \
figure=fig, subplot=[0.1,0.1,0.35,0.8])
gc1.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Dirty Image')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits', \
figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('Deconvolved Image')
gc2.add_colorbar()
fig.canvas.draw()
#load deconvolved image
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits')
deconvImg = fh[0].data
#load residual image
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits')
residImg = fh[0].data
peakI = np.max(deconvImg)
print 'Peak Flux: %f Jy'%(peakI)
print 'Dynamic Range:'
#method 1
noise = np.std(deconvImg)
print '\tMethod 1:', peakI/noise
#method 2
noise = np.std(residImg)
print '\tMethod 2:', peakI/noise
#method 3
noise = np.std(np.random.choice(deconvImg.flatten(), int(deconvImg.size*.01))) #randomly sample 1% of pixels
print '\tMethod 3:', peakI/noise
#method 4, region 1
noise = np.std(deconvImg[0,0,0:128,0:128]) #corner of image
print '\tMethod 4a:', peakI/noise
#method 4, region 2
noise = np.std(deconvImg[0,0,192:320,192:320]) #centre of image
print '\tMethod 4b:', peakI/noise
fig = plt.figure(figsize=(8, 7))
gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits', \
figure=fig)
gc1.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Residual Image')
gc1.add_colorbar()
fig.canvas.draw()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step5: 6.4 Residuals and Image Quality
Step6: Figure
Step7: Left
Step8: Method 1 will always result in a lower dynamic range than Method 2, as the deconvolved image includes the sources whereas Method 2 only uses the residuals. Method 3 will result in a dynamic range which varies depending on the number of pixels sampled and which pixels are sampled. One could imagine an unlucky sampling where every pixel chosen is part of a source, resulting in a large standard deviation. Method 4 depends on the region used to compute the noise. In the Method 4a result a corner of the image, where there are essentially no sources, results in a high dynamic range. On the other hand, choosing the centre region to compute the noise standard deviation results in a low dynamic range. This variation between methods can lead to people playing 'the dynamic range game', where someone can pick the result that best fits what they want to say about the image. Be careful, and make sure your dynamic range metric is well defined and unbiased (one such definition is sketched below).
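To pin the metric down, a small helper that fixes one unambiguous definition (peak of the deconvolved image over the standard deviation of the residual image, i.e. Method 2 above); the arrays here are synthetic stand-ins rather than the FITS images used in the code:
import numpy as np
def dynamic_range(deconvolved, residual):
    # Peak flux from the deconvolved image; noise from the full residual image.
    return np.max(deconvolved) / np.std(residual)
img = np.random.normal(0.0, 0.1, (256, 256))
img[128, 128] = 5.0  # inject a single bright "source"
resid = np.random.normal(0.0, 0.1, (256, 256))
print(dynamic_range(img, resid))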
|
3,367
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d import proj3d
np.set_printoptions(suppress=True,precision=3)
from matplotlib.patches import FancyArrowPatch
X_men=np.array([[1.97,110,5],[1.80,70,4.8],[1.70,90,4.9]]).transpose()
X_women=np.array([[1.65,52,4.7],[1.75,65,4.8],[1.67,58,4.6]]).transpose()
X = np.hstack((X_men,X_women))
print (X)
mean=np.mean(X,axis=1)
std=np.std(X,axis=1)
var=np.var(X,axis=1)
print ("Means:",mean)
print ("Standard deviations:",std)
print ("Variances:",var)
def scale_linear_byrow(X):
mins = np.min(X, axis=1).reshape(3,1)
maxs = np.max(X, axis=1).reshape(3,1)
rng = maxs - mins
return ((X - mins)/rng)
X_r=scale_linear_byrow(X)
print(X_r)
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
plt.rcParams['legend.fontsize'] = 10
ax.plot(X_r[0,0:3], X_r[1,0:3], X_r[2,0:3], 'o', markersize=8, color='red', alpha=0.5, label='class1')
ax.plot(X_r[0,3:6], X_r[1,3:6], X_r[2,3:6], 'o', markersize=8, color='blue', alpha=0.5, label='class1')
plt.show()
mean=np.mean(X_r,axis=1)
std=np.std(X_r,axis=1)
var=np.var(X_r,axis=1)
print ("Means:",mean)
print ("Standard deviations:",std)
print ("Variances:",var)
#mean_vector = np.array([[mean_x],[mean_y],[mean_z]])
cvm=np.cov(X_r)
print (cvm)
eig_val_cov, eig_vec_cov = np.linalg.eig(cvm)
for i in range(len(eig_val_cov)):
eigvec_cov = eig_vec_cov[:,i].reshape(1,3).T
print('Eigenvalue {} from covariance matrix: {}'.format(i+1, eig_val_cov[i]))
print('Eigenvector:')
print(eigvec_cov)
for i in range(len(eig_val_cov)):
eigv = eig_vec_cov[:,i].reshape(1,3).T
np.testing.assert_array_almost_equal(cvm.dot(eigv), eig_val_cov[i] * eigv,
decimal=6, err_msg='', verbose=True)
# Sort eigenvectors given their eigenvalue
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_val_cov[i]), eig_vec_cov[:,i]) for i in range(len(eig_val_cov))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort()
eig_pairs.reverse()
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
for i in eig_pairs:
print(i)
# print(i[0])
# Just to principal components
matrix_w = np.hstack((eig_pairs[0][1].reshape(3,1), eig_pairs[1][1].reshape(3,1)))
print('Matrix W:\n', matrix_w)
# Transform instance to the new subspace
transformed = np.dot(X_r.T,matrix_w).transpose()
print (transformed)
plt.plot(transformed[0,0:3], transformed[1,0:3], 'o', markersize=7, color='blue', alpha=0.5, label='men')
plt.plot(transformed[0,3:6], transformed[1,3:6], '^', markersize=7, color='red', alpha=0.5, label='women')
#plt.xlim([-4,4])
#plt.ylim([-4,4])
plt.xlabel('x_values')
plt.ylabel('y_values')
#plt.legend()
plt.title('Transformed samples with class labels')
plt.show()
# Just one dimension
matrix_w2 = np.hstack((eig_pairs[0][1].reshape(3,1),))
print('Matrix W:\n', matrix_w2)
transformed2 = np.dot(X_r.T,matrix_w2).transpose()
print (transformed2)
plt.plot(transformed2[0,0:3],[0,0,0], 'o', markersize=7, color='blue', alpha=0.5, label='men')
plt.plot(transformed2[0,3:6],[0,0,0], '^', markersize=7, color='red', alpha=0.5, label='women')
#plt.xlim([-4,4])
#plt.ylim([-4,4])
plt.xlabel('x_values')
plt.ylabel('y_values')
plt.legend()
plt.title('Transformed samples with class labels')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us define a very simple dataset, with 6 people, identified by their height, weight, and the length of their middle finger (?). Each column is an instance, with 3 attributes (we use this format so we can call the np.cov() function directly).
Step2: Rescale values
Step3: Calculate the mean, standard deviation and variance for every attribute, using the axis argument of each of the methods
Step4: Now, obtain the covariance matrix for our sample
Step5: Calculate eigenvalues and eigenvectors from the covariance matrix. Since it is symmetric, we know they are real numbers
Step6: Verify that the eigenvalue equation holds
Step7: Transform instances to the new subspace...
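An optional cross-check with scikit-learn, added here as a sketch: it assumes sklearn is available and that X_r is the rescaled 3 x 6 matrix built above; sklearn expects samples as rows, hence the transpose, and the component signs may be flipped relative to the manual eigenvectors:
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
projected = pca.fit_transform(X_r.T)  # 6 samples x 2 principal components
print(pca.explained_variance_ratio_)
print(projected)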
|
3,368
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def convolution(img, kernel, padding=1, stride=1):
img: input image with one channel
kernel: convolution kernel
h, w = img.shape
kernel_size = kernel.shape[0]
# height and width of image with padding
ph, pw = h + 2 * padding, w + 2 * padding
padding_img = np.zeros((ph, pw))
padding_img[padding:h + padding, padding:w + padding] = img
# height and width of output image
result_h = (h + 2 * padding - kernel_size) // stride + 1
result_w = (w + 2 * padding - kernel_size) // stride + 1
result = np.zeros((result_h, result_w))
# convolution
x, y = 0, 0
for i in range(0, ph - kernel_size + 1, stride):
for j in range(0, pw - kernel_size + 1, stride):
roi = padding_img[i:i+kernel_size, j:j+kernel_size]
result[x, y] = np.sum(roi * kernel)
y += 1
y = 0
x += 1
return result
from PIL import Image
import matplotlib.pyplot as plt
img = Image.open('pics/lena.jpg').convert('L')
plt.imshow(img, cmap='gray')
# a Laplace kernel
laplace_kernel = np.array([[-1, -1, -1],
[-1, 8, -1],
[-1, -1, -1]])
# Gauss kernel with kernel_size=3
gauss_kernel3 = (1/ 16) * np.array([[1, 2, 1],
[2, 4, 2],
[1, 2, 1]])
# Gauss kernel with kernel_size=5
gauss_kernel5 = (1/ 84) * np.array([[1, 2, 3, 2, 1],
[2, 5, 6, 5, 2],
[3, 6, 8, 6, 3],
[2, 5, 6, 5, 2],
[1, 2, 3, 2, 1]])
fig, ax = plt.subplots(1, 3, figsize=(12, 8))
laplace_img = convolution(np.array(img), laplace_kernel, padding=1, stride=1)
ax[0].imshow(Image.fromarray(laplace_img), cmap='gray')
ax[0].set_title('laplace')
gauss3_img = convolution(np.array(img), gauss_kernel3, padding=1, stride=1)
ax[1].imshow(Image.fromarray(gauss3_img), cmap='gray')
ax[1].set_title('gauss kernel_size=3')
gauss5_img = convolution(np.array(img), gauss_kernel5, padding=2, stride=1)
ax[2].imshow(Image.fromarray(gauss5_img), cmap='gray')
ax[2].set_title('gauss kernel_size=5')
def myconv2d(features, weights, padding=0, stride=1):
features: input, in_channel * h * w
weights: kernel, out_channel * in_channel * kernel_size * kernel_size
return output with out_channel
in_channel, h, w = features.shape
out_channel, _, kernel_size, _ = weights.shape
# height and width of output image
output_h = (h + 2 * padding - kernel_size) // stride + 1
output_w = (w + 2 * padding - kernel_size) // stride + 1
output = np.zeros((out_channel, output_h, output_w))
# call convolution out_channel * in_channel times
for i in range(out_channel):
weight = weights[i]
for j in range(in_channel):
feature_map = features[j]
kernel = weight[j]
output[i] += convolution(feature_map, kernel, padding, stride)
return output
input_data=[
[[0,0,2,2,0,1],
[0,2,2,0,0,2],
[1,1,0,2,0,0],
[2,2,1,1,0,0],
[2,0,1,2,0,1],
[2,0,2,1,0,1]],
[[2,0,2,1,1,1],
[0,1,0,0,2,2],
[1,0,0,2,1,0],
[1,1,1,1,1,1],
[1,0,1,1,1,2],
[2,1,2,1,0,2]]
]
weights_data=[[
[[ 0, 1, 0],
[ 1, 1, 1],
[ 0, 1, 0]],
[[-1, -1, -1],
[ -1, 8, -1],
[ -1, -1, -1]]
]]
# numpy array
input_data = np.array(input_data)
weights_data = np.array(weights_data)
# show the result
print(myconv2d(input_data, weights_data, padding=3, stride=3))
import torch
import torch.nn.functional as F
input_tensor = torch.tensor(input_data).unsqueeze(0).float()
F.conv2d(input_tensor, weight=torch.tensor(weights_data).float(), bias=None, stride=3, padding=3)
def convolutionV2(img, kernel, padding=(0,0), stride=(1,1)):
h, w = img.shape
kh, kw = kernel.shape
# height and width of image with padding
ph, pw = h + 2 * padding[0], w + 2 * padding[1]
padding_img = np.zeros((ph, pw))
padding_img[padding[0]:h + padding[0], padding[1]:w + padding[1]] = img
# height and width of output image
result_h = (h + 2 * padding[0] - kh) // stride[0] + 1
result_w = (w + 2 * padding[1] - kw) // stride[1] + 1
result = np.zeros((result_h, result_w))
# convolution
x, y = 0, 0
for i in range(0, ph - kh + 1, stride[0]):
for j in range(0, pw - kw + 1, stride[1]):
roi = padding_img[i:i+kh, j:j+kw]
result[x, y] = np.sum(roi * kernel)
y += 1
y = 0
x += 1
return result
# test input
test_input = np.array([[1, 1, 2, 1],
[0, 1, 0, 2],
[2, 2, 0, 2],
[2, 2, 2, 1],
[2, 3, 2, 3]])
test_kernel = np.array([[1, 0], [0, 1], [0, 0]])
# output
print(convolutionV2(test_input, test_kernel, padding=(1, 0), stride=(1, 1)))
print(convolutionV2(test_input, test_kernel, padding=(2, 1), stride=(1, 2)))
import torch
import torch.nn as nn
x = torch.randn(1, 1, 32, 32)
conv_layer = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=3, stride=1, padding=0)
y = conv_layer(x)
print(x.shape)
print(y.shape)
x = torch.randn(1, 1, 32, 32)
conv_layer = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=5, stride=2, padding=2)
y = conv_layer(x)
print(x.shape)
print(y.shape)
# input N * C * H * W
x = torch.randn(1, 1, 4, 4)
# maxpool
maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
y = maxpool(x)
# avgpool
avgpool = nn.AvgPool2d(kernel_size=2, stride=2)
z = avgpool(x)
#avgpool
print(x)
print(y)
print(z)
import torch
import torch.nn as nn
import torch.utils.data as Data
import torchvision
class MyCNN(nn.Module):
def __init__(self, image_size, num_classes):
super(MyCNN, self).__init__()
# conv1: Conv2d -> BN -> ReLU -> MaxPool
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
# conv2: Conv2d -> BN -> ReLU -> MaxPool
self.conv2 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
)
# fully connected layer
self.fc = nn.Linear(32 * (image_size // 4) * (image_size // 4), num_classes)
def forward(self, x):
input: N * 3 * image_size * image_size
output: N * num_classes
x = self.conv1(x)
x = self.conv2(x)
# view(x.size(0), -1): change tensor size from (N ,H , W) to (N, H*W)
x = x.view(x.size(0), -1)
output = self.fc(x)
return output
def train(model, train_loader, loss_func, optimizer, device):
train model using loss_fn and optimizer in an epoch.
model: CNN networks
train_loader: a Dataloader object with training data
loss_func: loss function
device: train on cpu or gpu device
total_loss = 0
# train the model using minibatch
for i, (images, targets) in enumerate(train_loader):
images = images.to(device)
targets = targets.to(device)
# forward
outputs = model(images)
loss = loss_func(outputs, targets)
# backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
total_loss += loss.item()
# every 100 iteration, print loss
if (i + 1) % 100 == 0:
print ("Step [{}/{}] Train Loss: {:.4f}"
.format(i+1, len(train_loader), loss.item()))
return total_loss / len(train_loader)
def evaluate(model, val_loader, device):
model: CNN networks
val_loader: a Dataloader object with validation data
device: evaluate on cpu or gpu device
return classification accuracy of the model on val dataset
# evaluate the model
model.eval()
# context-manager that disabled gradient computation
with torch.no_grad():
correct = 0
total = 0
for i, (images, targets) in enumerate(val_loader):
# device: cpu or gpu
images = images.to(device)
targets = targets.to(device)
outputs = model(images)
# return the maximum value of each row of the input tensor in the
# given dimension dim, the second return vale is the index location
# of each maxium value found(argmax)
_, predicted = torch.max(outputs.data, dim=1)
correct += (predicted == targets).sum().item()
total += targets.size(0)
accuracy = correct / total
print('Accuracy on Test Set: {:.4f} %'.format(100 * accuracy))
return accuracy
def save_model(model, save_path):
# save model
torch.save(model.state_dict(), save_path)
import matplotlib.pyplot as plt
def show_curve(ys, title):
plot curlve for Loss and Accuacy
Args:
ys: loss or acc list
title: loss or accuracy
x = np.array(range(len(ys)))
y = np.array(ys)
plt.plot(x, y, c='b')
plt.axis()
plt.title('{} curve'.format(title))
plt.xlabel('epoch')
plt.ylabel('{}'.format(title))
plt.show()
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
# mean and std of cifar10 in 3 channels
cifar10_mean = (0.49, 0.48, 0.45)
cifar10_std = (0.25, 0.24, 0.26)
# define transform operations of train dataset
train_transform = transforms.Compose([
# data augmentation
transforms.Pad(4),
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(32),
transforms.ToTensor(),
transforms.Normalize(cifar10_mean, cifar10_std)])
test_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(cifar10_mean, cifar10_std)])
# torchvision.datasets provide CIFAR-10 dataset for classification
train_dataset = torchvision.datasets.CIFAR10(root='./data/',
train=True,
transform=train_transform,
download=True)
test_dataset = torchvision.datasets.CIFAR10(root='./data/',
train=False,
transform=test_transform)
# Data loader: provides single- or multi-process iterators over the dataset.
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=100,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=100,
shuffle=False)
def fit(model, num_epochs, optimizer, device):
train and evaluate an classifier num_epochs times.
We use optimizer and cross entropy loss to train the model.
Args:
model: CNN network
num_epochs: the number of training epochs
optimizer: optimize the loss function
# loss and optimizer
loss_func = nn.CrossEntropyLoss()
model.to(device)
loss_func.to(device)
# log train loss and test accuracy
losses = []
accs = []
for epoch in range(num_epochs):
print('Epoch {}/{}:'.format(epoch + 1, num_epochs))
# train step
loss = train(model, train_loader, loss_func, optimizer, device)
losses.append(loss)
# evaluate step
accuracy = evaluate(model, test_loader, device)
accs.append(accuracy)
# show curve
show_curve(losses, "train loss")
show_curve(accs, "test accuracy")
# hyper parameters
num_epochs = 10
lr = 0.01
image_size = 32
num_classes = 10
# declare and define an objet of MyCNN
mycnn = MyCNN(image_size, num_classes)
print(mycnn)
# Device configuration, cpu, cuda:0/1/2/3 available
device = torch.device('cuda:0')
optimizer = torch.optim.Adam(mycnn.parameters(), lr=lr)
# start training on cifar10 dataset
fit(mycnn, num_epochs, optimizer, device)
# 3x3 convolution
def conv3x3(in_channels, out_channels, stride=1):
return nn.Conv2d(in_channels, out_channels, kernel_size=3,
stride=stride, padding=1, bias=False)
# Residual block
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride=1, downsample=None):
super(ResidualBlock, self).__init__()
self.conv1 = conv3x3(in_channels, out_channels, stride)
self.bn1 = nn.BatchNorm2d(out_channels)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(out_channels, out_channels)
self.bn2 = nn.BatchNorm2d(out_channels)
self.downsample = downsample
def forward(self, x):
Defines the computation performed at every call.
x: N * C * H * W
residual = x
# if the size of input x changes, using downsample to change the size of residual
if self.downsample:
residual = self.downsample(x)
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out += residual
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=10):
block: ResidualBlock or other block
layers: a list with 3 positive num.
super(ResNet, self).__init__()
self.in_channels = 16
self.conv = conv3x3(3, 16)
self.bn = nn.BatchNorm2d(16)
self.relu = nn.ReLU(inplace=True)
# layer1: image size 32
self.layer1 = self.make_layer(block, 16, num_blocks=layers[0])
# layer2: image size 32 -> 16
self.layer2 = self.make_layer(block, 32, num_blocks=layers[1], stride=2)
# layer1: image size 16 -> 8
self.layer3 = self.make_layer(block, 64, num_blocks=layers[2], stride=2)
# global avg pool: image size 8 -> 1
self.avg_pool = nn.AvgPool2d(8)
self.fc = nn.Linear(64, num_classes)
def make_layer(self, block, out_channels, num_blocks, stride=1):
make a layer with num_blocks blocks.
downsample = None
if (stride != 1) or (self.in_channels != out_channels):
# use Conv2d with stride to downsample
downsample = nn.Sequential(
conv3x3(self.in_channels, out_channels, stride=stride),
nn.BatchNorm2d(out_channels))
# first block with downsample
layers = []
layers.append(block(self.in_channels, out_channels, stride, downsample))
self.in_channels = out_channels
# add num_blocks - 1 blocks
for i in range(1, num_blocks):
layers.append(block(out_channels, out_channels))
# return a layer containing layers
return nn.Sequential(*layers)
def forward(self, x):
out = self.conv(x)
out = self.bn(out)
out = self.relu(out)
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.avg_pool(out)
# view: here change output size from 4 dimensions to 2 dimensions
out = out.view(out.size(0), -1)
out = self.fc(out)
return out
resnet = ResNet(ResidualBlock, [2, 2, 2])
print(resnet)
# Hyper-parameters
num_epochs = 10
lr = 0.001
# Device configuration
device = torch.device('cuda:0')
# optimizer
optimizer = torch.optim.Adam(resnet.parameters(), lr=lr)
fit(resnet, num_epochs, optimizer, device)
resnet = ResNet(ResidualBlock, [2, 2, 2])
num_epochs = 10
lr = 0.0009
device = torch.device('cuda:0')
optimizer = torch.optim.Adam(resnet.parameters(), lr=lr)
fit(resnet, num_epochs, optimizer, device)
from torch import nn
class SELayer(nn.Module):
def __init__(self, channel, reduction=16):
super(SELayer, self).__init__()
# The output of AdaptiveAvgPool2d is of size H x W, for any input size.
self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
self.fc1 = nn.Linear(channel, channel // reduction)
self.fc2 = nn.Linear(channel // reduction, channel)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
b, c, _, _ = x.shape
out = self.avg_pool(x).view(b, c)
out = self.fc1(out)
out = self.fc2(out)
out = self.sigmoid(out).view(b, c, 1, 1)
return out * x
class SEResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride=1, downsample=None, reduction=16):
super(SEResidualBlock, self).__init__()
self.conv1 = conv3x3(in_channels, out_channels, stride)
self.bn1 = nn.BatchNorm2d(out_channels)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(out_channels, out_channels)
self.bn2 = nn.BatchNorm2d(out_channels)
self.se = SELayer(out_channels, reduction)
self.downsample = downsample
def forward(self, x):
residual = x
if self.downsample:
residual = self.downsample(x)
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.se(out)
out += residual
out = self.relu(out)
return out
se_resnet = ResNet(SEResidualBlock, [2, 2, 2])
print(se_resnet)
# Hyper-parameters
num_epochs = 10
lr = 0.001
# Device configuration
device = torch.device('cuda:0')
# optimizer
optimizer = torch.optim.Adam(se_resnet.parameters(), lr=lr)
fit(se_resnet, num_epochs, optimizer, device)
import math
class VGG(nn.Module):
def __init__(self, cfg):
super(VGG, self).__init__()
self.features = self._make_layers(cfg)
# linear layer
self.classifier = nn.Linear(512, 10)
def forward(self, x):
out = self.features(x)
out = out.view(out.size(0), -1)
out = self.classifier(out)
return out
def _make_layers(self, cfg):
cfg: a list define layers this layer contains
'M': MaxPool, number: Conv2d(out_channels=number) -> BN -> ReLU
layers = []
in_channels = 3
for x in cfg:
if x == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
layers += [nn.Conv2d(in_channels, x, kernel_size=3, padding=1),
nn.BatchNorm2d(x),
nn.ReLU(inplace=True)]
in_channels = x
layers += [nn.AvgPool2d(kernel_size=1, stride=1)]
return nn.Sequential(*layers)
cfg = {
'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
vggnet = VGG(cfg['VGG11'])
print(vggnet)
# Hyper-parameters
num_epochs = 10
lr = 1e-3
# Device configuration
device = torch.device('cuda:0')
# optimizer
optimizer = torch.optim.Adam(vggnet.parameters(), lr=lr)
fit(vggnet, num_epochs, optimizer, device)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Week 5
Step2: Below we briefly test our conv function on an image, filtering it with a 3*3 Gaussian kernel.
Step4: Above we implemented convolution with a single input channel and a single output channel. In a CNN the convolutions generally have multiple input and output channels; to implement multi-channel convolution we only need to call the conv function above in a loop.
Step5: Next, let's test the myconv2d function we wrote.
Step6: PyTorch already provides implementations of convolution and convolution layers. With the same input, weights, stride and padding, PyTorch's convolution should give the same result as ours; this can be verified with the code below.
Step7: Homework
Step8: Convolution layer
Step9: Question (a helper for the output-shape formula is sketched after this step list)
Step11: GPU
Step15: With this, a simple CNN model is complete. As in the earlier class material, we need code to train and evaluate the finished network.
Step16: Prepare the data and train the model
Step18: During training we use the cross-entropy loss function and the Adam optimizer to train our classifier network.
Step20: ResNet
Step23: Below is a ResNet implementation for the CIFAR-10 dataset.
Step24: Train the implemented ResNet with the fit function and observe how the results change.
Step25: Homework
Step26: Homework
Step28: VGG
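For the shape question above, a one-line helper restating the output-size formula already used in the convolution code, H_out = (H + 2*padding - kernel_size) // stride + 1:
def conv_output_size(size, kernel_size, stride=1, padding=0):
    # Output spatial size of a convolution along one dimension.
    return (size + 2 * padding - kernel_size) // stride + 1
# e.g. a 32x32 input with a 5x5 kernel, stride 2 and padding 2 gives a 16x16 output
print(conv_output_size(32, 5, stride=2, padding=2))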
|
3,369
|
<ASSISTANT_TASK:>
Python Code:
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_4
OPTIONS (
# TODO: Add DNN options
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
# TODO: Add base features and label
FROM
babyweight.babyweight_data_train
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_4)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
))
%%bigquery
CREATE OR REPLACE MODEL
babyweight.final_model
TRANSFORM(
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
# TODO: Add FEATURE CROSS of:
# is_male, bucketed_mother_age, plurality, and bucketed_gestation_weeks
)
OPTIONS (
# TODO: Add DNN options
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.final_model)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
# TODO Add base features example from original dataset
))
%%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
# TODO Add base features example from simulated dataset
))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lab Task #1
Step2: Get training information and evaluate
Step3: Now let's evaluate our trained model on our eval dataset.
Step4: Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
Step5: Lab Task #2
Step6: Let's first look at our training statistics.
Step7: Now let's evaluate our trained model on our eval dataset.
Step8: Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
Step9: Lab Task #3
Step10: Modify above prediction query using example from simulated dataset
|
3,370
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import xgboost as xgb
import numpy as np
import seaborn as sns
from hyperopt import hp
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
train = pd.read_csv('bike.csv')
train['datetime'] = pd.to_datetime( train['datetime'] )
train['day'] = train['datetime'].map(lambda x: x.day)
def assing_test_samples(data, last_training_day=0.3, seed=1):
days = data.day.unique()
np.random.seed(seed)
np.random.shuffle(days)
    test_days = days[: int(len(days) * last_training_day)]
data['is_test'] = data.day.isin(test_days)
def select_features(data):
columns = data.columns[ (data.dtypes == np.int64) | (data.dtypes == np.float64) | (data.dtypes == np.bool) ].values
return [feat for feat in columns if feat not in ['count', 'casual', 'registered'] and 'log' not in feat ]
def get_X_y(data, target_variable):
features = select_features(data)
X = data[features].values
y = data[target_variable].values
return X,y
def train_test_split(train, target_variable):
df_train = train[train.is_test == False]
df_test = train[train.is_test == True]
X_train, y_train = get_X_y(df_train, target_variable)
X_test, y_test = get_X_y(df_test, target_variable)
return X_train, X_test, y_train, y_test
def fit_and_predict(train, model, target_variable):
X_train, X_test, y_train, y_test = train_test_split(train, target_variable)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
return (y_test, y_pred)
def post_pred(y_pred):
y_pred[y_pred < 0] = 0
return y_pred
def rmsle(y_true, y_pred, y_pred_only_positive=True):
if y_pred_only_positive: y_pred = post_pred(y_pred)
diff = np.log(y_pred+1) - np.log(y_true+1)
mean_error = np.square(diff).mean()
return np.sqrt(mean_error)
assing_test_samples(train)
def etl_datetime(df):
df['year'] = df['datetime'].map(lambda x: x.year)
df['month'] = df['datetime'].map(lambda x: x.month)
df['hour'] = df['datetime'].map(lambda x: x.hour)
df['minute'] = df['datetime'].map(lambda x: x.minute)
df['dayofweek'] = df['datetime'].map(lambda x: x.dayofweek)
df['weekend'] = df['datetime'].map(lambda x: x.dayofweek in [5,6])
etl_datetime(train)
train['{0}_log'.format('count')] = train['count'].map(lambda x: np.log2(x) )
for name in ['registered', 'casual']:
train['{0}_log'.format(name)] = train[name].map(lambda x: np.log2(x+1) )
def objective(space):
model = xgb.XGBRegressor(
max_depth = space['max_depth'],
n_estimators = int(space['n_estimators']),
subsample = space['subsample'],
colsample_bytree = space['colsample_bytree'],
learning_rate = space['learning_rate'],
reg_alpha = space['reg_alpha']
)
X_train, X_test, y_train, y_test = train_test_split(train, 'count')
eval_set = [( X_train, y_train), ( X_test, y_test)]
(_, registered_pred) = fit_and_predict(train, model, 'registered_log')
(_, casual_pred) = fit_and_predict(train, model, 'casual_log')
y_test = train[train.is_test == True]['count']
y_pred = (np.exp2(registered_pred) - 1) + (np.exp2(casual_pred) -1)
score = rmsle(y_test, y_pred)
print "SCORE:", score
return{'loss':score, 'status': STATUS_OK }
space ={
'max_depth': hp.quniform("x_max_depth", 2, 20, 1),
'n_estimators': hp.quniform("n_estimators", 100, 1000, 1),
'subsample': hp.uniform ('x_subsample', 0.8, 1),
'colsample_bytree': hp.uniform ('x_colsample_bytree', 0.1, 1),
'learning_rate': hp.uniform ('x_learning_rate', 0.01, 0.1),
'reg_alpha': hp.uniform ('x_reg_alpha', 0.1, 1)
}
trials = Trials()
best = fmin(fn=objective,
space=space,
algo=tpe.suggest,
max_evals=15,
trials=trials)
print(best)
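# Illustrative sketch: refit a single model with the best hyperparameters found above.
# Note that fmin returns values keyed by the labels passed to hp.* ('x_max_depth',
# 'n_estimators', ...), so they are remapped to XGBRegressor arguments here.
best_model = xgb.XGBRegressor(
    max_depth=int(best['x_max_depth']),
    n_estimators=int(best['n_estimators']),
    subsample=best['x_subsample'],
    colsample_bytree=best['x_colsample_bytree'],
    learning_rate=best['x_learning_rate'],
    reg_alpha=best['x_reg_alpha'],
)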
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modeling
Step2: Tuning hyperparmeters using Bayesian optimization algorithms
|
3,371
|
<ASSISTANT_TASK:>
Python Code:
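# Assumed imports for this simulation (Host and Virus are custom classes defined in earlier,
# omitted cells of the source notebook; the imports below are what this cell's code relies on).
from random import sample, choice
from collections import defaultdict
from scipy.stats import bernoulli
import numpy as np
import matplotlib.pyplot as plt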
hosts = []
n_hosts = 1000
for i in range(n_hosts):
if i < n_hosts / 2:
hosts.append(Host(color='blue'))
else:
hosts.append(Host(color='red'))
# Pick 10 blue hosts and 10 red hosts at random, and infect each with a virus of its own color.
blue_hosts = [h for h in hosts if h.color == 'blue']
blue_hosts = sample(blue_hosts, 10)
blue_virus = Virus(seg1color='blue', seg2color='blue')
for h in blue_hosts:
h.viruses.append(blue_virus)
red_hosts = [h for h in hosts if h.color == 'red']
red_hosts = sample(red_hosts, 10)
red_virus = Virus(seg1color='red', seg2color='red')
for h in red_hosts:
h.viruses.append(red_virus)
p_immune = 1E-3 # 1 = always successful even under immune pressure
# 0 = always unsuccessful under immune pressure.
p_replicate = 0.95 # probability of replication given that a host is infected.
p_contact = 1 - 1E-1/n_hosts # probability of contacting a host of the same color.
p_same_color = 0.99 # probability of successful infection given segment of same color.
p_diff_color = 0.9 # probability of successful infection given segment of different color.
# Set up number of timesteps to run simulation
n_timesteps = 100
# Set up a defaultdict for storing data
data = defaultdict(list)
# Run simulation
for t in range(n_timesteps):
# First part, clear up old infections.
for h in hosts:
h.increment_time()
h.remove_old_viruses()
h.remove_immune_viruses()
# Step to replicate viruses present in hosts.
infected_hosts = [h for h in hosts if h.is_infected()]
for h in infected_hosts:
if bernoulli.rvs(p_replicate): # we probabilistically allow replication to occur
h.replicate_virus()
# Step to transmit the viruses present in hosts.
infected_hosts = [h for h in hosts if h.is_infected()]
num_contacts = 0
for h in infected_hosts:
same_color = bernoulli.rvs(p_contact)
if same_color:
new_host = choice([h2 for h2 in hosts if h2.color == h.color])
num_contacts += 0
else:
new_host = choice([h2 for h2 in hosts if h2.color != h.color])
num_contacts += 1
virus = h.viruses[-1] # choose the newly replicated virus every time.
# Determine whether to transmit or not.
p_transmit = 1
### First, check immunity ###
if virus.seg1color in new_host.immunity:
p_transmit = p_transmit * p_immune
elif virus.seg1color not in new_host.immunity:
pass
### Next, check seg1.
if virus.seg1color == new_host.color:
p_transmit = p_transmit * p_same_color
else:
p_transmit = p_transmit * p_diff_color
### Finally, check seg2.
if virus.seg2color == new_host.color:
p_transmit = p_transmit * p_same_color
else:
p_transmit = p_transmit * p_diff_color
# Determine whether to transmit or not, by using a Bernoulli trial.
transmit = bernoulli.rvs(p_transmit)
# Perform transmission step
if transmit:
new_host.viruses.append(virus)
# # Capture data in the summary graph.
# if virus.is_mixed():
# G.edge[h.color][new_host.color]['mixed'] += 1
# else:
# G.edge[h.color][new_host.color]['clonal'] += 1
else:
pass
### INSPECT THE SYSTEM AND RECORD DATA###
num_immunes = 0 # num immune hosts
num_infected = 0 # num infected hosts
num_blue_immune = 0 # num blue immune hosts
num_red_immune = 0 # num red immune hosts
num_uninfected = 0 # num uninfected hosts
num_mixed = 0 # num mixed viruses
num_original = 0 # num original colour viruses
num_red_virus = 0 # num red viruses
num_blue_virus = 0 # num blue viruses
for h in hosts:
if len(h.immunity) > 0:
num_immunes += 1
if h.is_infected() > 0:
num_infected += 1
if 'blue' in h.immunity:
num_blue_immune += 1
if 'red' in h.immunity:
num_red_immune += 1
if not h.is_infected():
num_uninfected += 1
for v in h.viruses:
if v.is_mixed():
num_mixed += 1
else:
if v.seg1color == 'blue' and v.seg2color == 'blue':
num_blue_virus += 1
elif v.seg1color == 'red' and v.seg2color == 'red':
num_red_virus += 1
num_original += 1
# Record data that was captured
data['n_immune'].append(num_immunes)
data['n_infected'].append(num_infected)
data['n_blue_immune'].append(num_blue_immune)
data['n_red_immune'].append(num_red_immune)
data['n_uninfected'].append(num_uninfected)
data['n_mixed'].append(num_mixed)
data['n_original'].append(num_original)
data['n_red_virus'].append(num_red_virus)
data['n_blue_virus'].append(num_blue_virus)
data['n_contacts'].append(num_contacts)
### INSPECT THE SYSTEM ###
# Reassortment successful in establishing infection or not?
plt.plot(data['n_red_virus'], color='red', label='red')
plt.plot(data['n_blue_virus'], color='blue', label='blue')
plt.plot(data['n_original'], color='black', label='original')
plt.plot(data['n_mixed'], color='purple', label='mixed')
plt.ylabel('Number of Viruses')
plt.xlabel('Time Step')
plt.title('Viruses')
plt.legend()
np.array_equal(np.array(data['n_mixed']), np.zeros(100))
np.where(np.array(data['n_mixed']) == np.max(data['n_mixed']))[0] - np.where(np.array(data['n_original']) == np.max(data['n_original']))[0]
plt.plot(data['n_infected'], color='green', label='infected')
plt.plot(data['n_immune'], color='purple', label='immune')
plt.plot(data['n_blue_immune'], color='blue', label='blue immune')
plt.plot(data['n_red_immune'], color='red', label='red immune')
plt.ylabel('Number')
plt.xlabel('Time Steps')
plt.title('Hosts')
plt.legend()
plt.plot(data['n_contacts'], color='olive', label='contacts')
plt.title('Contact Frequency')
np.where(np.array(data['n_contacts']) >= 1)[0]
import pandas as pd
pd.DataFrame(data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialization
Step2: Parameters
Step3: Run Simulation!
Step4: Result
Step5: Result
Step6: Result
|
3,372
|
<ASSISTANT_TASK:>
Python Code:
import gzip
import pandas
# Download genotyping platform SNPs
adf_files = [
'A-GEOD-8882/A-GEOD-8882.adf.txt',
'A-GEOD-6434/A-GEOD-6434.adf.txt',
'A-AFFY-107/A-AFFY-107.adf.txt',
'A-AFFY-72/A-AFFY-72.adf.txt',
]
base_url = 'http://www.ebi.ac.uk/arrayexpress/files'
for adf_file in adf_files:
! wget --no-verbose --timestamping --directory-prefix download/chips/adf {base_url}/{adf_file}
! gzip --force download/chips/{adf_file.rsplit('/', 1)[-1]}
def parse_adf(path):
    """Parse array description files from ArrayExpress."""
with gzip.open(path, 'rt') as read_file:
for i, line in enumerate(read_file):
if line.strip() == '[main]':
skip = i + 1
break
return pandas.read_table(path, skiprows=skip)
#Illumina HumanHap550 Platform: https://www.ebi.ac.uk/arrayexpress/arrays/A-GEOD-6434/
df = parse_adf('download/chips/A-GEOD-6434.adf.txt.gz')
rsids = df['Reporter Database Entry [dbsnp]']
rsids = rsids[rsids.str.startswith('rs').fillna(False)]
with open("download/chips/hh550-snp-for-bed.txt", 'w') as write_file:
write_file.write('\n'.join(rsids))
'{} SNPs with rs#s on HumanHap550'.format(len(rsids))
#Illumina HumanOmni1-Quad: https://www.ebi.ac.uk/arrayexpress/arrays/A-GEOD-8882/
df = parse_adf('download/chips/A-GEOD-8882.adf.txt.gz')
rsids = df['Reporter Database Entry [dbsnp]']
rsids = rsids[rsids.str.startswith('rs').fillna(False)]
with open('download/chips/ho1-snp-for-bed.txt', 'w') as write_file:
write_file.write('\n'.join(rsids))
'{} SNPs with rs#s on HumanOmni1'.format(len(rsids))
#Affy 500k Array set is made up of 2 arrays (Sty and Nsp)
#Sty Array: https://www.ebi.ac.uk/arrayexpress/arrays/A-AFFY-72/
df = parse_adf('download/chips/A-AFFY-72.adf.txt.gz')
rsids = df['Composite Element Database Entry[ncbi_dbsnp:126:126]']
rsids_0 = rsids[rsids.str.startswith('rs').fillna(False)]
#Nsp Array: https://www.ebi.ac.uk/arrayexpress/arrays/A-AFFY-107/
df = parse_adf('download/chips/A-AFFY-107.adf.txt.gz')
rsids = df['Composite Element Database Entry[dbsnp:v128]']
rsids_1 = rsids[rsids.str.startswith('rs').fillna(False)]
rsids = sorted(set(rsids_0) | set(rsids_1))
with open('download/chips/affy500-snp-for-bed.txt', 'w') as write_file:
write_file.write('\n'.join(rsids))
'{} SNPs with rs#s on Affy500k'.format(len(rsids))
url = 'https://raw.githubusercontent.com/dhimmel/kg/de9c303442f01acde89aa956acb2f53888295169/data/common-SNPs/common-snps.bed.gz'
! wget --no-verbose --output-document download/platforms/bed/kg3.bed.gz {url}
# Download Entrez mappings from http://dx.doi.org/10.6084/m9.figshare.103113
url = 'http://files.figshare.com/230645/entrez.txt'
entrez_df = pandas.read_table(url, names=['chrom', 'chromStart', 'chromEnd', 'name'])
entrez_df['chrom'] = 'chr' + entrez_df['chrom'].astype(str)
entrez_df = entrez_df.replace({'chrom': {'chr23': 'chrX', 'chr24': 'chrY'}})
entrez_df.to_csv('download/entrez-genes.bed', index=False, header=False, sep='\t')
entrez_df.tail(2)
platforms = 'hh550', 'ho1', 'affy500', 'exac', 'kg3'
# Base pairs added upstream and downstream of each entry
window = 10000
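# `bedtools window -w <window> -c` (used below) reports, for every gene interval in the -a BED
# file, how many platform SNPs from the -b file fall within the gene body plus this flank on
# either side -- i.e. the per-gene SNP counts that get merged further down.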
# Count the overlaps for genotyping platforms
for platform in platforms:
! bedtools window -w {window} -c -a download/entrez-genes.bed \
-b download/platforms/bed/{platform}.bed.gz > data/platforms/{platform}-entrez.bed
# Read SNPs per Gene
columns = ['chromosome', 'chromosome_start', 'chromosome_end', 'entrez_gene_id', 'snps']
dfs = list()
for platform in platforms:
path = 'data/platforms/{}-entrez.bed'.format(platform)
columns[-1] = 'snps_{}'.format(platform)
df = pandas.read_table(path, names=columns)
dfs.append(df)
count_df = dfs[0]
for df in dfs[1:]:
count_df = count_df.merge(df)
count_df.to_csv('data/platforms/combined.tsv', index=False, sep='\t')
count_df.head(2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Create BED files of SNPs for the genotyping chips
Step3: Manual step
Step4: Download Entrez Gene locations
Step5: Compute SNPs per gene using bedtools
Step6: Combine to a single file
|
3,373
|
<ASSISTANT_TASK:>
Python Code:
import nltk
nltk.help.upenn_tagset()
nltk.help.upenn_tagset('WP$')
nltk.help.upenn_tagset('PDT')
nltk.help.upenn_tagset('DT')
nltk.help.upenn_tagset('POS')
nltk.help.upenn_tagset('RBR')
nltk.help.upenn_tagset('RBS')
nltk.help.upenn_tagset('MD')
from pprint import pprint
sent = 'Beautiful is better than ugly.'
tokens = nltk.tokenize.word_tokenize(sent)
pos_tags = nltk.pos_tag(tokens)
pprint(pos_tags)
truths = [[(u'Pierre', u'NNP'), (u'Vinken', u'NNP'), (u',', u','), (u'61', u'CD'),
(u'years', u'NNS'), (u'old', u'JJ'), (u',', u','), (u'will', u'MD'),
(u'join', u'VB'), (u'the', u'DT'), (u'board', u'NN'), (u'as', u'IN'),
(u'a', u'DT'), (u'nonexecutive', u'JJ'), (u'director', u'NN'),
(u'Nov.', u'NNP'), (u'29', u'CD'), (u'.', u'.')],
[(u'Mr.', u'NNP'), (u'Vinken', u'NNP'), (u'is', u'VBZ'), (u'chairman', u'NN'),
(u'of', u'IN'), (u'Elsevier', u'NNP'), (u'N.V.', u'NNP'), (u',', u','),
(u'the', u'DT'), (u'Dutch', u'NNP'), (u'publishing', u'VBG'),
(u'group', u'NN'), (u'.', u'.'), (u'Rudolph', u'NNP'), (u'Agnew', u'NNP'),
(u',', u','), (u'55', u'CD'), (u'years', u'NNS'), (u'old', u'JJ'),
(u'and', u'CC'), (u'former', u'JJ'), (u'chairman', u'NN'), (u'of', u'IN'),
(u'Consolidated', u'NNP'), (u'Gold', u'NNP'), (u'Fields', u'NNP'),
(u'PLC', u'NNP'), (u',', u','), (u'was', u'VBD'), (u'named', u'VBN'),
(u'a', u'DT'), (u'nonexecutive', u'JJ'), (u'director', u'NN'), (u'of', u'IN'),
(u'this', u'DT'), (u'British', u'JJ'), (u'industrial', u'JJ'),
(u'conglomerate', u'NN'), (u'.', u'.')],
[(u'A', u'DT'), (u'form', u'NN'),
(u'of', u'IN'), (u'asbestos', u'NN'), (u'once', u'RB'), (u'used', u'VBN'),
(u'to', u'TO'), (u'make', u'VB'), (u'Kent', u'NNP'), (u'cigarette', u'NN'),
(u'filters', u'NNS'), (u'has', u'VBZ'), (u'caused', u'VBN'), (u'a', u'DT'),
(u'high', u'JJ'), (u'percentage', u'NN'), (u'of', u'IN'),
(u'cancer', u'NN'), (u'deaths', u'NNS'),
(u'among', u'IN'), (u'a', u'DT'), (u'group', u'NN'), (u'of', u'IN'),
(u'workers', u'NNS'), (u'exposed', u'VBN'), (u'to', u'TO'), (u'it', u'PRP'),
(u'more', u'RBR'), (u'than', u'IN'), (u'30', u'CD'), (u'years', u'NNS'),
(u'ago', u'IN'), (u',', u','), (u'researchers', u'NNS'),
(u'reported', u'VBD'), (u'.', u'.')]]
import pandas as pd
def proj(pair_list, idx):
return [p[idx] for p in pair_list]
data = []
for truth in truths:
sent_toks = proj(truth, 0)
true_tags = proj(truth, 1)
nltk_tags = nltk.pos_tag(sent_toks)
for i in range(len(sent_toks)):
# print('{}\t{}\t{}'.format(sent_toks[i], true_tags[i], nltk_tags[i][1])) # if you do not want to use DataFrame
data.append( (sent_toks[i], true_tags[i], nltk_tags[i][1] ) )
headers = ['token', 'true_tag', 'nltk_tag']
df = pd.DataFrame(data, columns = headers)
df
# this finds out the tokens that the true_tag and nltk_tag are different.
df[df.true_tag != df.nltk_tag]
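# A quick, illustrative agreement figure over these three example sentences only
# (not a benchmark of the tagger):
print('token-level agreement: {:.1%}'.format((df.true_tag == df.nltk_tag).mean()))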
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Or this summary table (also c.f. https
Step2: Various algorithms can be used to perform POS tagging. In general, the accuracy is pretty high (state-of-the-art can reach approximately 97%). However, there are still incorrect tags. We demonstrate this below.
|
3,374
|
<ASSISTANT_TASK:>
Python Code:
# Importing libraries
%pylab inline
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from sklearn import preprocessing
import numpy as np
# Convert variable data into categorical, continuous, discrete,
# and dummy variable lists the following into a dictionary
s = ["Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41",
"Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5",
"Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32"]
varTypes = dict()
varTypes['categorical'] = s[0].split(', ')
varTypes['continuous'] = s[1].split(', ')
varTypes['discrete'] = s[2].split(', ')
varTypes['dummy'] = ["Medical_Keyword_"+str(i) for i in range(1,49)]
#Prints out each of the the variable types as a check
#for i in iter(varTypes['dummy']):
#print i
#Import training data
d_raw = pd.read_csv('prud_files/train.csv')
d = d_raw.copy()
len(d.columns)
# Get all the columns that have NaNs
d = d_raw.copy()
a = pd.isnull(d).sum()
nullColumns = a[a>0].index.values
#for c in nullColumns:
#d[c].fillna(-1)
#Determine the min and max values for the NaN columns
a = pd.DataFrame(d, columns=nullColumns).describe()
a_min = a[3:4]
a_max = a[7:8]
nullList = ['Family_Hist_4',
'Medical_History_1',
'Medical_History_10',
'Medical_History_15',
'Medical_History_24',
'Medical_History_32']
pd.DataFrame(a_max, columns=nullList)
# Convert all NaNs to -1 and sum up all medical keywords across columns
df = d.fillna(-1)
b = pd.DataFrame(df[varTypes["dummy"]].sum(axis=1), columns=["Medical_Keyword_Sum"])
df= pd.concat([df,b], axis=1, join='outer')
#Turn split train to test on or off.
#If on, 10% of the dataset is used for feature training
#If off, training set is loaded from file
splitTrainToTest = 1
if(splitTrainToTest):
d_gb = df.groupby("Response")
df_test = pd.DataFrame()
for name, group in d_gb:
df_test = pd.concat([df_test, group[:len(group)/10]], axis=0, join='outer')
print "test data is 10% training data"
else:
d_test = pd.read_csv('prud_files/test.csv')
df_test = d_test.fillna(-1)
b = pd.DataFrame(df[varTypes["dummy"]].sum(axis=1), columns=["Medical_Keyword_Sum"])
df_test= pd.concat([df_test,b], axis=1, join='outer')
print "test data is prud_files/test.csv"
df_cat = df[["Id","Response"]+varTypes["categorical"]]
df_disc = df[["Id","Response"]+varTypes["discrete"]]
df_cont = df[["Id","Response"]+varTypes["continuous"]]
df_dummy = df[["Id","Response"]+varTypes["dummy"]]
df_cat_test = df_test[["Id","Response"]+varTypes["categorical"]]
df_disc_test = df_test[["Id","Response"]+varTypes["discrete"]]
df_cont_test = df_test[["Id","Response"]+varTypes["continuous"]]
df_dummy_test = df_test[["Id","Response"]+varTypes["dummy"]]
## Extract categories of each column
df_n = df[["Response", "Medical_Keyword_Sum"]+varTypes["categorical"]+varTypes["discrete"]+varTypes["continuous"]].copy()
df_test_n = df_test[["Response","Medical_Keyword_Sum"]+varTypes["categorical"]+varTypes["discrete"]+varTypes["continuous"]].copy()
#Get all the Product Info 2 categories
a = pd.get_dummies(df["Product_Info_2"]).columns.tolist()
norm_PI2_dict = dict()
#Create an enumerated dictionary of Product Info 2 categories
i=1
for c in a:
norm_PI2_dict.update({c:i})
i+=1
print norm_PI2_dict
df_n = df_n.replace(to_replace={'Product_Info_2':norm_PI2_dict})
df_test_n = df_test_n.replace(to_replace={'Product_Info_2':norm_PI2_dict})
df_n
# normalizes a single dataframe column and returns the result
def normalize_df(d):
min_max_scaler = preprocessing.MinMaxScaler()
x = d.values.astype(np.float)
#return pd.DataFrame(min_max_scaler.fit_transform(x))
return pd.DataFrame(min_max_scaler.fit_transform(x))
def normalize_cat(d):
    for x in varTypes["categorical"]:
        try:
            a = pd.DataFrame(normalize_df(d[x]))
            a.columns=[str("n"+x)]
            d = pd.concat([d, a], axis=1, join='outer')
        except Exception as e:
            print e.args
            print "Error on "+str(x)+" w error: "+str(e)
    return d
def normalize_disc(d_disc):
for x in varTypes["discrete"]:
try:
a = pd.DataFrame(normalize_df(d_disc[x]))
a.columns=[str("n"+x)]
d_disc = pd.concat([d_disc, a], axis=1, join='outer')
except Exception as e:
print e.args
print "Error on "+str(x)+" w error: "+str(e)
return d_disc
# t= categorical, discrete, continuous
def normalize_cols(d, t = "categorical"):
for x in varTypes[t]:
try:
a = pd.DataFrame(normalize_df(d[x]))
a.columns=[str("n"+x)]
            d = pd.concat([d, a], axis=1, join='outer')
except Exception as e:
print e.args
print "Error on "+str(x)+" w error: "+str(e)
    return d
def normalize_response(d):
a = pd.DataFrame(normalize_df(d["Response"]))
a.columns=["nResponse"]
#d_cat = pd.concat([d_cat, a], axis=1, join='outer')
return a
df_n_2 = df_n.copy()
df_n_test_2 = df_test_n.copy()
df_n_2 = df_n_2[["Response"]+varTypes["categorical"]+varTypes["discrete"]]
df_n_test_2 = df_n_test_2[["Response"]+varTypes["categorical"]+varTypes["discrete"]]
df_n_2 = df_n_2.apply(normalize_df, axis=1)
df_n_test_2 = df_n_test_2.apply(normalize_df, axis=1)
df_n_3 = pd.concat([df["Id"],df_n["Medical_Keyword_Sum"],df_n_2, df_n[varTypes["continuous"]]],axis=1,join='outer')
df_n_test_3 = pd.concat([df_test["Id"],df_test_n["Medical_Keyword_Sum"],df_n_test_2, df_test_n[varTypes["continuous"]]],axis=1,join='outer')
train_data = df_n_3.values
test_data = df_n_test_3.values
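# Note (assumption): X_train/Y_train/X_test/Y_test used by the classifiers below are not created
# in this excerpt; they are presumably split out of train_data/test_data in an omitted cell,
# e.g. roughly X_train, Y_train = train_data[:, 2:], train_data[:, 0]. Likewise, accuracy_score
# is imported a few cells later than its first use here.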
from sklearn import linear_model
clf = linear_model.Lasso(alpha = 0.1)
clf.fit(X_train, Y_train)
pred = clf.predict(X_test)
print accuracy_score(pred, Y_test)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators = 1)
#model = model.fit(train_data[0:,2:],train_data[0:,0])
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
clf = GaussianNB()
clf.fit(train_data[0:,2:],train_data[0:,0])
pred = clf.predict(X_test)
print accuracy_score(pred, Y_test)
from sklearn.metrics import accuracy_score
df_n.columns.tolist()
d_cat = df_cat.copy()
d_cat_test = df_cat_test.copy()
d_cont = df_cont.copy()
d_cont_test = df_cont_test.copy()
d_disc = df_disc.copy()
d_disc_test = df_disc_test.copy()
#df_cont_n = normalize_cols(d_cont, "continuous")
#df_cont_test_n = normalize_cols(d_cont_test, "continuous")
df_cat_n = normalize_cols(d_cat, "categorical")
df_cat_test_n = normalize_cols(d_cat_test, "categorical")
df_disc_n = normalize_cols(d_disc, "discrete")
df_disc_test_n = normalize_cols(d_disc, "discrete")
a = df_cat_n.iloc[:,62:]
# TODO: Clump into function
#rows are normalized into binary columns of groupings
# Define various group by data streams
df = d
gb_PI2 = df.groupby('Product_Info_1')
gb_PI2 = df.groupby('Product_Info_2')
gb_Ins_Age = df.groupby('Ins_Age')
gb_Ht = df.groupby('Ht')
gb_Wt = df.groupby('Wt')
gb_response = df.groupby('Response')
#Outputs rows the differnet categorical groups
for c in df.columns:
if (c in varTypes['categorical']):
if(c != 'Id'):
a = [ str(x)+", " for x in df.groupby(c).groups ]
print c + " : " + str(a)
df_prod_info = pd.DataFrame(d, columns=["Response"]+ [ "Product_Info_"+str(x) for x in range(1,8)])
df_emp_info = pd.DataFrame(d, columns=["Response"]+ [ "Employment_Info_"+str(x) for x in range(1,6)])
# continous
df_bio = pd.DataFrame(d, columns=["Response", "Ins_Age", "Ht", "Wt","BMI"])
# all the values are discrete (0 or 1)
df_med_kw = pd.DataFrame(d, columns=["Response"]+ [ "Medical_Keyword_"+str(x) for x in range(1,49)])
plt.figure(0)
plt.subplot(121)
plt.title("Categorical - Histogram for Risk Response")
plt.xlabel("Risk Response (1-7)")
plt.ylabel("Frequency")
plt.hist(df.Response)
plt.savefig('images/hist_Response.png')
print df.Response.describe()
print ""
plt.subplot(122)
plt.title("Normalized - Histogram for Risk Response")
plt.xlabel("Normalized Risk Response (1-7)")
plt.ylabel("Frequency")
plt.hist(df_cat_n.nResponse)
plt.savefig('images/hist_norm_Response.png')
print df_cat_n.nResponse.describe()
print ""
def plotContinuous(d, t):
plt.title("Continuous - Histogram for "+ str(t))
plt.xlabel("Normalized "+str(t)+"[0,1]")
plt.ylabel("Frequency")
plt.hist(d)
plt.savefig("images/hist_"+str(t)+".png")
#print df.iloc[:,:1].describe()
print ""
plt.figure(1)
plotContinuous(df.Ins_Age, "Ins_Age")
plt.show()
df_disc.describe()[7:8]
plt.figure(1)
plt.title("Continuous - Histogram for Ins_Age")
plt.xlabel("Normalized Ins_Age [0,1]")
plt.ylabel("Frequency")
plt.hist(df.Ins_Age)
plt.savefig('images/hist_Ins_Age.png')
print df.Ins_Age.describe()
print ""
plt.figure(2)
plt.title("Continuous - Histogram for BMI")
plt.xlabel("Normalized BMI [0,1]")
plt.ylabel("Frequency")
plt.hist(df.BMI)
plt.savefig('images/hist_BMI.png')
print df.BMI.describe()
print ""
plt.figure(3)
plt.title("Continuous - Histogram for Wt")
plt.xlabel("Normalized Wt [0,1]")
plt.ylabel("Frequency")
plt.hist(df.Wt)
plt.savefig('images/hist_Wt.png')
print df.Wt.describe()
print ""
plt.show()
plt.show()
k=1
for i in range(1,8):
'''
print "The iteration is: "+str(i)
print df['Product_Info_'+str(i)].describe()
print ""
'''
plt.figure(i)
if(i == 4):
plt.title("Continuous - Histogram for Product_Info_"+str(i))
plt.xlabel("Normalized value: [0,1]")
plt.ylabel("Frequency")
plt.hist(df['Product_Info_'+str(i)])
plt.savefig('images/hist_Product_Info_'+str(i)+'.png')
else:
if(i != 2):
plt.subplot(1,2,1)
plt.title("Cat-Hist- Product_Info_"+str(i))
plt.xlabel("Categories")
plt.ylabel("Frequency")
plt.hist(df['Product_Info_'+str(i)])
plt.savefig('images/hist_Product_Info_'+str(i)+'.png')
plt.subplot(1,2,2)
plt.title("Normalized - Histogram of Product_Info_"+str(i))
plt.xlabel("Categories")
plt.ylabel("Frequency")
plt.hist(df_cat_n['nProduct_Info_'+str(i)])
plt.savefig('images/hist_norm_Product_Info_'+str(i)+'.png')
elif(i == 2):
plt.title("Cat-Hist Product_Info_"+str(i))
plt.xlabel("Categories")
plt.ylabel("Frequency")
df.Product_Info_2.value_counts().plot(kind='bar')
plt.savefig('images/hist_Product_Info_'+str(i)+'.png')
plt.show()
catD = df.loc[:,varTypes['categorical']]
contD = df.loc[:,varTypes['continuous']]
disD = df.loc[:,varTypes['discrete']]
dummyD = df.loc[:,varTypes['dummy']]
respD = df.loc[:,['id','Response']]
prod_info = [ "Product_Info_"+str(i) for i in range(1,8)]
a = catD.loc[:, prod_info[1]]
stats = catD.groupby(prod_info[1]).describe()
c = gb_PI2.Response.count()
plt.figure(0)
plt.scatter(c[0],c[1])
plt.figure(0)
plt.title("Histogram of "+"Product_Info_"+str(i))
plt.xlabel("Categories " + str((a.describe())['count']))
plt.ylabel("Frequency")
for i in range(1,8):
a = catD.loc[:, "Product_Info_"+str(i)]
if(i is not 4):
print a.describe()
print ""
plt.figure(i)
plt.title("Histogram of "+"Product_Info_"+str(i))
plt.xlabel("Categories " + str((catD.groupby(key).describe())['count']))
plt.ylabel("Frequency")
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
if a.dtype in (np.int64, np.float, float, int):
a.hist()
# Random functions
#catD.Product_Info_1.describe()
#catD.loc[:, prod_info].groupby('Product_Info_2').describe()
#df[varTypes['categorical']].hist()
catD.head(5)
#Exploration of the discrete data
disD.describe()
disD.head(5)
#Iterate through each categorical column of data
#Perform a 2D histogram later
i=0
for key in varTypes['categorical']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
plt.title("Histogram of "+str(key))
plt.xlabel("Categories " + str((df.groupby(key).describe())['count']))
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
if df[key].dtype in (np.int64, np.float, float, int):
df[key].hist()
i+=1
#Iterate through each 'discrete' column of data
#Perform a 2D histogram later
i=0
for key in varTypes['discrete']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
fig, axes = plt.subplots(nrows = 1, ncols = 2)
#Histogram based on normalized value counts of the data set
disD[key].value_counts().hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#Cumulative histogram based on normalized value counts of the data set
disD[key].value_counts().hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
i+=1
#2D Histogram
i=0
for key in varTypes['categorical']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
x = catD[key].value_counts(normalize=True)
y = df['Response']
plt.hist2d(x[1], y, bins=40, norm=LogNorm())
plt.colorbar()
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
i+=1
#Iterate through each categorical column of data
#Perform a 2D histogram later
i=0
for key in varTypes['categorical']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
if df[key].dtype in (np.int64, np.float, float, int):
#(1.*df[key].value_counts()/len(df[key])).hist()
df[key].value_counts(normalize=True).plot(kind='bar')
i+=1
df.loc[:, 'Product_Info_1']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Separation of columns into categorical, continuous and discrete
Step2: Importing life insurance data set
Step3: Pre-processing raw dataset for NaN values
Step4: Create or import the test data set
Step5: Data transformation and extraction
Step6: Categorical normalization
Step7: Grouping of various categorical data sets
Step8: Histograms and descriptive statistics for Product_Info_1-7
Step9: Split dataframes into categorical, continuous, discrete, dummy, and response
Step10: Descriptive statistics and scatter plot relating Product_Info_2 and Response
|
3,375
|
<ASSISTANT_TASK:>
Python Code:
from liquidSVM import *
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Load test and training data
reg = LiquidData('reg-1d')
model = lsSVM(reg.train,display=1)
result, err = model.test(reg.test)
err[0,0]
plt.plot(reg.test.data, reg.test.target, '.')
x = np.linspace(-.2,1.4)
y = model.predict(x)
plt.plot(x, y, 'r-', linewidth=2)
plt.ylim(-.2,.8);
model = lsSVM(reg, display=1)
result, err = model.lastResult
err[0,0]
banana = LiquidData('banana-mc')
model = mcSVM(banana.train)
print(banana.train.data.T.shape)
plt.scatter(banana.train.data[:,0],banana.train.data[:,1], c=banana.train.target)
x = np.arange(-1.1,1.1,.05)
X,Y = np.meshgrid(x, x)
z = np.array(np.meshgrid(x,x)).reshape(2,-1).T
print(x.shape,X.shape, z.shape)
Z = model.predict(z).reshape(len(x),len(x))
#contour(x,x,z, add=T, levels=1:4,col=1,lwd=4)
CS = plt.contour(X, Y, Z, 4, linewidth=4)
result,err = model.test(banana.test)
err[:,0]
covtype = LiquidData('covtype.5000')
model = mcSVM(covtype, display=1, useCells=True)
result, err = model.lastResult
err[0,0]
co = LiquidData('covtype.10000')
%time mcSVM(co.train);
%time mcSVM(co.train, useCells=True);
co = LiquidData('covtype.50000')
%time mcSVM(co.train,useCells=True);
co = LiquidData('covtype-full')
%time mcSVM(co.train,useCells=True);
banana = LiquidData('banana-mc')
for mcType in ["AvA_hinge", "OvA_hinge", "AvA_ls", "OvA_hinge"]:
print("\n======", mcType, "======")
model = mcSVM(banana.train, mcType=mcType)
result, err = model.test(banana.test)
print("global err:", err[0,0])
print("task errs:", err[1:,0])
print(result[:3,])
banana_bc = LiquidData('banana-bc')
m = mcSVM(banana_bc.train, mcType="OvA_ls",display=1)
result, err = m.test(banana_bc.test)
probs = (result+1) / 2.0
print(probs[:5,:])
plt.hist(probs, 100);
banana = LiquidData('banana-mc')
m = mcSVM(banana.train, mcType="OvA_ls",display=1)
result, err = m.test(banana.test)
probs = (result[:,1:]+1) / 2.0
print(result.shape, probs.shape)
print(np.hstack((result,probs))[:5,:].round(2))
reg = LiquidData('reg-1d')
quantiles_list = [ 0.05, 0.1, 0.5, 0.9, 0.95 ]
model = qtSVM(reg.train, weights=quantiles_list)
result, err = model.test(reg.test)
err[:,0]
plt.plot(reg.test.data,reg.test.target,'.')
plt.ylim(-.2,.8)
x = np.arange(-0.2,1.4,0.05).reshape((-1,1))
lines = model.predict(x)
for i in range(len(quantiles_list)):
plt.plot(x, lines[:,i], '-', linewidth=2)
reg = LiquidData('reg-1d')
expectiles_list = [ .05, 0.1, 0.5, 0.9, 0.95 ]
model = exSVM(reg.train, weights=expectiles_list)
result, err = model.test(reg.test)
err[:,0]
plt.plot(reg.test.data,reg.test.target,'.')
plt.ylim(-.2,.8)
x = np.arange(-0.2,1.4,0.05).reshape((-1,1))
lines = model.predict(x)
for i in range(len(expectiles_list)):
plt.plot(x, lines[:,i], '-', linewidth=2)
banana = LiquidData('banana-bc')
constraint = 0.08
constraintFactors = np.array([1/2,2/3,1,3/2,2])
# class=-1 specifies the normal class
model = nplSVM(banana.train, nplClass=-1, constraintFactors=constraintFactors, constraint=constraint)
result, err = model.test(banana.test)
false_alarm_rate = (result[banana.test.target==-1,]==1).mean(0)
detection_rate = (result[banana.test.target==1,]==1).mean(0)
np.vstack( (constraint * constraintFactors,false_alarm_rate,detection_rate) ).round(3)
banana = LiquidData('banana-bc')
model = rocSVM(banana.train,display=1)
result, err = model.test(banana.test)
false_positive_rate = (result[banana.test.target==-1,:]==1).mean(0)
detection_rate = (result[banana.test.target==1,]==1).mean(0)
print(err.T.round(3))
print("1-DR:", 1-detection_rate)
print("FPR:",false_positive_rate)
plt.plot(false_positive_rate, detection_rate, 'x-')
plt.xlim(0,1); plt.ylim(0,1)
plt.plot([0,1],[0,1], '--');
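# Optional, illustrative: a rough area under the plotted ROC working points via the trapezoidal
# rule (the exact value depends on which thresholds the solver reports).
order = np.argsort(false_positive_rate)
print("approx. AUC:", np.trapz(detection_rate[order], false_positive_rate[order]))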
LiquidData('reg-1d');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some stuff we need for this notebook
Step2: LS-Regression
Step3: Now reg.train contains the training data and reg.test the testing data.
Step4: Now you can test with any test set
Step5: We also can plot the regression
Step6: As a convenience, since reg already contains .train and .test
Step7: Multi-class
Step8: The following performs multi-class classification
Step9: In this case err[
Step10: Cells
Step11: A major issue with SVMs is that for larger sample sizes the kernel matrix
Step12: This is about 5 times faster! (The user time is about three times the elapsed time since we are using 2 threads.)
Step13: Note that with this data set useCells=False here only works if your system has enough free memory (~26GB).
Step14: If you run into memory issues turn cells on
Step15: The first element in the errors gives the overall test error.
Step16: And for multi-class classification it is similar
Step17: Quantile regression
Step18: In this plot you see estimations for two lower and upper quantiles as well as the median
Step19: Neyman-Pearson-Learning
Step20: You can see that the false alarm rate in the test set meet the
Step21: This shows nice learning, since the ROC curve is near the north-west corner.
|
3,376
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(dataset.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
x1 = tf.reshape(x1, (-1,4,4,512))
x1 = tf.layers.batch_normalization(x1,training=training)
x1 = tf.maximum(alpha * x1, x1)
        # 4x4x512 -> 8x8x256
        x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=training)
        x2 = tf.maximum(alpha * x2, x2)
        # 8x8x256 -> 16x16x128
        x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=training)
        x3 = tf.maximum(alpha * x3, x3)
        # Output layer: 16x16x128 -> 32x32x3
        logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
out = tf.tanh(logits)
return out
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha*x,x)
# 16*16*64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
        bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha*bn2,bn2)
#8*8*128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
        bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha*bn3,bn3)
#4*4*256
flat = tf.reshape(relu3 , (-1, 4*4*256))
logits = tf.layers.dense(flat,1)
out = tf.sigmoid(logits)
return out, logits
def model_loss(input_real, input_z, output_dim, alpha=0.2):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param output_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=0.2)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
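# Note: these values are kept as given; DCGAN-style setups commonly use a smaller learning rate
# (around 0.0002) with beta1=0.5, which tends to make adversarial training more stable.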
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Step6: Generator
Step7: Discriminator
Step9: Model Loss
Step11: Optimizers
Step12: Building the model
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
|
3,377
|
<ASSISTANT_TASK:>
Python Code:
# import word2vec model from gensim
from gensim.models.word2vec import Word2Vec
# load pre-trained model
model = Word2Vec.load_word2vec_format('eswikinews.bin', binary=True)
def presidents_comp(country):
    ### Your code goes here
return []
for country in ['colombia', 'venezuela', 'ecuador', 'brasil', 'argentina', 'chile']:
print country
for president in presidents_comp(country):
print ' ', president
def presidents_analogy(country):
    ### Your code goes here
return []
for country in ['colombia', 'venezuela', 'ecuador', 'brasil', 'argentina', 'chile']:
print country
for president in presidents_analogy(country):
print ' ', president
def antonimo(palabra):
    ### Your code goes here
return '-'
for palabra in ['blanco', 'menor', 'rapido', 'arriba']:
print palabra, antonimo(palabra)
model.doesnt_match("azul rojo abajo verde".split())
print model.similarity('azul', 'rojo')
print model.similarity('azul', 'abajo')
import numpy as np
def no_es_como_las_otras(lista):
    ### Your code goes here
return '-'
print no_es_como_las_otras("azul rojo abajo verde".split())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Comparando composicionalidad y analogía.
Step2: El siguiente paso es usar analogías para encontrar el presidente de un país dado.
Step3: ¿Cual versión funciona mejor? Explique claramente. ¿Por qué cree que este es le caso?
Step4: Busque más ejemplos en los que funcione y otros en los que no funcione. Explique.
Step5: La idea es implementar la misma funcionalidad por nuestra cuenta. La condición es que solo podemos usar la función
|
3,378
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image("../docs/images/gds.png")
import pp
c = pp.Component()
w = pp.c.waveguide(width=0.6)
wr = w.ref()
c.add(wr)
pp.qp(c)
c = pp.Component()
wr = c << pp.c.waveguide(width=0.6)
pp.qp(c)
c = pp.Component()
wr1 = c << pp.c.waveguide(width=0.6)
wr2 = c << pp.c.waveguide(width=0.6)
wr2.movey(10)
pp.qp(c)
@pp.autoname
def dbr_cell(w1=0.5, w2=0.6, l1=0.2, l2=0.4, waveguide_function=pp.c.waveguide):
c = pp.Component()
c1 = c << waveguide_function(length=l1, width=w1)
c2 = c << waveguide_function(length=l2, width=w2)
c2.connect(port="W0", destination=c1.ports["E0"])
c.add_port("W0", port=c1.ports["W0"])
c.add_port("E0", port=c2.ports["E0"])
return c
w1 = 0.5
w2 = 0.6
l1 = 0.2
l2 = 0.4
n = 3
waveguide_function = pp.c.waveguide
c = pp.Component()
cell = dbr_cell(w1=w1, w2=w2, l1=l1, l2=l2, waveguide_function=waveguide_function)
pp.qp(cell)
cell_array = c.add_array(device=cell, columns=n, rows=1, spacing=(l1 + l2, 100))
pp.qp(c)
p0 = c.add_port("W0", port=cell.ports["W0"])
p1 = c.add_port("E0", port=cell.ports["E0"])
p1.midpoint = [(l1 + l2) * n, 0]
pp.qp(c)
pp.qp(pp.c.bend_circular())
c = pp.Component("sample_reference_connect")
mmi = c << pp.c.mmi1x2()
b = c << pp.c.bend_circular()
b.connect("W0", destination=mmi.ports["E1"])
pp.qp(c)
import pp
bend180 = pp.c.bend_circular180()
wg_heater = pp.c.waveguide_heater()
wg = pp.c.waveguide()
# Define a map between symbols and (component, input port, output port)
string_to_device_in_out_ports = {
"A": (bend180, "W0", "W1"),
"B": (bend180, "W1", "W0"),
"H": (wg_heater, "W0", "E0"),
"-": (wg, "W0", "E0"),
}
# Generate a sequence
# This is simply a chain of characters. Each of them represents a component
# with a given input and a given output
sequence = "AB-H-H-H-H-BA"
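# Reading the string left to right, each character instantiates its mapped component and connects
# the named input port to the previous component's output port, so this chains 180-degree bends
# (A/B), plain straights (-) and four heater sections (H) into one composite device.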
component = pp.c.component_sequence(sequence, string_to_device_in_out_ports)
pp.qp(component)
pp.show(component)
from pp.components.waveguide import _arbitrary_straight_waveguide
@pp.autoname
def phase_modulator_waveguide(length, wg_width=0.5, cladding=3.0, si_outer_clad=1.0):
    """Phase modulator waveguide mockup"""
a = wg_width / 2
b = a + cladding
c = b + si_outer_clad
windows = [
(-c, -b, pp.LAYER.WG),
(-b, -a, pp.LAYER.SLAB90),
(-a, a, pp.LAYER.WG),
(a, b, pp.LAYER.SLAB90),
(b, c, pp.LAYER.WG),
]
component = _arbitrary_straight_waveguide(length=length, windows=windows)
return component
@pp.autoname
def test_cutback_phase(straight_length=100.0, bend_radius=10.0, n=2):
bend180 = pp.c.bend_circular(radius=bend_radius, start_angle=-90, theta=180)
pm_wg = phase_modulator_waveguide(length=straight_length)
wg_short = pp.c.waveguide(length=1.0)
wg_short2 = pp.c.waveguide(length=2.0)
wg_heater = pp.c.waveguide_heater(length=10.0)
taper = pp.c.taper_strip_to_ridge()
# Define a map between symbols and (component, input port, output port)
string_to_device_in_out_ports = {
"I": (taper, "1", "wg_2"),
"O": (taper, "wg_2", "1"),
"S": (wg_short, "W0", "E0"),
"P": (pm_wg, "W0", "E0"),
"A": (bend180, "W0", "W1"),
"B": (bend180, "W1", "W0"),
"H": (wg_heater, "W0", "E0"),
"-": (wg_short2, "W0", "E0"),
}
# Generate a sequence
# This is simply a chain of characters. Each of them represents a component
    # with a given input and a given output
repeated_sequence = "SIPOSASIPOSB"
heater_seq = "-H-H-H-H-"
sequence = repeated_sequence * n + "SIPO" + heater_seq
return pp.c.component_sequence(sequence, string_to_device_in_out_ports)
c = test_cutback_phase(n=1)
pp.qp(c)
c = test_cutback_phase(n=2)
pp.qp(c)
import pp
components = {
"C": pp.routing.package_optical2x2(component=pp.c.coupler, port_spacing=40.0),
"X": pp.c.crossing45(port_spacing=40.0),
"-": pp.c.compensation_path(crossing45=pp.c.crossing45(port_spacing=40.0)),
}
lattice = """
CX
CX
"""
c = pp.c.component_lattice(lattice=lattice, components=components)
pp.qp(c)
lattice = """
CCX
CCX
"""
c = pp.c.component_lattice(lattice=lattice, components=components)
pp.qp(c)
lattice = """
C-X
CXX
CXX
C-X
"""
c = pp.c.component_lattice(lattice=lattice, components=components)
pp.qp(c)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Adding a component reference
Step2: We have two ways to add a reference to our device
Step3: or we can do it in a single line (my preference)
Step4: in both cases we can move the reference wr after created
Step5: Adding a reference array
Step6: Finally we need to add ports to the new component
Step7: Connecting references
Step8: component_sequence
Step10: As the sequence is defined as a string you can use the string operations to build complicated sequences
Step14: component_lattice
|
3,379
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
%matplotlib inline
from scipy.io import loadmat
from modshogun import RealFeatures, MulticlassLabels, Math
# load the dataset
dataset = loadmat('../../../data/multiclass/usps.mat')
Xall = dataset['data']
# the usps dataset has the digits labeled from 1 to 10
# we'll subtract 1 to make them in the 0-9 range instead
Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1
# 1000 examples for training
Xtrain = RealFeatures(Xall[:,0:1000])
Ytrain = MulticlassLabels(Yall[0:1000])
# 4000 examples for validation
Xval = RealFeatures(Xall[:,1001:5001])
Yval = MulticlassLabels(Yall[1001:5001])
# the rest for testing
Xtest = RealFeatures(Xall[:,5002:-1])
Ytest = MulticlassLabels(Yall[5002:-1])
# initialize the random number generator with a fixed seed, for repeatability
Math.init_random(10)
from modshogun import NeuralNetwork, NeuralInputLayer, NeuralLogisticLayer, NeuralSoftmaxLayer
from modshogun import DynamicObjectArray
# setup the layers
layers = DynamicObjectArray()
layers.append_element(NeuralInputLayer(256)) # input layer, 256 neurons
layers.append_element(NeuralLogisticLayer(256)) # first hidden layer, 256 neurons
layers.append_element(NeuralLogisticLayer(128)) # second hidden layer, 128 neurons
layers.append_element(NeuralSoftmaxLayer(10)) # output layer, 10 neurons
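# Fully connected architecture: 256 -> 256 -> 128 -> 10, i.e. roughly 100k trainable weights
# (256*256 + 256*128 + 128*10 plus biases), modest enough to train on a CPU.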
# create the networks
net_no_reg = NeuralNetwork(layers)
net_no_reg.quick_connect()
net_no_reg.initialize_neural_network()
net_l2 = NeuralNetwork(layers)
net_l2.quick_connect()
net_l2.initialize_neural_network()
net_l1 = NeuralNetwork(layers)
net_l1.quick_connect()
net_l1.initialize_neural_network()
net_dropout = NeuralNetwork(layers)
net_dropout.quick_connect()
net_dropout.initialize_neural_network()
# import networkx, install if necessary
try:
import networkx as nx
except ImportError:
import pip
pip.main(['install', '--user', 'networkx'])
import networkx as nx
G = nx.DiGraph()
pos = {}
for i in range(8):
pos['X'+str(i)] = (i,0) # 8 neurons in the input layer
pos['H'+str(i)] = (i,1) # 8 neurons in the first hidden layer
for j in range(8): G.add_edge('X'+str(j),'H'+str(i))
if i<4:
pos['U'+str(i)] = (i+2,2) # 4 neurons in the second hidden layer
for j in range(8): G.add_edge('H'+str(j),'U'+str(i))
if i<6:
pos['Y'+str(i)] = (i+1,3) # 6 neurons in the output layer
for j in range(4): G.add_edge('U'+str(j),'Y'+str(i))
nx.draw(G, pos, node_color='y', node_size=750)
from modshogun import MulticlassAccuracy
def compute_accuracy(net, X, Y):
predictions = net.apply_multiclass(X)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(predictions, Y)
return accuracy*100
net_no_reg.set_epsilon(1e-6)
net_no_reg.set_max_num_epochs(600)
# uncomment this line to allow the training progress to be printed on the console
#from modshogun import MSG_INFO; net_no_reg.io.set_loglevel(MSG_INFO)
net_no_reg.set_labels(Ytrain)
net_no_reg.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print "Without regularization, accuracy on the validation set =", compute_accuracy(net_no_reg, Xval, Yval), "%"
# turn on L2 regularization
net_l2.set_l2_coefficient(3e-4)
net_l2.set_epsilon(1e-6)
net_l2.set_max_num_epochs(600)
net_l2.set_labels(Ytrain)
net_l2.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print "With L2 regularization, accuracy on the validation set =", compute_accuracy(net_l2, Xval, Yval), "%"
# turn on L1 regularization
net_l1.set_l1_coefficient(3e-5)
net_l1.set_epsilon(1e-6)
net_l1.set_max_num_epochs(600)
net_l1.set_labels(Ytrain)
net_l1.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print "With L1 regularization, accuracy on the validation set =", compute_accuracy(net_l1, Xval, Yval), "%"
from modshogun import NNOM_GRADIENT_DESCENT
# set the dropout probabilty for neurons in the hidden layers
net_dropout.set_dropout_hidden(0.5)
# set the dropout probabilty for the inputs
net_dropout.set_dropout_input(0.2)
# limit the maximum incoming weights vector lengh for neurons
net_dropout.set_max_norm(15)
net_dropout.set_epsilon(1e-6)
net_dropout.set_max_num_epochs(600)
# use gradient descent for optimization
net_dropout.set_optimization_method(NNOM_GRADIENT_DESCENT)
net_dropout.set_gd_learning_rate(0.5)
net_dropout.set_gd_mini_batch_size(100)
net_dropout.set_labels(Ytrain)
net_dropout.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print "With dropout, accuracy on the validation set =", compute_accuracy(net_dropout, Xval, Yval), "%"
from modshogun import NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR
# prepere the layers
layers_conv = DynamicObjectArray()
# input layer, a 16x16 image single channel image
layers_conv.append_element(NeuralInputLayer(16,16,1))
# the first convolutional layer: 10 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 10 8x8 feature maps
layers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 10, 2, 2, 2, 2))
# the first convolutional layer: 15 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps
layers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2))
# output layer
layers_conv.append_element(NeuralSoftmaxLayer(10))
# create and initialize the network
net_conv = NeuralNetwork(layers_conv)
net_conv.quick_connect()
net_conv.initialize_neural_network()
# 50% dropout in the input layer
net_conv.set_dropout_input(0.5)
# max-norm regularization
net_conv.set_max_norm(1.0)
# set gradient descent parameters
net_conv.set_optimization_method(NNOM_GRADIENT_DESCENT)
net_conv.set_gd_learning_rate(0.01)
net_conv.set_gd_mini_batch_size(100)
net_conv.set_epsilon(0.0)
net_conv.set_max_num_epochs(100)
# start training
net_conv.set_labels(Ytrain)
net_conv.train(Xtrain)
# compute accuracy on the validation set
print "With a convolutional network, accuracy on the validation set =", compute_accuracy(net_conv, Xval, Yval), "%"
print "Accuracy on the test set using the convolutional network =", compute_accuracy(net_conv, Xtest, Ytest), "%"
predictions = net_conv.apply_multiclass(Xtest)
_=figure(figsize=(10,12))
# plot some images, with the predicted label as the title of each image
# this code is borrowed from the KNN notebook by Chiyuan Zhang and Sören Sonnenburg
for i in range(100):
ax=subplot(10,10,i+1)
title(int(predictions[i]))
ax.imshow(Xtest[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)
ax.set_xticks([])
ax.set_yticks([])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating the network
Step2: We can also visualize what the network would look like. To do that we'll draw a smaller network using networkx. The network we'll draw will have 8 inputs (labeled X), 8 neurons in the first hidden layer (labeled H), 4 neurons in the second hidden layer (labeled U), and 6 neurons in the output layer (labeled Y). Each neuron will be connected to all neurons in the layer that precedes it.
Step3: Training
Step4: Training without regularization
Step5: Training with L2 regularization
Step6: Training with L1 regularization
Step7: Training with dropout
Step8: Convolutional Neural Networks
Step9: Now we can train the network. Like in the previous section, we'll use gradient descent with dropout and max-norm regularization
Step10: Evaluation
Step11: We can also look at some of the images and the network's response to each of them
|
3,380
|
<ASSISTANT_TASK:>
Python Code:
labVersion = 'cs190_week3_v_1_3'
# load testing library
from test_helper import Test
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs190', 'millionsong.txt')
fileName = os.path.join(baseDir, inputPath)
numPartitions = 2
rawData = sc.textFile(fileName, numPartitions)
# TODO: Replace <FILL IN> with appropriate code
numPoints = rawData.count()
print numPoints
samplePoints = rawData.take(5)
print samplePoints
# TEST Load and check the data (1a)
Test.assertEquals(numPoints, 6724, 'incorrect value for numPoints')
Test.assertEquals(len(samplePoints), 5, 'incorrect length for samplePoints')
from pyspark.mllib.regression import LabeledPoint
import numpy as np
# Here is a sample raw data point:
# '2001.0,0.884,0.610,0.600,0.474,0.247,0.357,0.344,0.33,0.600,0.425,0.60,0.419'
# In this raw data point, 2001.0 is the label, and the remaining values are features
# TODO: Replace <FILL IN> with appropriate code
def parsePoint(line):
Converts a comma separated unicode string into a `LabeledPoint`.
Args:
line (unicode): Comma separated unicode string where the first element is the label and the
remaining elements are features.
Returns:
LabeledPoint: The line is converted into a `LabeledPoint`, which consists of a label and
features.
splitline = np.fromstring(line, dtype=float, sep=',')
return LabeledPoint(splitline[0], splitline[1:])
parsedSamplePoints = rawData.map(parsePoint)
firstPointFeatures = parsedSamplePoints.first().features
firstPointLabel = parsedSamplePoints.first().label
print firstPointFeatures, firstPointLabel
d = len(firstPointFeatures)
print d
# TEST Using LabeledPoint (1b)
Test.assertTrue(isinstance(firstPointLabel, float), 'label must be a float')
expectedX0 = [0.8841,0.6105,0.6005,0.4747,0.2472,0.3573,0.3441,0.3396,0.6009,0.4257,0.6049,0.4192]
Test.assertTrue(np.allclose(expectedX0, firstPointFeatures, 1e-4, 1e-4),
'incorrect features for firstPointFeatures')
Test.assertTrue(np.allclose(2001.0, firstPointLabel), 'incorrect label for firstPointLabel')
Test.assertTrue(d == 12, 'incorrect number of features')
import matplotlib.pyplot as plt
import matplotlib.cm as cm
sampleMorePoints = rawData.take(50)
# You can uncomment the line below to see randomly selected features. These will be randomly
# selected each time you run the cell. Note that you should run this cell with the line commented
# out when answering the lab quiz questions.
# sampleMorePoints = rawData.takeSample(False, 50)
parsedSampleMorePoints = map(parsePoint, sampleMorePoints)
dataValues = map(lambda lp: lp.features.toArray(), parsedSampleMorePoints)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
Template for generating the plot layout.
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot
fig, ax = preparePlot(np.arange(.5, 11, 1), np.arange(.5, 49, 1), figsize=(8,7), hideLabels=True,
gridColor='#eeeeee', gridWidth=1.1)
image = plt.imshow(dataValues,interpolation='nearest', aspect='auto', cmap=cm.Greys)
for x, y, s in zip(np.arange(-.125, 12, 1), np.repeat(-.75, 12), [str(x) for x in range(12)]):
plt.text(x, y, s, color='#999999', size='10')
plt.text(4.7, -3, 'Feature', color='#999999', size='11'), ax.set_ylabel('Observation')
pass
# TODO: Replace <FILL IN> with appropriate code
parsedDataInit = rawData.map(parsePoint)
onlyLabels = parsedDataInit.map(lambda x: x.label)
minYear = onlyLabels.min()
maxYear = onlyLabels.max()
print maxYear, minYear
# TEST Find the range (1c)
Test.assertEquals(len(parsedDataInit.take(1)[0].features), 12,
'unexpected number of features in sample point')
sumFeatTwo = parsedDataInit.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedDataInit has unexpected values')
yearRange = maxYear - minYear
Test.assertTrue(yearRange == 89, 'incorrect range for minYear to maxYear')
# TODO: Replace <FILL IN> with appropriate code
parsedData = parsedDataInit.map(lambda x: LabeledPoint(x.label - minYear, x.features))
# Should be a LabeledPoint
print type(parsedData.take(1)[0])
# View the first point
print '\n{0}'.format(parsedData.take(1))
# TEST Shift labels (1d)
oldSampleFeatures = parsedDataInit.take(1)[0].features
newSampleFeatures = parsedData.take(1)[0].features
Test.assertTrue(np.allclose(oldSampleFeatures, newSampleFeatures),
'new features do not match old features')
sumFeatTwo = parsedData.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedData has unexpected values')
minYearNew = parsedData.map(lambda lp: lp.label).min()
maxYearNew = parsedData.map(lambda lp: lp.label).max()
Test.assertTrue(minYearNew == 0, 'incorrect min year in shifted data')
Test.assertTrue(maxYearNew == 89, 'incorrect max year in shifted data')
# get data for plot
oldData = (parsedDataInit
.map(lambda lp: (lp.label, 1))
.reduceByKey(lambda x, y: x + y)
.collect())
x, y = zip(*oldData)
# generate layout and plot data
fig, ax = preparePlot(np.arange(1920, 2050, 20), np.arange(0, 150, 20))
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
ax.set_xlabel('Year'), ax.set_ylabel('Count')
pass
# get data for plot
newData = (parsedData
.map(lambda lp: (lp.label, 1))
.reduceByKey(lambda x, y: x + y)
.collect())
x, y = zip(*newData)
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
ax.set_xlabel('Year (shifted)'), ax.set_ylabel('Count')
pass
# TODO: Replace <FILL IN> with appropriate code
weights = [.8, .1, .1]
seed = 42
parsedTrainData, parsedValData, parsedTestData = parsedData.randomSplit(weights, seed)
parsedTrainData.cache()
parsedValData.cache()
parsedTestData.cache()
nTrain = parsedTrainData.count()
nVal = parsedValData.count()
nTest = parsedTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print parsedData.count()
# TEST Training, validation, and test sets (1e)
Test.assertEquals(parsedTrainData.getNumPartitions(), numPartitions,
'parsedTrainData has wrong number of partitions')
Test.assertEquals(parsedValData.getNumPartitions(), numPartitions,
'parsedValData has wrong number of partitions')
Test.assertEquals(parsedTestData.getNumPartitions(), numPartitions,
'parsedTestData has wrong number of partitions')
Test.assertEquals(len(parsedTrainData.take(1)[0].features), 12,
'parsedTrainData has wrong number of features')
sumFeatTwo = (parsedTrainData
.map(lambda lp: lp.features[2])
.sum())
sumFeatThree = (parsedValData
.map(lambda lp: lp.features[3])
.reduce(lambda x, y: x + y))
sumFeatFour = (parsedTestData
.map(lambda lp: lp.features[4])
.reduce(lambda x, y: x + y))
Test.assertTrue(np.allclose([sumFeatTwo, sumFeatThree, sumFeatFour],
[2526.87757656, 297.340394298, 184.235876654]),
'parsed Train, Val, Test data has unexpected values')
Test.assertTrue(nTrain + nVal + nTest == 6724, 'unexpected Train, Val, Test data set size')
Test.assertEquals(nTrain, 5371, 'unexpected value for nTrain')
Test.assertEquals(nVal, 682, 'unexpected value for nVal')
Test.assertEquals(nTest, 671, 'unexpected value for nTest')
# TODO: Replace <FILL IN> with appropriate code
averageTrainYear = (parsedTrainData
.map(lambda x: x.label)
.mean())
print averageTrainYear
# TEST Average label (2a)
Test.assertTrue(np.allclose(averageTrainYear, 53.9316700801),
'incorrect value for averageTrainYear')
# TODO: Replace <FILL IN> with appropriate code
def squaredError(label, prediction):
Calculates the squared error for a single prediction.
Args:
label (float): The correct value for this observation.
prediction (float): The predicted value for this observation.
Returns:
float: The difference between the `label` and `prediction` squared.
return (label - prediction)**2.
def calcRMSE(labelsAndPreds):
Calculates the root mean squared error for an `RDD` of (label, prediction) tuples.
Args:
labelsAndPred (RDD of (float, float)): An `RDD` consisting of (label, prediction) tuples.
Returns:
float: The square root of the mean of the squared errors.
return np.sqrt(labelsAndPreds
.map(lambda x: squaredError(x[0], x[1]))
.mean())
labelsAndPreds = sc.parallelize([(3., 1.), (1., 2.), (2., 2.)])
# RMSE = sqrt[((3-1)^2 + (1-2)^2 + (2-2)^2) / 3] = 1.291
exampleRMSE = calcRMSE(labelsAndPreds)
print exampleRMSE
# TEST Root mean squared error (2b)
Test.assertTrue(np.allclose(squaredError(3, 1), 4.), 'incorrect definition of squaredError')
Test.assertTrue(np.allclose(exampleRMSE, 1.29099444874), 'incorrect value for exampleRMSE')
# TODO: Replace <FILL IN> with appropriate code
labelsAndPredsTrain = parsedTrainData.map(lambda x: (x.label, averageTrainYear))
rmseTrainBase = calcRMSE(labelsAndPredsTrain)
labelsAndPredsVal = parsedValData.map(lambda x: (x.label, averageTrainYear))
rmseValBase = calcRMSE(labelsAndPredsVal)
labelsAndPredsTest = parsedTestData.map(lambda x: (x.label, averageTrainYear))
rmseTestBase = calcRMSE(labelsAndPredsTest)
print 'Baseline Train RMSE = {0:.3f}'.format(rmseTrainBase)
print 'Baseline Validation RMSE = {0:.3f}'.format(rmseValBase)
print 'Baseline Test RMSE = {0:.3f}'.format(rmseTestBase)
# TEST Training, validation and test RMSE (2c)
Test.assertTrue(np.allclose([rmseTrainBase, rmseValBase, rmseTestBase],
[21.305869, 21.586452, 22.136957]), 'incorrect RMSE value')
from matplotlib.colors import ListedColormap, Normalize
from matplotlib.cm import get_cmap
cmap = get_cmap('YlOrRd')
norm = Normalize()
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, lp.label))
.map(lambda (l, p): squaredError(l, p))
.collect())
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 100, 20), np.arange(0, 100, 20))
plt.scatter(actual, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.5)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
pass
predictions = np.asarray(parsedValData
.map(lambda lp: averageTrainYear)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, averageTrainYear))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(53.0, 55.0, 0.5), np.arange(0, 100, 20))
ax.set_xlim(53, 55)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.3)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
from pyspark.mllib.linalg import DenseVector
# TODO: Replace <FILL IN> with appropriate code
def gradientSummand(weights, lp):
Calculates the gradient summand for a given weight and `LabeledPoint`.
Note:
`DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangeably
within this function. For example, they both implement the `dot` method.
Args:
weights (DenseVector): An array of model weights (betas).
lp (LabeledPoint): The `LabeledPoint` for a single observation.
Returns:
DenseVector: An array of values the same length as `weights`. The gradient summand.
return ((weights.dot(lp.features) - lp.label) * lp.features)
exampleW = DenseVector([1, 1, 1])
exampleLP = LabeledPoint(2.0, [3, 1, 4])
# gradientSummand = (dot([1 1 1], [3 1 4]) - 2) * [3 1 4] = (8 - 2) * [3 1 4] = [18 6 24]
summandOne = gradientSummand(exampleW, exampleLP)
print summandOne
exampleW = DenseVector([.24, 1.2, -1.4])
exampleLP = LabeledPoint(3.0, [-1.4, 4.2, 2.1])
summandTwo = gradientSummand(exampleW, exampleLP)
print summandTwo
# TEST Gradient summand (3a)
Test.assertTrue(np.allclose(summandOne, [18., 6., 24.]), 'incorrect value for summandOne')
Test.assertTrue(np.allclose(summandTwo, [1.7304,-5.1912,-2.5956]), 'incorrect value for summandTwo')
# TODO: Replace <FILL IN> with appropriate code
def getLabeledPrediction(weights, observation):
Calculates predictions and returns a (label, prediction) tuple.
Note:
The labels should remain unchanged as we'll use this information to calculate prediction
error later.
Args:
weights (np.ndarray): An array with one weight for each features in `trainData`.
observation (LabeledPoint): A `LabeledPoint` that contain the correct label and the
features for the data point.
Returns:
tuple: A (label, prediction) tuple.
return (observation.label, weights.dot(observation.features))
weights = np.array([1.0, 1.5])
predictionExample = sc.parallelize([LabeledPoint(2, np.array([1.0, .5])),
LabeledPoint(1.5, np.array([.5, .5]))])
labelsAndPredsExample = predictionExample.map(lambda lp: getLabeledPrediction(weights, lp))
print labelsAndPredsExample.collect()
# TEST Use weights to make predictions (3b)
Test.assertEquals(labelsAndPredsExample.collect(), [(2.0, 1.75), (1.5, 1.25)],
'incorrect definition for getLabeledPredictions')
# TODO: Replace <FILL IN> with appropriate code
def linregGradientDescent(trainData, numIters):
Calculates the weights and error for a linear regression model trained with gradient descent.
Note:
`DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangeably
within this function. For example, they both implement the `dot` method.
Args:
trainData (RDD of LabeledPoint): The labeled data for use in training the model.
numIters (int): The number of iterations of gradient descent to perform.
Returns:
(np.ndarray, np.ndarray): A tuple of (weights, training errors). Weights will be the
final weights (one weight per feature) for the model, and training errors will contain
an error (RMSE) for each iteration of the algorithm.
# The length of the training data
n = trainData.count()
# The number of features in the training data
d = len(trainData.take(1)[0].features)
w = np.zeros(d)
alpha = 1.0
# We will compute and store the training error after each iteration
errorTrain = np.zeros(numIters)
for i in range(numIters):
# Use getLabeledPrediction from (3b) with trainData to obtain an RDD of (label, prediction)
# tuples. Note that the weights all equal 0 for the first iteration, so the predictions will
# have large errors to start.
labelsAndPredsTrain = trainData.map(lambda x: getLabeledPrediction(w, x))
errorTrain[i] = calcRMSE(labelsAndPredsTrain)
# Calculate the `gradient`. Make use of the `gradientSummand` function you wrote in (3a).
# Note that `gradient` should be a `DenseVector` of length `d`.
gradient = trainData.map(lambda x: gradientSummand(w, x)).sum()
# Update the weights
alpha_i = alpha / (n * np.sqrt(i+1))
w -= alpha_i * gradient
return w, errorTrain
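# Note on the update rule above (a brief sketch of the math, not part of the original lab):
# with summand (w.dot(x_i) - y_i) * x_i, the accumulated `gradient` is the gradient of the
# squared-error objective f(w) = 1/2 * sum_i (w.dot(x_i) - y_i)^2, and the step size
# alpha_i = alpha / (n * sqrt(i+1)) simply decays with the iteration count so that the large
# early updates (when w is still all zeros) do not overshoot.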
# create a toy dataset with n = 10, d = 3, and then run 5 iterations of gradient descent
# note: the resulting model will not be useful; the goal here is to verify that
# linregGradientDescent is working properly
exampleN = 10
exampleD = 3
exampleData = (sc
.parallelize(parsedTrainData.take(exampleN))
.map(lambda lp: LabeledPoint(lp.label, lp.features[0:exampleD])))
print exampleData.take(2)
exampleNumIters = 5
exampleWeights, exampleErrorTrain = linregGradientDescent(exampleData, exampleNumIters)
print exampleWeights
# TEST Gradient descent (3c)
expectedOutput = [48.88110449, 36.01144093, 30.25350092]
Test.assertTrue(np.allclose(exampleWeights, expectedOutput), 'value of exampleWeights is incorrect')
expectedError = [79.72013547, 30.27835699, 9.27842641, 9.20967856, 9.19446483]
Test.assertTrue(np.allclose(exampleErrorTrain, expectedError),
'value of exampleErrorTrain is incorrect')
# TODO: Replace <FILL IN> with appropriate code
numIters = 50
weightsLR0, errorTrainLR0 = linregGradientDescent(parsedTrainData, numIters)
labelsAndPreds = parsedValData.map(lambda x: (x.label, weightsLR0.dot(x.features)))
rmseValLR0 = calcRMSE(labelsAndPreds)
print 'Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}'.format(rmseValBase,
rmseValLR0)
# TEST Train the model (3d)
expectedOutput = [22.64535883, 20.064699, -0.05341901, 8.2931319, 5.79155768, -4.51008084,
15.23075467, 3.8465554, 9.91992022, 5.97465933, 11.36849033, 3.86452361]
Test.assertTrue(np.allclose(weightsLR0, expectedOutput), 'incorrect value for weightsLR0')
norm = Normalize()
clrs = cmap(np.asarray(norm(np.log(errorTrainLR0))))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(2, 6, 1))
ax.set_ylim(2, 6)
plt.scatter(range(0, numIters), np.log(errorTrainLR0), s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xlabel('Iteration'), ax.set_ylabel(r'$\log_e(errorTrainLR0)$')
pass
norm = Normalize()
clrs = cmap(np.asarray(norm(errorTrainLR0[6:])))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(17, 22, 1))
ax.set_ylim(17.8, 21.2)
plt.scatter(range(0, numIters-6), errorTrainLR0[6:], s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xticklabels(map(str, range(6, 66, 10)))
ax.set_xlabel('Iteration'), ax.set_ylabel(r'Training Error')
pass
from pyspark.mllib.regression import LinearRegressionWithSGD
# Values to use when training the linear regression model
numIters = 500 # iterations
alpha = 1.0 # step
miniBatchFrac = 1.0 # miniBatchFraction
reg = 1e-1 # regParam
regType = 'l2' # regType
useIntercept = True # intercept
# TODO: Replace <FILL IN> with appropriate code
firstModel = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, None, reg, regType,
useIntercept)
# weightsLR1 stores the model weights; interceptLR1 stores the model intercept
weightsLR1 = firstModel.weights
interceptLR1 = firstModel.intercept
print weightsLR1, interceptLR1
# TEST LinearRegressionWithSGD (4a)
expectedIntercept = 13.3335907631
expectedWeights = [16.682292427, 14.7439059559, -0.0935105608897, 6.22080088829, 4.01454261926, -3.30214858535,
11.0403027232, 2.67190962854, 7.18925791279, 4.46093254586, 8.14950409475, 2.75135810882]
Test.assertTrue(np.allclose(interceptLR1, expectedIntercept), 'incorrect value for interceptLR1')
Test.assertTrue(np.allclose(weightsLR1, expectedWeights), 'incorrect value for weightsLR1')
# TODO: Replace <FILL IN> with appropriate code
samplePoint = parsedTrainData.take(1)[0]
samplePrediction = firstModel.predict(samplePoint.features)
print samplePrediction
# TEST Predict (4b)
Test.assertTrue(np.allclose(samplePrediction, 56.8013380112),
'incorrect value for samplePrediction')
# TODO: Replace <FILL IN> with appropriate code
labelsAndPreds = parsedValData.map(lambda x: (x.label, firstModel.predict(x.features)))
rmseValLR1 = calcRMSE(labelsAndPreds)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}' +
'\n\tLR1 = {2:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1)
# TEST Evaluate RMSE (4c)
Test.assertTrue(np.allclose(rmseValLR1, 19.691247), 'incorrect value for rmseValLR1')
# TODO: Replace <FILL IN> with appropriate code
bestRMSE = rmseValLR1
bestRegParam = reg
bestModel = firstModel
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
for reg in [1e-10, 1e-5, 1]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))
rmseValGrid = calcRMSE(labelsAndPreds)
print rmseValGrid
if rmseValGrid < bestRMSE:
bestRMSE = rmseValGrid
bestRegParam = reg
bestModel = model
rmseValLRGrid = bestRMSE
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n' +
'\tLRGrid = {3:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1, rmseValLRGrid)
# TEST Grid search (4d)
Test.assertTrue(np.allclose(17.017170, rmseValLRGrid), 'incorrect value for rmseValLRGrid')
predictions = np.asarray(parsedValData
.map(lambda lp: bestModel.predict(lp.features))
.collect())
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, bestModel.predict(lp.features)))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))
ax.set_xlim(15, 82), ax.set_ylim(-5, 105)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=.5)
ax.set_xlabel('Predicted'), ax.set_ylabel(r'Actual')
pass
# TODO: Replace <FILL IN> with appropriate code
reg = bestRegParam
modelRMSEs = []
for alpha in [1e-5, 10]:
for numIters in [500, 5]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))
rmseVal = calcRMSE(labelsAndPreds)
print 'alpha = {0:.0e}, numIters = {1}, RMSE = {2:.3f}'.format(alpha, numIters, rmseVal)
modelRMSEs.append(rmseVal)
# TEST Vary alpha and the number of iterations (4e)
expectedResults = sorted([56.969705, 56.892949, 355124752.221221])
Test.assertTrue(np.allclose(sorted(modelRMSEs)[:3], expectedResults), 'incorrect value for modelRMSEs')
from matplotlib.colors import LinearSegmentedColormap
# Saved parameters and results, to save the time required to run 36 models
numItersParams = [10, 50, 100, 250, 500, 1000]
regParams = [1e-8, 1e-6, 1e-4, 1e-2, 1e-1, 1]
rmseVal = np.array([[ 20.36769649, 20.36770128, 20.36818057, 20.41795354, 21.09778437, 301.54258421],
[ 19.04948826, 19.0495 , 19.05067418, 19.16517726, 19.97967727, 23.80077467],
[ 18.40149024, 18.40150998, 18.40348326, 18.59457491, 19.82155716, 23.80077467],
[ 17.5609346 , 17.56096749, 17.56425511, 17.88442127, 19.71577117, 23.80077467],
[ 17.0171705 , 17.01721288, 17.02145207, 17.44510574, 19.69124734, 23.80077467],
[ 16.58074813, 16.58079874, 16.58586512, 17.11466904, 19.6860931 , 23.80077467]])
numRows, numCols = len(numItersParams), len(regParams)
rmseVal = np.array(rmseVal)
rmseVal.shape = (numRows, numCols)
fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7), hideLabels=True,
gridWidth=0.)
ax.set_xticklabels(regParams), ax.set_yticklabels(numItersParams)
ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Number of Iterations')
colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)
image = plt.imshow(rmseVal,interpolation='nearest', aspect='auto',
cmap = colors)
# Zoom into the bottom left
numItersParamsZoom, regParamsZoom = numItersParams[-3:], regParams[:4]
rmseValZoom = rmseVal[-3:, :4]
numRows, numCols = len(numItersParamsZoom), len(regParamsZoom)
fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7), hideLabels=True,
gridWidth=0.)
ax.set_xticklabels(regParamsZoom), ax.set_yticklabels(numItersParamsZoom)
ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Number of Iterations')
colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)
image = plt.imshow(rmseValZoom,interpolation='nearest', aspect='auto',
cmap = colors)
pass
# TODO: Replace <FILL IN> with appropriate code
import itertools
def twoWayInteractions(lp):
Creates a new `LabeledPoint` that includes two-way interactions.
Note:
For features [x, y] the two-way interactions would be [x^2, x*y, y*x, y^2] and these
would be appended to the original [x, y] feature list.
Args:
lp (LabeledPoint): The label and features for this observation.
Returns:
LabeledPoint: The new `LabeledPoint` should have the same label as `lp`. Its features
should include the features from `lp` followed by the two-way interaction features.
interactions = np.outer(lp.features, lp.features).flatten()
return LabeledPoint(lp.label, np.hstack((lp.features, interactions)))
print twoWayInteractions(LabeledPoint(0.0, [2, 3]))
# Transform the existing train, validation, and test sets to include two-way interactions.
trainDataInteract = parsedTrainData.map(twoWayInteractions)
valDataInteract = parsedValData.map(twoWayInteractions)
testDataInteract = parsedTestData.map(twoWayInteractions)
# TEST Add two-way interactions (5a)
twoWayExample = twoWayInteractions(LabeledPoint(0.0, [2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayExample.features),
sorted([2.0, 3.0, 4.0, 6.0, 6.0, 9.0])),
'incorrect features generated by twoWayInteractions')
twoWayPoint = twoWayInteractions(LabeledPoint(1.0, [1, 2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayPoint.features),
sorted([1.0,2.0,3.0,1.0,2.0,3.0,2.0,4.0,6.0,3.0,6.0,9.0])),
'incorrect features generated by twoWayInteractions')
Test.assertEquals(twoWayPoint.label, 1.0, 'incorrect label generated by twoWayInteractions')
Test.assertTrue(np.allclose(sum(trainDataInteract.take(1)[0].features), 40.821870576035529),
'incorrect features in trainDataInteract')
Test.assertTrue(np.allclose(sum(valDataInteract.take(1)[0].features), 45.457719932695696),
'incorrect features in valDataInteract')
Test.assertTrue(np.allclose(sum(testDataInteract.take(1)[0].features), 35.109111632783168),
'incorrect features in testDataInteract')
# TODO: Replace <FILL IN> with appropriate code
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
reg = 1e-10
modelInteract = LinearRegressionWithSGD.train(trainDataInteract, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPredsInteract = valDataInteract.map(lambda lp: (lp.label, modelInteract.predict(lp.features)))
rmseValInteract = calcRMSE(labelsAndPredsInteract)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n\tLRGrid = ' +
'{3:.3f}\n\tLRInteract = {4:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1,
rmseValLRGrid, rmseValInteract)
# TEST Build interaction model (5b)
Test.assertTrue(np.allclose(rmseValInteract, 15.6894664683), 'incorrect value for rmseValInteract')
# TODO: Replace <FILL IN> with appropriate code
labelsAndPredsTest = testDataInteract.map(lambda lp: (lp.label, modelInteract.predict(lp.features)))
rmseTestInteract = calcRMSE(labelsAndPredsTest)
print ('Test RMSE:\n\tBaseline = {0:.3f}\n\tLRInteract = {1:.3f}'
.format(rmseTestBase, rmseTestInteract))
# TEST Evaluate interaction model on test data (5c)
Test.assertTrue(np.allclose(rmseTestInteract, 16.3272040537),
'incorrect value for rmseTestInteract')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step3: (1b) Using LabeledPoint
Step5: Visualization 1
Step6: (1c) Find the range
Step7: (1d) Shift labels
Step8: Visualization 2
Step9: (1e) Training, validation, and test sets
Step10: Part 2
Step13: (2b) Root mean squared error
Step14: (2c) Training, validation and test RMSE
Step15: Visualization 3
Step17: Part 3
Step19: (3b) Use weights to make predictions
Step21: (3c) Gradient descent
Step22: (3d) Train the model
Step23: Visualization 4
Step24: Part 4
Step25: (4b) Predict
Step26: (4c) Evaluate RMSE
Step27: (4d) Grid search
Step28: Visualization 5
Step29: (4e) Vary alpha and the number of iterations
Step30: Visualization 6
Step32: Part 5
Step33: (5b) Build interaction model
Step34: (5c) Evaluate interaction model on test data
|
3,381
|
<ASSISTANT_TASK:>
Python Code:
# Import the Iris Dataset and Build a GLM
import h2o
h2o.init()
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
# import the iris dataset:
# this dataset is used to classify the type of iris plant
# the original dataset can be found at https://archive.ics.uci.edu/ml/datasets/Iris
# iris = h2o.import_file("http://h2o-public-test-data.s3.amazonaws.com/smalldata/iris/iris_wheader.csv")
iris = h2o.import_file("../../smalldata/iris/iris_wheader.csv")
# convert response column to a factor
iris['class'] = iris['class'].asfactor()
# set the predictor names and the response column name
predictors = iris.col_names[:-1]
response = 'class'
# split into train and validation
train, valid = iris.split_frame(ratios = [.8], seed=1234)
# build model
model = H2OGeneralizedLinearEstimator(family = 'multinomial')
model.train(x = predictors, y = response, training_frame = train, validation_frame = valid)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# hide progress bar
h2o.no_progress()
# specify the model to use:
model = model
# specify the dataframe to use
data_pdp = iris
# specify the feature of interest, available features include:
# ['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
# col = "sepal_len"
# col = 'sepal_wid'
col = 'petal_len'
# col = 'petal_wid'
# create a copy of the column of interest, so that values are preserved after each run
col_data = data_pdp[col]
# get a list of the classes in your target
classes = h2o.as_list(data_pdp['class'].unique(), use_pandas=False,header=False)
classes = [class_val[0] for class_val in classes]
# create bins for the pdp plot
bins = data_pdp[col].quantile(prob=list(np.linspace(0.05,1,19)))[:,1].unique()
bins = bins.as_data_frame().values.tolist()
bins = [bin_val[0] for bin_val in bins]
bins.sort()
# Loop over each class and print the pdp for the given feature
for class_val in classes:
mean_responses = []
for bin_val in bins:
# warning this line modifies the dataset.
# when you rerun on a new column make sure to return
# all columns to their original values.
data_pdp[col] = bin_val
response = model.predict(data_pdp)
mean_response = response[:,class_val].mean()[0]
mean_responses.append(mean_response)
mean_responses
pdp_manual = pd.DataFrame({col: bins, 'mean_response':mean_responses},columns=[col,'mean_response'])
plt.plot(pdp_manual[col], pdp_manual.mean_response);
plt.xlabel(col);
plt.ylabel('mean_response');
plt.title('PDP for Class {0}'.format(class_val));
plt.show()
# reset col value to original value for future runs of this cell
data_pdp[col] = col_data
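# Note (added for clarity, not part of the original notebook): the loop above computes a
# partial dependence manually -- for each grid value it overwrites the whole column `col`
# with that value, scores the frame with the model, and averages the predicted probability
# of the class. This is essentially the same quantity that model.partial_plot reports below
# for each target class, which is why the curves should look alike.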
# h2o multinomial PDP class setosa
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=False, plot=True, targets=["Iris-setosa"])
# h2o multinomial PDP class versicolor
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=False, plot=True, targets=["Iris-versicolor"])
# h2o multinomial PDP class virginica
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=False, plot=True, targets=["Iris-virginica"])
# h2o multinomial PDP all classes
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=False, plot=True, targets=["Iris-setosa", "Iris-versicolor", "Iris-virginica"])
# h2o multinomial PDP all classes with stddev
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=True, plot=True, targets=["Iris-setosa", "Iris-versicolor", "Iris-virginica"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify Feature of Interest
Step2: Generate a PDP per class manualy
Step3: Use target parameter and plot H2O multinomial PDP
|
3,382
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import sklearn
from sklearn.model_selection import train_test_split
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
%matplotlib inline
data_fs = pd.read_csv(r'data/data_fs.csv', low_memory=False)
data_fs.head(10)
# fill nan with 0
data_fs = data_fs.fillna(0)
# our goal is to predict the "price_doc" feature.
y = data_fs[["price_doc"]]
X = data_fs.drop("price_doc", axis=1)
X = X.drop("timestamp", axis=1)
# one-hot encoding
X = pd.get_dummies(X, sparse=True)
# Let's split our dataset into train 70 % and test 30% by using sklearn.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Look at first 10 rows what you get.
X_train.head(10)
print("Train size =", X_train.shape)
print("Test size =", X_test.shape)
y_train.hist(bins=100)
y_train_log = np.log(y_train)
y_test_log = np.log(y_test)
y_train_log.hist(bins=100)
from sklearn.ensemble import RandomForestRegressor
### BEGIN Solution
random_forest = RandomForestRegressor(n_estimators=250, random_state=101, n_jobs=4)
random_forest.fit(X_train, y_train_log.values.ravel())
std = np.std([tree.feature_importances_ for tree in random_forest.estimators_], axis=0)
importances = random_forest.feature_importances_
indices = np.argsort(importances)[::-1]
FEAT_NUM = 20
plt.figure(figsize=(15,10))
plt.title("Top %d important features" % (FEAT_NUM), size=16)
plt.bar(range(FEAT_NUM), importances[indices][:FEAT_NUM],
color="r", yerr=std[indices[:FEAT_NUM]], align="center")
plt.xticks(range(FEAT_NUM), [X_train.columns[indices[f]] for f in range(FEAT_NUM)],
rotation='vertical', size=16)
plt.yticks(size=16)
plt.xlim([-1, FEAT_NUM])
plt.show()
### END Solution
### BEGIN Solution
# Print the feature ranking
print("Feature ranking:")
for f in range(FEAT_NUM):
print("%d. %s (%f)" % (f + 1, (X_train.columns[indices[f]]), importances[indices[f]]))
### END Solution
X_train_cut = X_train.filter([X_train.columns[x] for x in indices[:20]], axis=1)
X_test_cut = X_test.filter([X_test.columns[x] for x in indices[:20]], axis=1)
print("New shape of training samples: ", X_train_cut.shape)
print("New shape of testing samples: ", X_test_cut.shape)
from sklearn.metrics import mean_squared_log_error
from sklearn import linear_model
from sklearn.metrics import mean_squared_log_error
def comparator(X_train, y_train, X_test, y_test):
Parameters
==========
X_train: ndarray - training inputs
y_train: ndarray - training targets
X_test: ndarray - test inputs
y_test: ndarray - test targets
Returns
=======
pd.DataFrame - table of RMSLE scores of each model on test and train datasets
methods = {
"Linear Regression": sklearn.linear_model.LinearRegression(n_jobs=4),
"Lasso": linear_model.Lasso(random_state=101),
"Ridge": linear_model.Ridge(random_state=101),
"Dtree": sklearn.tree.DecisionTreeRegressor(random_state=101),
"RFR": sklearn.ensemble.RandomForestRegressor(random_state=101, n_estimators =100, n_jobs=4)
}
error_train = []
error_test = []
### BEGIN Solution
for model in methods.values():
model.fit(X_train, y_train.values.ravel())
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
error_train.append(mean_squared_log_error(y_train_pred, y_train))
error_test.append(mean_squared_log_error(y_test_pred, y_test))
### END Solution
return pd.DataFrame({
"Methods": list(methods.keys()),
"Train loss": error_train,
"Test loss": error_test
})
### BEGIN Solution
result = comparator(X_train_cut, y_train_log, X_test_cut, y_test_log)
print(result)
### END Solution
from sklearn.metrics import make_scorer
import warnings
warnings.filterwarnings("ignore")
def selection_step(model, X, y, used_features=(), cv=3):
Parameters
==========
X: ndarray - training inputs
y: ndarray - training targets
used_features: - list of features
cv: int - number of folds
Returns
=======
scores - dictionary of scores
scores = {}
### BEGIN Solution
for feature in X.columns:
if feature not in used_features:
feat_set = list(used_features).copy()
feat_set.append(feature)
rmsle = abs(cross_val_score(model, X[feat_set], y.values.ravel(),
scoring=make_scorer(mean_squared_log_error),
error_score=np.nan, cv=cv, n_jobs=4).mean())
scores[feature] = rmsle
### END Solution
return scores
def forward_steps(X, y, n_rounds, method):
Parameters
==========
X: ndarray - training inputs
y: ndarray - training targets
n_rounds: int - early stop when score doesn't increase n_rounds
method: sklearn model
Returns
=======
feat_best_list - list of features
feat_best_list = []
last_score = np.inf
### BEGIN Solution
round = 0
count = 0
while (round < n_rounds):
round = round + 1
count = count + 1
if (len(feat_best_list) == X.shape[1]):
break
scores = selection_step(method, X, y, feat_best_list)
best_feat = min(scores, key=scores.get)
feat_best_list.append(best_feat)
print(round, best_feat)
if (scores[best_feat] < last_score):
last_score = scores[best_feat]
round = 0
### END Solution
return feat_best_list
### BEGIN Solution
from sklearn import tree
# DecisionTreeRegressor
print("Decision Tree Regressor feature ranking")
clf = sklearn.tree.DecisionTreeRegressor(random_state=101)
best_features = forward_steps(X_train, y_train_log, 3, clf)
### END Solution
### BEGIN Solution
result = comparator(X_train[best_features], y_train_log, X_test[best_features], y_test_log)
print(result)
### END Solution
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
X, y = make_moons(n_samples=300, shuffle=True, noise=0.05, random_state=1011)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=1011)
y_test[y_test == 0] = -1
y_train[y_train == 0] = -1
from sklearn.tree import DecisionTreeRegressor
from scipy.optimize import minimize
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.metrics import accuracy_score, precision_score, f1_score
class FuncSeries:
def __init__(self):
self.func_series = []
def __call__(self, X):
sum = self.func_series[0](X)
for f in self.func_series[1:]:
sum += f(X)
return sum
def append(self, func):
self.func_series.append(func)
class GradientBoostingClassifier(BaseEstimator, ClassifierMixin):
def __init__(self, estimators=5):
self.estimators = estimators
self.func_series = FuncSeries()
def fit(self, X, y):
self.func_series.append(lambda X: np.zeros(X.shape[0]))
for i in range(self.estimators):
residuals = 2 * y / (1 + np.exp(2 * y * self.func_series(X)))
clf = DecisionTreeRegressor(max_depth=3)
clf.fit(X, residuals)
self.func_series.append(lambda X, clf=clf: clf.predict(X))  # bind clf as a default argument; a plain closure would make every appended lambda call only the last fitted tree
return self
def predict(self, X):
predicted = np.sign(self.func_series(X)).astype(np.int)
predicted[predicted == 0] = -1
return predicted
class GBM:
def __init__(self, estimator, estimator_params, n_estimators):
self.base_estimator = estimator
self.params = estimator_params
self.n_estimators = n_estimators
self.cascade = []
def fit(self, X, y):
for i in range(self.n_estimators):
s = y / (1.0 + np.exp(y * self._output(X)))
new_estimator = self.base_estimator(**self.params)
new_estimator.fit(X, s)
self.cascade.append(new_estimator)
def _output(self, X):
res = np.zeros(X.shape[0])
for i in range(len(self.cascade)):
res += self.cascade[i].predict(X)
return res
def predict_proba(self, X):
return 1.0 / (1.0 + np.exp(-self._output(X)))
def predict(self, X):
res = np.sign(self._output(X))
res[res == 0] = -1
return res
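# A short note on the residual formulas used above (added for clarity; this is the standard
# derivation, not part of the original solution): each new tree is fit to the negative
# gradient (pseudo-residuals) of a logistic-type loss evaluated at the current ensemble
# output F(x). For L(y, F) = log(1 + exp(-2*y*F)) the negative gradient is
# 2*y / (1 + exp(2*y*F)), which is exactly the `residuals` expression in
# GradientBoostingClassifier.fit; GBM.fit uses the unscaled variant y / (1 + exp(y*F)),
# which corresponds to L(y, F) = log(1 + exp(-y*F)). At F(x) = 0 both reduce to a multiple
# of y itself, so the first tree simply fits the labels.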
### BEGIN Solution
model = GradientBoostingClassifier(estimators=6)
params = {
'max_depth' : 2
}
# model = GBM(DecisionTreeRegressor, params, 100)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
results = {'Accuracy' : accuracy_score(y_test, y_pred),
'Precision': precision_score(y_test, y_pred),
'F1_score' : f1_score(y_test, y_pred)}
print ("Results:")
for key, value in results.items():
print ("%s %.3f" % (key, value))
### END Solution
from mlxtend.plotting import plot_decision_regions
plt.figure(figsize=(10,7))
plt.title("Decision boundary", size=16)
plot_decision_regions(X=X_train, y=y_train, clf=model, legend=2)
plt.tight_layout()
plt.show()
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import GridSearchCV
import xgboost as xgb
import catboost as ctb
import lightgbm as lgb
data = load_breast_cancer()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
random_state=0x0BADBEEF)
### BEGIN Solution
default_depth = 6
default_estimators = 100
clfs = {
"CatBoost" : ctb.CatBoostClassifier(logging_level='Silent',
max_depth=default_depth,
n_estimators=default_estimators),
"LGBM" : lgb.LGBMClassifier(max_depth=default_depth,
n_estimators=default_estimators),
"XGBC" : xgb.XGBClassifier(max_depth=default_depth,
n_estimators=default_estimators)
}
plt.figure(figsize=(10,8))
for key, clf in clfs.items():
probas = clf.fit(X_train, y_train).predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y_test, probas[:, 1])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=2, alpha=0.8,
label='ROC %s (AUC = %0.2f)' % (key, roc_auc))
plt.xlabel('False Positive Rate', size=14)
plt.ylabel('True Positive Rate', size=14)
plt.title('Receiver operating characteristic example', size=14)
plt.legend(loc="lower right")
plt.show()
### END Solution
tunned_params = [{
'max_depth' : range(2, 8, 2),
'n_estimators' : range(40, 160, 20)
}]
cv = StratifiedKFold(n_splits=3)
plt.figure(figsize=(10,8))
for key, clf in clfs.items():
gs = GridSearchCV(clf, tunned_params, cv=3, iid=True,
scoring='roc_auc', n_jobs=4, return_train_score=True)
gs.fit(X_train, y_train)
clf = gs.best_estimator_
probas = clf.predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y_test, probas[:, 1])
plt.plot(fpr, tpr, lw=2, alpha=0.6,
label='%s | %s = %d; %s = %d'
% (key, 'max_depth', gs.best_params_['max_depth'],
'n_estimators', gs.best_params_['n_estimators']))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
estimator_params = [{
'n_estimators' : range(40, 160, 20)
}]
depth_params = [{
'max_depth' : range(2, 10, 2),
}]
f, axes = plt.subplots(1, 2, sharex=True, figsize=(15, 7))
axes[0].set_title('Relative time')
axes[1].set_title('Absolute time')
def plot_time_chart(axes, params, key):
grid_cv = GridSearchCV(model, params, cv=3, iid=True,
scoring='roc_auc', n_jobs=4, return_train_score=True)
grid_cv.fit(X_train, y_train)
axes[0].plot(params[0][key],
np.array(grid_cv.cv_results_['mean_fit_time']) / grid_cv.cv_results_['mean_fit_time'][0],
label=grid_cv.best_estimator_.__class__.__name__)
axes[1].plot(params[0][key],
grid_cv.cv_results_['mean_fit_time'],
label=grid_cv.best_estimator_.__class__.__name__)
for model in clfs.values():
plot_time_chart(axes, estimator_params, 'n_estimators')
axes[0].legend()
axes[1].legend()
plt.tight_layout()
plt.show()
i = 0
f, axes = plt.subplots(1, 2, sharex=True, figsize=(15, 7))
axes[0].set_title('Relative time')
axes[1].set_title('Absolute time')
for model in clfs.values():
plot_time_chart(axes, depth_params, 'max_depth')
axes[0].legend()
axes[1].legend()
plt.tight_layout()
plt.show()
import torch.nn.functional as F
import matplotlib.pyplot as plt
import torch
x = torch.arange(-2, 2, .01, requires_grad=True)
x.sum().backward() # to create x.grad
f, axes = plt.subplots(2, 2, sharex=True, figsize=(15, 7))
axes[0, 0].set_title('Values')
axes[0, 1].set_title('Derivatives')
for i, function_set in (0, (('ReLU', F.relu), ('ELU', F.elu), ('Softplus', F.softplus))), \
(1, (('Sign', torch.sign), ('Sigmoid', torch.sigmoid), ('Softsign', F.softsign), ('Tanh', torch.tanh))):
for function_name, activation in function_set:
### BEGIN Solution
x.grad.data.zero_()
y = activation(x)
axes[i, 0].plot(x.data.numpy(), y.data.numpy(), label=function_name)
y.sum().backward()
axes[i, 1].plot(x.data.numpy(), x.grad.data.numpy(), label=function_name)
### END Solution
axes[i, 0].legend()
axes[i, 1].legend()
plt.tight_layout()
plt.show()
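# Brief observations from the plots above (one possible answer to the question posed in this
# task, not part of the original solution): sign is definitely a poor choice of activation --
# its derivative is zero almost everywhere, so no gradient flows through it; sigmoid, tanh and
# softsign may be poor choices in deep stacks because they saturate and their gradients vanish
# for large |x|, while ReLU, ELU and Softplus keep a usable gradient over at least half of the
# input range.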
from sklearn.metrics import confusion_matrix
from sklearn.datasets import load_digits
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
digits, targets = load_digits(return_X_y=True)
digits = digits.astype(np.float32) / 255
digits_train, digits_test, targets_train, targets_test = train_test_split(digits, targets, random_state=0)
train_size = digits_train.shape[0]
test_size = digits_test.shape[0]
input_size = 8*8
classes_n = 10
class Linear:
def __init__(self, input_size, output_size):
self.thetas = np.random.randn(input_size, output_size)
self.thetas_grads = np.empty_like(self.thetas)
self.bias = np.random.randn(output_size)
self.bias_grads = np.empty_like(self.bias)
self.input = None
self.out = None
def forward(self, x):
self.input = x
output = np.matmul(x, self.thetas) + self.bias
self.out = output
return output
def backward(self, x, output_grad):
### BEGIN Solution
self.input = self.input.reshape(-1,1)
input_grad = np.matmul(self.thetas, output_grad)
self.thetas_grads += self.input @ output_grad.T
self.bias_grads += output_grad.sum(axis=1)
assert self.thetas_grads.shape == self.thetas.shape
assert self.bias_grads.shape == self.bias.shape
### END Solution
return input_grad
class LogisticActivation:
def __init__(self):
self.input = None
self.out = None
def forward(self, x):
self.input = x
output = 1/(1 + np.exp(-x))
self.out = output
return output
def backward(self, x, output_grad):
### BEGIN Solution
self.out = self.out.reshape(-1,1)
input_grad = output_grad * self.out * (1. - self.out)
### END Solution
return input_grad
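# Note (added comment): the backward pass above relies on the identity
# sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), evaluated with the activation value
# cached in self.out during the forward pass.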
class SoftMaxActivation:
def __init__(self):
self.input = None
self.out = None
def forward(self, x):
self.input = x
output = np.exp(x) / np.exp(x).sum(axis=-1, keepdims=True)
self.out = output
return output
def backward(self, x, output_grad):
### BEGIN Solution
self.out = self.out.reshape(-1,1)
input_grad = output_grad * self.out * (1. - self.out)
### END Solution
return input_grad
class MLP:
def __init__(self, input_size, hidden_layer_size, output_size):
self.linear1 = Linear(input_size, hidden_layer_size)
self.activation1 = LogisticActivation()
self.linear2 = Linear(hidden_layer_size, output_size)
self.softmax = SoftMaxActivation()
def forward(self, x):
return self.softmax.forward((self.linear2.forward(self.activation1.forward(self.linear1.forward(x)))))
def backward(self, x, output_grad):
### BEGIN Solution
output_grad = self.linear2.backward(x, output_grad)
output_grad = self.activation1.backward(x, output_grad)
output_grad = self.linear1.backward(x, output_grad)
### END Solution
### BEGIN Solution
def cross_entropy_loss(predicted, target):
target_vector = np.zeros_like(predicted)
if (predicted.ndim != 1):
target_vector[np.arange(len(target)), target] = 1
cost = -np.sum(target_vector * np.log2(predicted), axis=1)
else:
target_vector[target] = 1
cost = -np.sum(target_vector * np.log2(predicted))
return cost
def grad_cross_entropy_loss(predicted, target):
target_vector = np.zeros_like(predicted)
target_vector[target] = 1
return (predicted - target_vector).reshape(-1, 1)
### END Solution
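# Clarifying note (not part of the original solution): for a softmax output p and a one-hot
# target y, the gradient of cross-entropy with respect to the pre-softmax logits is simply
# p - y, which is what grad_cross_entropy_loss returns; this is why MLP.backward feeds
# loss_grad directly into linear2 and never calls softmax.backward. Strictly speaking,
# because cross_entropy_loss uses log2, the gradient carries an extra 1/ln(2) factor, which
# is absorbed into the learning rate here.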
np.random.seed(0)
mlp = MLP(input_size=input_size, hidden_layer_size=100, output_size=classes_n)
epochs_n = 250
learning_curve = [0] * epochs_n
test_curve = [0] * epochs_n
x_train = digits_train
x_test = digits_test
y_train = targets_train
y_test = targets_test
learning_rate = 1e-2
for epoch in range(epochs_n):
if epoch % 10 == 0:
print('Starting epoch', epoch)
for sample_i in range(train_size):
x = x_train[sample_i]
target = y_train[sample_i]
### BEGIN Solution
# ... zero the gradients
mlp.linear1.thetas_grads = np.zeros_like(mlp.linear1.thetas_grads)
mlp.linear1.bias_grads = np.zeros_like(mlp.linear1.bias_grads)
mlp.linear2.thetas_grads = np.zeros_like(mlp.linear2.thetas_grads)
mlp.linear2.bias_grads = np.zeros_like(mlp.linear2.bias_grads)
# prediction = mlp.forward(x)
predicted_value = mlp.forward(x)
loss = cross_entropy_loss(predicted_value, target) # use cross entropy loss
loss_grad = grad_cross_entropy_loss(predicted_value, target)
learning_curve[epoch] += loss
grad = mlp.backward(x, loss_grad)
# ... perform backward pass
# ... update the weights simply with weight -= grad * learning_rate
mlp.linear1.thetas -= learning_rate * mlp.linear1.thetas_grads
mlp.linear1.bias -= learning_rate * mlp.linear1.bias_grads
mlp.linear2.thetas -= learning_rate * mlp.linear2.thetas_grads
mlp.linear2.bias -= learning_rate * mlp.linear2.bias_grads
learning_curve[epoch] /= train_size
prediction = mlp.forward(x_test)
loss = cross_entropy_loss(prediction, y_test).mean()
test_curve[epoch] = loss
### END Solution
plt.plot(learning_curve)
plt.plot(test_curve)
predictions = mlp.forward(digits).argmax(axis=1)
pd.DataFrame(confusion_matrix(targets, predictions))
from torch.utils.data import DataLoader, Dataset
import torch.nn.functional as F
import PIL.Image as Image
from torch import nn
import numpy as np
import torch.optim as optim
import matplotlib.pyplot as plt
import pandas as pd
import torch
from torchvision import transforms, utils
from PIL import Image
import os
import os.path
import sys
import progressbar
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
dt = pd.read_csv(r'data/cats_dogs/train.csv')
dt.head()
Image.open('data/' + dt['path'].iloc[1])
# Custom Dataset class for the cats vs. dogs images
class ImageFolder(Dataset):
def __init__(self, csv_file, root_dir, transform=None):
Args:
csv_file (string): Path to csv file
root_dir (string): Root directory path.
self.dt = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __getitem__(self, idx):
Args:
index (int): Index
Returns:
tuple: (sample, target) where target is class_index of the target class.
path = self.root_dir + '/' + self.dt.iloc[idx]['path']
target = self.dt.iloc[idx]['y']
with open(path, 'rb') as f:
sample= Image.open(f).convert('RGB')
sample = self.transform(sample)
return sample, target
def __len__(self):
return self.dt.shape[0]
root_dir = './data'
image_size = 224
batch_size = 8
workers = 2
ngpu = 2
dataset = ImageFolder('data/cats_dogs/train.csv', root_dir)
len(dataset)
data_transform_train = transforms.Compose([
transforms.RandomResizedCrop(image_size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
data_transform_test = transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
### BEGIN Solution
dataset_train = ImageFolder('data/cats_dogs/train.csv', root_dir, transform=data_transform_train)
dataset_val = ImageFolder('data/cats_dogs/validation.csv', root_dir, transform=data_transform_test)
train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=batch_size,
shuffle=True, num_workers=workers)
val_loader = torch.utils.data.DataLoader(dataset_val, batch_size=batch_size,
shuffle=False, num_workers=workers)
### END Solution
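# Note on the transforms above (added for clarity): random resized crops and horizontal flips
# are applied only to the training set as data augmentation, while the validation set gets a
# deterministic resize + center crop so its scores are comparable between runs; shuffling is
# likewise enabled only for the training DataLoader.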
for X,y in train_loader:
print(X[0])
print(y[0])
plt.imshow(np.array(X[0,0,:,:]))
break
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.convol = nn.Sequential(
nn.Conv2d(3, 16, 3, padding=1),
nn.ReLU(),
nn.MaxPool2d(2, 2),
nn.Conv2d(16, 32, 3, padding=1),
nn.ReLU(),
nn.MaxPool2d(2, 2),
nn.Conv2d(32, 64, 3, padding=1),
nn.ReLU(),
nn.MaxPool2d(2, 2),
nn.Dropout(),
nn.ReLU()
)
self.linear = nn.Sequential(
nn.Linear(64 * 28 * 28, 256),
nn.ReLU(),
nn.Linear(256, 84),
nn.ReLU(),
nn.Dropout(),
nn.Linear(84, 2),
nn.LogSoftmax(dim=1)
)
def forward(self, input):
x = self.convol(input)
x = x.view(x.size(0), -1)
x = self.linear(x)
return x
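# Shape check for the classifier head (added comment): with 224x224 inputs, the three 2x2
# max-pool stages give 224 -> 112 -> 56 -> 28, so the last conv block outputs 64 feature maps
# of size 28x28 and the first Linear layer therefore expects 64 * 28 * 28 input features.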
def create_model(net, device):
model = net.to(device)
if (device.type == 'cuda') and (ngpu > 1):
model = nn.DataParallel(model, list(range(ngpu)))
return model
### BEGIN Solution
criterion = nn.CrossEntropyLoss().to(device)  # .to(device) keeps this working on CPU as well as GPU
### END Solution
def save_checkpoint(model, path):
torch.save({
'model_state_dict': model.state_dict(),
}, path)
def load_checkpoint(model, path):
checkpoint = torch.load(path)
model.load_state_dict(checkpoint['model_state_dict'])
def accuracy_score(model):
correct = 0
total = 0
model.eval()
with torch.no_grad():
for data in val_loader:
samples = data[0].to(device)
labels = data[1].to(device)
outputs = model(samples)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network: %d %%' % (100 * correct / total))
model.train()
### BEGIN Solution
def train(model, optimizer):
model_losses_train = []
model_losses_val = []
model_accuracy_val = []
num_epochs = 100
it = 0
with progressbar.ProgressBar(max_value = num_epochs * len(train_loader)) as bar:
for epoch in range(num_epochs):
train_loss = 0
for i, data in enumerate(train_loader, 0):
model.zero_grad()
labels = data[1].to(device)
samples = data[0].type(torch.FloatTensor).to(device)
output = model(samples)
err = criterion(output, labels)
err.backward()
optimizer.step()
train_loss += err.item()
bar.update(it)
it += 1
if epoch % 5 == 0:
model_losses_train.append(train_loss / len(train_loader))
model.eval()
with torch.no_grad():
test_loss = 0
for data_val in val_loader:
labels_val = data_val[1].to(device)
samples_val = data_val[0].type(torch.FloatTensor).to(device)
output_val = model(samples_val)
err_val = criterion(output_val, labels_val)
test_loss += err_val.item()
model_losses_val.append(test_loss / len(val_loader))
model.train()
plt.figure(figsize=(10,5))
plt.title("CE Loss During Training")
plt.plot(model_losses_train, label="Train")
plt.plot(model_losses_val, label="Validation")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
### END Solution
# Model = Net(), optimizer = Adam, lr = 1e-5
model = create_model(Net(), device)
lr = 1e-5
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-7)
train(model, optimizer)
save_checkpoint(model, './adam_relu_lr=1e-5')
accuracy_score(model)
# Model = Net(), optimizer = Adam, lr = 1e-7
model = create_model(Net(), device)
lr = 1e-7
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-7)
train(model, optimizer)
save_checkpoint(model, './adam_relu_lr=1e-7')
accuracy_score(model)
# Model = Net(), optimizer = SGD, lr = 1e-5, momentum=0.9
model = create_model(Net(), device)
lr = 1e-5
optimizer = optim.SGD(model.parameters(), lr = 1e-5, momentum = 0.9)
train(model, optimizer)
save_checkpoint(model, './sgd_relu_lr=1e-5')
accuracy_score(model)
model = create_model(Net(), device)
load_checkpoint(model, './adam_relu_lr=1e-5')
accuracy_score(model)
class Sign(nn.Module):
def __init__(self):
super(Sign, self).__init__()
def forward(self, input):
return torch.sign(input)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.convol = nn.Sequential(
nn.Conv2d(3, 16, 3, padding=1),
Sign(),
nn.MaxPool2d(2, 2),
nn.Conv2d(16, 32, 3, padding=1),
Sign(),
nn.MaxPool2d(2, 2),
nn.Conv2d(32, 64, 3, padding=1),
Sign(),
nn.MaxPool2d(2, 2),
nn.Dropout(),
Sign()
)
self.linear = nn.Sequential(
nn.Linear(64 * 28 * 28, 256),
Sign(),
nn.Linear(256, 84),
Sign(),
nn.Dropout(),
nn.Linear(84, 2),
nn.LogSoftmax(dim=1)
)
def forward(self, input):
x = self.convol(input)
x = x.view(x.size(0), -1)
x = self.linear(x)
return x
model = create_model(Net(), device)
lr = 1e-5
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-7)
train(model, optimizer)
accuracy_score(model)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Look at the first 10 rows of this dataset.
Step2: The dataset has many NaN's and also a lot of categorical features. So at first, you should preprocess the data. We can deal with categorical features by using one-hot encoding. To do that we can use pandas.get_dummies (a minimal sketch follows this step list).
Step3: Okay, now let's see how much data we have.
Step4: There are too many features in this dataset and not all of them are equally important for our problem. Besides, using the whole dataset as-is to train a linear model will almost certainly lead to overfitting. Instead of painful and time-consuming manual selection of the most relevant features, we will use automatic feature-selection methods.
Step5: It has a large variance and is far from normally distributed. In real-world problems this happens all the time
Step6: Now it looks more like the data we want to deal with.
Step7: Print the 20 most important features and their values.
Step8: <br>
Step10: You will need to do it for the next tasks too, so we recommend you to implement
Step11: Now apply this function
Step14: <br>
Step15: Use the function implemented above with DecisionTreeRegressor to get the best features according to this algorithm, and print them.
Step16: Use Linear Regression, Ridge regression, Random forest and DecisionTree to get the RMSLE score using these features. Remember the function you wrote earlier.
Step17: <br>
Step18: Your task is
Step19: <br/> <!--Intentionally left blank-->
Step20: <br>
Step21: Answer the following questions. Which of these functions may be, and which definitely are, a poor choice as an activation function in a neural network? Why?
Step22: Prepare the dataset.
Step23: Implement the MLP with backprop.
Step24: Task 8 (3 pt.)
Step25: Make sure you are using the right device.
Step26: First take a look at the data.
Step29: Implement your Dataset class.
Step30: Define the augmentation transform and instantiate training and validation subsets of your Dataset and the corresponding DataLoaders.
Step31: Make sure that dataloader works as expected by observing one sample from it.
Step32: Implement your model below. You can use any layers that you want, but in general the structure of your model should be
Step33: Send your model to GPU, if you have it.
Step34: Implement your loss function below, or use the predefined loss, suitable for this task.
Step35: Try two different optimizers and choose one. For the optimizer of your choice, try two different sets of parameters (e.g. learning rate). Explain both of your choices and back them with the learning performance of the network (see the rest of the task).
Step36: Load Checkpoints and check accuracy score
Step37: Task 9 (1 pt.)
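A minimal sketch of Step2's one-hot encoding with pandas.get_dummies (the column names below are made up for illustration, not taken from the assignment's dataset):
import pandas as pd
df = pd.DataFrame({'color': ['red', 'green', 'red'], 'price': [3, 5, 4]})
encoded = pd.get_dummies(df, columns=['color'])  # adds binary color_green / color_red columns
print(encoded)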
|
3,383
|
<ASSISTANT_TASK:>
Python Code:
x = np.mat(np.arange(-1.,1.,0.01)).T
N = len(x)
degree = 10
#A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2)))
B = np.hstack([np.power(x, i) for i in range(degree+1)])
B = B[:,np.random.permutation(degree+1)]
#B = np.hstack((np.power(x,i) for i in range(degree+1)))
#B = np.random.randn(N,degree+1)
A, R, = np.linalg.qr(B)
# Append an extra basis for outliers
K = A.shape[1]
plt.figure(figsize=(5,20))
#plt.show()
idx = list(range(0, N, 20))
idx.append(N-1)
for i in range(K):
plt.subplot(K,1,i+1)
plt.stem(x[idx],A[idx,i])
plt.plot(x,A[:,i])
plt.gcf().gca().set_xlim([-1.2, 1.2])
plt.gcf().gca().set_ylim([-0.3, 0.3])
plt.gcf().gca().axis('off')
plt.show()
import numpy as np
import numpy.linalg as la
np.set_printoptions(precision=3, suppress=True)
def house(x):
'''Householder Reflection'''
m = len(x)
e = np.mat(np.zeros((m,1)))
e[0] = 1
v = la.norm(x)*e - x
z = v/la.norm(v)
H = np.eye(m) - 2*z*z.T
return z,H
x = np.mat('[1;2;3;4]')
z,H = house(x)
print(x)
print(H*x)
print(x - 2*z*(z.T*x))
np.set_printoptions(precision=3, suppress=True)
m = 5
n = 7
A = np.mat(np.random.randn(m, n))
R = A.copy()
Q = np.eye(m)
print('A')
print(A)
for i in range(np.min((n,m))-1):
z,H = house(R[i:,i])
R[i:,i:] = H*R[i:,i:]
Q[:,i:] = Q[:,i:]*H.T
#print('q =')
#print(q)
#print('Q =')
#print(Q)
print('R =')
print(R)
print('--'*4)
print(Q*R)
print('--'*4)
print(A)
def random_house(m, k=None):
'''Generate a random Householder Reflection'''
if k is None:
k = m
e = np.mat(np.zeros((k,1)))
e[0] = 1
x = np.mat(np.random.randn(k,1))
v = la.norm(x)*e - x
q = v/la.norm(v)
q = np.vstack((np.zeros((m-k,1)), q))
H = np.eye(m) - 2*q*q.T
return q,H
q, H = random_house(5)
print(q)
np.linalg.matrix_rank(H)
N = 5
A = [random_house(N,k)[1] for k in range(N,1,-1)]
qq = np.hstack([random_house(N,k)[0] for k in range(N,1,-1)])
qq
A[:,0].T*A[:,1]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Householder Reflection
Step2: We have explicitly constructed $H$ (or $P$), but this is not needed. It is sufficient to store the Householder vectors $v$ to construct/implement transformations by $H$.
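A minimal sketch of that point, reusing the house function defined above: the product $HM$ can be formed from the stored (normalized) vector alone, without ever building $H$.
z, H = house(np.mat('[1.;2.;3.;4.]'))
M = np.mat(np.random.randn(4, 3))
print(np.allclose(H*M, M - 2*z*(z.T*M)))  # True: applying H column-by-column using only z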
|
3,384
|
<ASSISTANT_TASK:>
Python Code:
import psycopg2
with psycopg2.connect(database='radon_fingerprints',
host='localhost',
port=5437) as conn:
with conn.cursor() as cursor:
cursor.execute("CREATE EXTENSION IF NOT EXISTS plpythonu;")
cursor.execute("""
CREATE OR REPLACE FUNCTION prep_sum2d ()
RETURNS integer
AS $$
# GD is a so-called "global dictionary" made available
# to us by PL/Python. It allows us to share information
# across functions/code within a single session. We
# will use it to "share" our Numba-compiled function
# with a different Python function defined later in
# this cell.
if 'numba' in GD and 'numpy' in GD:
numpy = GD['numpy']
numba = GD['numba']
else:
import numpy
import numba
GD['numpy'] = numpy
GD['numba'] = numba
# Define our compute-intensive function to play with.
# (This is the example offered on the main Numba webpage.)
def sum2d(arr):
M, N = arr.shape
result = 0.0
for i in range(M):
for j in range(N):
result += arr[i,j]
return result
# Store it in PL/Python's special 'GD' dict for ease of later use.
GD['sum2d'] = sum2d
# Compile a version of sum2d using Numba, and store it for later use.
jitsum2d = numba.jit(sum2d, target='gpu')
csum2d = jitsum2d.compile(numba.double(numba.double[:,::1]))
GD['jitsum2d'] = jitsum2d
GD['csum2d'] = csum2d
return 1
$$ LANGUAGE plpythonu;
""")
#cursor.execute("DROP FUNCTION speedtest_sum2d();")
cursor.execute("""
CREATE OR REPLACE FUNCTION speedtest_sum2d ()
RETURNS float
AS $$
import time
if 'numba' in GD and 'numpy' in GD:
numpy = GD['numpy']
numba = GD['numba']
else:
import numpy
import numba
GD['numpy'] = numpy
GD['numba'] = numba
sum2d = GD['sum2d']
jitsum2d = GD['jitsum2d']
csum2d = GD['csum2d']
# Create some random input data to play with.
arr = numpy.random.randn(100, 100)
# Exercise the pure-Python function, sum2d.
start = time.time()
res = sum2d(arr)
duration = time.time() - start
plpy.log("Result from python is %s in %s (msec)" % (res, duration*1000))
csum2d(arr) # Warm up
# Exercise the Numba version of that same function, csum2d.
start = time.time()
res = csum2d(arr)
duration2 = time.time() - start
plpy.log("Result from compiled is %s in %s (msec)" % (res, duration2*1000))
plpy.log("Speed up is %s" % (duration / duration2))
return (duration / duration2)
$$ LANGUAGE plpythonu;
""")
conn.commit()
with psycopg2.connect(database='radon_fingerprints',
host='localhost',
port=5437) as conn:
with conn.cursor() as cursor:
cursor.execute("SELECT prep_sum2d();")
rows = cursor.fetchall()
conn.commit()
rows
with psycopg2.connect(database='radon_fingerprints',
host='localhost',
port=5437) as conn:
with conn.cursor() as cursor:
cursor.execute("SELECT prep_sum2d();")
cursor.execute("SELECT speedtest_sum2d();")
rows = cursor.fetchall()
conn.commit()
rows
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Create Compute-Intensive Python Function
Step4: Quick Test on the Setup Function
Step5: Compare Performance of the Numba Compiled Version with the Pure-Python Function
|
3,385
|
<ASSISTANT_TASK:>
Python Code:
# Author: Remi Flamary <remi.flamary@unice.fr>
#
# License: MIT License
import numpy as np
import matplotlib.pylab as pl
import ot
# necessary for 3d plot even if not used
from mpl_toolkits.mplot3d import Axes3D # noqa
from matplotlib.collections import PolyCollection # noqa
#import ot.lp.cvx as cvx
#%% parameters
problems = []
n = 100 # nb bins
# bin positions
x = np.arange(n, dtype=np.float64)
# Gaussian distributions
a1 = ot.datasets.make_1D_gauss(n, m=20, s=5) # m= mean, s= std
a2 = ot.datasets.make_1D_gauss(n, m=60, s=8)
# creating matrix A containing all distributions
A = np.vstack((a1, a2)).T
n_distributions = A.shape[1]
# loss matrix + normalization
M = ot.utils.dist0(n)
M /= M.max()
#%% plot the distributions
pl.figure(1, figsize=(6.4, 3))
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.tight_layout()
#%% barycenter computation
alpha = 0.5 # 0<=alpha<=1
weights = np.array([1 - alpha, alpha])
# l2bary
bary_l2 = A.dot(weights)
# wasserstein
reg = 1e-3
ot.tic()
bary_wass = ot.bregman.barycenter(A, M, reg, weights)
ot.toc()
ot.tic()
bary_wass2 = ot.lp.barycenter(A, M, weights, solver='interior-point', verbose=True)
ot.toc()
pl.figure(2)
pl.clf()
pl.subplot(2, 1, 1)
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.subplot(2, 1, 2)
pl.plot(x, bary_l2, 'r', label='l2')
pl.plot(x, bary_wass, 'g', label='Reg Wasserstein')
pl.plot(x, bary_wass2, 'b', label='LP Wasserstein')
pl.legend()
pl.title('Barycenters')
pl.tight_layout()
problems.append([A, [bary_l2, bary_wass, bary_wass2]])
#%% parameters
a1 = 1.0 * (x > 10) * (x < 50)
a2 = 1.0 * (x > 60) * (x < 80)
a1 /= a1.sum()
a2 /= a2.sum()
# creating matrix A containing all distributions
A = np.vstack((a1, a2)).T
n_distributions = A.shape[1]
# loss matrix + normalization
M = ot.utils.dist0(n)
M /= M.max()
#%% plot the distributions
pl.figure(1, figsize=(6.4, 3))
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.tight_layout()
#%% barycenter computation
alpha = 0.5 # 0<=alpha<=1
weights = np.array([1 - alpha, alpha])
# l2bary
bary_l2 = A.dot(weights)
# wasserstein
reg = 1e-3
ot.tic()
bary_wass = ot.bregman.barycenter(A, M, reg, weights)
ot.toc()
ot.tic()
bary_wass2 = ot.lp.barycenter(A, M, weights, solver='interior-point', verbose=True)
ot.toc()
problems.append([A, [bary_l2, bary_wass, bary_wass2]])
pl.figure(2)
pl.clf()
pl.subplot(2, 1, 1)
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.subplot(2, 1, 2)
pl.plot(x, bary_l2, 'r', label='l2')
pl.plot(x, bary_wass, 'g', label='Reg Wasserstein')
pl.plot(x, bary_wass2, 'b', label='LP Wasserstein')
pl.legend()
pl.title('Barycenters')
pl.tight_layout()
#%% parameters
a1 = np.zeros(n)
a2 = np.zeros(n)
a1[10] = .25
a1[20] = .5
a1[30] = .25
a2[80] = 1
a1 /= a1.sum()
a2 /= a2.sum()
# creating matrix A containing all distributions
A = np.vstack((a1, a2)).T
n_distributions = A.shape[1]
# loss matrix + normalization
M = ot.utils.dist0(n)
M /= M.max()
#%% plot the distributions
pl.figure(1, figsize=(6.4, 3))
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.tight_layout()
#%% barycenter computation
alpha = 0.5 # 0<=alpha<=1
weights = np.array([1 - alpha, alpha])
# l2bary
bary_l2 = A.dot(weights)
# wasserstein
reg = 1e-3
ot.tic()
bary_wass = ot.bregman.barycenter(A, M, reg, weights)
ot.toc()
ot.tic()
bary_wass2 = ot.lp.barycenter(A, M, weights, solver='interior-point', verbose=True)
ot.toc()
problems.append([A, [bary_l2, bary_wass, bary_wass2]])
pl.figure(2)
pl.clf()
pl.subplot(2, 1, 1)
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.subplot(2, 1, 2)
pl.plot(x, bary_l2, 'r', label='l2')
pl.plot(x, bary_wass, 'g', label='Reg Wasserstein')
pl.plot(x, bary_wass2, 'b', label='LP Wasserstein')
pl.legend()
pl.title('Barycenters')
pl.tight_layout()
#%% plot
nbm = len(problems)
nbm2 = (nbm // 2)
pl.figure(2, (20, 6))
pl.clf()
for i in range(nbm):
A = problems[i][0]
bary_l2 = problems[i][1][0]
bary_wass = problems[i][1][1]
bary_wass2 = problems[i][1][2]
pl.subplot(2, nbm, 1 + i)
for j in range(n_distributions):
pl.plot(x, A[:, j])
if i == nbm2:
pl.title('Distributions')
pl.xticks(())
pl.yticks(())
pl.subplot(2, nbm, 1 + i + nbm)
pl.plot(x, bary_l2, 'r', label='L2 (Euclidean)')
pl.plot(x, bary_wass, 'g', label='Reg Wasserstein')
pl.plot(x, bary_wass2, 'b', label='LP Wasserstein')
if i == nbm - 1:
pl.legend()
if i == nbm2:
pl.title('Barycenters')
pl.xticks(())
pl.yticks(())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Gaussian Data
Step2: Dirac Data
Step3: Final figure
|
3,386
|
<ASSISTANT_TASK:>
Python Code:
file_listcal = "alma_sourcecat_searchresults_20180419.csv"
q = databaseQuery()
listcal = q.read_calibratorlist(file_listcal, fluxrange=[0.1, 999999])
len(listcal)
print("Name: ", listcal[0][0])
print("J2000 RA, dec: ", listcal[0][1], listcal[0][2])
print("Alias: ", listcal[0][3])
print("Flux density: ", listcal[0][4])
print("Band: ", listcal[0][5])
print("Freq: ", listcal[0][6])
print("Obs date: ", listcal[0][4])
report, resume = q.select_object_from_sqldb("calibrators_brighterthan_0.1Jy_20180419.db", \
maxFreqRes=999999999, array='12m', \
excludeCycle0=True, \
selectPol=False, \
minTimeBand={3:60., 6:60., 7:60.}, \
silent=True)
print("Name: ", resume[0][0])
print("From NED: ")
print("Name: ", resume[0][3])
print("J2000 RA, dec: ", resume[0][4], resume[0][5])
print("z: ", resume[0][6])
print("Total # of projects: ", resume[0][7])
print("Total # of UIDs: ", resume[0][8])
print("Gal lon: ", resume[0][9])
print("Gal lat: ", resume[0][10])
report_non, resume_non = q.select_object_from_sqldb("calibrators_brighterthan_0.1Jy_20180419.db", \
maxFreqRes=999999999, array='12m', \
excludeCycle0=True, \
selectPol=False, \
minTimeBand={3:60., 6:60., 7:60.}, \
nonALMACAL = True, \
silent=True)
report_almacal, resume_almacal = q.select_object_from_sqldb("calibrators_brighterthan_0.1Jy_20180419.db", \
maxFreqRes=999999999, array='12m', \
excludeCycle0=True, \
selectPol=False, \
minTimeBand={3:60., 6:60., 7:60.}, \
onlyALMACAL = True, \
silent=True)
for i, obj in enumerate(resume):
for j, cal in enumerate(listcal):
if obj[0] == cal[0]: # same name
obj.append(cal[4:]) # add [flux, band, flux obsdate] in the "resume"
for i, obj in enumerate(resume_non):
for j, cal in enumerate(listcal):
if obj[0] == cal[0]: # same name
obj.append(cal[4:]) # add [flux, band, flux obsdate] in the "resume"
for i, obj in enumerate(resume_almacal):
for j, cal in enumerate(listcal):
if obj[0] == cal[0]: # same name
obj.append(cal[4:]) # add [flux, band, flux obsdate] in the "resume"
def collect_z_and_flux(Band, resume_array):
z = []
flux = []
for idata in resume_array:
if idata[6] is not None: # select object which has redshift information
fluxnya = idata[-1][0]
bandnya = idata[-1][1]
freqnya = idata[-1][2]
datenya = idata[-1][3]
for i, band in enumerate(bandnya):
if band == str(Band): # take only first data
flux.append(fluxnya[i])
z.append(idata[6])
break
return z, flux
z3, f3 = collect_z_and_flux(3, resume)
print("Number of seleted source in B3: ", len(z3))
z6, f6 = collect_z_and_flux(6, resume)
print("Number of seleted source in B6: ", len(z6))
z7, f7 = collect_z_and_flux(7, resume)
print("Number of seleted source in B7: ", len(z7))
znon3, fnon3 = collect_z_and_flux(3, resume_non)
print("Number of seleted source in B3: ", len(znon3))
znon6, fnon6 = collect_z_and_flux(6, resume_non)
print("Number of seleted source in B6: ", len(znon6))
znon7, fnon7 = collect_z_and_flux(7, resume_non)
print("Number of seleted source in B7: ", len(znon7))
zalm3, falm3 = collect_z_and_flux(3, resume_almacal)
print("Number of seleted source in B3: ", len(zalm3))
zalm6, falm6 = collect_z_and_flux(6, resume_almacal)
print("Number of seleted source in B6: ", len(zalm6))
zalm7, falm7 = collect_z_and_flux(7, resume_almacal)
print("Number of seleted source in B7: ", len(zalm7))
plt.figure(figsize=(15,10))
plt.subplot(221)
plt.plot(znon3, fnon3, 'r.', zalm3, falm3, 'b.')
plt.xlabel("z")
plt.ylabel("Flux density (Jy)")
plt.title("B3")
plt.subplot(222)
plt.plot(znon6, fnon6, 'r.', zalm6, falm6, 'b.')
plt.xlabel("z")
plt.ylabel("Flux density (Jy)")
plt.title("B6")
plt.subplot(223)
plt.plot(znon7, fnon7, 'r.', zalm7, falm7, 'b.')
plt.xlabel("z")
plt.ylabel("Flux density (Jy)")
plt.title("B7")
# plt.subplot(224)
# plt.plot(z3, f3, 'ro', z6, f6, 'go', z7, f7, 'bo', alpha=0.3)
# plt.xlabel("z")
# plt.ylabel("Flux density (Jy)")
# plt.title("B3, B6, B7")
# from matplotlib import rcParams
# rcParams['font.family'] = ['Humor Sans', 'Comic Sans MS']
# rcParams['font.size'] = 14.0
# rcParams['axes.linewidth'] = 1.5
# rcParams['lines.linewidth'] = 2.0
# rcParams['figure.facecolor'] = 'white'
# rcParams['grid.linewidth'] = 0.0
# rcParams['axes.unicode_minus'] = False
# rcParams['xtick.major.size'] = 8
# rcParams['xtick.major.width'] = 3
# rcParams['ytick.major.size'] = 8
# rcParams['ytick.major.width'] = 3
plt.xkcd()
plt.figure(figsize=(10,6))
#plt.subplot(121)
plt.hist(z3, bins=20, label="all sample", histtype="step", linewidth=4)
plt.hist(znon3, bins=20, label="our final target", histtype="step", linewidth=4)
plt.hist(zalm3, bins=20, label="ALMACAL I", histtype="step", linewidth=4)
plt.xlabel(r"Redshift $(z)$")
plt.ylabel("Number of Calibrator")
plt.title("Fig 1. Redshift distribution of sample")
plt.legend()
# plt.subplot(122)
# plt.hist(z3, bins=15, label="all", normed=True, cumulative=True, histtype='step')
# plt.hist(znon3, bins=15, label="nonALMACAL", normed=True, cumulative=True, histtype='step')
# plt.hist(zalm3, bins=15, label="ALMACAL", normed=True, cumulative=True, histtype='step')
# plt.legend()
# plt.xlabel(r"$z$")
# plt.ylabel("Normalized-Cumulative N")
# plt.show()
plt.savefig("sample_redshift.png", transparent=True, bbox_tight=True)
plt.title?
from astropy.cosmology import FlatLambdaCDM
cosmo = FlatLambdaCDM(H0=70, Om0=0.3, Tcmb0=2.725)
def calc_power(z, flux):
"""z: redshift, flux in Jy"""
z = np.array(z)
flux = np.array(flux)
dL = cosmo.luminosity_distance(z).to(u.meter).value # Luminosity distance
luminosity = 4.0*np.pi*dL*dL/(1.0+z) * flux * 1e-26
return z, luminosity
z3, l3 = calc_power(z3, f3)
z6, l6 = calc_power(z6, f6)
z7, l7 = calc_power(z7, f7)
znon3, lnon3 = calc_power(znon3, fnon3)
znon6, lnon6 = calc_power(znon6, fnon6)
znon7, lnon7 = calc_power(znon7, fnon7)
zalm3, lalm3 = calc_power(zalm3, falm3)
zalm6, lalm6 = calc_power(zalm6, falm6)
zalm7, lalm7 = calc_power(zalm7, falm7)
zdummy = np.linspace(0.001, 2.5, 100)
fdummy = 0.1 # Jy, our cut threshold
zdummy, Ldummy0 = calc_power(zdummy, fdummy)
zdummy, Ldummy3 = calc_power(zdummy, np.max(f3))
zdummy, Ldummy6 = calc_power(zdummy, np.max(f6))
zdummy, Ldummy7 = calc_power(zdummy, np.max(f7))
plt.figure(figsize=(15,10))
plt.subplot(221)
plt.plot(znon3, np.log10(lnon3), 'ro', zalm3, np.log10(lalm3), 'bo', \
zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy3), 'r--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3")
plt.subplot(222)
plt.plot(znon6, np.log10(lnon6), 'ro', zalm6, np.log10(lalm6), 'bo', \
zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy6), 'g--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B6")
plt.subplot(223)
plt.plot(znon7, np.log10(lnon7), 'ro', zalm7, np.log10(lalm7), 'bo', \
zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy7), 'b--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B7")
# plt.subplot(224)
# plt.plot(znon3, np.log10(lnon3), 'r*', znon6, np.log10(lnon6), 'g*', znon7, np.log10(lnon7), 'b*', \
# zdummy, np.log10(Ldummy0), 'k--', \
# zdummy, np.log10(Ldummy3), 'r--', \
# zdummy, np.log10(Ldummy6), 'g--', \
# zdummy, np.log10(Ldummy7), 'b--', alpha=0.5)
# plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3, B6, B7")
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.hist(l3, bins=20, label="all", histtype="step")
plt.hist(lnon3, bins=20, label="nonALMACAL", histtype="step")
plt.hist(lalm3, bins=20, label="ALMACAL", histtype="step")
plt.legend()
plt.xlabel(r"$L_{\nu}$")
plt.ylabel("N")
plt.subplot(122)
plt.hist(l3, bins=15, label="all", normed=True, cumulative=True, histtype='step')
plt.hist(lnon3, bins=15, label="nonALMACAL", normed=True, cumulative=True, histtype='step')
plt.hist(lalm3, bins=15, label="ALMACAL", normed=True, cumulative=True, histtype='step')
plt.legend()
plt.xlabel(r"$L_{\nu}$")
plt.ylabel("Normalized-Cumulative N")
plt.show()
from scipy.stats import ks_2samp
print(ks_2samp(z3, znon3))
print(ks_2samp(z3, zalm3))
print(ks_2samp(znon3, zalm3))
print(ks_2samp(l3, lnon3))
print(ks_2samp(l3, lalm3))
print(ks_2samp(lnon3, lalm3))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example, retrieve all the calibrators with a flux > 0.1 Jy
Step2: Select all calibrators that have been observed in at least 3 bands [>60 s in B3, B6, B7]
Step3: We can write a "report file" or only use the "resume data", some will have redshift data retrieved from NED.
Step4: Sometimes there is no redshift information found in NED
Step5: Select objects which have a redshift
Step6: Plot Flux vs Redshift, NonALMACAL vs ALMACAL
Step7: Distribution of redshift
Step8: Plot log(Luminosity) vs redshift, nonALMACAL vs ALMACAL
Step10: How to calculate luminosity (the formula used is restated after this step list)
Step11: Plot $\log_{10}(L)$ vs $z$
Step12: The black dashed lines are for the 0.1 Jy flux cut.
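For Step10, the luminosity computed by calc_power above is simply a restatement of
$$ L_{\nu} = \frac{4\pi d_L^2}{1+z}\, S_{\nu}, $$
where $d_L$ is the luminosity distance, $S_{\nu}$ is the flux density (the $10^{-26}$ factor in the code converts Jy to SI units), and the $1/(1+z)$ factor follows the convention used in the code.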
|
3,387
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Import the TensorFlow package
import tensorflow as tf
mirrored_strategy = tf.distribute.MirroredStrategy()
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
# Compute the global batch size from the number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.keras.optimizers.SGD()
with mirrored_strategy.scope():
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(
global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
@tf.function
def train_step(dist_inputs):
def step_fn(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
logits = model(features)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=labels)
loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
return cross_entropy
per_example_losses = mirrored_strategy.run(
step_fn, args=(dist_inputs,))
mean_loss = mirrored_strategy.reduce(
tf.distribute.ReduceOp.MEAN, per_example_losses, axis=0)
return mean_loss
with mirrored_strategy.scope():
for inputs in dist_dataset:
print(train_step(inputs))
with mirrored_strategy.scope():
iterator = iter(dist_dataset)
for _ in range(10):
print(train_step(next(iterator)))
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Distributed training with TensorFlow
Step2: Types of strategies
Step3: A MirroredStrategy instance has been created. It will use all GPUs visible to TensorFlow, and NCCL for cross-device communication.
Step4: If you want to change how devices communicate, pass an instance of type tf.distribute.CrossDeviceOps to the cross_device_ops argument. Besides tf.distribute.NcclAllReduce, the current default, two more options are provided: tf.distribute.HierarchicalCopyAllReduce and tf.distribute.ReductionToOneDevice.
Step5: CentralStorageStrategy
Step6: A CentralStorageStrategy instance has been created. It uses all visible GPUs and the CPU. Variable updates from each replica are aggregated before being applied to the variables.
Step7: There are currently two collective-op implementations available for MultiWorkerMirroredStrategy. CollectiveCommunication.RING implements ring-based collectives using gRPC. CollectiveCommunication.NCCL implements the collectives with Nvidia's NCCL. With CollectiveCommunication.AUTO, the runtime picks the implementation itself. The best collective implementation depends on the number and kind of GPUs, the network connectivity of the cluster, and so on. For example, you can specify it as follows.
Step8: Compared with using multiple GPUs, the biggest difference when using multiple workers is the multi-worker setup. The standard TensorFlow way is to configure the cluster through the "TF_CONFIG" environment variable on every worker in the cluster. The "TF_CONFIG" section below looks at how to do this in detail (a minimal example also follows this step list).
Step9: Because the example above uses MirroredStrategy, it can be used on a single machine with multiple GPUs. strategy.scope() marks the part of the code to be distributed. Creating the model inside this scope creates mirrored variables instead of ordinary variables, and compiling inside the scope means that you intend to train the model with this strategy. Once this is set up, call the model's fit function as you normally would.
Step10: Above, tf.data.Dataset was used for the training and evaluation inputs. NumPy arrays can be used as well.
Step11: In both cases, datasets or NumPy, each input batch is split evenly across the replicated jobs. For example, using MirroredStrategy with 2 GPUs, a batch of size 10 is split between the two GPUs, so each GPU receives 5 inputs per step. Training time per epoch therefore decreases as GPUs are added. In general, you also increase the batch size as you add accelerators, in order to use the added computing resources more effectively. Depending on the model, you may also have to retune the learning rate. The number of replicas can be obtained from strategy.num_replicas_in_sync.
Step12: What is currently supported?
Step13: Next, create the input dataset and call the tf.distribute.Strategy.experimental_distribute_dataset method to distribute the dataset according to the strategy.
Step14: Then define a single training step. Use tf.GradientTape to compute gradients, and an optimizer to apply those gradients and update the model's variables. For distributed training, implement this training work inside a step_fn function, and pass step_fn, together with the input data obtained from the dist_dataset created earlier, to the tf.distribute.Strategy.experimental_run_v2 method.
Step15: There are a few more points worth noting in the code above.
Step16: In the example above, the training inputs were obtained by iterating over dist_dataset. NumPy inputs can also be used with tf.distribute.Strategy.make_experimental_numpy_dataset: create the dataset with this API before calling the tf.distribute.Strategy.experimental_distribute_dataset function.
Step17: This covered the simplest case of distributing a custom training loop with the tf.distribute.Strategy API. The API is currently being improved; since it still requires quite a bit of work on the user's side, a separate, more detailed guide will explain it later.
Step18: The example above uses a premade Estimator, but a custom Estimator works with the same code. train_distribute specifies how training is distributed, and eval_distribute specifies how evaluation is distributed. This differs from Keras, where the same distribution strategy was used for both training and evaluation.
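For Step8, a minimal sketch of a "TF_CONFIG" value for a two-worker cluster (the hostnames and ports below are placeholders, not from the original guide):
import json, os
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1.example.com:12345", "host2.example.com:23456"]},
    "task": {"type": "worker", "index": 0}  # this process is worker 0
})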
|
3,388
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
# Set some Pandas options
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 20)
pd.set_option('display.max_rows', 25)
from datetime import datetime
now = datetime.now()
now
now.day
now.weekday()
from datetime import date, time
time(3, 24)
date(1970, 9, 3)
my_age = now - datetime(1970, 9, 3)
my_age
my_age.days/365.
segments = pd.read_csv("data/AIS/transit_segments.csv")
segments.head()
segments.seg_length.hist(bins=500)
segments.seg_length.apply(np.log).hist(bins=500)
segments.st_time.dtype
datetime.strptime(segments.st_time.ix[0], '%m/%d/%y %H:%M')
from dateutil.parser import parse
parse(segments.st_time.ix[0])
segments.st_time.apply(lambda d: datetime.strptime(d, '%m/%d/%y %H:%M'))
pd.to_datetime(segments.st_time)
pd.to_datetime([None])
vessels = pd.read_csv("data/AIS/vessel_information.csv", index_col='mmsi')
vessels.head()
[v for v in vessels.type.unique() if v.find('/')==-1]
vessels.type.value_counts()
df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))
df2 = pd.DataFrame(dict(id=range(3)+range(3), score=np.random.random(size=6)))
df1, df2
pd.merge(df1, df2)
pd.merge(df1, df2, how='outer')
segments.head(1)
vessels.head(1)
segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')
segments_merged.head()
vessels.merge(segments, left_index=True, right_on='mmsi').head()
segments['type'] = 'foo'
pd.merge(vessels, segments, left_index=True, right_on='mmsi').head()
np.concatenate([np.random.random(5), np.random.random(5)])
np.r_[np.random.random(5), np.random.random(5)]
np.c_[np.random.random(5), np.random.random(5)]
mb1 = pd.read_excel('data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)
mb2 = pd.read_excel('data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)
mb1.shape, mb2.shape
mb1.head()
mb1.columns = mb2.columns = ['Count']
mb1.index.name = mb2.index.name = 'Taxon'
mb1.head()
mb1.index[:3]
mb1.index.is_unique
pd.concat([mb1, mb2], axis=0).shape
pd.concat([mb1, mb2], axis=0).index.is_unique
pd.concat([mb1, mb2], axis=1).shape
pd.concat([mb1, mb2], axis=1).head()
pd.concat([mb1, mb2], axis=1).values[:5]
pd.concat([mb1, mb2], axis=1, join='inner').head()
mb1.combine_first(mb2).head()
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).head()
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).index.is_unique
pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head()
cdystonia = pd.read_csv("data/cdystonia.csv", index_col=None)
cdystonia.head()
stacked = cdystonia.stack()
stacked
stacked.unstack().head()
cdystonia2 = cdystonia.set_index(['patient','obs'])
cdystonia2.head()
cdystonia2.index.is_unique
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()
cdystonia_long = cdystonia[['patient','site','id','treat','age','sex']].drop_duplicates().merge(
twstrs_wide, right_index=True, left_on='patient', how='inner').head()
cdystonia_long
cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs'].unstack('week').head()
pd.melt(cdystonia_long, id_vars=['patient','site','id','treat','age','sex'],
var_name='obs', value_name='twsters').head()
cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()
cdystonia.pivot('patient', 'obs')
cdystonia.pivot_table(rows=['site', 'treat'], cols='week', values='twstrs', aggfunc=max).head(20)
pd.crosstab(cdystonia.sex, cdystonia.site)
vessels.duplicated(cols='names')
vessels.drop_duplicates(['names'])
cdystonia.treat.value_counts()
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment
vals = pd.Series([float(i)**10 for i in range(10)])
vals
np.log(vals)
vals = vals.replace(0, 1e-6)
np.log(vals)
cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})
top5 = vessels.type.apply(lambda s: s in vessels.type.value_counts().index[:5])
vessels5 = vessels[top5]
pd.get_dummies(vessels5.type).head(10)
cdystonia.age.describe()
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]
pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','ancient'])[:30]
pd.qcut(cdystonia.age, 4)[:30]
quantiles = pd.qcut(segments.seg_length, [0, 0.01, 0.05, 0.95, 0.99, 1])
quantiles[:30]
pd.get_dummies(quantiles).head(10)
new_order = np.random.permutation(len(segments))
new_order[:30]
segments.take(new_order).head()
segments.head()
cdystonia_grouped = cdystonia.groupby(cdystonia.patient)
cdystonia_grouped
for patient, group in cdystonia_grouped:
print patient
print group
print
cdystonia_grouped.agg(mean).head()
cdystonia_grouped.mean().head()
cdystonia_grouped.mean().add_suffix('_mean').head()
# The median of the `twstrs` variable
cdystonia_grouped['twstrs'].quantile(0.5)
cdystonia.groupby(['week','site']).mean().head()
normalize = lambda x: (x - x.mean())/x.std()
cdystonia_grouped.transform(normalize).head()
cdystonia_grouped['twstrs'].mean().head()
# This gives the same result as a DataFrame
cdystonia_grouped[['twstrs']].mean().head()
chunks = dict(list(cdystonia_grouped))
chunks[4]
dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))
cdystonia2.head(10)
cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()
def top(df, column, n=5):
return df.sort_index(by=column, ascending=False)[:n]
top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments
top3segments.head(20)
mb1.index[:3]
class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3]))
mb_class = mb1.copy()
mb_class.index = class_index
mb_class.head()
mb_class.groupby(level=0).sum().head(10)
from IPython.core.display import HTML
HTML(filename='data/titanic.html')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Date/Time data handling
Step2: In addition to datetime there are simpler objects for date and time information only, respectively.
Step3: Having a custom data type for dates and times is convenient because we can perform operations on them easily. For example, we may want to calculate the difference between two times
Step4: In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed.
Step5: For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram
Step6: Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful
Step7: We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as datetime.
Step8: Our first order of business will be to convert these data to datetime. The strptime method parses a string representation of a date and/or time field, according to the expected format of this information.
Step9: The dateutil package includes a parser that attempts to detect the format of the date strings, and convert them automatically.
Step10: We can convert all the dates in a particular column by using the apply method.
Step11: As a convenience, Pandas has a to_datetime method that will parse and convert an entire Series of formatted strings into datetime objects.
Step12: Pandas also has a custom NA value for missing datetime objects, NaT.
Step13: Also, if to_datetime() has problems parsing any particular date/time format, you can pass the spec in using the format= argument (a minimal sketch follows this step list).
Step14: The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.
Step15: Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables.
Step16: The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform right and left joins to include all rows of the right or left table (i.e. first or second argument to merge), but not necessarily the other.
Step17: we see that there is a mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other.
Step18: In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other.
Step19: Occasionally, there will be fields with the same in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append suffixes _x and _y to the columns to uniquely identify them.
Step20: This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right columns, respectively.
Step21: This operation is also called binding or stacking.
Step22: Let's give the index and columns meaningful labels
Step23: The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level.
Step24: If we concatenate along axis=0 (the default), we will obtain another data frame with the the rows concatenated
Step25: However, the index is no longer unique, due to overlap between the two DataFrames.
Step26: Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames.
Step27: If we are only interested in taxa that are included in both DataFrames, we can specify a join=inner argument.
Step28: If we wanted to use the second table to fill values absent from the first table, we could use combine_first.
Step29: We can also create a hierarchical index based on keys identifying the original tables.
Step30: Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict.
Step31: If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument.
Step32: This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways
Step33: To complement this, unstack pivots from rows back to columns.
Step34: For this dataset, it makes sense to create a hierarchical index based on the patient and observation
Step35: If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
Step36: A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking
Step37: To convert our "wide" format back to long, we can use the melt function, appropriately parameterized
Step38: This illustrates the two formats for longitudinal data
Step39: If we omit the values argument, we get a DataFrame with hierarchical columns, just as when we applied unstack to the hierarchically-indexed table
Step40: A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.
Step41: For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.
Step42: Data transformation
Step43: Value replacement
Step44: A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.
Step45: Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
Step46: In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with replace.
Step47: We can also perform the same replacement that we used map for with replace
Step48: Indicator variables
Step49: Discretization
Step50: Let's transform these data into decades, beginning with individuals in their 20's and ending with those in their 90's
Step51: The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False
Step52: Since the data are now ordinal, rather than numeric, we can give them labels
Step53: A related function qcut uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default
Step54: Alternatively, one can specify custom quantiles to act as cut points
Step55: Note that you can easily combine discretization with the generation of indicator variables shown above
Step56: Permutation and sampling
Step57: Using this sequence as an argument to the take method results in a reordered DataFrame
Step58: Compare this ordering with the original
Step59: Exercise
Step60: This grouped dataset is hard to visualize
Step61: However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups
Step62: A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
Step63: Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to aggregate string variables, these columns are simply ignored by the method.
Step64: The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation
Step65: If we wish, we can easily aggregate according to multiple keys
Step66: Alternately, we can transform the data, using a function of our choice with the transform method
Step67: It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns
Step68: If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed
Step69: By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by type this way
Step70: It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index
Step71: Apply
Step72: To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship
Step73: Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.
Step74: Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to class, which is the 3rd level of organization in each index.
Step75: Using the string methods split and join we can create an index that just uses the first three classifications
Step76: However, since there are multiple taxonomic units with the same class, our index is no longer unique
Step77: We can re-establish a unique index by summing all rows with the same class, using groupby
Step78: Exercise
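For Step13, a minimal sketch of the format= argument (the format string matches the st_time style used above):
pd.to_datetime(pd.Series(['2/28/09 18:15']), format='%m/%d/%y %H:%M')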
|
3,389
|
<ASSISTANT_TASK:>
Python Code:
d = None
with open(r'..\..\group_analysis.json') as f:
d = json.load(f)
df_num_groups = pd.DataFrame(data={'Min. Num. of Groups': d['min_num_groups'], 'Avg. Num. of Groups': d['avg_num_groups'], 'Max. Num. of Groups': d['max_num_groups']})
df_num_groups
plt.figure()
ax = df_num_groups.plot(title='Number of Groups Analysis')
plt.xlabel('Frame nº')
plt.ylabel('Number of Groups')
plt.show()
df_group_elements = pd.DataFrame(data={'Min. Group Elements': d['min_group_elements'], 'Avg. Group Elements': d['avg_group_elements'], 'Max. Group Elements': d['max_group_elements']})
df_group_elements
plt.figure()
ax = df_group_elements.plot(title='Number of Group Elements')
plt.xlabel('Frame nº')
plt.ylabel('Number of Group Elements')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Number of groups per frame
Step2: Number of elements in each group per frame
|
3,390
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import datetime as dt
import matplotlib.pyplot as plt
import pandas as pd
import pytz
import requests
from urllib.error import HTTPError
# output color
from prettyprinting import *
tz = pytz.timezone('Europe/Madrid')
url_perfiles_2017 = 'http://www.ree.es/sites/default/files/01_ACTIVIDADES/Documentos/Documentacion-Simel/perfiles_iniciales_2017.xlsx'
def _gen_ts(mes, dia, hora, año):
"""Build a timestamp from its date components. Watch out for DST changes (days with 25 hours)."""
try:
return dt.datetime(año, mes, dia, hora - 1) #, tzinfo=tz)
except ValueError as e:
print_err('Cambio de hora (día con 25h): "{}"; el 2017-{}-{}, hora={}'.format(e, mes, dia, hora))
return dt.datetime(año, mes, dia, hora - 2) #, tzinfo=tz)
def get_data_coeficientes_perfilado_2017(url_perfiles_2017):
"""Extract the information from the two sheets of the Excel file provided by REE with the initial profiles for 2017."""
# Profiling coefficients and reference demand (1st sheet)
cols_sheet1 = ['Mes', 'Día', 'Hora',
'Pa,0m,d,h', 'Pb,0m,d,h', 'Pc,0m,d,h', 'Pd,0m,d,h',
'Demanda de Referencia 2017 (MW)']
perfiles_2017 = pd.read_excel(url_perfiles_2017, header=None,
skiprows=[0, 1], names=cols_sheet1)
perfiles_2017['ts'] = [_gen_ts(mes, dia, hora, 2017)
for mes, dia, hora in zip(perfiles_2017.Mes,
perfiles_2017.Día,
perfiles_2017.Hora)]
# Alpha, Beta, Gamma coefficients (2nd sheet):
coefs_alpha_beta_gamma = pd.read_excel(url_perfiles_2017, sheetname=1)
return perfiles_2017.set_index('ts'), coefs_alpha_beta_gamma
# Extracción:
perf_demref_2017, coefs_abg = get_data_coeficientes_perfilado_2017(url_perfiles_2017)
print_info(coefs_abg)
perf_demref_2017.head()
# Convert the 2017 profiles dataframe to the final-profiles format (for uniformity):
cols_usar = ['Pa,0m,d,h', 'Pb,0m,d,h', 'Pc,0m,d,h', 'Pd,0m,d,h']
perfiles_2017 = perf_demref_2017[cols_usar].copy()
perfiles_2017.columns = ['COEF. PERFIL {}'.format(p) for p in 'ABCD']
perfiles_2017.head()
def get_data_perfiles_finales_mes(año, mes=None):
"""Read the gzipped CSV with the final electricity-consumption profiles for the given month from the REE website. Drops date columns and DST info. Pass (año: int, mes: int) or a single datetime ts."""
mask_ts = 'http://www.ree.es/sites/default/files/simel/perff/PERFF_{:%Y%m}.gz'
if (type(año) is int) and (mes is not None):
ts = dt.datetime(año, mes, 1, 0, 0)
else:
ts = año
url_perfiles_finales = mask_ts.format(ts)
print_info('Descargando perfiles finales del mes de {:%b de %Y} en {}'
.format(ts, url_perfiles_finales))
# Try to download the final profiles; if that fails, fall back to the 2017 estimates:
try:
perfiles_finales = pd.read_csv(url_perfiles_finales, sep=';',
encoding='latin_1', compression='gzip'
).dropna(how='all', axis=1)
except HTTPError as e:
print_warn('HTTPError: {}. Se utilizan perfiles estimados de 2017.'.format(e))
return perfiles_2017[(perfiles_2017.index.year == ts.year)
& (perfiles_2017.index.month == ts.month)]
cols_date = ['MES', 'DIA', 'HORA', 'AÑO']
zip_date = zip(*[perfiles_finales[c] for c in cols_date])
perfiles_finales['ts'] = [_gen_ts(*args) for args in zip_date]
cols_date.append('VERANO(1)/INVIERNO(0)')
# perfiles_finales['dst'] = perfiles_finales['VERANO(1)/INVIERNO(0)'].astype(bool)
return perfiles_finales.set_index('ts').drop(cols_date, axis=1)
perfiles_finales_2016_11 = get_data_perfiles_finales_mes(2016, 11)
print_ok(perfiles_finales_2016_11.head())
perfiles_2017_02 = get_data_perfiles_finales_mes(2017, 2)
perfiles_2017_02.head()
def extract_perfiles_intervalo(t0, tf):
t_ini = pd.Timestamp(t0)
t_fin = pd.Timestamp(tf)
assert(t_fin > t_ini)
marca_fin = '{:%Y%m}'.format(t_fin)
marca_ini = '{:%Y%m}'.format(t_ini)
if marca_ini == marca_fin:
perfiles = get_data_perfiles_finales_mes(t_ini)
else:
dates = pd.DatetimeIndex(start=t_ini.replace(day=1),
end=t_fin.replace(day=1), freq='MS')
perfiles = pd.concat([get_data_perfiles_finales_mes(t) for t in dates])
return perfiles.loc[t_ini:t_fin].iloc[:-1]
# Example: generate hourly consumption values from total consumption and usage profiles:
t0, tf = '2016-10-31', '2017-01-24'
consumo_total_interv_kWh = 836.916
print_secc('Consumo horario estimado para el intervalo {} -> {}, con E={:.3f} kWh'
.format(t0, tf, consumo_total_interv_kWh))
# perfiles finales:
perfs_interv = extract_perfiles_intervalo(t0, tf)
print_ok(perfs_interv.head())
print_ok(perfs_interv.tail())
# Estimación con perfil A:
suma_perfiles_interv = perfs_interv['COEF. PERFIL A'].sum()
mch_pa = perfs_interv['COEF. PERFIL A'] * consumo_total_interv_kWh / suma_perfiles_interv
consumo_diario_est = mch_pa.groupby(pd.TimeGrouper('D')).sum()
print_red('CHECK CONSUMO TOTAL: {:.3f} == {:.3f} == {:.3f} kWh'
.format(consumo_total_interv_kWh, consumo_diario_est.sum(), mch_pa.sum()))
# Plot of the estimated daily consumption over the interval:
print_secc('Consumo horario diario estimado:')
ax = consumo_diario_est.plot(figsize=(16, 9), color='blue', lw=2)
params_lines = dict(lw=1, linestyle=':', alpha=.6)
xlim = consumo_diario_est[0], consumo_diario_est.index[-1]
ax.hlines([consumo_diario_est.mean()], *xlim, color='orange', **params_lines)
ax.hlines([consumo_diario_est.max()], *xlim, color='red', **params_lines)
ax.hlines([consumo_diario_est.min()], *xlim, color='green', **params_lines)
ax.set_title('Consumo diario estimado (Total={:.1f} kWh)'.format(consumo_total_interv_kWh))
ax.set_ylabel('kWh/día')
ax.set_xlabel('')
ax.set_ylim((0, consumo_diario_est.max() * 1.1))
ax.grid('on', axis='x');
# Copia a otro notebook:
pd.DataFrame(mch_pa.rename('kWh')).to_clipboard()
# Average consumption per weekday (weekly consumption pattern):
media_diaria = mch_pa.groupby(pd.TimeGrouper('D')).sum()
media_semanal = media_diaria.groupby(lambda x: x.weekday()).mean().round(1)
días_semana = ['Lunes', 'Martes', 'Miércoles', 'Jueves', 'Viernes', 'Sábado', 'Domingo']
media_semanal.index = días_semana
print_ok(media_semanal)
ax = media_semanal.T.plot(kind='bar', figsize=(16, 9), color='orange', legend=False)
ax.set_xticklabels(días_semana, rotation=0)
ax.set_title('Patrón semanal de consumo')
ax.set_ylabel('kWh/día')
ax.grid('on', axis='y')
ax.hlines([media_diaria.mean()], -1, 7, lw=3, color='blue', linestyle=':');
# Check the profiles over the whole year:
perfiles_2016 = extract_perfiles_intervalo('2016-01-01', '2016-12-31')
print_ok(perfiles_2016.sum())
perfiles_2017.sum()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: PVPC consumption profiles for customers without hourly metering
Step4: Download of the monthly CSVs with the final consumption profiles
Step5: Download of hourly profiles for a given interval
Step6: Estimation of hourly consumption from the total consumption over an interval
|
3,391
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import pandas_datareader.data as web
import matplotlib.pyplot as plt
# Defines the chart color scheme using Tableau's Tableau10
plt.style.use('https://gist.githubusercontent.com/mbonix/8478091db6a2e6836341c2bb3f55b9fc/raw/7155235ed03e235c38b66c160d402192ad4d94d9/tableau10.mplstyle')
%matplotlib inline
# List of stocks tickers
STOCKS = ['NASDAQ:AAPL', 'NASDAQ:GOOGL', 'NASDAQ:MSFT', 'NYSE:MCD', 'NYSE:KO']
# Analysis period
START = '12-30-2006'
END = '12-31-2016'
data = web.DataReader(STOCKS, 'google', pd.to_datetime(START), pd.to_datetime(END))
prices = data.loc['Close', :, :]
prices.tail(10)
prices.plot(figsize=(10, 8), title='Stock prices');
norm_prices = 100 * prices / prices.iloc[0, :]
norm_prices.head(10)
norm_prices.plot(figsize=(10, 8), title='Stock prices ({} = 100)'.format(START));
absolute_returns = (norm_prices.iloc[-1, :] - 100).to_frame().T
absolute_returns.index = ['Absolute Return (%)']
absolute_returns
compound_returns = 100 * ((1 + absolute_returns / 100) ** 0.1 - 1)
compound_returns.index = ['Compound Annual Return (%)']
compound_returns
yearly_returns = prices.resample('A').last().pct_change()
yearly_returns
ax = yearly_returns.plot(figsize=(10, 8), title='Stock yearly returns', ls='', marker='o');
ax.set_xlim(ax.get_xlim()[0], ax.get_xlim()[1] + 1)
ax.axhline(0, color='gray');
yearly_avgs = yearly_returns.mean().to_frame().T
yearly_avgs.index = ['Average Yearly Return (%)']
yearly_avgs
import numpy as np
yearly_log_returns = prices.resample('A').last().apply(np.log).diff()
yearly_log_returns
ax = yearly_log_returns.plot(figsize=(10, 8), title='Stock yearly log returns', ls='', marker='o');
ax.set_xlim(ax.get_xlim()[0], ax.get_xlim()[1] + 1)
ax.axhline(0, color='gray');
yearly_log_avgs = yearly_log_returns.mean().to_frame().T
yearly_log_avgs.index = ['Average Yearly Log Return (%)']
yearly_log_avgs
(1 + yearly_returns).apply(np.log)
(yearly_log_returns).apply(np.exp) - 1
prices.loc[:, ['NASDAQ:AAPL', 'NYSE:MCD']].pct_change().plot(figsize=(10, 8), title="Apple and McDonald's daily returns")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, let's choose a bunch of stocks to analyze. They are
Step2: Now, let's download stock data from Yahoo!Finance, using pandas-datareader module.
Step3: We'll work only on Close prices, so let's discard the other data.
Step4: Now, let's plot prices.
Step5: Alphabet, or Google, seems to be the winner in these 10 years. But comparison of stocks is difficult because of different ranges of prices. Let's try to normalize them, rebasing to 100.
Step6: Now, a plot of rebased prices.
Step7: Now, it's clear that Apple and not Alphabet was the winner. What was its return over the period, anyway? Let's calculate it!
Step8: An astonishing 967.59%! Or a 26.72% compound annual return, if you prefer.
Step9: Now, let's resample data into yearly percent returns.
Step10: Then, let's plot the returns.
Step11: And let's see how returns averaged in the period.
Step12: Then we can compute yearly logarithmic returns over the period.
Step13: Let's also plot log returns.
Step14: Again, let's compute averages. Quite different from percent returns, aren't they?
Step15: We can always transform percent returns into log returns (the formulas are restated after this step list)
Step16: And vice versa
Step17: If we plot daily returns, we can clearly see a different scale over time and across stocks. Let's try with Apple and McDonald's.
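For Steps 15-16 (and the compound annual figure in Step8), the relationships used in the code are
$$ r_{\log} = \ln(1 + r), \qquad r = e^{r_{\log}} - 1, $$
and over $n = 10$ years the compound annual return is $(1 + 9.6759)^{1/10} - 1 \approx 0.2672$, i.e. the 26.72% quoted above.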
|
3,392
|
<ASSISTANT_TASK:>
Python Code:
num_units = 400 #state size
input_len = 60
target_len = 30
batch_size = 50
with_EOS = False
total_train_size = 57994
from time import sleep
data_path = '../../../../Dropbox/data'
ph_data_path = data_path + '/price_history'
npz_full = ph_data_path + '/price_history_mobattrs_date_dp_60to30_62020.npz'
assert path.isfile(npz_full)
csv_in = '../price_history_03_seq_start_suddens_trimmed.csv'
#npz_train = ph_data_path + '/price_history_dp_60to30_63548_46400_train.npz'
#npz_train_mobattrs = ph_data_path + '/price_history_mobattrs_dp_60to30_57994_train.npz'
# npz_test = ph_data_path + '/price_history_dp_60to30_57994_11584_test.npz'
# npz_test_mobattrs = ph_data_path + '/price_history_mobattrs_dp_60to30_57994_test.npz'
obj = PriceHistWithRelevantDeals(npz_path=npz_full, price_history_csv_path=csv_in, random_state=random_state,
verbose=False)
dic = obj.execute(relevancy_count=2)
npz_augmented = ph_data_path + '/price_history_mobattrs_date_deals_dp_60to30.npz'
dic.keys()
dic['inputs'][0].shape
for key, val in dic.iteritems():
print key, len(val)
len(dic['targets'])
len(dic['inputs'])
args = np.argwhere([curin.shape != (60, 9) for curin in dic['inputs']]).flatten()
len(args)
args = list(args)
# for cur in dic['inputs'][args]:
# print cur.shape
for arg in args:
print dic['inputs'][arg].shape
keep_args = set(range(len(dic['inputs']))).difference(args)
assert len(keep_args) == len(dic['inputs']) - len(args)
keep_args = list(keep_args)
newdic = {
'inputs': np.array([dic[key][keep_arg] for keep_arg in keep_args])
}
newdic['inputs'].shape
for key, val in dic.iteritems():
if key == 'inputs':
continue
newdic[key] = dic[key][keep_args]
print newdic[key].shape
#dic['inputs'] = np.array(dic['inputs'])
np.array(newdic['inputs']).shape
np.savez(npz_augmented, **newdic)
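# Quick verification sketch (illustrative): reload the archive that was just written and confirm
# that the stored arrays have the expected shapes. The name `reloaded` is introduced only here.
reloaded = np.load(npz_augmented)
for key in reloaded.files:
    print key, reloaded[key].shape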
npz = np.load(npz_full)
for key, val in npz.iteritems():
print key,val.shape
my_current_ind = 100 #just because
target_ind = npz['sku_ids'][my_current_ind]
target_ind #this is the SKU ID we are interested in now
from relevant_deals import RelevantDeals
rd = RelevantDeals()
all_deals = rd.getSome(target_ind)
relevancy_order = 2 #2 extra sku ids to keep
relevant_sku_ids = all_deals[:relevancy_order]
relevant_sku_ids
#get everything normalized globally
df = PriceHistory27DatasetGenerator(random_state=random_state).global_norm_scale(
pd.read_csv(csv_in, index_col=0, quoting=csv.QUOTE_ALL, encoding='utf-8')
)
yearday = 1
month_ind = 2
weekday_ind = 3
year_ind = 4
yearweek_ind = 5
day_ind = 6
year_ind, month_ind, day_ind
print np.unique(npz['inputs'][my_current_ind][:, 1]) #<--- this is year day
print np.unique(npz['inputs'][my_current_ind][:, 2]) #<--- this is month
print np.unique(npz['inputs'][my_current_ind][:, 3]) #<--- this is weekday
print np.unique(npz['inputs'][my_current_ind][:, 4]) #<---- this is the year
print np.unique(npz['inputs'][my_current_ind][:, 5]) #<--- this is year week
print np.unique(npz['inputs'][my_current_ind][:, 6]) #<--- this is month day
the_input = npz['inputs'][my_current_ind]
start_item = the_input[0].astype(np.int)
start_item.shape
start_date = "{}-{:02d}-{:02d}".format(start_item[year_ind], start_item[month_ind], start_item[day_ind])
#this format is useful because we can compare them as strings without conversion
start_date
end_item = npz['inputs'][my_current_ind][-1].astype(np.int)
end_date = "{}-{:02d}-{:02d}".format(end_item[year_ind], end_item[month_ind], end_item[day_ind])
end_date
# for one sku id
cur_sku_id = relevant_sku_ids[0]
cur_sku_id
seq = PriceHistory27DatasetGenerator.extractSequence(df.loc[cur_sku_id])
len(seq)
check = seq.index[0] <= start_date and end_date <= seq.index[-1]
check
#extract the sequence of interest
begin_ind = np.argwhere(seq.index == start_date).flatten()[0]
begin_ind
ending_ind = np.argwhere(seq.index == end_date).flatten()[0]
ending_ind
seq_of_interest = seq[begin_ind:ending_ind+1]
seq_of_interest.shape
the_input.shape
sns.tsplot(seq_of_interest)
plt.show()
unbiased = PriceHistory27DatasetGenerator.removeBiasFromSeq(seq_of_interest)
sns.tsplot(unbiased)
plt.show()
ready_deal = unbiased.values[np.newaxis].T
ready_deal.shape
newinput = np.hstack((the_input, ready_deal))
newinput.shape
np.array([unbiased, unbiased]).T
aa = np.array([unbiased, unbiased]).T
ee = np.array([]).T
ee.shape
np.hstack((the_input, ee)).shape
mylist = []
for ii, jj in zip(range(-3, 0), range(0, 3)):
mylist.append((ii, jj))
mylist
map(list, zip(*mylist))[1]
sns.tsplot(aa[:, 1])
%%time
dic = PriceHistory27DatasetGenerator.merge_date_info(npz_path=npz_full)
for key, val in dic.iteritems():
print val.shape
# npz_full_with_date = ph_data_path + '/price_history_dp_60to30_63548_date_info.npz'
# np.savez(npz_full_with_date, **dic)
combinator = PriceHistoryMobileAttrsCombinator()
%%time
dic, inds, count_key_errors, key_errors = combinator.combine(npz_in=npz_full_with_date)
for key, val in dic.iteritems():
print val.shape
npz_full_mobattrs_date = ph_data_path + '/price_history_mobattrs_date_dp_60to30_62020.npz'
np.savez(npz_full_mobattrs_date, **dic)
count_key_errors#, key_errors
npz_train_mobattrs_date = ph_data_path + '/price_history_mobattrs_date_dp_60to30_62020_train.npz'
npz_test_mobattrs_date = ph_data_path + '/price_history_mobattrs_date_dp_60to30_62020_test.npz'
PriceHistory27DatasetGenerator.train_test_split(fullpath=npz_full_mobattrs_date, test_size=6200,
train_path=npz_train_mobattrs_date,
test_path=npz_test_mobattrs_date, random_state=random_state)
npz_train_mobattrs_date_small = ph_data_path + '/price_history_mobattrs_date_dp_60to30_62020_6000_train.npz'
PriceHistory27DatasetGenerator.create_subsampled(inpath=npz_train_mobattrs_date, target_size=6000,
outpath=npz_train_mobattrs_date_small, random_state=random_state)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Include relevant deals
Step2: So, for the same date window, if we find data from the relevant deal, we are good to go.
Step3: This is taking longer than expected but ok
Step4: Combine Data
Step5: Train - Test Split
|
3,393
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
plt.ion() # interactive mode
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = 'hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def imshow(inp, title=None):
    """Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that plots are updated
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
#Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
scheduler.step()
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
#iterate over data
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
#zero the parameter gradients
optimizer.zero_grad()
#forward
                #track history only if in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
#backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
#statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
#load best model weights
model.load_state_dict(best_model_wts)
return model
def visualize_model(model, num_images=6):
was_training = model.training
model.eval()
images_so_far = 0
fig = plt.figure()
with torch.no_grad():
for i, (inputs, labels) in enumerate(dataloaders['val']):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images//2, 2, images_so_far)
ax.axis('off')
ax.set_title('predicted: {}'.format(class_names[preds[j]]))
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=25)
visualize_model(model_ft)
model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
param.requires_grad = False
# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)
model_conv = model_conv.to(device)
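# Optional sanity check (illustrative sketch): with the backbone parameters frozen above, only the
# newly created fc layer should still require gradients.
print([name for name, p in model_conv.named_parameters() if p.requires_grad])
# expected to list only 'fc.weight' and 'fc.bias'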
criterion = nn.CrossEntropyLoss()
# Observe that only parameters of final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
model_conv = train_model(model_conv, criterion, optimizer_conv,
exp_lr_scheduler, num_epochs=25)
visualize_model(model_conv)
plt.ioff()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step3: Visualize a few images
Step4: Training the model
Step5: Visualizing the model predictions
Step6: Finetuning the convnet
Step7: Train and evaluate
Step8: ConvNet as fixed feature extractor
Step9: Train and evaluate
|
3,394
|
<ASSISTANT_TASK:>
Python Code:
import exercise_utils as eu
import math
from collections import defaultdict
# this is our dataset
users_interests = eu.get_users_interests() # if the full dataset is unavailable, use eu.get_users_interests_poor()
unique_interests = sorted(list({ interest
for user_interests in users_interests
for interest in user_interests }))
unique_interests
def make_user_interest_vector(user_interests):
    """
    This function receives a user's list of interests and returns the
    unique_interests list with each interest replaced by 0 or 1:
    1, if the unique_interests item is in user_interests.
    0, if the unique_interests item is NOT in user_interests.
    """
return []
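# One possible implementation of the exercise above (illustrative sketch, not the only valid
# answer); defining it here overrides the stub so the rest of the notebook produces real output.
def make_user_interest_vector(user_interests):
    return [1 if interest in user_interests else 0
            for interest in unique_interests]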
user_interest_matrix = map(make_user_interest_vector, users_interests)
# The matrix is made up of the following data:
# - each ROW holds one user's interests across all interests
# - each COLUMN is the global index of an interest
# - each CELL(u, i) contains 1 or 0: 1 if user u is interested in i, 0 otherwise
eu.print_matrix(user_interest_matrix)
def cosine_similarity(va, vb):
return 0.0
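# Illustrative implementation (sketch): cosine similarity between two equal-length vectors,
# returning 0.0 when either vector is all zeros. Defining it here overrides the stub above.
def cosine_similarity(va, vb):
    dot = sum(a * b for a, b in zip(va, vb))
    norm_a = math.sqrt(sum(a * a for a in va))
    norm_b = math.sqrt(sum(b * b for b in vb))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)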
user_similarities = [[cosine_similarity(user_i_interest_vector, user_j_interest_vector)
for user_j_interest_vector in user_interest_matrix]
for user_i_interest_vector in user_interest_matrix]
eu.print_matrix(user_similarities)
# ALGORITHM: most_similar_users_to
# INPUT: user_id: identifier of a user (the user's index in the data matrix).
# OUTPUT: List of 2-tuples containing (other user != user_id, similarity between that user and user_id).
#         The returned list must be sorted by the similarity between the users.
#
# BEGIN
#
#    pairs <- []
#
#    FOR EACH other_user_id, similarity IN enumerate(user_similarities[user_id]) THEN:
#        IF other_user_id != user_id AND similarity > 0 THEN:
#            pairs.add((other_user_id, similarity))
#
#    RETURN sort(pairs, by=similarity, order=descending)
#
# END
def most_similar_users_to(user_id):
return []
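# Illustrative implementation following the pseudocode above (sketch); overrides the stub.
def most_similar_users_to(user_id):
    pairs = [(other_user_id, similarity)
             for other_user_id, similarity in enumerate(user_similarities[user_id])
             if other_user_id != user_id and similarity > 0]
    return sorted(pairs, key=lambda pair: pair[1], reverse=True)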
# ALGORITHM: user_based_suggestions
# INPUT: user_id: identifier of a user (the user's index in the data matrix).
# OUTPUT: A list of interests not yet "marked" by the user, ordered by the algorithm's recommendation score.
#
# BEGIN
#
#    // Sum the similarities per interest into a dictionary. The key is the interest, the value its weight.
#    suggestions <- empty_float_dict()
#    FOR EACH other_user_id, similarity IN most_similar_users_to(user_id) THEN:
#        FOR EACH interest IN users_interests[other_user_id] THEN:
#            suggestions[interest] += similarity
#
#    // convert the dictionary into a sorted list
#    suggestions = sort(suggestions.items(), by=weight, order=descending)
#
#    final_suggestions = []
#    FOR EACH suggestion IN suggestions THEN:
#        IF suggestion NOT IN users_interests[user_id] THEN:
#            final_suggestions.add(suggestion)
#
#    RETURN final_suggestions
# END
def user_based_suggestions(user_id):
return []
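# Illustrative implementation following the pseudocode above (sketch); overrides the stub.
def user_based_suggestions(user_id):
    suggestions = defaultdict(float)
    for other_user_id, similarity in most_similar_users_to(user_id):
        for interest in users_interests[other_user_id]:
            suggestions[interest] += similarity
    suggestions = sorted(suggestions.items(), key=lambda pair: pair[1], reverse=True)
    return [(interest, weight) for interest, weight in suggestions
            if interest not in users_interests[user_id]]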
user_based_suggestions(0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The code below creates a global vector of all possible interests.
Step3: Building the Matrix
Step4: Similarity Function
Step5: Using the similarity function implemented above, the code below builds another matrix. This new matrix is NxN, where N is the number of users. Each cell (i, j) contains the similarity between user i's interest vector and user j's interest vector.
Step6: Recommendation Algorithm
Step7: With the function that finds the users most similar to a given user implemented, the second stage of our algorithm can use it to obtain a list of interest recommendations for a given user. Simply implement the following function according to the description given.
Step8: To see the result of the recommendation, just call the user_based_suggestions function passing the user's identifier.
|
3,395
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-2', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
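# Example of how a multi-valued property could be filled in (illustrative values only -- the
# actual choices must come from the model documentation, so the calls are left commented out):
# DOC.set_value("Primitive equations")
# DOC.set_value("Boussinesq")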
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
3,396
|
<ASSISTANT_TASK:>
Python Code:
from nltk.featstruct import FeatStruct
f1 = FeatStruct(
'[Vorname=Max, Nachname=Mustermann,' +
'Privat=[Strasse=Hauptstrasse, Ort=[Muenchen]]]'
)
f2 = FeatStruct(
'[Arbeit=[Strasse="Oettingenstrasse", Ort=(1)["Muenchen"]],' +
'Privat=[Ort->(1)]]')
f3 = FeatStruct(
'[Strasse="Hauptstrasse"]'
)
f4 = FeatStruct(
'[Privat=[Strasse="Hauptstrasse", Ort=["Passau"]]]'
)
print(f1.unify(f2).__repr__())
print(f2.unify(f4).__repr__())
grammar = """
S -> NP[*CASE*=nom] VP
NP[*CASE*=?x] -> DET[*CASE*=?x,GEN=?y] NOM[*CASE*=?x,GEN=?y]
NOM[*CASE*=?x,GEN=?y] -> N[*CASE*=?x,GEN=?y] NP[*CASE*=gen]
NOM[*CASE*=?x,GEN=?y] -> N[*CASE*=?x,GEN=?y]
VP -> V
V -> "schläft"
DET[*CASE*=nomakk,GEN=fem] -> "die"
DET[*CASE*=nomakk,GEN=neut] -> "das"
DET[*CASE*=gen,GEN=mask] -> "des"
DET[*CASE*=gen,GEN=neut] -> "des"
DET[*CASE*=nom,GEN=mask] -> "der"
DET[*CASE*=gen,GEN=fem] -> "der"
N[*CASE*=nongen,GEN=mask] -> "Mann"
N[*CASE*=nongen,GEN=fem] -> "Frau"
N[*CASE*=nongen,GEN=neut] -> "Kind"
N[*CASE*=gen,GEN=fem] -> "Frau"
N[*CASE*=gen,GEN=mask] -> "Mannes"
N[*CASE*=gen,GEN=neut] -> "Kindes"
"""
from IPython.display import display
import nltk
from typed_features import HierarchicalFeature, TYPE
type_hierarchy = {
"gen": [],
"nongen": ["nomakk", "dat"],
"nomakk": ["nom", "akk"],
"nom": [],
"dat": [],
"akk": []
}
CASE = HierarchicalFeature("CASE", type_hierarchy)
compiled_grammar = nltk.grammar.FeatureGrammar.fromstring(
grammar, features=(CASE, TYPE)
)
parser = nltk.FeatureEarleyChartParser(compiled_grammar)
for t in parser.parse("das Kind der Frau schläft".split()):
display(t)
list(parser.parse("des Mannes schläft".split()))
for t in parser.parse("der Mann der Frau schläft".split()):
display(t)
print(f1.unify(f4).__repr__())
print(f2.unify(f3).__repr__())
case_hierarchy = {
"nongen": ["nomakk", "dat"],
"gendat": ["gen", "dat"],
"nomakk": ["nom", "akk"],
"nom": [],
"gen": [],
"dat": [],
"akk": []
}
gen_hierarchy = {
"maskneut": ["mask", "neut"],
"mask": [],
"fem": [],
"neut": []
}
redundant_grammar = """
S -> NP[*KAS*=nom] VP
NP[*KAS*=?y] -> DET[*GEN*=?x,*KAS*=?y] NOM[*GEN*=?x,*KAS*=?y]
NOM[*GEN*=?x,*KAS*=?y] -> N[*GEN*=?x,*KAS*=?y] NP[*KAS*=gen]
NOM[*GEN*=?x,*KAS*=?y] -> N[*GEN*=?x,*KAS*=?y]
DET[*GEN*=mask,*KAS*=nom] -> "der"
DET[*GEN*=maskneut,*KAS*=gen] -> "des"
DET[*GEN*=maskneut,*KAS*=dat] -> "dem"
DET[*GEN*=mask,*KAS*=akk] -> "den"
DET[*GEN*=fem,*KAS*=nomakk] -> "die"
DET[*GEN*=fem,*KAS*=gendat] -> "der"
DET[*GEN*=neut,*KAS*=nomakk] -> "das"
N[*GEN*=mask,*KAS*=nongen] -> "Mann"
N[*GEN*=mask,*KAS*=gen] -> "Mannes"
N[*GEN*=fem] -> "Frau"
N[*GEN*=neut,*KAS*=nongen] -> "Buch"
N[*GEN*=neut,*KAS*=gen] -> "Buches"
VP -> V NP[*KAS*=dat] NP[*KAS*=akk]
V -> "gibt" | "schenkt"
CASE = HierarchicalFeature("KAS", case_hierarchy)
GEN = HierarchicalFeature("GEN", gen_hierarchy)
compiled_grammar = nltk.grammar.FeatureGrammar.fromstring(
redundant_grammar, features=(CASE, GEN, TYPE)
)
parser = nltk.FeatureEarleyChartParser(compiled_grammar)
pos_sentences = [
"der Mann gibt der Frau das Buch",
"die Frau des Mannes gibt dem Mann der Frau das Buch des Buches"
]
neg_sentences = [
"des Mannes gibt der Frau das Buch",
"Mann gibt der Frau das Buch",
"der Mann gibt der Frau Buch",
"der Frau gibt dem Buch den Mann",
"das Buch der Mann gibt der Frau das Buch"
]
import sys
def test_grammar(parser, sentences):
for i, sent in enumerate(sentences, 1):
print("Satz {}: {}".format(i, sent))
sys.stdout.flush()
results = parser.parse(sent.split())
analyzed = False
for tree in results:
display(tree)
analyzed = True
if not analyzed:
print("Keine Analyse möglich", file=sys.stderr)
sys.stderr.flush()
test_grammar(parser, pos_sentences)
test_grammar(parser, neg_sentences)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given the following feature structures
Step2: Unify
Step3: f2 with f4
Step5: Exercise 2: Type hierarchy in NLTK
Step6: Here the type hierarchy has to be defined as a dictionary
Step7: The following should work
Step8: The following should be empty
Step9: The following should work again. Look closely at the features in the syntax tree.
Step10: Homework
Step11: f2 with f3
Step13: Exercise 4: Less redundancy thanks to special features
Step14: Test it with your own negative examples!
|
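The unification calls above (f1.unify(f2) and so on) use NLTK's FeatStruct API; the structures f1 to f4 themselves are defined in a cell that falls outside this excerpt. A minimal, self-contained sketch with hypothetical structures g1 to g3 shows the two possible outcomes of unification:
import nltk
# Hypothetical feature structures for illustration (not the notebook's f1..f4,
# which are defined in a cell outside this excerpt).
g1 = nltk.FeatStruct("[CASE='nom', GEN='mask']")
g2 = nltk.FeatStruct("[GEN='mask', NUM='sg']")
g3 = nltk.FeatStruct("[CASE='akk']")
print(g1.unify(g2))  # compatible: merges into [CASE='nom', GEN='mask', NUM='sg']
print(g1.unify(g3))  # value clash on CASE, so unify() returns None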
3,397
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table
from astropy.io import fits
import seaborn as sns
import multiprocessing
nproc = multiprocessing.cpu_count() // 2
from desispec.io.util import write_bintable
from desiutil.log import get_logger
log = get_logger()
%matplotlib inline
sns.set(style='ticks', font_scale=1.4, palette='Set2')
col = sns.color_palette()
simdir = os.path.join(os.getenv('DESI_ROOT'), 'spectro', 'sim', 'bgs')
simseed = 555
simrand = np.random.RandomState(simseed)
overwrite_spectra = False
overwrite_redshifts = False
overwrite_results = False
def read_bgs_mock():
import desitarget.mock.io as mockio
mockfile = os.path.join(os.getenv('DESI_ROOT'), 'mocks', 'bgs', 'MXXL',
'desi_footprint', 'v0.0.4', 'BGS_r20.6.hdf5')
mockdata = mockio.read_durham_mxxl_hdf5(mockfile, rand=simrand, nside=32, nproc=nproc,
healpixels=[3151,3150,3149,3148])
print(mockdata.keys())
mockdata['VDISP'] = np.repeat(100.0, len(mockdata['RA'])) # [km/s]
return mockdata
mockdata = read_bgs_mock()
def qa_radec():
fig, ax = plt.subplots()
ax.scatter(mockdata['RA'], mockdata['DEC'], s=1)
ax.set_xlabel('RA')
ax.set_ylabel('Dec')
qa_radec()
def qa_zmag(redshift, mag, maglabel='r (AB mag)', faintmag=20.0):
fig, ax = plt.subplots(1, 3, figsize=(15, 4))
_ = ax[0].hist(redshift, bins=100)
ax[0].set_xlabel('Redshift')
ax[0].set_ylabel('Number of Galaxies')
_ = ax[1].hist(mag, bins=100)
ax[1].axvline(x=faintmag, ls='--', color='k')
ax[1].set_xlabel(maglabel)
ax[1].set_ylabel('Number of Galaxies')
ax[2].scatter(redshift, mag, s=3, alpha=0.75)
ax[2].axhline(y=faintmag, ls='--', color='k')
ax[2].set_xlabel('Redshift')
ax[2].set_ylabel(maglabel)
plt.subplots_adjust(wspace=0.3)
qa_zmag(mockdata['Z'], mockdata['MAG'], maglabel=r'$r_{SDSS}$ (AB mag)', faintmag=20.6)
class BGStree(object):
"""Build a KD Tree."""
def __init__(self):
from speclite import filters
from scipy.spatial import cKDTree as KDTree
from desisim.io import read_basis_templates
self.bgs_meta = read_basis_templates(objtype='BGS', onlymeta=True)
self.bgs_tree = KDTree(self._bgs())
def _bgs(self):
"""Quantities we care about: redshift (z), M_0.1r, and 0.1(g-r)."""
zobj = self.bgs_meta['Z'].data
mabs = self.bgs_meta['SDSS_UGRIZ_ABSMAG_Z01'].data
rmabs = mabs[:, 2]
gr = mabs[:, 1] - mabs[:, 2]
return np.vstack((zobj, rmabs, gr)).T
def query(self, objtype, matrix, subtype=''):
"""Return the nearest template number based on the KD Tree.
Args:
objtype (str): object type
matrix (numpy.ndarray): (M,N) array (M=number of properties,
N=number of objects) in the same format as the corresponding
function for each object type (e.g., self.bgs).
subtype (str, optional): subtype (only for white dwarfs)
Returns:
dist: distance to nearest template
indx: index of nearest template
"""
if objtype.upper() == 'BGS':
dist, indx = self.bgs_tree.query(matrix)
else:
log.warning('Unrecognized SUBTYPE {}!'.format(subtype))
raise ValueError
return dist, indx
class BGStemplates(object):
"""Generate spectra."""
def __init__(self, wavemin=None, wavemax=None, dw=0.2,
rand=None, verbose=False):
from desimodel.io import load_throughput
self.tree = BGStree()
# Build a default (buffered) wavelength vector.
if wavemin is None:
wavemin = load_throughput('b').wavemin - 10.0
if wavemax is None:
wavemax = load_throughput('z').wavemax + 10.0
self.wavemin = wavemin
self.wavemax = wavemax
self.dw = dw
self.wave = np.arange(round(wavemin, 1), wavemax, dw)
self.rand = rand
self.verbose = verbose
# Initialize the templates once:
from desisim.templates import BGS
self.bgs_templates = BGS(wave=self.wave, normfilter='sdss2010-r') # Need to generalize this!
self.bgs_templates.normline = None # no emission lines!
def bgs(self, data, index=None, mockformat='durham_mxxl_hdf5'):
"""Generate spectra for BGS.
Currently only the MXXL (durham_mxxl_hdf5) mock is supported. DATA
needs to have Z, SDSS_absmag_r01, SDSS_01gr, VDISP, and SEED, which are
assigned in mock.io.read_durham_mxxl_hdf5. See also BGSKDTree.bgs().
"""
from desisim.io import empty_metatable
objtype = 'BGS'
if index is None:
index = np.arange(len(data['Z']))
input_meta = empty_metatable(nmodel=len(index), objtype=objtype)
for inkey, datakey in zip(('SEED', 'MAG', 'REDSHIFT', 'VDISP'),
('SEED', 'MAG', 'Z', 'VDISP')):
input_meta[inkey] = data[datakey][index]
if mockformat.lower() == 'durham_mxxl_hdf5':
alldata = np.vstack((data['Z'][index],
data['SDSS_absmag_r01'][index],
data['SDSS_01gr'][index])).T
_, templateid = self.tree.query(objtype, alldata)
else:
raise ValueError('Unrecognized mockformat {}!'.format(mockformat))
input_meta['TEMPLATEID'] = templateid
flux, _, meta = self.bgs_templates.make_templates(input_meta=input_meta,
nocolorcuts=True, novdisp=False,
verbose=self.verbose)
return flux, meta
# Vary galaxy properties with nominal observing conditions but split
# the sample into nsim chunks to avoid memory issues.
sim1 = dict(suffix='sim01',
use_mock=True,
nsim=10,
nspec=100,
seed=11,
)
#from desisim.simexp import reference_conditions
#ref_obsconditions = reference_conditions['BGS']
ref_obsconditions = {'AIRMASS': 1.0, 'EXPTIME': 300, 'SEEING': 1.1, 'MOONALT': -60, 'MOONFRAC': 0.0, 'MOONSEP': 180}
print(ref_obsconditions)
def bgs_write_simdata(sim, rand, overwrite=False):
"""Build and write a metadata table with the simulation inputs.
Currently, the only quantities that can be varied are moonfrac,
moonsep, and exptime, but more choices can be added as needed.
"""
from desispec.io.util import makepath
simdatafile = os.path.join(simdir, sim['suffix'], 'bgs-{}-simdata.fits'.format(sim['suffix']))
makepath(simdatafile)
cols = [
('SEED', 'S20'),
('NSPEC', 'i4'),
('EXPTIME', 'f4'),
('AIRMASS', 'f4'),
('SEEING', 'f4'),
('MOONFRAC', 'f4'),
('MOONSEP', 'f4'),
('MOONALT', 'f4')]
simdata = Table(np.zeros(sim['nsim'], dtype=cols))
simdata['EXPTIME'].unit = 's'
simdata['SEEING'].unit = 'arcsec'
simdata['MOONSEP'].unit = 'deg'
simdata['MOONALT'].unit = 'deg'
simdata['SEED'] = sim['seed']
simdata['NSPEC'] = sim['nspec']
simdata['AIRMASS'] = ref_obsconditions['AIRMASS']
simdata['SEEING'] = ref_obsconditions['SEEING']
simdata['MOONALT'] = ref_obsconditions['MOONALT']
if 'moonfracmin' in sim.keys():
simdata['MOONFRAC'] = rand.uniform(sim['moonfracmin'], sim['moonfracmax'], sim['nsim'])
else:
simdata['MOONFRAC'] = ref_obsconditions['MOONFRAC']
if 'moonsepmin' in sim.keys():
simdata['MOONSEP'] = rand.uniform(sim['moonsepmin'], sim['moonsepmax'], sim['nsim'])
else:
simdata['MOONSEP'] = ref_obsconditions['MOONSEP']
if 'exptime' in sim.keys():
simdata['EXPTIME'] = rand.uniform(sim['exptimemin'], sim['exptimemax'], sim['nsim'])
else:
simdata['EXPTIME'] = ref_obsconditions['EXPTIME']
if overwrite or not os.path.isfile(simdatafile):
print('Writing {}'.format(simdatafile))
write_bintable(simdatafile, simdata, extname='SIMDATA', clobber=True)
return simdata
def simdata2obsconditions(simdata):
obs = dict(AIRMASS=simdata['AIRMASS'],
EXPTIME=simdata['EXPTIME'],
MOONALT=simdata['MOONALT'],
MOONFRAC=simdata['MOONFRAC'],
MOONSEP=simdata['MOONSEP'],
SEEING=simdata['SEEING'])
return obs
def write_templates(outfile, flux, wave, meta):
import astropy.units as u
from astropy.io import fits
hx = fits.HDUList()
hdu_wave = fits.PrimaryHDU(wave)
hdu_wave.header['EXTNAME'] = 'WAVE'
hdu_wave.header['BUNIT'] = 'Angstrom'
hdu_wave.header['AIRORVAC'] = ('vac', 'Vacuum wavelengths')
hx.append(hdu_wave)
fluxunits = 1e-17 * u.erg / (u.s * u.cm**2 * u.Angstrom)
hdu_flux = fits.ImageHDU(flux)
hdu_flux.header['EXTNAME'] = 'FLUX'
hdu_flux.header['BUNIT'] = str(fluxunits)
hx.append(hdu_flux)
hdu_meta = fits.table_to_hdu(meta)
hdu_meta.header['EXTNAME'] = 'METADATA'
hx.append(hdu_meta)
print('Writing {}'.format(outfile))
hx.writeto(outfile, clobber=True)
def bgs_make_templates(sim, rand, BGSmaker):
"""Generate the actual templates. If using the mock data then iterate
until we build the desired number of models after applying targeting cuts,
otherwise use specified priors on magnitude and redshift.
"""
from desitarget.cuts import isBGS_bright, isBGS_faint
from astropy.table import vstack
if sim['use_mock']:
natatime = np.min( (50, sim['nspec']) )
ngood = 0
flux, meta = [], []
while ngood < sim['nspec']:
these = rand.choice(len(mockdata['RA']), natatime)
flux1, meta1 = BGSmaker.bgs(mockdata, index=these)
keep = np.logical_or( isBGS_bright(rflux=meta1['FLUX_R'].data),
isBGS_faint(rflux=meta1['FLUX_R'].data) )
ngood1 = np.count_nonzero(keep)
if ngood1 > 0:
ngood += ngood1
flux.append(flux1[keep, :])
meta.append(meta1[keep])
meta = vstack(meta)[:sim['nspec']]
flux = np.vstack(flux)[:sim['nspec'], :]
wave = BGSmaker.wave
else:
redshift = rand.uniform(sim['zmin'], sim['zmax'], size=sim['nspec'])
rmag = rand.uniform(sim['rmagmin'], sim['rmagmax'], size=sim['nspec'])
flux, wave, meta = BGSmaker.bgs_templates.make_templates(
nmodel=sim['nspec'], redshift=redshift, mag=rmag, seed=sim['seed'])
return flux, wave, meta
def bgs_sim_spectra(sim, overwrite=False, verbose=False):
"""Generate spectra for a given set of simulation parameters with
the option of overwriting files.
"""
from desisim.scripts.quickspectra import sim_spectra
rand = np.random.RandomState(sim['seed'])
BGSmaker = BGStemplates(rand=rand, verbose=verbose)
# Generate the observing conditions table.
simdata = bgs_write_simdata(sim, rand, overwrite=overwrite)
for ii, simdata1 in enumerate(simdata):
# Generate the observing conditions dictionary.
obs = simdata2obsconditions(simdata1)
# Generate the rest-frame templates. Currently not writing out the rest-frame
# templates but we could.
flux, wave, meta = bgs_make_templates(sim, rand, BGSmaker)
truefile = os.path.join(simdir, sim['suffix'], 'bgs-{}-{:03}-true.fits'.format(sim['suffix'], ii))
if overwrite or not os.path.isfile(truefile):
write_templates(truefile, flux, wave, meta)
spectrafile = os.path.join(simdir, sim['suffix'], 'bgs-{}-{:03}.fits'.format(sim['suffix'], ii))
if overwrite or not os.path.isfile(spectrafile):
sim_spectra(wave, flux, 'bgs', spectrafile, obsconditions=obs,
sourcetype='bgs', seed=sim['seed'], expid=ii)
else:
print('File {} exists...skipping.'.format(spectrafile))
for sim in np.atleast_1d(sim1):
bgs_sim_spectra(sim, verbose=False, overwrite=overwrite_spectra)
def bgs_redshifts(sim, overwrite=False):
"""Fit for the redshifts."""
from redrock.external.desi import rrdesi
for ii in range(sim['nsim']):
zbestfile = os.path.join(simdir, sim['suffix'], 'bgs-{}-{:03}-zbest.fits'.format(sim['suffix'], ii))
spectrafile = os.path.join(simdir, sim['suffix'], 'bgs-{}-{:03}.fits'.format(sim['suffix'], ii))
if overwrite or not os.path.isfile(zbestfile):
rrdesi(options=['--zbest', zbestfile, '--ncpu', str(nproc), spectrafile])
else:
print('File {} exists...skipping.'.format(zbestfile))
for sim in np.atleast_1d(sim1):
bgs_redshifts(sim, overwrite=overwrite_redshifts)
def bgs_gather_results(sim, overwrite=False):
"""Gather all the pieces so we can make plots."""
from desispec.io.spectra import read_spectra
from desispec.io.zfind import read_zbest
nspec = sim['nspec']
nall = nspec * sim['nsim']
resultfile = os.path.join(simdir, sim['suffix'], 'bgs-{}-results.fits'.format(sim['suffix']))
if not os.path.isfile(resultfile) or overwrite:
pass
else:
log.info('File {} exists...skipping.'.format(resultfile))
return
cols = [
('EXPTIME', 'f4'),
('AIRMASS', 'f4'),
('MOONFRAC', 'f4'),
('MOONSEP', 'f4'),
('MOONALT', 'f4'),
('SNR_B', 'f4'),
('SNR_R', 'f4'),
('SNR_Z', 'f4'),
('TARGETID', 'i8'),
('TEMPLATEID', 'i4'),
('RMAG', 'f4'),
('GR', 'f4'),
('D4000', 'f4'),
('EWHBETA', 'f4'),
('ZTRUE', 'f4'),
('Z', 'f4'),
('ZERR', 'f4'),
('ZWARN', 'f4')]
result = Table(np.zeros(nall, dtype=cols))
result['EXPTIME'].unit = 's'
result['MOONSEP'].unit = 'deg'
result['MOONALT'].unit = 'deg'
# Read the simulation parameters data table.
simdatafile = os.path.join(simdir, sim['suffix'], 'bgs-{}-simdata.fits'.format(sim['suffix']))
simdata = Table.read(simdatafile)
for ii, simdata1 in enumerate(simdata):
# Copy over some data.
result['EXPTIME'][nspec*ii:nspec*(ii+1)] = simdata1['EXPTIME']
result['AIRMASS'][nspec*ii:nspec*(ii+1)] = simdata1['AIRMASS']
result['MOONFRAC'][nspec*ii:nspec*(ii+1)] = simdata1['MOONFRAC']
result['MOONSEP'][nspec*ii:nspec*(ii+1)] = simdata1['MOONSEP']
result['MOONALT'][nspec*ii:nspec*(ii+1)] = simdata1['MOONALT']
# Read the metadata table.
truefile = os.path.join(simdir, sim['suffix'], 'bgs-{}-{:03}-true.fits'.format(sim['suffix'], ii))
if os.path.isfile(truefile):
log.info('Reading {}'.format(truefile))
meta = Table.read(truefile)
#result['TARGETID'][nspec*ib:nspec*(ii+1)] = truth['TARGETID']
result['TEMPLATEID'][nspec*ii:nspec*(ii+1)] = meta['TEMPLATEID']
result['RMAG'][nspec*ii:nspec*(ii+1)] = 22.5 - 2.5 * np.log10(meta['FLUX_R'])
result['GR'][nspec*ii:nspec*(ii+1)] = -2.5 * np.log10(meta['FLUX_G'] / meta['FLUX_R'])
result['D4000'][nspec*ii:nspec*(ii+1)] = meta['D4000']
result['EWHBETA'][nspec*ii:nspec*(ii+1)] = meta['EWHBETA']
result['ZTRUE'][nspec*ii:nspec*(ii+1)] = meta['REDSHIFT']
# Read the zbest file.
zbestfile = os.path.join(simdir, sim['suffix'], 'bgs-{}-{:03}-zbest.fits'.format(sim['suffix'], ii))
if os.path.isfile(zbestfile):
log.info('Reading {}'.format(zbestfile))
zbest = read_zbest(zbestfile)
# Assume the tables are row-ordered!
result['Z'][nspec*ii:nspec*(ii+1)] = zbest.z
result['ZERR'][nspec*ii:nspec*(ii+1)] = zbest.zerr
result['ZWARN'][nspec*ii:nspec*(ii+1)] = zbest.zwarn
# Finally, read the spectra to get the S/N.
spectrafile = os.path.join(simdir, sim['suffix'], 'bgs-{}-{:03}.fits'.format(sim['suffix'], ii))
if os.path.isfile(spectrafile):
log.info('Reading {}'.format(spectrafile))
spec = read_spectra(spectrafile)
for band in ('b','r','z'):
for iobj in range(nspec):
these = np.where((spec.wave[band] > np.mean(spec.wave[band])-50) *
(spec.wave[band] < np.mean(spec.wave[band])+50) *
(spec.flux[band][iobj, :] > 0))[0]
result['SNR_{}'.format(band.upper())][nspec*ii+iobj] = (
np.median( spec.flux[band][iobj, these] * np.sqrt(spec.ivar[band][iobj, these]) )
)
log.info('Writing {}'.format(resultfile))
write_bintable(resultfile, result, extname='RESULTS', clobber=True)
for sim in np.atleast_1d(sim1):
bgs_gather_results(sim, overwrite=overwrite_results)
sim = sim1
resultfile = os.path.join(simdir, sim['suffix'], 'bgs-{}-results.fits'.format(sim['suffix']))
log.info('Reading {}'.format(resultfile))
result = Table.read(resultfile)
result
qa_zmag(result['ZTRUE'], result['RMAG'], maglabel=r'$r_{\rm DECaLS}$ (AB mag)', faintmag=20.0)
def qa_snr(res):
rmag, snr_b, snr_r = res['RMAG'], res['SNR_B'], res['SNR_R']
fig, ax = plt.subplots(2, 1, figsize=(7, 6), sharex=True)
ax[0].scatter(rmag, snr_b, s=40, alpha=0.95, edgecolor='k')
ax[0].set_ylabel(r'S/N [$b$ channel]')
ax[0].set_yscale('log')
ax[0].grid()
ax[1].scatter(rmag, snr_r, s=40, alpha=0.95, edgecolor='k')
ax[1].set_yscale('log')
ax[1].set_xlabel(r'$r_{DECaLS}$ (AB mag)')
ax[1].set_ylabel(r'S/N [$r$ channel]')
ax[1].grid()
plt.subplots_adjust(hspace=0.1)
qa_snr(result)
def zstats(res):
z = res['ZTRUE']
dz = (res['Z'] - z)
dzr = dz / (1 + z)
s1 = (res['ZWARN'] == 0)
s2 = (res['ZWARN'] == 0) & (np.abs(dzr) < 0.003)
s3 = (res['ZWARN'] == 0) & (np.abs(dzr) >= 0.003)
s4 = (res['ZWARN'] != 0)
s5 = np.logical_and( np.logical_or( (res['ZWARN'] == 0), (res['ZWARN'] == 4) ), (np.abs(dzr) < 0.003) )
return z, dz, dzr, s1, s2, s3, s4, s5
def gethist(quantity, res, range=None):
"""Generate the histogram (and Poisson uncertainty) for various
sample cuts. See zstats() for details.
"""
var = res[quantity]
z, dz, dzr, s1, s2, s3, s4, s5 = zstats(res)
h0, bins = np.histogram(var, bins=100, range=range)
hv, _ = np.histogram(var, bins=bins, weights=var)
h1, _ = np.histogram(var[s1], bins=bins)
h2, _ = np.histogram(var[s2], bins=bins)
h3, _ = np.histogram(var[s3], bins=bins)
h4, _ = np.histogram(var[s4], bins=bins)
h5, _ = np.histogram(var[s5], bins=bins)
good = h0 > 2
hv = hv[good]
h0 = h0[good]
h1 = h1[good]
h2 = h2[good]
h3 = h3[good]
h4 = h4[good]
h5 = h5[good]
vv = hv / h0
def _eff(k, n):
eff = k / (n + (n==0))
efferr = np.sqrt(eff * (1 - eff)) / np.sqrt(n + (n == 0))
return eff, efferr
e1, ee1 = _eff(h1, h0)
e2, ee2 = _eff(h2, h0)
e3, ee3 = _eff(h3, h0)
e4, ee4 = _eff(h4, h0)
e5, ee5 = _eff(h5, h0)
return vv, e1, e2, e3, e4, e5, ee1, ee2, ee3, ee4, ee5
def qa_efficiency(res, pngfile=None):
"""Redshift efficiency vs S/N, rmag, g-r color, redshift,
and D(4000).
"""
from matplotlib.ticker import ScalarFormatter
fig, ax = plt.subplots(3, 2, figsize=(10, 12), sharey=True)
xlabel = (r'$r_{DECaLS}$ (AB mag)', 'True Redshift $z$', r'S/N [$r$ channel]',
r'S/N [$b$ channel]', r'Apparent $g - r$', '$D_{n}(4000)$')
for thisax, xx, dolog, label in zip(ax.flat, ('RMAG', 'ZTRUE', 'SNR_R', 'SNR_B', 'GR', 'D4000'),
(0, 0, 0, 0, 0, 0), xlabel):
mm, e1, e2, e3, e4, e5, ee1, ee2, ee3, ee4, ee5 = gethist(xx, res)
thisax.errorbar(mm, e1, ee1, fmt='o', label='ZWARN=0')
thisax.errorbar(mm, e2, ee2, fmt='o', label='ZWARN=0, $dz/(1+z)<0.003$')
thisax.errorbar(mm, e3, ee3, fmt='o', label='ZWARN=0, $dz/(1+z)\geq 0.003$')
#thisax.errorbar(mm, e4, ee4, fmt='o', label='ZWARN>0')
thisax.set_xlabel(label)
if dolog:
thisax.set_xscale('log')
thisax.xaxis.set_major_formatter(ScalarFormatter())
thisax.axhline(y=1, ls='--', color='k')
thisax.grid()
thisax.set_ylim([0, 1.1])
ax[0][0].set_ylabel('Redshift Efficiency')
ax[1][0].set_ylabel('Redshift Efficiency')
ax[2][0].set_ylabel('Redshift Efficiency')
ax[0][0].legend(loc='lower left', fontsize=12)
plt.subplots_adjust(wspace=0.05, hspace=0.3)
if pngfile:
plt.savefig(pngfile)
qa_efficiency(result)
def qa_zwarn4(res):
mm, e1, e2, e3, e4, e5, ee1, ee2, ee3, ee4, ee5 = gethist('RMAG', res)
fig, ax = plt.subplots()
ax.errorbar(mm, e1, ee1, fmt='o', label='ZWARN=0, $dz/(1+z)<0.003$')
ax.errorbar(mm, e5, ee5, fmt='o', label='ZWARN=0 or ZWARN=4, $dz/(1+z)<0.003$')
ax.axhline(y=1, ls='--', color='k')
ax.grid()
ax.set_xlabel(r'$r_{DECaLS}$ (AB mag)')
ax.set_ylabel('Redshift Efficiency')
ax.legend(loc='lower left')
ax.set_ylim([0, 1.1])
qa_zwarn4(result)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Establish the I/O path, output filename, and random seed.
Step2: All or none of the output files can be overwritten using these keywords.
Step3: Read a handful of healpixels from the BGS/MXXL mock.
Step9: Use a KD tree to assign a basis template to each mock galaxy.
Step10: Set up the simulation parameters.
Step14: Note that if use_mock=False then rmagmin, rmagmax, zmin, and zmax are required. For example, here's another possible simulation of 1000 spectra in which the magnitude (r=19.5) and redshift (z=0.2) are held fixed while moonfrac and moonsep are varied (as well as intrinsic galaxy properties)
Step15: Generate the spectra.
Step17: Fit the redshifts.
Step19: Gather the results.
Step20: Analyze the outputs.
Step21: Distribution of redshift and apparent magnitude.
Step23: S/N vs apparent magnitude.
Step25: Redshift efficiency vs various measured and intrinsic quantities.
Step26: The most common failure mode seems to be ZWARN==4 (i.e., small delta-chi2).
|
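A quick numeric illustration of the accuracy criterion used in zstats() above: a redshift fit counts as good when |z_fit - z_true|/(1 + z_true) < 0.003. The values below are made up for illustration.
import numpy as np
z_true = np.array([0.10, 0.25, 0.40])
z_fit = np.array([0.1002, 0.2492, 0.55])      # the last fit is a catastrophic failure
dzr = np.abs(z_fit - z_true) / (1.0 + z_true)
print(dzr)           # approximately [0.00018, 0.00064, 0.107]
print(dzr < 0.003)   # [ True  True False]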
3,398
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
# Import TF 2.
%tensorflow_version 2.x
import tensorflow as tf
# Fix seed so that the results are reproducable.
tf.random.set_seed(0)
np.random.seed(0)
try:
import t3f
except ImportError:
# Install T3F if it's not already installed.
!git clone https://github.com/Bihaqo/t3f.git
!cd t3f; pip install .
import t3f
shape = (3, 4, 4, 5, 7, 5)
# Generate ground truth tensor A. To make sure that it has low TT-rank,
# let's generate a random tt-rank 5 tensor and apply t3f.full to it to convert to actual tensor.
ground_truth = t3f.full(t3f.random_tensor(shape, tt_rank=5))
# Make a (non trainable) variable out of ground truth. Otherwise, it will be randomly regenerated on each sess.run.
ground_truth = tf.Variable(ground_truth, trainable=False)
noise = 1e-2 * tf.Variable(tf.random.normal(shape), trainable=False)
noisy_ground_truth = ground_truth + noise
# Observe 25% of the tensor values.
sparsity_mask = tf.cast(tf.random.uniform(shape) <= 0.25, tf.float32)
sparsity_mask = tf.Variable(sparsity_mask, trainable=False)
sparse_observation = noisy_ground_truth * sparsity_mask
observed_total = tf.reduce_sum(sparsity_mask)
total = np.prod(shape)
initialization = t3f.random_tensor(shape, tt_rank=5)
estimated = t3f.get_variable('estimated', initializer=initialization)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
def step():
with tf.GradientTape() as tape:
# Loss is MSE between the estimated and ground-truth tensor as computed in the observed cells.
loss = 1.0 / observed_total * tf.reduce_sum((sparsity_mask * t3f.full(estimated) - sparse_observation)**2)
gradients = tape.gradient(loss, estimated.tt_cores)
optimizer.apply_gradients(zip(gradients, estimated.tt_cores))
# Test loss is MSE between the estimated tensor and full (and not noisy) ground-truth tensor A.
test_loss = 1.0 / total * tf.reduce_sum((t3f.full(estimated) - ground_truth)**2)
return loss, test_loss
train_loss_hist = []
test_loss_hist = []
for i in range(5000):
tr_loss_v, test_loss_v = step()
tr_loss_v, test_loss_v = tr_loss_v.numpy(), test_loss_v.numpy()
train_loss_hist.append(tr_loss_v)
test_loss_hist.append(test_loss_v)
if i % 1000 == 0:
print(i, tr_loss_v, test_loss_v)
plt.loglog(train_loss_hist, label='train')
plt.loglog(test_loss_hist, label='test')
plt.xlabel('Iteration')
plt.ylabel('MSE Loss value')
plt.title('SGD completion')
plt.legend()
shape = (10, 10, 10, 10, 10, 10, 10)
total_observed = np.prod(shape)
# Since the tensor is now too large to work with explicitly,
# we don't want to generate a binary mask;
# instead we generate the indices of the observed cells.
ratio = 0.001
# Let us simply pick some indices at random (we may get
# duplicates, but the probability of that is ~10^(-14),
# so let's not bother for now).
num_observed = int(ratio * total_observed)
observation_idx = np.random.randint(0, 10, size=(num_observed, len(shape)))
# and let us generate some values of the tensor to be approximated
observations = np.random.randn(num_observed)
# Our strategy is to feed the observation_idx
# into the tensor in the Tensor Train format and compute MSE between
# the obtained values and the desired values
initialization = t3f.random_tensor(shape, tt_rank=16)
estimated = t3f.get_variable('estimated', initializer=initialization)
# To collect the values of a TT tensor (without forming the full tensor)
# we use the function t3f.gather_nd
def loss():
estimated_vals = t3f.gather_nd(estimated, observation_idx)
return tf.reduce_mean((estimated_vals - observations) ** 2)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
def step():
with tf.GradientTape() as tape:
loss_value = loss()
gradients = tape.gradient(loss_value, estimated.tt_cores)
optimizer.apply_gradients(zip(gradients, estimated.tt_cores))
return loss_value
# In TF eager mode you're supposed to first implement and debug
# a function, and then compile it to make it faster.
faster_step = tf.function(step)
loss_hist = []
for i in range(2000):
loss_v = faster_step().numpy()
loss_hist.append(loss_v)
if i % 100 == 0:
print(i, loss_v)
plt.loglog(loss_hist)
plt.xlabel('Iteration')
plt.ylabel('MSE Loss value')
plt.title('smarter SGD completion')
plt.legend()
print(t3f.gather_nd(estimated, observation_idx))
print(observations)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating problem instance
Step2: Initialize the variable and compute the loss
Step3: SGD optimization
Step4: Speeding it up
Step5: Compiling the function to additionally speed things up
|
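For intuition about the indexing step in the second half of this entry: t3f.gather_nd plays the same role as fancy indexing on a dense array, returning the tensor value at each row of observation_idx. Below is a tiny dense NumPy analogue with made-up shapes and values (no TT format involved).
import numpy as np
dense = np.arange(24).reshape(2, 3, 4)
observation_idx = np.array([[0, 1, 2],
                            [1, 2, 3]])
observed_vals = dense[tuple(observation_idx.T)]   # one value per index row
print(observed_vals)   # [ 6 23]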
3,399
|
<ASSISTANT_TASK:>
Python Code:
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat
import shogun as sg
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
# load the dataset
dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = dataset['data']
# the usps dataset has the digits labeled from 1 to 10
# we'll subtract 1 to make them in the 0-9 range instead
Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1
# 1000 examples for training
Xtrain = sg.create_features(Xall[:,0:1000])
Ytrain = sg.create_labels(Yall[0:1000])
# 4000 examples for validation
Xval = sg.create_features(Xall[:,1001:5001])
Yval = sg.create_labels(Yall[1001:5001])
# the rest for testing
Xtest = sg.create_features(Xall[:,5002:-1])
Ytest = sg.create_labels(Yall[5002:-1])
# create the networks
net_no_reg = sg.create_machine("NeuralNetwork")
net_no_reg.add("layers", sg.create_layer("NeuralInputLayer", num_neurons=256))
net_no_reg.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=256))
net_no_reg.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=128))
net_no_reg.add("layers", sg.create_layer("NeuralSoftmaxLayer", num_neurons=10))
net_l2 = sg.create_machine("NeuralNetwork")
net_l2.add("layers", sg.create_layer("NeuralInputLayer", num_neurons=256))
net_l2.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=256))
net_l2.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=128))
net_l2.add("layers", sg.create_layer("NeuralSoftmaxLayer", num_neurons=10))
net_l1 = sg.create_machine("NeuralNetwork")
net_l1.add("layers", sg.create_layer("NeuralInputLayer", num_neurons=256))
net_l1.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=256))
net_l1.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=128))
net_l1.add("layers", sg.create_layer("NeuralSoftmaxLayer", num_neurons=10))
net_dropout = sg.create_machine("NeuralNetwork")
net_dropout.add("layers", sg.create_layer("NeuralInputLayer", num_neurons=256))
net_dropout.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=256))
net_dropout.add("layers", sg.create_layer("NeuralLogisticLayer", num_neurons=128))
net_dropout.add("layers", sg.create_layer("NeuralSoftmaxLayer", num_neurons=10))
# import networkx, install if necessary
try:
import networkx as nx
except ImportError:
import pip
%pip install --user networkx
import networkx as nx
G = nx.DiGraph()
pos = {}
for i in range(8):
pos['X'+str(i)] = (i,0) # 8 neurons in the input layer
pos['H'+str(i)] = (i,1) # 8 neurons in the first hidden layer
for j in range(8): G.add_edge('X'+str(j),'H'+str(i))
if i<4:
pos['U'+str(i)] = (i+2,2) # 4 neurons in the second hidden layer
for j in range(8): G.add_edge('H'+str(j),'U'+str(i))
if i<6:
pos['Y'+str(i)] = (i+1,3) # 6 neurons in the output layer
for j in range(4): G.add_edge('U'+str(j),'Y'+str(i))
nx.draw(G, pos, node_color='y', node_size=750)
def compute_accuracy(net, X, Y):
predictions = net.apply_multiclass(X)
evaluator = sg.create_evaluation("MulticlassAccuracy")
accuracy = evaluator.evaluate(predictions, Y)
return accuracy*100
net_no_reg.put('epsilon', 1e-6)
net_no_reg.put('max_num_epochs', 600)
net_no_reg.put('seed', 10)
# uncomment this line to allow the training progress to be printed on the console
#from shogun import MSG_INFO; net_no_reg.io.put('loglevel', MSG_INFO)
net_no_reg.put('labels', Ytrain)
net_no_reg.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("Without regularization, accuracy on the validation set =", compute_accuracy(net_no_reg, Xval, Yval), "%")
# turn on L2 regularization
net_l2.put('l2_coefficient', 3e-4)
net_l2.put('epsilon', 1e-6)
net_l2.put('max_num_epochs', 600)
net_l2.put('seed', 10)
net_l2.put('labels', Ytrain)
net_l2.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("With L2 regularization, accuracy on the validation set =", compute_accuracy(net_l2, Xval, Yval), "%")
# turn on L1 regularization
net_l1.put('l1_coefficient', 3e-5)
net_l1.put('epsilon', 1e-6)
net_l1.put('max_num_epochs', 600)
net_l1.put('seed', 10)
net_l1.put('labels', Ytrain)
net_l1.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("With L1 regularization, accuracy on the validation set =", compute_accuracy(net_l1, Xval, Yval), "%")
# set the dropout probability for neurons in the hidden layers
net_dropout.put('dropout_hidden', 0.5)
# set the dropout probability for the inputs
net_dropout.put('dropout_input', 0.2)
# limit the maximum incoming weight vector length for neurons
net_dropout.put('max_norm', 15)
net_dropout.put('epsilon', 1e-6)
net_dropout.put('max_num_epochs', 600)
net_dropout.put('seed', 10)
# use gradient descent for optimization
net_dropout.put('optimization_method', "NNOM_GRADIENT_DESCENT")
net_dropout.put('gd_learning_rate', 0.5)
net_dropout.put('gd_mini_batch_size', 100)
net_dropout.put('labels', Ytrain)
net_dropout.train(Xtrain) # this might take a while, depending on your machine
# compute accuracy on the validation set
print("With dropout, accuracy on the validation set =", compute_accuracy(net_dropout, Xval, Yval), "%")
# prepare the layers
net_conv = sg.create_machine("NeuralNetwork")
# input layer, a 16x16 image single channel image
net_conv.add("layers", sg.create_layer("NeuralInputLayer", width=16, height=16, num_neurons=256))
# the first convolutional layer: 10 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 10 8x8 feature maps
net_conv.add("layers", sg.create_layer("NeuralConvolutionalLayer",
activation_function="CMAF_RECTIFIED_LINEAR",
num_maps=10,
radius_x=2,
radius_y=2,
pooling_width=2,
pooling_height=2))
# the second convolutional layer: 15 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps
net_conv.add("layers", sg.create_layer("NeuralConvolutionalLayer",
activation_function="CMAF_RECTIFIED_LINEAR",
num_maps=15,
radius_x=2,
radius_y=2,
pooling_width=2,
pooling_height=2))
# output layer
net_conv.add("layers", sg.create_layer("NeuralSoftmaxLayer", num_neurons=10))
# 50% dropout in the input layer
net_conv.put('dropout_input', 0.5)
# max-norm regularization
net_conv.put('max_norm', 1.0)
# set gradient descent parameters
net_conv.put('optimization_method', "NNOM_GRADIENT_DESCENT")
net_conv.put('gd_learning_rate', 0.01)
net_conv.put('gd_mini_batch_size', 100)
net_conv.put('epsilon', 0.0)
net_conv.put('max_num_epochs', 100)
net_conv.put("seed", 10)
# start training
net_conv.put('labels', Ytrain)
net_conv.train(Xtrain)
# compute accuracy on the validation set
print("With a convolutional network, accuracy on the validation set =", compute_accuracy(net_conv, Xval, Yval), "%")
print("Accuracy on the test set using the convolutional network =", compute_accuracy(net_conv, Xtest, Ytest), "%")
predictions = net_conv.apply_multiclass(Xtest)
_=plt.figure(figsize=(10,12))
# plot some images, with the predicted label as the title of each image
# this code is borrowed from the KNN notebook by Chiyuan Zhang and Sören Sonnenburg
for i in range(100):
ax=plt.subplot(10,10,i+1)
plt.title(int(predictions.get("labels")[i]))
ax.imshow(Xtest.get("feature_matrix")[:,i].reshape((16,16)), interpolation='nearest', cmap = matplotlib.cm.Greys_r)
ax.set_xticks([])
ax.set_yticks([])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating the network
Step2: We can also visualize what the network would look like. To do that we'll draw a smaller network using networkx. The network we'll draw will have 8 inputs (labeled X), 8 neurons in the first hidden layer (labeled H), 4 neurons in the second hidden layer (labeled U), and 6 neurons in the output layer (labeled Y). Each neuron will be connected to all neurons in the layer that precedes it.
Step3: Training
Step4: Training without regularization
Step5: Training with L2 regularization
Step6: Training with L1 regularization
Step7: Training with dropout
Step8: Convolutional Neural Networks
Step9: Now we can train the network. Like in the previous section, we'll use gradient descent with dropout and max-norm regularization
Step10: Evaluation
Step11: We can also look at some of the images and the network's response to each of them
|
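For orientation on the two penalty terms toggled above: the L2 coefficient adds a term proportional to the sum of squared weights to the training loss, while the L1 coefficient adds a term proportional to the sum of absolute weights, which tends to drive weights to exact zeros. A back-of-the-envelope sketch with a made-up weight vector follows; the exact scaling Shogun applies internally may differ.
import numpy as np
w = np.array([0.5, -1.2, 0.0, 2.0])          # hypothetical weight vector
l2_coefficient, l1_coefficient = 3e-4, 3e-5  # the values set on net_l2 and net_l1 above
l2_penalty = l2_coefficient * np.sum(w ** 2)
l1_penalty = l1_coefficient * np.sum(np.abs(w))
print(l2_penalty, l1_penalty)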