11,200
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from IPython.display import display, Latex
import seaborn as sb
sb.set_context('notebook')
sb.set_style('ticks')
from ldtk import LDPSetCreator, BoxcarFilter, TabulatedFilter
from ldtk.filters import sdss_g, sdss_r, sdss_i, sdss_z
sc = LDPSetCreator(teff=(5500,100), logg=(4.5,0.2), z=(0.25,0.05),
                   filters=[sdss_g, sdss_r, sdss_i, sdss_z], dataset='vis-lowres')
ps = sc.create_profiles(nsamples=2000)
ps.resample_linear_z(300)
cmap = cm.get_cmap("Spectral")
cp = cmap(linspace(0.1,1.0,4))
fig,ax = subplots(1, 2, figsize=(12,4), sharey=True)
for i in range(ps._nfilters):
    ax[0].fill_between(ps._z, ps._mean[i]-3*ps._std[i], ps._mean[i]+3*ps._std[i],
                       facecolor=cp[i], alpha=0.2)
    ax[0].plot(ps._z, ps._mean[i], '-', c=cp[i], label=ps._filters[i])
    ax[1].fill_between(ps._mu, ps._mean[i]-3*ps._std[i], ps._mean[i]+3*ps._std[i],
                       facecolor=cp[i], alpha=0.2)
    ax[1].plot(ps._mu, ps._mean[i], '-', c=cp[i], label=ps._filters[i])
ax[0].legend()
setp(ax[0], xlim=(0,1.05), ylim=(0.2,1.05), xlabel='z', ylabel='Flux')
setp(ax[1], xlim=(1.05,0), ylim=(0.2,1.05), xlabel=r'$\mu$')
sb.despine(fig, offset=10, trim=True)
setp(ax[1].get_yticklabels(), visible=False)
fig.tight_layout()
x,y = meshgrid(linspace(-1.05, 1.05, 150), linspace(-1.05, 1.05, 150))
r = sqrt(x**2+y**2)
profiles = ps.profile_averages
fig, axs = subplots(1, 4, figsize=(12,3), sharex=True, sharey=True)
for i, ldp in enumerate(profiles):
    axs.flat[i].imshow(interp(r, ps._z, ldp, left=0, right=0),
                       vmin=0, vmax=1, aspect='auto', extent=(-1.05, 1.05, -1.05, 1.05))
    axs.flat[i].contour(x, y, interp(r, ps._z, ldp, left=0, right=0), colors='k', alpha=0.15,
                        levels=[0.0, 0.25, 0.4, 0.55, 0.65, 0.75, 0.85, 0.95, 1])
    axs.flat[i].set_title(ps._filters[i])
setp(axs, xticks=[], yticks=[])
fig.tight_layout()
qc, qe = ps.coeffs_qd()
for i, (c, e) in enumerate(zip(qc, qe)):
    display(Latex(r'u$_{i:d} = {c[0]:5.4f} \pm {e[0]:5.4f}\quad$'
                  r'v$_{i:d} = {c[1]:5.4f} \pm {e[1]:5.4f}$'.format(i=i+1, c=c, e=e)))
qc, qe = ps.coeffs_qd(do_mc=True, n_mc_samples=10000)
for i, (c, e) in enumerate(zip(qc, qe)):
    display(Latex(r'u$_{i:d} = {c[0]:5.4f} \pm {e[0]:5.4f}\quad$'
                  r'v$_{i:d} = {c[1]:5.4f} \pm {e[1]:5.4f}$'.format(i=i+1, c=c, e=e)))
ps.set_uncertainty_multiplier(2)
qc, qe = ps.coeffs_qd(do_mc=True, n_mc_samples=10000)
for i, (c, e) in enumerate(zip(qc, qe)):
    display(Latex(r'u$_{i:d} = {c[0]:5.4f} \pm {e[0]:5.4f}\quad$'
                  r'v$_{i:d} = {c[1]:5.4f} \pm {e[1]:5.4f}$'.format(i=i+1, c=c, e=e)))
print(ps.lnlike_qd([0.69, 0.15], flt=0))
print(ps.lnlike_qd([[0.69, 0.15], [0.48, 0.16], [0.38, 0.15], [0.38, 0.10]]))
print(ps.lnlike_qd([[[0.69, 0.15], [0.48, 0.16], [0.38, 0.15], [0.38, 0.10]],
                    [[0.66, 0.12], [0.43, 0.12], [0.35, 0.14], [0.37, 0.13]]]))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialisation
Step2: First, we initialise a LDPSetCreator with the stellar parameter estimates and our filter set. This may take some time, since the necessary model spectra also need to be downloaded to a local cache directory (this can amount to several hundred MB).
Step3: Next, we create the limb darkening profiles with their uncertainties for each filter, all contained in an LDPSet object.
Step4: Limb darkening profile visualisation
Step5: Limb darkening coefficient estimation
Step6: We can also use MCMC to estimate the coefficient uncertainties (or the full covariance matrix) more accurately
Step7: Finally, if we decide not to fully trust the stellar models the profiles were computed from, we can set a multiplicative factor on the profile uncertainties.
Step8: Likelihood evaluation
Step9: jointly for all passbands
Step10: or in parallel for a number of coefficient sets, which can be useful when using population-based optimizers or MCMC samplers, such as emcee
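For reference, the u and v coefficients estimated above parameterise the standard quadratic limb darkening law, I(mu) = 1 - u(1 - mu) - v(1 - mu)^2. A minimal standalone sketch in plain numpy (not ldtk's own API, and the coefficient values are just illustrative):

```python
import numpy as np

# Quadratic limb darkening law: I(mu) = 1 - u*(1 - mu) - v*(1 - mu)**2,
# where mu is the cosine of the angle between the line of sight and the
# surface normal (mu = 1 at disk centre, mu = 0 at the limb).
def quadratic_ld(mu, u, v):
    return 1.0 - u * (1.0 - mu) - v * (1.0 - mu) ** 2

mu = np.linspace(0.0, 1.0, 5)
# coefficients similar in magnitude to those printed above
profile = quadratic_ld(mu, 0.69, 0.15)
```

At the disk centre (mu = 1) the intensity is 1 by construction, and it falls off towards the limb.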
11,201
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
data = pd.read_csv('all_data.csv')
data.head(10)
duplicated_data = data.duplicated()
duplicated_data.head()
data[duplicated_data]
heights = data['height']
ages = data['age']
gender = data['gender']
missing_height = heights.isnull()
missing_height.head()
missing_height.sum()
data[missing_height]
missing_ages = ages.isnull()
data[missing_ages]
gender.value_counts(dropna=False)
missing_gender = data['gender'].isnull()
data[missing_gender]
gender.value_counts(dropna=False).plot(kind='bar', rot=0)
heights.hist(bins=40, figsize=(16,4))
plt.xlabel('Height')
plt.ylabel('Count')
def print_analysis(series):
    for nr in range(1, 4):
        upper_limit = series.mean() + (nr * series.std())
        lower_limit = series.mean() - (nr * series.std())
        over_range = series > upper_limit
        percent_over_range = over_range.sum() / len(series) * 100
        under_range = series < lower_limit
        percent_under_range = under_range.sum() / len(series) * 100
        in_range = (series < upper_limit) & (series > lower_limit)
        percent_in_range = in_range.sum() / len(series) * 100
        print('\nFor the range of %0.0f standard deviations:' % nr)
        print(' Lower limit: %0.0f' % lower_limit)
        print(' Percent under range: %0.1f%%' % percent_under_range)
        print(' Upper limit: %0.0f' % upper_limit)
        print(' Percent over range: %0.1f%%' % percent_over_range)
        print(' Percent within range: %0.1f%%' % percent_in_range)
heights.hist(bins=20)
plt.xlabel('Height')
plt.ylabel('Count')
print_analysis(heights)
heights[heights < heights.mean() - 2*heights.std()]
heights[heights > heights.mean() + 2*heights.std()]
heights[heights < heights.mean() - 3*heights.std()]
heights[heights > heights.mean() + 3*heights.std()]
ages.hist(bins=40, figsize=(16,4))
plt.xlabel('Age')
plt.ylabel('Count')
print_analysis(ages)
ages.max()
extreme_value = .99
ages.dropna().quantile(extreme_value)
# under_extreme_value: keep only the ages strictly below the extreme-value quantile
under_extreme_value = ages[ages < ages.dropna().quantile(extreme_value)]
under_extreme_value.hist(bins=40, figsize=(16,4))
plt.xlabel('Age')
plt.ylabel('Count')
non_babies = under_extreme_value[under_extreme_value > 10]
non_babies.hist(bins=40, figsize=(16,4))
plt.xlabel('Age')
plt.ylabel('Count')
non_babies.max()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data you'll be exploring
Step2: Duplicated data
Step3: So this is actually a mask. We can now ask for the data where the mask applies
Step4: Missing data
Step5: How much missing data do we have for heights?
Step6: In python, False evaluates to 0, and True to 1. So we can count the number of missing by doing
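A tiny standalone illustration of this counting trick, with made-up values:

```python
import pandas as pd

s = pd.Series([1.0, None, 3.0, None])
mask = s.isnull()
# True counts as 1 and False as 0, so summing the mask counts the missing entries
n_missing = int(mask.sum())
```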
Step7: As before, we can use that mask on our original dataset, and see it
Step8: How about age?
Step9: And gender?
Step10: But wait, we have another problem. We seem to have male and MALE
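One common fix, sketched here on an invented series, is to normalise the case before counting:

```python
import pandas as pd

gender = pd.Series(['male', 'MALE', 'female', 'Female'])
# lower-case everything so 'male' and 'MALE' land in the same bucket
normalized = gender.str.lower()
counts = normalized.value_counts()
```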
Step11: Outliers
Step12: This was useful, we can see that there are some really tall people, and some quite small ones. The distribution also looks close to normal
Step13: Looking at a few of these outliers
Step14: Over
Step15: Note
Step16: How about the ages?
Step17: Well, this is quite useless. The reason is that using standard deviations assumes that the distribution is normal, and we can clearly see it isn't.
Step18: What if we used percentiles?
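A small self-contained sketch of percentile-based trimming, with invented ages; unlike the standard-deviation rule, it makes no normality assumption:

```python
import pandas as pd

ages = pd.Series([1, 22, 25, 30, 31, 33, 35, 40, 45, 120])
# everything below the 99th percentile is kept; the extreme outlier is dropped
cutoff = ages.quantile(0.99)
trimmed = ages[ages < cutoff]
```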
Step19: Note
Step20: Now that looks a lot better. We can clearly identify that almost everyone is an adult, except the point on the extreme left.
Step21: Starting to look better, what about the point on the far right?
11,202
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 8
df = pd.read_csv("data/titanic.csv")
df.head()
df['Age'].hist()
df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))
df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')
data = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
data.head()
data['1999':].resample('M').mean().plot(ylim=[0,120])
data['1999':].resample('A').mean().plot(ylim=[0,100])
data['weekday'] = data.index.weekday
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour])['BASCH'].mean().unstack(level=0)
data_weekend.plot()
df
df.index
df.columns
df.dtypes
df.info()
df.values
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
        'population': [11.3, 64.3, 81.3, 16.9, 64.9],
        'area': [30510, 671308, 357050, 41526, 244820],
        'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data)
df_countries
df['Age']
age = df['Age']
age.index
age.values[:10]
age[0]
df = df.set_index('Name')
df
age = df['Age']
age
age['Dooley, Mr. Patrick']
age * 1000
age.mean()
age[age > 70]
df['Embarked'].value_counts()
df['Fare'].max()
df['Fare'].median()
# df['Survived'].sum() / len(df['Survived'])
df['Survived'].mean()
#pd.read
#df.to
pd.read_csv?
df = pd.read_csv("data/titanic.csv")
df.head()
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
no2
no2.head(3)
no2.tail()
no2.info()
no2.describe()
no2.plot(kind='box', ylim=[0,250])
no2['BASCH'].plot(kind='hist', bins=50)
df['Age'].hist()
no2.plot(figsize=(12,6))
no2[-500:].plot(figsize=(12,6))
df = pd.read_csv("data/titanic.csv")
df['Age']
df[['Age', 'Fare']]
df[10:15]
df = df.set_index('Name')
df.loc['Bonnell, Miss. Elizabeth', 'Fare']
df.loc['Bonnell, Miss. Elizabeth':'Andersson, Mr. Anders Johan', :]
df.iloc[0:2,1:3]
df.loc['Braund, Mr. Owen Harris', 'Survived'] = 100
df
df['Fare'] > 50
df[df['Fare'] > 50]
df = pd.read_csv("data/titanic.csv")
df[df['Sex'] == 'male']
df.loc[df['Sex'] == 'male', 'Age'].mean()
df.loc[df['Sex'] == 'female', 'Age'].mean()
len(df[df['Age'] > 70])
(df['Age'] > 70).sum()
df = pd.DataFrame({'key': ['A','B','C','A','B','C','A','B','C'],
                   'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
df['data'].sum()
for key in ['A', 'B', 'C']:
    print(key, df[df['key'] == key]['data'].sum())
df.groupby('key').sum()
df.groupby('key').aggregate(np.sum) # 'sum'
df.groupby('key')['data'].sum()
df = pd.read_csv("data/titanic.csv")
df.head()
df.groupby('Sex')['Age'].mean()
# df['Survived'].sum() / len(df['Survived'])
df['Survived'].mean()
df25 = df[df['Age'] <= 25]
df25['Survived'].sum() / len(df25['Survived'])
# df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))
df.groupby('Sex')[['Survived']].mean()
df.groupby('Pclass')['Survived'].mean().plot(kind='bar')
df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))
df.groupby('AgeClass')['Fare'].mean().plot(kind='bar', rot=0)
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
no2.index
no2["2010-01-01 09:00": "2010-01-01 12:00"]
no2['2012-01':'2012-03']
no2.index.hour
no2.index.year
no2.plot()
no2.head()
no2.resample('D').mean().head()
no2.resample('D').max().head()
no2.resample('M').mean().plot() # 'A'
# no2['2012'].resample('D').plot()
no2.loc['2009':, 'VERS'].resample('M').agg(['mean', 'median']).plot()
no2.resample('A').mean().plot()
no2.mean(axis=1).resample('A').mean().plot(color='k', linestyle='--', linewidth=4)
no2['month'] = no2.index.month
no2.groupby('month').mean()
no2.groupby('month').mean().plot()
no2.groupby(no2.index.hour).mean().plot()
no2.index.weekday?
no2['weekday'] = no2.index.weekday
no2['weekend'] = no2['weekday'].isin([5, 6])
data_weekend = no2.groupby(['weekend', no2.index.hour]).mean()
data_weekend.head()
data_weekend_BASCH = data_weekend['BASCH'].unstack(level=0)
data_weekend_BASCH.head()
data_weekend_BASCH.plot()
no2['hour'] = no2.index.hour
no2.pivot_table(columns='weekend', index='hour', values='BASCH')
# re-reading the data to have a clean version
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
exceedances = no2 > 200
# group by year and count exceedances (sum of boolean)
exceedances = exceedances.groupby(exceedances.index.year).sum()
ax = exceedances.loc[2005:].plot(kind='bar')
ax.axhline(18, color='k', linestyle='--')
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Let's start with a showcase
Step2: Starting from reading this dataset, to answering questions about this data in a few lines of code
Step3: How does the survival rate of the passengers differ between sexes?
Step4: Or how does it differ between the different classes?
Step5: All the needed functionality for the above examples will be explained throughout this tutorial.
Step6: to answering questions about this data in a few lines of code
Step7: What is the difference in diurnal profile between weekdays and weekend?
Step8: We will come back to these examples, and build them up step by step.
Step9: Attributes of the DataFrame
Step10: To check the data types of the different columns
Step11: An overview of that information can be given with the info() method
Step12: Also a DataFrame has a values attribute, but attention
Step13: Apart from importing your data from an external source (text file, excel, database, ..), one of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Step14: One-dimensional data
Step15: Attributes of a Series
Step16: You can access the underlying numpy array representation with the .values attribute
Step17: We can access series values via the index, just like for NumPy arrays
Step18: Unlike the NumPy array, though, this index can be something other than integers
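A tiny illustration with a hypothetical string-valued index (the names are made up):

```python
import pandas as pd

ages = pd.Series([22.0, 38.0, 35.0],
                 index=['Braund', 'Cumings', 'Heikkinen'])
# label-based access instead of positional access
age = ages['Cumings']
```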
Step19: but with the power of numpy arrays. Many things you can do with numpy arrays, can also be applied on DataFrames / Series.
Step20: A range of methods
Step21: Fancy indexing, like indexing with a list or boolean indexing
Step22: But also a lot of pandas specific methods, e.g.
Step23: <div class="alert alert-success">
Step24: <div class="alert alert-success">
Step25: 3. Data import and export
Step26: Very powerful csv reader
Step27: Luckily, if we have a well formed csv file, we don't need many of those arguments
Step28: <div class="alert alert-success">
Step29: 4. Exploration
Step30: info()
Step31: Getting some basic summary statistics about the data with describe
Step32: Quickly visualizing the data
Step33: <div class="alert alert-success">
Step34: The default plot (when not specifying kind) is a line plot of all columns
Step35: This does not tell us much ..
Step36: Or we can use some more advanced time series features -> see further in this notebook!
Step37: df[] provides some convenience shortcuts
Step38: or multiple columns
Step39: But, slicing accesses the rows
Step40: Systematic indexing with loc and iloc
Step41: Selecting by position with iloc works similarly to indexing numpy arrays
Step42: The different indexing methods can also be used to assign data
Step43: Boolean indexing (filtering)
Step44: <div class="alert alert-success">
Step45: <div class="alert alert-success">
Step46: 6. The group-by operation
Step47: Recap
Step48: However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.
Step49: This becomes very verbose when having multiple groups. You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.
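The verbose loop can be collapsed into a single groupby call; a minimal sketch with toy data:

```python
import pandas as pd

df = pd.DataFrame({'key': ['A', 'B', 'A', 'B'],
                   'data': [1, 2, 3, 4]})
# one aggregation per group, without writing an explicit loop
sums = df.groupby('key')['data'].sum()
```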
Step50: And many more methods are available.
Step51: Application of the groupby concept on the titanic data
Step52: <div class="alert alert-success">
Step53: <div class="alert alert-success">
Step54: <div class="alert alert-success">
Step55: <div class="alert alert-success">
Step56: <div class="alert alert-success">
Step57: <div class="alert alert-success">
Step58: 7. Working with time series data
Step59: When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available
Step60: Indexing a time series works with strings
Step61: A nice feature is "partial string" indexing, so you don't need to provide the full datetime string.
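A self-contained sketch of partial-string indexing, with invented dates and values:

```python
import pandas as pd

idx = pd.date_range('2012-01-01', periods=100, freq='D')
ts = pd.Series(range(100), index=idx)
# '2012-01' and '2012-02' are expanded to full month boundaries
first_two_months = ts['2012-01':'2012-02']
```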
Step62: Time and date components can be accessed from the index
Step63: Converting your time series with resample
Step64: The time series has a frequency of 1 hour. I want to change this to daily
Step65: Above I take the mean, but as with groupby I can also specify other methods
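For instance, with a toy hourly series (invented data), max can be requested instead of mean:

```python
import pandas as pd

idx = pd.date_range('2021-01-01', periods=48, freq='h')
ts = pd.Series(range(48), index=idx)
# downsample from hourly to daily, taking the maximum of each day
daily_max = ts.resample('D').max()
```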
Step66: The string to specify the new time frequency
Step67: <div class="alert alert-success">
Step68: <div class="alert alert-success">
Step69: Now, we can calculate the mean of each month over the different years
Step70: <div class="alert alert-success">
Step71: <div class="alert alert-success">
Step72: Add a column indicating week/weekend
Step73: Now we can groupby the hour of the day and the weekend (or use pivot_table)
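A compact sketch of the same weekday/weekend profile built from synthetic hourly data:

```python
import pandas as pd

idx = pd.date_range('2021-01-01', periods=14 * 24, freq='h')
df = pd.DataFrame({'value': range(len(idx))}, index=idx)
df['weekend'] = idx.weekday.isin([5, 6])
df['hour'] = idx.hour
# one column per weekend flag, one row per hour of the day
profile = df.pivot_table(values='value', index='hour',
                         columns='weekend', aggfunc='mean')
```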
Step74: <div class="alert alert-success">
11,203
<ASSISTANT_TASK:>
Python Code:
import glob
import pandas as pd

DATA_FOLDER = 'Data' # Use the data folder provided in Tutorial 02 - Intro to Pandas.
path = DATA_FOLDER + "/ebola"
countries = [("Guinea", "guinea_data"), ("Liberia", "liberia_data"), ("Sierra Leone", "sl_data")]
data = pd.DataFrame()
# for each country dataframe, concatenate all the sheets and add a new column with the corresponding country
for country in countries:
    allFiles = glob.glob(path + "/" + country[1] + "/*.csv")
    data_per_country = pd.concat((pd.read_csv(file) for file in allFiles), axis=0)
    data_per_country['Country'] = country[0]
    data = pd.concat([data, data_per_country])
# merge columns 'Date' and 'date'
data['Date'] = pd.concat([data['Date'].dropna(), data['date'].dropna()])
data = data.drop('date', axis=1)
data['Date'] = pd.to_datetime(data['Date'])
# Guinea
# basic formating of the data, to be more readable for a human
data_cleaned_guinea = data.query('Country == "Guinea"').dropna(axis=1, how='all').set_index(['Date', 'Description'])['Totals']
data_cleaned_guinea = pd.pivot_table(data_cleaned_guinea.reset_index(), values='Totals', index='Date', columns='Description', aggfunc='sum')
# select the two columns for which we are interested
data_cleaned_guinea = data_cleaned_guinea[['New cases of confirmed', 'New deaths registered']].apply(pd.to_numeric, errors='coerce')
# group by month computing the mean
data_cleaned_guinea = data_cleaned_guinea.groupby(data_cleaned_guinea.index.month).mean()
# display the values in a sexy manner
data_cleaned_guinea = data_cleaned_guinea.rename(columns={'New cases of confirmed': 'New cases (mean)', 'New deaths registered': 'Deaths (mean)'})
data_cleaned_guinea.index.names = ['Month']
data_cleaned_guinea.columns.name = ''
data_cleaned_guinea
# this function checks that a cumulative column has been correctly filled over the days,
# i.e. that the cumulative value never strictly decreases as the days go by. If it does,
# we write NaN in the offending cumulative cell: that cell is considered wrongly filled.
def check_cumulative_entries(df, column):
    # initialize the last entry to 0
    last_entry = 0
    # for each day, check whether the current entry is strictly smaller than the last entry
    for d in df.index:
        current_entry = df.loc[d, column]
        if last_entry > current_entry:
            # if this is the case, write a NaN value into the offending cell
            df.loc[d, column] = float('nan')
        else:
            # when there is no problem, update the last entry value; otherwise keep it unchanged
            last_entry = current_entry
    # return the modified series
    return df[column]
# this function builds a daily entry series based on a cumulative one
def compute_daily_entries_from_cumulative(df, column):
    # build the series from the dataframe df and the column of interest
    s_cum = df[column]
    # create a shifted version of it
    s_cum_shifted = s_cum.shift(1)
    # initialize the first entry of the shifted version with the first entry of the original
    s_cum_shifted.iloc[0] = s_cum.iloc[0]
    # compute the daily entry series
    s_daily = s_cum - s_cum_shifted
    return s_daily
# Liberia
# basic formating of the data, to be more readable for a human
data_cleaned_liberia = data.query('Country == "Liberia"').dropna(axis=1, how='all').set_index(['Date','Variable'])['National'].apply(pd.to_numeric, errors='coerce')
data_cleaned_liberia = pd.pivot_table(data_cleaned_liberia.reset_index(), values='National', index='Date', columns='Variable', aggfunc='sum')
# new cases
# write cumulative confirmed cases in last 6 entries as the concerned cells are empty (they just wrote the
# cumulative confirmed cases in a different cell)
for i, row in data_cleaned_liberia[(data_cleaned_liberia.index.month == 12) & (data_cleaned_liberia.index.day > 3)].iterrows():
    data_cleaned_liberia.loc[i, 'Total confirmed cases'] = row['New case/s (confirmed)']
# check consistency of cumulative data
data_cleaned_liberia['Total confirmed cases'] = check_cumulative_entries(data_cleaned_liberia, 'Total confirmed cases')
# create daily entries
data_cleaned_liberia['New case/s confirmed'] = compute_daily_entries_from_cumulative(data_cleaned_liberia, 'Total confirmed cases')
# compute the mean by month
new_cases_liberia = data_cleaned_liberia['New case/s confirmed'].resample('M').mean()
# new deaths
new_deaths_liberia = data_cleaned_liberia['Newly reported deaths'].resample('M').mean()
# concatenate the two time series
data_cleaned_liberia = pd.concat([new_cases_liberia, new_deaths_liberia], axis=1)
# display the values in a sexy manner
data_cleaned_liberia = data_cleaned_liberia.rename(columns={'New case/s confirmed': 'New cases (mean)', 'Newly reported deaths': 'Deaths (mean)'})
data_cleaned_liberia.index.names = ['Month']
data_cleaned_liberia.index = data_cleaned_liberia.index.month
data_cleaned_liberia
# Sierra Leone
# basic formating of the data, to be more readable for a human
data_cleaned_SL = data.query('Country == "Sierra Leone"').dropna(axis=1, how='all').set_index(['Date','variable'])['National'].apply(pd.to_numeric, errors='coerce')
data_cleaned_SL = pd.pivot_table(data_cleaned_SL.reset_index(), values='National', index='Date', columns='variable', aggfunc='sum')
# no specific problem for the new cases
new_cases_SL = data_cleaned_SL['new_confirmed'].resample('M').mean()
# new deaths
# check consisteny of cumulative data
data_cleaned_SL['death_confirmed'] = check_cumulative_entries(data_cleaned_SL, 'death_confirmed')
# create daily entries
data_cleaned_SL['death_confirmed'] = compute_daily_entries_from_cumulative(data_cleaned_SL, 'death_confirmed')
#compute the mean by month
new_deaths_SL = data_cleaned_SL['death_confirmed'].resample('M').mean()
# concatenate the two time series
data_cleaned_SL = pd.concat([new_cases_SL, new_deaths_SL], axis=1)
data_cleaned_SL
# display the values in a sexy manner
data_cleaned_SL = data_cleaned_SL.rename(columns={'new_confirmed': 'New cases (mean)', 'death_confirmed': 'Deaths (mean)'})
data_cleaned_SL.index.names = ['Month']
data_cleaned_SL.index = data_cleaned_SL.index.month
data_cleaned_SL
# general overview
data_cleaned_guinea['Country'] = 'Guinea'
data_cleaned_liberia['Country'] = 'Liberia'
data_cleaned_SL['Country'] = 'Sierra Leone'
# concatenate the data from the three countries to have a general overview
overview = pd.concat([data_cleaned_guinea, data_cleaned_liberia,data_cleaned_SL])
overview.reset_index().set_index(['Country', 'Month'])
path_RNA = "Data/microbiome"
data_RNA = pd.DataFrame()
allFiles_RNA = glob.glob(path_RNA + "/MID*.xls")
metadata_RNA = path_RNA + "/metadata.xls"
# Concat all the MID* files
data_RNA = pd.concat((pd.read_excel(file, index_col=0, names=[file[16:-4]]) for file in allFiles_RNA), axis=1)
# transpose index/columns and sort indexes
data_RNA = data_RNA.T
# add last file as columns
data_RNA = pd.concat([data_RNA, pd.read_excel(metadata_RNA, index_col=0)], axis=1)
# replace NaN values by 'unknown'
data_RNA = data_RNA.fillna(value='unknown')
data_RNA
from IPython.core.display import HTML
HTML(filename=DATA_FOLDER+'/titanic.html')
# load data
data_titanic = pd.read_excel('Data/titanic.xls')
data_titanic.head()
# We are only interested in handling a few of the columns and their missing values
# We drop the row with a NaN value for Embarked since there are only two of them
# We could also replace by the most frequent value policy, and thus fill with "S" in this case
data_titanic.dropna(subset=["embarked"], inplace=True)
# Fill NaN with median value policy for fare and age
data_titanic["fare"].fillna(data_titanic["fare"].median(), inplace=True)
data_titanic["age"].fillna(data_titanic["age"].median(), inplace=True)
data_titanic.dtypes
data_titanic.describe()
for col in ['pclass', 'survived', 'sex', 'embarked']:
    data_titanic[col] = data_titanic[col].astype('category')
data_titanic.dtypes
import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 2, figsize=(10,10))
data_titanic['pclass'].value_counts().plot(kind='bar', ax=axes[0, 0])
axes[0, 0].set_title('Number of passenger w.r.t class')
data_titanic['embarked'].value_counts().plot(kind='bar', ax=axes[0, 1])
axes[0, 1].set_title('Number of passenger w.r.t embarkment location')
data_titanic['sex'].value_counts().plot(kind='bar', ax=axes[1, 0])
axes[1, 0].set_title('Number of passenger w.r.t sex')
age_binned = pd.cut(data_titanic['age'], bins=[0, 10, 20, 30, 40, 50, 60, 70, 80], include_lowest=True)
axes[1, 1].set_title('Number of passenger w.r.t age range')
age_binned.value_counts(sort=False).plot(kind='bar', ax=axes[1, 1])
plt.show()
fig, axes = plt.subplots(1, 2, figsize=(16,8))
s = data_titanic['cabin']
s_nan = s.astype(str).str[0]
# We have to replace the N
s_nan = s_nan.replace('n', 'NaN')
s_no_nan = s.dropna().astype(str).str[0]
axes[0].set_title('Cabin repartition with undefined')
s_nan.value_counts().plot.pie(ax=axes[0], autopct='%1.0f%%')
axes[1].set_title('Cabin repartition without undefined')
s_no_nan.value_counts().plot.pie(ax=axes[1], autopct='%1.0f%%')
plt.show()
plt.figure(figsize=(8,8))
df = data_titanic[data_titanic['survived'] == 1]['pclass']
df.value_counts().plot.pie(autopct='%1.0f%%')
plt.title('Proportion of survivors for each class')
plt.show()
df = data_titanic[data_titanic['survived'] == 1]
df = df.groupby(['pclass', 'sex']).size().unstack()
df.plot.bar(stacked=True, figsize=(14,8))
plt.title('Proportion of survivors per sex and class')
plt.show()
df = data_titanic[['age', 'pclass', 'sex', 'survived']]
df = df.sort_values('age')
# Split at the median of the indices rather than the median age, since splitting
# by age would not produce two equally sized sets (several passengers share the median age).
# The result satisfies the conditions given for question 20) in the FAQ.
# CAREFUL: df must be sorted by age before taking such a median by indices.
med_id = df.shape[0] // 2
print('Age of the median:', int(df.iloc[med_id]['age']), 'years old')
df['age_group'] = 'Older'
# assign positionally into the column itself to avoid chained-assignment pitfalls
df.iloc[:med_id, df.columns.get_loc('age_group')] = 'Younger'
df['age_group'] = df['age_group'].astype('category')
df['survived'] = df['survived'].astype(bool)
groups = pd.DataFrame(df.groupby(['age_group', 'pclass', 'sex', 'survived']).size())
# Following the instructions: Present your results in a DataFrame with unique index.
groups.reset_index(inplace=True)
groups.columns = ['age category', 'class', 'sex', 'survived', 'count']
groups
fig, axes = plt.subplots(1, 2, figsize=(16,8))
groups[groups['age category'] == 'Younger'].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[0], labels=['', ''])
groups[groups['age category'] == 'Older'].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[1], labels=['', ''])
axes[0].set_title('Proportion of survivors for younger')
axes[1].set_title('Proportion of survivors for older')
axes[0].legend(labels=['Not survived', 'Survived'])
axes[1].legend(labels=['Not survived', 'Survived'])
plt.show()
fig, axes = plt.subplots(1, 3, figsize=(16,5))
groups[groups['class'] == 1].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[0], labels=['', ''])
groups[groups['class'] == 2].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[1], labels=['', ''])
groups[groups['class'] == 3].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[2], labels=['', ''])
axes[0].set_title('Proportion of survivors for class 1')
axes[1].set_title('Proportion of survivors for class 2')
axes[2].set_title('Proportion of survivors for class 3')
axes[0].legend(labels=['Not survived', 'Survived'])
axes[1].legend(labels=['Not survived', 'Survived'])
axes[2].legend(labels=['Not survived', 'Survived'])
plt.show()
fig, axes = plt.subplots(1, 2, figsize=(16,8))
groups[groups['sex'] == 'male'].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[0], labels=['', ''])
groups[groups['sex'] == 'female'].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[1], labels=['', ''])
axes[0].set_title('Proportion of survivors for male')
axes[1].set_title('Proportion of survivors for female')
axes[0].legend(labels=['Not survived', 'Survived'])
axes[1].legend(labels=['Not survived', 'Survived'])
plt.show()
df = data_titanic[['age', 'pclass', 'sex', 'survived']]
df = df.sort_values('age')
# Split at the 12.5% quantile of the indices rather than of the ages, so the
# "Younger" group holds exactly the youngest eighth of the passengers.
# The result satisfies the conditions given for question 20) in the FAQ.
# CAREFUL: df must be sorted by age before splitting on indices like this.
med_id = df.shape[0] // 8
print('Age at the 12.5% split:', int(df.iloc[med_id]['age']), 'years old')
df['age_group'] = 'Older'
# assign positionally into the column itself to avoid chained-assignment pitfalls
df.iloc[:med_id, df.columns.get_loc('age_group')] = 'Younger'
df['age_group'] = df['age_group'].astype('category')
df['survived'] = df['survived'].astype(bool)
groups = pd.DataFrame(df.groupby(['age_group', 'pclass', 'sex', 'survived']).size())
# Following the instructions: Present your results in a DataFrame with unique index.
groups.reset_index(inplace=True)
groups.columns = ['age category', 'class', 'sex', 'survived', 'count']
fig, axes = plt.subplots(1, 2, figsize=(16,8))
groups[groups['age category'] == 'Younger'].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[0], labels=['', ''])
groups[groups['age category'] == 'Older'].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[1], labels=['', ''])
axes[0].set_title('Proportion of survivors for younger')
axes[1].set_title('Proportion of survivors for older')
axes[0].legend(labels=['Not survived', 'Survived'])
axes[1].legend(labels=['Not survived', 'Survived'])
plt.show()
fig, axes = plt.subplots(1, 2, figsize=(16,8))
groups[groups['sex'] == 'male'].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[0], labels=['', ''])
groups[groups['sex'] == 'female'].groupby('survived').sum().plot.pie(y='count', autopct='%1.0f%%', ax=axes[1], labels=['', ''])
axes[0].set_title('Proportion of survivors for male')
axes[1].set_title('Proportion of survivors for female')
axes[0].legend(labels=['Not survived', 'Survived'])
axes[1].legend(labels=['Not survived', 'Survived'])
plt.show()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 1. Compiling Ebola Data
Step2: 2.
Step3: 3.
Step4: 4.
Step5: Comments
Step6: Task 3. Class War in Titanic
Step7: For each of the following questions state clearly your assumptions and discuss your findings
Step8: Data cleaning
Step9: 1. Showing types, describing data and changing to categorical types
Step10: 2.
Step11: Comments
Step12: 4.
Step13: Comments
Step14: Comments
Step15: We don't see any clear difference between the two groups in term of survival rate.
Step16: We can clearly notice that the survival rate decreases drastically as the class changes from first to "economy" class.
Step17: We notice a huge difference in the survival rate between genders. Indeed, women have a much higher survival rate than men.
|
11,204
|
<ASSISTANT_TASK:>
Python Code:
import dnnSwift
# The layout is a list of dictionaries defining each layer of the DNN.
# For this example, the DNN will consist of two blocks of
# "convolution - convolution - maximum pooling", followed by a fully
# connected layer and finally a cross entropy cost function
dnn_layout = [
{
"name": "data",
"type": "input"
},
{
"name": "conv1",
"type": "conv",
"n_kernels": 32,
"size": (3, 3),
"stride": (1, 1),
"padding" : "VALID",
"input": "data"
},
{
"name": "conv2",
"type": "conv",
"n_kernels": 32,
"size": (3, 3),
"stride": (1, 1),
"padding" : "VALID",
"input": "conv1"
},
{
"name": "maxpool1",
"type": "maxpool",
"size": (3, 3),
"stride": (2, 2),
"padding" : "VALID",
"input": "conv2"
},
{
"name": "conv3",
"type": "conv",
"n_kernels": 64,
"size": (3, 3),
"stride": (1, 1),
"padding" : "VALID",
"input": "maxpool1"
},
{
"name": "conv4",
"type": "conv",
"n_kernels": 64,
"size": (3, 3),
"stride": (1, 1),
"padding" : "VALID",
"input": "conv3"
},
{
"name": "maxpool2",
"type": "maxpool",
"size": (3, 3),
"stride": (2, 2),
"padding" : "VALID",
"input": "conv4"
},
{"name": "fc1", "type": "fc", "input": "maxpool2"},
{"name": "CrossEntropy", "type": "cross_entropy", "input": "fc1"}]
# This is a dictionary that assigns numerical values to each label. The
# keys of the dictionary are the actual labels used in the hdf5 file and
# the values should be integery from 0 to N. Internally, labels are turned
# into one-hot vectors, so in this set up, "0" becomes (1, 0, 0), "5"
# becomes (0, 1, 0), and "7" becomes (0, 0, 1)
categories = {"0": 0, "5": 1, "7": 2}
# Initialize the DNN object.
my_dnn = dnnSwift.DNNWrapper(
categories=categories, layout=dnn_layout)
# The training data must be initialized. This means that the hdf5 file is
# scanned and the images split into a training, validation, and test set.
# The indices of the lists are written into the 'outfile' as a pickled
# dictionary.
my_dnn.initialize_training_data(
filename="demo_data/MNISTDemo.h5",
outfile="demo_data/MNISTDemo_split.pkl")
# Train the network. Set 'verbose=True' to see the full output, including
# the remaining time
my_dnn.train_dnn(
num_epochs=5, batch_size=128,
weights_dir="demo_data/weights", verbose=False)
import pickle
import os
# For each epoch, a weights file and a validation file are created.
# Counting starts at 0, i.e. *_0.pkl is the output AFTER the first epoch!
all_files = os.listdir("demo_data/weights")
print "All Files: "
for f in all_files:
print " - %s" % f
with open("demo_data/weights/val_0.pkl", "r") as f:
val_dat = pickle.load(f)
keys = val_dat.keys()
print "Validation file type: " + str(type(val_dat))
print "Validation file keys:"
for key in keys:
print " - %s" % key
val_accs = []
for i in range(5):
with open("demo_data/weights/val_%s.pkl" % str(i), "r") as f:
val_accs.append(pickle.load(f)["top_1_acc"])
best_epoch = val_accs.index(max(val_accs))
print "Validation accuracies:"
for val_acc in val_accs:
print " - %.4f" % val_acc
print "Best epoch: %s" % str(best_epoch)
with open("demo_data/weights/val_0.pkl", "r") as f:
val = pickle.load(f)["top_n_acc"]
for key in val.keys():
print "Top-%s accuracy: %s" % (key, val[key])
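For reference, the top-N metric printed above can be reproduced directly from raw scores; a minimal sketch (the function name and the placeholder data are hypothetical, not part of dnnSwift):

```python
import numpy as np

def top_n_accuracy(scores, labels, n):
    """Fraction of rows whose true label is among the n highest scores."""
    top_n = np.argsort(scores, axis=1)[:, -n:]
    hits = [label in row for row, label in zip(top_n, labels)]
    return sum(hits) / float(len(hits))

# placeholder scores: three samples, three classes
scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.2, 0.6]])
labels = [1, 2, 2]
```

With these placeholder scores the top-1 accuracy is 2/3 (the second row's true label is ranked third), while the top-3 accuracy is 1.0, since the true label is always somewhere in the full ranking.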
import matplotlib.pyplot as plt
# Precision and Recall are calculated for each category individually
with open("demo_data/weights/val_%s.pkl" % str(best_epoch), "r") as f:
    best_val = pickle.load(f)
print "Labels: " + str(best_val["precision"].keys())
# We'll use the result of the first epoch to make the curve at least
# somewhat interesting
with open("demo_data/weights/val_0.pkl", "r") as f:
val_dat = pickle.load(f)
precision = val_dat["precision"]["5"]
recall = val_dat["recall"]["5"]
counts = val_dat["counts"]["5"]
# The disadvantage of using a dictionary is that the entries are not
# sorted. Future versions of this package may use a different storage
# method for the validation
p_thresh, p_val = zip(*precision.items())
sort_indices = [p_thresh.index(p) for p in sorted(p_thresh, key=lambda x: float(x))]
p_thresh = [float(p_thresh[i]) for i in sort_indices]
p_val = [p_val[i] for i in sort_indices]
r_thresh, r_val = zip(*recall.items())
sort_indices = [r_thresh.index(r) for r in sorted(r_thresh, key=lambda x: float(x))]
r_thresh = [float(r_thresh[i]) for i in sort_indices]
r_val = [r_val[i] for i in sort_indices]
c_thresh, c_val = zip(*counts.items())
sort_indices = [c_thresh.index(c) for c in sorted(c_thresh, key=lambda x: float(x))]
c_thresh = [float(c_thresh[i]) for i in sort_indices]
c_val = [c_val[i] for i in sort_indices]
# The thresholds for all categories are identical:
assert(p_thresh == r_thresh == c_thresh)
# A value of 'nan' indicates that nothing was predicted, correctly or not,
# with the given probability threshold.
print "Threshold | Precision | Recall | Counts"
print "--------------------------------------------"
for i in range(len(p_thresh)):
print "%8.4f |" % p_thresh[i], \
"%8.4f |" % p_val[i], \
"%8.4f |" % r_val[i], \
"%5d" % c_val[i]
print "--------------------------------------------"
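matplotlib is imported above but the table is only printed; the sorted lists can also be turned into a precision-recall curve. A sketch with placeholder values standing in for `p_val` and `r_val` (NaN entries are dropped first):

```python
import math

def pr_curve_points(recall, precision):
    """Pair up recall/precision values, drop NaN entries, sort by recall."""
    pairs = [(r, p) for r, p in zip(recall, precision)
             if not (math.isnan(r) or math.isnan(p))]
    return sorted(pairs)

# placeholder values standing in for r_val / p_val computed above
r_val = [1.0, 0.9, 0.7, float('nan')]
p_val = [0.5, 0.8, 0.95, float('nan')]
points = pr_curve_points(r_val, p_val)
```

The points can then be plotted with `plt.plot(*zip(*points))`, with recall on the x-axis and precision on the y-axis.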
from __future__ import division
import h5py
import numpy as np
import pickle
import matplotlib.pyplot as plt
%matplotlib inline
# The size of our image segments (both dimensions have the same size)
img_dims = 25
# The number of images in each class
num_images = 10000
max_rad = ((img_dims - 1) // 2) - 1
center = (img_dims - 1) // 2
ind_y, ind_x = np.indices(dimensions=(img_dims, img_dims)) - center
# Circles
rand_rads = np.random.randint(5, max_rad+1, num_images)
rand_intensities = np.random.randint(1, 256, num_images)
noise_field = np.random.randint(-32, 33, (img_dims, img_dims, num_images))
rad_field = np.repeat(np.expand_dims(np.sqrt(ind_y**2 + ind_x**2), 2), num_images, 2) <= rand_rads
img_circles = (rad_field.astype(dtype=np.int16) * rand_intensities) + noise_field
img_circles[img_circles < 0] = 0
img_circles[img_circles > 255] = 255
# Squares
rand_rads = np.random.randint(5, max_rad+1, num_images)
rand_intensities = np.random.randint(1, 256, num_images)
noise_field = np.random.randint(-32, 33, (img_dims, img_dims, num_images))
ind_x_square = np.repeat(np.expand_dims(ind_x, 2), num_images, 2)
ind_y_square = np.repeat(np.expand_dims(ind_y, 2), num_images, 2)
square_field = (np.abs(ind_x_square) <= rand_rads) * (np.abs(ind_y_square) <= rand_rads)
img_squares = (square_field.astype(dtype=np.int16) * rand_intensities) + noise_field
img_squares[img_squares < 0] = 0
img_squares[img_squares > 255] = 255
imgs = np.concatenate((img_circles, img_squares), 2)
imgs = np.transpose(imgs, (2, 0, 1))
imgs = np.expand_dims(imgs, 1)
imgs = imgs.astype(np.uint8)
labels = np.expand_dims(np.repeat(("Circle", "Square"), (num_images, num_images)), 1)
with h5py.File("TrainingData.h5", "w") as h5handle:
h5handle.create_dataset(name="images", data=imgs, chunks=(1, 1, img_dims, img_dims), compression=3)
h5handle.create_dataset(name="labels", data=labels, chunks=(1, 1), compression=3)
with h5py.File("TrainingData.h5", "r") as h5handle:
print "Entries: ", h5handle.keys()
print h5handle["images"]
print h5handle["labels"]
import DNNSwift
img_handler = DNNSwift.ImageHandler.initialize(
filename="TrainingData.h5",
categories={"Circle": 0, "Square": 1})
for list_name in img_handler.get_list_names():
print list_name, ":: Length:", img_handler.get_list_length(list_name)
img_handler = DNNSwift.ImageHandler.initialize(
filename="TrainingData.h5",
categories={"Circle": 0, "Square": 1},
data_split=[60, 20, 20])
img_handler.save_lists("IndexLists.pkl")
import pickle
with open("IndexLists.pkl", "r") as f:
index_file = pickle.load(f)
print "Keys:", index_file.keys()
print "Index Lists:"
for key in index_file["index_list"].keys():
print " ", key, ":", index_file["index_list"][key]
print "md5sum:", index_file["h5file_hash"]
img_handler = DNNSwift.ImageHandler.initialize_with_index_list(
filename="TrainingData.h5",
categories={"Circle": 0, "Square": 1},
index_list=index_file)
images = img_handler.get_images(list_name="test", index_low=0, index_high=9)
print "Object type:", images
print "images.imgs:", type(images.imgs), ";", images.imgs.shape
print "images.labels:", type(images.labels), ";", images.labels.shape
plt.imshow(np.concatenate(images.imgs[..., 0], axis=1))
dnn_layout = [{"name": "input", "type": "input"},
{"name": "conv1", "type": "conv", "n_kernels": 32, "size": (3, 3),
"stride": (1, 1), "padding" : "VALID", "input": "input"},
{"name": "conv2", "type": "conv", "n_kernels": 32, "size": (3, 3),
"stride": (1, 1), "padding" : "VALID", "input": "conv1"},
{"name": "maxpool1", "type": "maxpool", "size": (3, 3),
"stride": (2, 2), "padding" : "VALID", "input": "conv2"},
{"name": "conv3", "type": "conv", "n_kernels": 64, "size": (3, 3),
"stride": (1, 1), "padding" : "VALID", "input": "maxpool1"},
{"name": "conv4", "type": "conv", "n_kernels": 64, "size": (3, 3),
"stride": (1, 1), "padding" : "VALID", "input": "conv3"},
{"name": "maxpool2", "type": "maxpool", "size": (3, 3),
"stride": (2, 2), "padding" : "VALID", "input": "conv4"},
{"name": "fc1", "type": "fc", "input": "maxpool2"},
{"name": "CrossEntropy", "type": "cross_entropy", "input": "fc1"}]
trainer = DNNSwift.DNN(
img_dims=img_handler.get_image_dims(),
labels=img_handler.get_image_groups(),
layer_params=dnn_layout,
basedir="dnn_output")
trainer.print_structure(filename="DNN_Structure.png")
trainer.train_network(
batch_size=128,
image_handler=img_handler,
num_epochs=1,
subdir="weights",
logfile="dnn_output/DNN_log.txt")
import pickle
with open("dnn_output/weights/val_0.pkl", "r") as f:
val_dat = pickle.load(f)
print "Keys:", val_dat.keys()
print "Validation Accuracy:", val_dat["top_1_acc"]
with open("dnn_output/weights/weights_0.pkl", "r") as f:
weights = pickle.load(f)
keys = sorted(weights.keys())
for key in keys:
print key, ":", weights[key].shape
# Initialize the object with the optimal set of weights
trainer = DNNSwift.DNN(
img_dims=img_handler.get_image_dims(),
labels=img_handler.get_image_groups(),
layer_params=dnn_layout,
basedir="dnn_output",
weights=weights)
images = img_handler.get_images("test", 0, 9)
output = trainer.run_network(input_images=images.imgs)
print "Output shape:", output.shape
print "Output:\n", np.squeeze(output)
print "Image Categories:", img_handler.get_image_groups()
print "Network Output:", np.argmax(np.squeeze(output), axis=1)
print "Ground Truth: ", np.argmax(images.labels, axis=1)
images = img_handler.get_images("test", 0, 3)
big_image = np.concatenate(
(np.concatenate((images.imgs[0], images.imgs[1]), axis=0),
np.concatenate((images.imgs[2], images.imgs[3]), axis=0)),
axis=1)
big_image = np.expand_dims(big_image, axis=0)
print "Big Image shape:", big_image.shape
plt.imshow(big_image[0, :, :, 0])
# Use the same weights as above
trainer = DNNSwift.DNN(
img_dims=(50, 50, 1),
labels=img_handler.get_image_groups(),
layer_params=dnn_layout,
basedir="dnn_output",
weights=weights)
output = trainer.run_network(input_images=big_image)
output_argmax = np.squeeze(np.argmax(output, axis=3))
plt.imshow(output_argmax)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dnnSwift object writes its output into the 'weights_dir' directory. The output consists of weights and validation data. Validation data is calculated from data that was not trained on, i.e. it can be used for 1-fold cross validation of the training epochs.
Step2: The validation files are pickled dictionaries containing various metrics.
Step3: Let's determine the epoch with the best validation accuracy.
Step4: We've already seen that "top_1_acc" represents the top-1 accuracy. This is a common measure for determining the accuracy of neural networks. It represents the probability of the network correctly predicting the class of a validation data point. Additionally, the top-N accuracy is also calculated. This is the probability that the label of a validation data point is in the top N most likely classes predicted by the neural network.
Step5: Beyond that, we can also calculate the precision-recall curves from the "precision" and "recall" keys. These entries contain nested dictionaries and are calculated for each label independently. They are calculated for various probability thresholds. That means that instead of naively using the highest probability to determine the class, one can set custom thresholds for each category.
Step6: In addition to the precision and recall, the number of items predicted for each category at each threshold level is also stored. This can be useful for computing averages of precision and recall between categories. Note that this is not the total number of elements for each class in the validation data, but the number of elements predicted by the DNN, correct or not.
Step7: Imports for this vignette
Step8: ImageHandler
Step9: The resulting file, TrainingData.h5, has the following structure
Step10: Loading the Training Data
Step11: The categories argument assigns numerical values to each label. The output of a tensorflow neural network is a vector of numbers, which can be interpreted as probabilities after the proper normalization. The integer value of the categories dictionary indicates the order of labels, i.e. in this case 'Circle' is assigned to the one-hot vector (1,0) and 'Square' to the one-hot vector (0,1). A normalized output of (0.4,0.6) would then be interpreted as a 60% probability for the input image being a square.
Step12: By default, the split is 80%-10%-10% (Training, Validation, Testing), but this can be customized via the data_split argument of the constructor. It expects a list of values indicating the relative sizes of each image set (in order
Step13: The image split can be saved. If, for whatever reason, the network needs to be revalidated or training is interrupted and continued, the identical training dataset must be used. For this reason, ImageHandler can also be initialized with an index list, preventing the creation of a new one.
Step14: Images can be extracted directly from the image handler. This returns a custom object with the attributes "imgs" and "labels", which hold the image and label data, respectively. Note that index_high is an inclusive index, as opposed to python's usual indexing logic of excluding the upper limit. Ergo, setting
Step15: Initializing the DNN
Step16: The network wrapper is initialized via
Step17: The required parameters are as follows
Step18: The filename is the relative path under the objects basedir folder. The output type is defined by the file ending, e.g. *.png outputs an image. The full list of permitted output types can be found at the PyGraphViz homepage.
Step19: The required parameters are as follows
Step20: The trained parameters ('weights') are output as a pickled Python dictionary. For each node with a set of parameters (i.e. convolutions and all derivatives), two keys exist in the dictionary, 'node_name/w' and 'node_name/b' for the multiplicative weights and the biases, respectively. The value of each dictionary entry is a numpy array
Step21: If initializing a DNN with non-random weights, the dictionary must have two keys for each layer that requires parameters with the expected shape
Step22: Comparing the input labels with the output logits
Step23: Optional parameters for 'run_network()'
Step24: Then initialize the network and run the output
|
11,205
|
<ASSISTANT_TASK:>
Python Code:
# Source: Manuel Torres. Universidad de Almería.
import pymysql
# Establish the connection to the database
bd = pymysql.connect("localhost", "root", "gebd", "RRHH")
# Create the cursor that will let us run operations on the database
cursor = bd.cursor()
# Run a SQL query using the cursor's execute() method
cursor.execute("SELECT * FROM Empleado")
resultado = cursor.fetchall()
# Iterate over all the rows returned by fetchall()
for fila in resultado:
    print("%s %s %s" % (fila[0], fila[1], fila[2]))
# Disconnect from the database
bd.close()
resultado[0][2]
all_jobs = [x[2] for x in resultado]
all_jobs = []
for i in resultado:
all_jobs.append(i[2])
all_jobs
diff_jobs = set(all_jobs)
diff_jobs
jobs_count = {}
for i in diff_jobs:
jobs_count[i] = 0
jobs_count
for i in all_jobs:
jobs_count[i] += 1
jobs_count
jobs_count = {}
for i in diff_jobs:
jobs_count[i] = all_jobs.count(i)
jobs_count
jobs_count['Analista']
count_jobs = []
for i in jobs_count:
count_jobs.append((jobs_count[i], i))
count_jobs
sorted_count_jobs = sorted(count_jobs, reverse=True)
sorted_count_jobs
sorted_count_jobs[1][1]
i = 1
most_common_jobs = [(sorted_count_jobs[0][1], sorted_count_jobs[0][0])]
most_common_jobs
while(i < len(sorted_count_jobs) and sorted_count_jobs[i][0] == sorted_count_jobs[0][0]):
x = (sorted_count_jobs[i][1], sorted_count_jobs[i][0])
most_common_jobs.append(x)
i += 1
most_common_jobs
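The count-and-sort logic above can be expressed more compactly with `collections.Counter`; a sketch using a hypothetical job list in place of `all_jobs` (ties are kept, just as in the while loop above):

```python
from collections import Counter

# hypothetical stand-in for the all_jobs list built from the query results
all_jobs = ['Analista', 'Gerente', 'Analista', 'Programador', 'Gerente']

counts = Counter(all_jobs)
top_count = max(counts.values())
most_common_jobs = [(job, n) for job, n in counts.items() if n == top_count]
```

Here `most_common_jobs` keeps every job that reaches the maximum count, so a tie between 'Analista' and 'Gerente' yields both.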
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
Step4: Exercise 4
Step5: Exercise 5
Step6: Exercise 6
|
11,206
|
<ASSISTANT_TASK:>
Python Code:
import math
import pickle
from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.mgxs
import openmoc
import openmoc.process
from openmoc.opencg_compatible import get_openmoc_geometry
from openmoc.materialize import load_openmc_mgxs_lib
%matplotlib inline
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
b10 = openmc.Nuclide('B10')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
zr90 = openmc.Nuclide('Zr90')
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide(u235, 3.7503e-4)
fuel.add_nuclide(u238, 2.2625e-2)
fuel.add_nuclide(o16, 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide(h1, 4.9457e-2)
water.add_nuclide(o16, 2.4732e-2)
water.add_nuclide(b10, 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide(zr90, 7.2758e-3)
# Instantiate a Materials object
materials_file = openmc.Materials((fuel, water, zircaloy))
materials_file.default_xs = '71c'
# Export to "materials.xml"
materials_file.export_to_xml()
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:,:] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
# Create Geometry and set root Universe
geometry = openmc.Geometry()
geometry.root_universe = root_universe
# Export to "geometry.xml"
geometry.export_to_xml()
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.pixels = [250, 250]
plot.width = [-10.71*2, -10.71*2]
plot.color = 'mat'
# Instantiate a Plots object, add Plot, and export to "plots.xml"
plot_file = openmc.Plots([plot])
plot_file.export_to_xml()
# Run openmc in plotting mode
openmc.plot_geometry(output=False)
# Convert OpenMC's funky ppm to png
!convert materials-xy.ppm materials-xy.png
# Display the materials plot inline
Image(filename='materials-xy.png')
# Instantiate a 20-group EnergyGroups object
energy_groups = openmc.mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 21)
# Instantiate a 1-group EnergyGroups object
one_group = openmc.mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
# Instantiate a 6-delayed-group list
delayed_groups = list(range(1,7))
# Instantiate a tally mesh
mesh = openmc.Mesh(mesh_id=1)
mesh.type = 'regular'
mesh.dimension = [17, 17, 1]
mesh.lower_left = [-10.71, -10.71, -10000.]
mesh.width = [1.26, 1.26, 20000.]
# Initialize an 20-energy-group and 6-delayed-group MGXS Library
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = energy_groups
mgxs_lib.delayed_groups = delayed_groups
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'transport', 'nu-scatter matrix', 'kappa-fission', 'inverse-velocity', 'chi-prompt',
'prompt-nu-fission', 'chi-delayed', 'delayed-nu-fission', 'beta']
# Specify a "mesh" domain type for the cross section tally filters
mgxs_lib.domain_type = 'mesh'
# Specify the mesh domain over which to compute multi-group cross sections
mgxs_lib.domains = [mesh]
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# Instantiate a current tally
mesh_filter = openmc.MeshFilter(mesh)
current_tally = openmc.Tally(name='current tally')
current_tally.scores = ['current']
current_tally.filters = [mesh_filter]
# Add current tally to the tallies file
tallies_file.append(current_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
# Extract the current tally separately
current_tally = sp.get_tally(name='current tally')
# Delayed precursor half-lives (in seconds) and decay constants (in seconds^-1)
precursor_halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195])
precursor_lambda = -np.log(0.5) / precursor_halflife
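The decay constants follow from λ = ln 2 / t½ (equivalently −ln 0.5 / t½, as written above); a quick consistency check that one half-life leaves exactly half of the precursor population:

```python
import numpy as np

halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195])  # seconds
lam = np.log(2.0) / halflife  # same as -np.log(0.5) / halflife
surviving = np.exp(-lam * halflife)  # fraction left after one half-life
```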
beta = mgxs_lib.get_mgxs(mesh, 'beta')
# Create a tally object with only the delayed group filter for the time constants
beta_filters = [f for f in beta.xs_tally.filters if type(f) is not openmc.DelayedGroupFilter]
lambda_tally = beta.xs_tally.summation(nuclides=beta.xs_tally.nuclides)
for f in beta_filters:
lambda_tally = lambda_tally.summation(filter_type=type(f), remove_filter=True) * 0. + 1.
# Set the mean of the lambda tally and reshape to account for nuclides and scores
lambda_tally._mean = precursor_lambda
lambda_tally._mean.shape = lambda_tally.std_dev.shape
# Set a total nuclide and lambda score
lambda_tally.nuclides = [openmc.Nuclide(name='total')]
lambda_tally.scores = ['lambda']
delayed_nu_fission = mgxs_lib.get_mgxs(mesh, 'delayed-nu-fission')
# Use tally arithmetic to compute the precursor concentrations
precursor_conc = beta.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / lambda_tally
# The result is a derived tally which can generate Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe().head(10)
current_tally.get_pandas_dataframe().head(10)
# Extract the energy-condensed delayed neutron fraction tally
beta_by_group = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type='energy', remove_filter=True)
beta_by_group.mean.shape = (17, 17, 6)
beta_by_group.mean[beta_by_group.mean == 0] = np.nan
# Plot the delayed neutron fraction (beta) for each of the six delayed groups
plt.figure(figsize=(18,9))
for dg in range(6):
    plt.subplot(2, 3, dg + 1)
    plt.imshow(beta_by_group.mean[:, :, dg], interpolation='none', cmap='jet')
    plt.colorbar()
    plt.title('Beta - delayed group %d' % (dg + 1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
Step2: With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pins.
Step3: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step4: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step5: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step6: Likewise, we can construct a control rod guide tube with the same surfaces.
Step7: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step8: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step9: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step10: We now must create a geometry that is assigned a root universe and export it to XML.
Step11: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
Step12: Let us also create a Plots file that we can use to verify that our fuel assembly geometry was created successfully.
Step13: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
Step14: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Step15: Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups with our the fuel assembly geometry.
Step16: Now, we can run OpenMC to generate the cross sections.
Step17: Tally Data Processing
Step18: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under the hood.
Step19: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Step20: Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.
Step21: Cross Section Visualizations
|
11,207
|
<ASSISTANT_TASK:>
Python Code:
# Setup
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# generate random data -- not linearly separable
np.random.seed(0)
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K, D))
num_train_examples = X.shape[0]
y = np.zeros(N*K, dtype='uint8')
for j in range(K):
ix = range(N*j, N*(j+1))
r = np.linspace(0.0,1,N) # radius
t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
fig = plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim([-1,1])
plt.ylim([-1,1])
def sigmoid(x):
x = 1 / (1 + np.exp(-x))
return x
def sigmoid_grad(x):
    # note: x here is the sigmoid activation itself, so the derivative is x * (1 - x)
    return (x) * (1 - x)
def relu(x):
return np.maximum(0,x)
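Note that `sigmoid_grad` expects the sigmoid *activation*, not the pre-activation input. A quick finite-difference sketch (not part of the original notebook) confirming the two definitions agree:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(s):
    # s is the sigmoid output itself, so the derivative is s * (1 - s)
    return s * (1.0 - s)

x = np.linspace(-3.0, 3.0, 7)
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2.0 * eps)
analytic = sigmoid_grad(sigmoid(x))
```

The central difference and the analytic derivative should agree to well within the step size.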
#function to train a three layer neural net with either RELU or sigmoid nonlinearity via vanilla grad descent
def three_layer_net(NONLINEARITY, X, y, model, step_size, reg):
#parameter initialization
h = model['h']
h2= model['h2']
W1= model['W1']
W2= model['W2']
W3= model['W3']
b1= model['b1']
b2= model['b2']
b3= model['b3']
# gradient descent loop
num_examples = X.shape[0]
plot_array_1=[]
plot_array_2=[]
for i in range(50000):
# FORWARD PROP
if NONLINEARITY== 'RELU':
hidden_layer = relu(np.dot(X, W1) + b1)
hidden_layer2 = relu(np.dot(hidden_layer, W2) + b2)
scores = np.dot(hidden_layer2, W3) + b3
elif NONLINEARITY == 'SIGM':
hidden_layer = sigmoid(np.dot(X, W1) + b1)
hidden_layer2 = sigmoid(np.dot(hidden_layer, W2) + b2)
scores = np.dot(hidden_layer2, W3) + b3
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
#print(X.shape)
#print(scores.shape)
#print(np.sum(exp_scores, axis=1, keepdims=True).shape)
#print(probs.shape)
#assert False
# compute the loss: average cross-entropy loss and regularization
# v = probs[range(num_examples), y] -> 1d vector v[i] = probs[i, y[i]]]
correct_logprobs = -np.log(probs[range(num_examples), y])
data_loss = np.sum(correct_logprobs) / num_examples
reg_loss = 0.5*reg*np.sum(W1*W1) + 0.5*reg*np.sum(W2*W2)+ 0.5*reg*np.sum(W3*W3)
loss = data_loss + reg_loss
if i % 1000 == 0:
print("iteration %d: loss %f" % (i, loss))
# compute the gradient on scores
dscores = probs
dscores[range(num_examples), y] -= 1
dscores /= num_examples
# BACKPROP HERE
dW3 = (hidden_layer2.T).dot(dscores)
db3 = np.sum(dscores, axis=0, keepdims=True)
if NONLINEARITY == 'RELU':
#backprop ReLU nonlinearity here
dhidden2 = np.dot(dscores, W3.T)
dhidden2[hidden_layer2 <= 0] = 0
dW2 = np.dot( hidden_layer.T, dhidden2)
plot_array_2.append(np.sum(np.abs(dW2)) / np.sum(np.abs(dW2.shape)))
db2 = np.sum(dhidden2, axis=0)
dhidden = np.dot(dhidden2, W2.T)
dhidden[hidden_layer <= 0] = 0
elif NONLINEARITY == 'SIGM':
#backprop sigmoid nonlinearity here
dhidden2 = dscores.dot(W3.T)*sigmoid_grad(hidden_layer2)
dW2 = (hidden_layer.T).dot(dhidden2)
plot_array_2.append(np.sum(np.abs(dW2))/np.sum(np.abs(dW2.shape)))
db2 = np.sum(dhidden2, axis=0)
dhidden = dhidden2.dot(W2.T)*sigmoid_grad(hidden_layer)
dW1 = np.dot(X.T, dhidden)
plot_array_1.append(np.sum(np.abs(dW1))/np.sum(np.abs(dW1.shape)))
db1 = np.sum(dhidden, axis=0)
# add regularization
dW3 += reg * W3
dW2 += reg * W2
dW1 += reg * W1
#option to return loss, grads -- uncomment next comment
grads={}
grads['W1']=dW1
grads['W2']=dW2
grads['W3']=dW3
grads['b1']=db1
grads['b2']=db2
grads['b3']=db3
#return loss, grads
# update
W1 += -step_size * dW1
b1 += -step_size * db1
W2 += -step_size * dW2
b2 += -step_size * db2
W3 += -step_size * dW3
b3 += -step_size * db3
# evaluate training set accuracy
if NONLINEARITY == 'RELU':
hidden_layer = relu(np.dot(X, W1) + b1)
hidden_layer2 = relu(np.dot(hidden_layer, W2) + b2)
elif NONLINEARITY == 'SIGM':
hidden_layer = sigmoid(np.dot(X, W1) + b1)
hidden_layer2 = sigmoid(np.dot(hidden_layer, W2) + b2)
scores = np.dot(hidden_layer2, W3) + b3
predicted_class = np.argmax(scores, axis=1)
print('training accuracy: %.2f' % (np.mean(predicted_class == y)))
#return cost, grads
return plot_array_1, plot_array_2, W1, W2, W3, b1, b2, b3
#Initialize toy model, train sigmoid net
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
h=50
h2=50
num_train_examples = X.shape[0]
model={}
model['h'] = h # size of hidden layer 1
model['h2']= h2# size of hidden layer 2
model['W1']= 0.1 * np.random.randn(D,h)
model['b1'] = np.zeros((1,h))
model['W2'] = 0.1 * np.random.randn(h,h2)
model['b2']= np.zeros((1,h2))
model['W3'] = 0.1 * np.random.randn(h2,K)
model['b3'] = np.zeros((1,K))
(sigm_array_1, sigm_array_2, s_W1, s_W2,s_W3, s_b1, s_b2,s_b3) = three_layer_net('SIGM', X,y,model, step_size=1e-1, reg=1e-3)
#Re-initialize model, train relu net
model={}
model['h'] = h # size of hidden layer 1
model['h2']= h2# size of hidden layer 2
model['W1']= 0.1 * np.random.randn(D,h)
model['b1'] = np.zeros((1,h))
model['W2'] = 0.1 * np.random.randn(h,h2)
model['b2']= np.zeros((1,h2))
model['W3'] = 0.1 * np.random.randn(h2,K)
model['b3'] = np.zeros((1,K))
(relu_array_1, relu_array_2, r_W1, r_W2,r_W3, r_b1, r_b2,r_b3) = three_layer_net('RELU', X,y,model, step_size=1e-1, reg=1e-3)
plt.plot(np.array(sigm_array_1))
plt.plot(np.array(sigm_array_2))
plt.title('Sum of magnitudes of gradients -- SIGM weights')
plt.legend(("sigm first layer", "sigm second layer"))
plt.plot(np.array(relu_array_1))
plt.plot(np.array(relu_array_2))
plt.title('Sum of magnitudes of gradients -- ReLU weights')
plt.legend(("relu first layer", "relu second layer"))
# Overlaying the two plots to compare
plt.plot(np.array(relu_array_1))
plt.plot(np.array(relu_array_2))
plt.plot(np.array(sigm_array_1))
plt.plot(np.array(sigm_array_2))
plt.title('Sum of magnitudes of gradients -- hidden layer neurons')
plt.legend(("relu first layer", "relu second layer","sigm first layer", "sigm second layer"))
# plot the classifiers- SIGMOID
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(sigmoid(np.dot(sigmoid(np.dot(np.c_[xx.ravel(), yy.ravel()], s_W1)
+ s_b1), s_W2) + s_b2), s_W3) + s_b3
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# plot the classifiers-- RELU
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(relu(np.dot(relu(np.dot(np.c_[xx.ravel(), yy.ravel()], r_W1)
+ r_b1), r_W2) + r_b2), r_W3) + r_b3
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The sigmoid function "squashes" inputs to lie between 0 and 1. Unfortunately, this means that for inputs with sigmoid output close to 0 or 1, the gradient with respect to those inputs is close to zero. This leads to the phenomenon of vanishing gradients, where gradients drop close to zero and the net does not learn well.
Step2: Let's try and see now how the two kinds of nonlinearities change deep neural net training in practice. Below, we build a very simple neural net with three layers (two hidden layers), for which you can swap out ReLU/ sigmoid nonlinearities.
Step3: Train net with sigmoid nonlinearity first
Step4: Now train net with ReLU nonlinearity
Step5: The Vanishing Gradient Issue
Step6: Feel free to play around with this notebook to gain intuition. Things you might want to try
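The vanishing-gradient claim in Step1 can be checked numerically. A minimal sketch, separate from the notebook code above: the sigmoid derivative never exceeds 0.25, so an upper bound on the gradient through a chain of sigmoid layers shrinks geometrically with depth.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_grad_from_output(s):
    # derivative of the sigmoid expressed via its output s = sigmoid(x)
    return s * (1 - s)

# the derivative is maximal at x = 0, where it equals exactly 0.25
peak = sigmoid_grad_from_output(sigmoid(0.0))
print(peak)  # 0.25

# chaining n sigmoid layers multiplies the gradient by at most 0.25 per layer,
# so this upper bound on the gradient magnitude decays geometrically
for n in (1, 5, 10):
    print(n, 0.25 ** n)
```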
|
11,208
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import matplotlib as mpl
from pymc3 import Model, Normal, Slice
from pymc3 import sample
from pymc3 import traceplot
from pymc3.distributions import Interpolated
from theano import as_op
import theano.tensor as tt
import numpy as np
from scipy import stats
%matplotlib inline
# Initialize random number generator
np.random.seed(123)
# True parameter values
alpha_true = 5
beta0_true = 7
beta1_true = 13
# Size of dataset
size = 100
# Predictor variable
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
# Simulate outcome variable
Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)
basic_model = Model()
with basic_model:
# Priors for unknown model parameters
alpha = Normal('alpha', mu=0, sd=1)
beta0 = Normal('beta0', mu=12, sd=1)
beta1 = Normal('beta1', mu=18, sd=1)
# Expected value of outcome
mu = alpha + beta0 * X1 + beta1 * X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)
# draw 10000 posterior samples
trace = sample(10000)
traceplot(trace);
def from_posterior(param, samples):
smin, smax = np.min(samples), np.max(samples)
width = smax - smin
x = np.linspace(smin, smax, 100)
y = stats.gaussian_kde(samples)(x)
# what was never sampled should have a small probability but not 0,
# so we'll extend the domain and use linear approximation of density on it
x = np.concatenate([[x[0] - 3 * width], x, [x[-1] + 3 * width]])
y = np.concatenate([[0], y, [0]])
return Interpolated(param, x, y)
traces = [trace]
for _ in range(10):
# generate more data
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)
model = Model()
with model:
# Priors are posteriors from previous iteration
alpha = from_posterior('alpha', trace['alpha'])
beta0 = from_posterior('beta0', trace['beta0'])
beta1 = from_posterior('beta1', trace['beta1'])
# Expected value of outcome
mu = alpha + beta0 * X1 + beta1 * X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)
# draw 10000 posterior samples
trace = sample(10000)
traces.append(trace)
print('Posterior distributions after ' + str(len(traces)) + ' iterations.')
cmap = mpl.cm.autumn
for param in ['alpha', 'beta0', 'beta1']:
plt.figure(figsize=(8, 2))
for update_i, trace in enumerate(traces):
samples = trace[param]
smin, smax = np.min(samples), np.max(samples)
x = np.linspace(smin, smax, 100)
y = stats.gaussian_kde(samples)(x)
plt.plot(x, y, color=cmap(1 - update_i / len(traces)))
plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k')
plt.ylabel('Frequency')
plt.title(param)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating data
Step2: Model specification
Step3: In order to update our beliefs about the parameters, we use the posterior distributions, which will be used as the prior distributions for the next inference. The data used for each inference iteration has to be independent from the previous iterations, otherwise the same (possibly wrong) belief is injected over and over in the system, amplifying the errors and misleading the inference. By ensuring the data is independent, the system should converge to the true parameter values.
Step4: Now we just need to generate more data and build our Bayesian model so that the prior distributions for the current iteration are the posterior distributions from the previous iteration. It is still possible to continue using NUTS sampling method because Interpolated class implements calculation of gradients that are necessary for Hamiltonian Monte Carlo samplers.
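The core of `from_posterior` above — turning posterior samples into a padded (x, y) density grid that `Interpolated` can wrap — can be sketched with scipy alone. A standalone sketch on hypothetical standard-normal samples (no PyMC3 required):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=5000)  # stand-in for trace['alpha']

smin, smax = samples.min(), samples.max()
width = smax - smin
x = np.linspace(smin, smax, 100)
y = stats.gaussian_kde(samples)(x)

# pad the support so never-sampled regions get small but nonzero density
# (linearly interpolated down to 0 over three widths on each side)
x = np.concatenate([[x[0] - 3 * width], x, [x[-1] + 3 * width]])
y = np.concatenate([[0], y, [0]])

# trapezoid rule: the grid should integrate to roughly 1,
# and the density should peak near the true mean of 0
area = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))
print(area, x[np.argmax(y)])
```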
|
11,209
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
import numpy
perm = numpy.random.permutation(numpy.arange(20))
perm
def topk_sortall(ensemble, k):
# your turn
# ...
pass
topk_sortall(perm, 4)
def topk_idee1ou2(ensemble, k):
pass
from time import perf_counter
def topk_sortall(ensemble, k):
ensemble = ensemble.copy()
ensemble.sort()
return ensemble[:k]
topk_sortall(perm, 4)
def topk_tri_selection(ensemble, k):
ensemble = ensemble.copy()
for i in range(0, min(k, len(ensemble))):
for j in range(i + 1, len(ensemble)):
if ensemble[i] > ensemble[j]:
ensemble[i], ensemble[j] = ensemble[j], ensemble[i]
return ensemble[:k]
topk_tri_selection(perm, 4)
def topk_insertion(ensemble, k):
def position(sub, value):
a, b = 0, len(sub) - 1
m = (a + b) // 2
while a < b:
if value == sub[m]:
return m
if value < sub[m]:
b = m - 1
else:
a = m + 1
m = (a + b) // 2
return m if sub[m] >= value else m + 1
res = [ensemble[0]]
for i in range(1, len(ensemble)):
if ensemble[i] < res[-1]:
# insert using a binary (dichotomic) search
p = position(res, ensemble[i])
res.insert(p, ensemble[i])
if len(res) > k:
res.pop()
return res
topk_insertion(perm, 4)
def topk_fusion(ensemble, k):
n = len(ensemble) // (len(ensemble) // k)
res = []
last = 0
while last < len(ensemble):
res.append(last)
last += n
res.append(len(ensemble))
subs = []
for i in range(1, len(res)):
sub = ensemble[res[i-1] : res[i]]
sub.sort()
subs.append(sub)
indices = [0 for sub in subs]
topk = []
while len(topk) < k:
arg = None
for i, sub in enumerate(subs):
if indices[i] < len(sub) and (arg is None or sub[indices[i]] < subs[arg][indices[arg]]):
arg = i
topk.append(subs[arg][indices[arg]])
indices[arg] += 1
return topk
topk_fusion(perm, 4)
def topk_select_recursive(ensemble, k):
if len(ensemble) <= k:
return ensemble
p = ensemble[k]
inf, sup = [], []
for e in ensemble:
if e > p:
sup.append(e)
else:
inf.append(e)
if len(inf) == k:
return inf
if len(sup) == 0:
# potentially an infinite loop: move the pivot
# to the other side
sup.append(p)
del inf[inf.index(p)]
if len(inf) >= k:
return topk_select_recursive(inf, k)
return inf + topk_select_recursive(sup, k - len(inf))
topk_select_recursive(perm, 4)
def topk_select(ensemble, k):
ensemble = ensemble.copy()
def loop(a, b, pivot):
pivot_index = None
while a < b:
while a < len(ensemble) and ensemble[a] < pivot:
a += 1
while b >= 0 and ensemble[b] >= pivot:
if ensemble[b] == pivot:
pivot_index = b
b -= 1
if a < b:
ensemble[a], ensemble[b] = ensemble[b], ensemble[a]
return min(a, len(ensemble) - 1), pivot_index
a = 0
b = len(ensemble) - 1
m = k
while True:
pivot = ensemble[m]
m, pi = loop(a, b, pivot)
if m == k:
return ensemble[:m]
# The special cases occur when the chosen pivot is
# identical to the previous one.
if m > k:
if b == m - 1:
ensemble[b], ensemble[pi] = ensemble[pi], ensemble[b]
b -= 1
else:
b = m - 1
else:
if a == m:
ensemble[a], ensemble[pi] = ensemble[pi], ensemble[a]
a += 1
else:
a = m
m = k
topk_select(perm, 4)
from time import perf_counter
def measure_time(fct, perm, k, repeat=1000):
t1 = perf_counter()
for r in range(repeat):
fct(perm, k)
return perf_counter() - t1
measure_time(topk_sortall, perm, 4)
measure_time(topk_tri_selection, perm, 4)
measure_time(topk_insertion, perm, 4)
measure_time(topk_fusion, perm, 4)
measure_time(topk_select_recursive, perm, 4)
measure_time(topk_select, perm, 4)
perm10000 = numpy.random.permutation(numpy.arange(10000))
measure_time(topk_sortall, perm10000, 40, repeat=10)
measure_time(topk_tri_selection, perm10000, 40, repeat=10)
measure_time(topk_insertion, perm10000, 40, repeat=10)
measure_time(topk_fusion, perm10000, 40, repeat=10)
measure_time(topk_select_recursive, perm10000, 40, repeat=10)
measure_time(topk_select, perm10000, 40, repeat=10)
from tqdm import tqdm
from pandas import DataFrame
perm1000 = numpy.random.permutation(numpy.arange(1000))
res = []
for k in tqdm(range(100, 501, 50)):
obs = dict(k=k, n=perm1000.shape[0])
for fct in [topk_sortall, topk_tri_selection, topk_insertion,
topk_fusion, topk_select_recursive, topk_select]:
obs[fct.__name__] = measure_time(fct, perm1000, k, repeat=10)
res.append(obs)
df = DataFrame(res)
df.head()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
df.set_index('k').drop(["n", "topk_tri_selection"], axis=1).plot(
logy=True, logx=True, title="Cost as a function of k", ax=ax[0])
df.set_index('k')[["topk_tri_selection"]].plot(
logy=True, logx=True, title="Cost as a function of k", ax=ax[1]);
res = []
k = 10
for n in tqdm([20, 100, 1000, 10000, 100000]):
permn = numpy.random.permutation(numpy.arange(n))
obs = dict(k=k, n=permn.shape[0])
for fct in [topk_sortall, topk_tri_selection, topk_insertion,
topk_fusion, topk_select_recursive, topk_select]:
obs[fct.__name__] = measure_time(fct, permn, k, repeat=10)
res.append(obs)
df = DataFrame(res)
df.head()
df.set_index('n').drop(["k"], axis=1).plot(
logy=True, logx=True, title="Cost as a function of n");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem statement
Step2: Exercise 1
Step3: Exercise 2
Step4: You are also allowed to cheat with a third idea, a heap, but you will have to work that one out on your own.
Step5: Answers
Step6: Sorting every element is not necessary, however. There is no need to sort the elements beyond position k, so it is a priori possible to do better by performing only the computations that are needed.
Step7: The cost of the algorithm is $O(kn)$.
Step8: The worst-case cost of the algorithm is $O(n * (\ln k + k))$
Step9: Exercise 3
Step10: A second version without recursion.
Step11: Exercise 4
Step12: The first function is faster mostly because the sort is implemented by a particularly optimized language built-in (in C). One would have to implement a sort in Python to compare fairly.
Step13: The function must be run many times on different sizes of n to see the trend.
Step14: Not obvious: the language is too high-level and the interpreter's overhead masks the algorithmic behavior. One would need to increase the size of the array to sort or refine the measurement.
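The third idea mentioned above — a heap — can be sketched with the standard library's `heapq`: keep the k smallest values in a max-heap (stored negated, since `heapq` is a min-heap), replacing the root whenever a smaller element arrives, for an O(n log k) cost. This sketch is not part of the notebook above:

```python
import heapq

def topk_heap(ensemble, k):
    # max-heap of the k smallest values seen so far, stored negated
    heap = []
    for v in ensemble:
        if len(heap) < k:
            heapq.heappush(heap, -v)
        elif -heap[0] > v:
            # the current worst of the k best is larger than v: replace it
            heapq.heapreplace(heap, -v)
    return sorted(-h for h in heap)

print(topk_heap([5, 1, 9, 3, 7, 0, 8, 2], 4))  # [0, 1, 2, 3]
```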
|
11,210
|
<ASSISTANT_TASK:>
Python Code:
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
sns.set(style="ticks", context='talk', font_scale=1.1)
%matplotlib inline
df = pd.read_csv('top10mountain-lions.csv')
sns.set_style("whitegrid")
sns.barplot(x="count", y="COUNTY", data=df, color="#2ecc71")
sns.despine(bottom=True, left=True)
sns.set_style("white")
sns.barplot(x="count", y="COUNTY", data=df, color="#f6a14e")
plt.ylabel('')
plt.xlabel('Mountain Lion Sightings')
sns.despine(bottom=True, left=True)
sns.set_style("darkgrid")
sns.barplot(x="count", y="COUNTY", data=df, color="#7fe5ba")
plt.ylabel('')
plt.xlabel('Mountain Lion Sightings')
sns.despine(bottom=True, left=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the csv file into a pandas DataFrame
Step2: Chart 1 features a whitegrid style to help assign a number to each bar. Despine removes Tufte's chart junk.
Step3: Chart 2 features a white chart style. The Y label is removed because the county label is redundant. Also, the X label is more useful. Despine removes Tufte's chart junk.
Step4: Chart 3 features a dark grid style. Although the grid helps assign values to the bars, its color is distracting and tends toward chart junk.
|
11,211
|
<ASSISTANT_TASK:>
Python Code:
!pip install cufflinks
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
plt.style.use('ggplot')
import seaborn as sns # for making plots with seaborn
color = sns.color_palette()
sns.set(rc={'figure.figsize':(25,15)})
import plotly
plotly.offline.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.figure_factory as ff
#import cufflinks as cf
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
df = pd.read_csv('./db/googleplaystore.csv')
df.sample(10)
len(df)
df.columns
df.drop_duplicates(subset='App', inplace=True)
df = df[df['Android Ver'].notna()]  # '!= np.nan' is always True, so it never filtered anything
df = df[df['Android Ver'] != 'NaN']
df = df[df['Installs'] != 'Free']
df = df[df['Installs'] != 'Paid']
df['Installs'] = df['Installs'].apply(lambda x: x.replace('+', '') if '+' in str(x) else x)
df['Installs'] = df['Installs'].apply(lambda x: x.replace(',', '') if ',' in str(x) else x)
df['Installs'] = df['Installs'].apply(lambda x: int(x))
df['Installs'] = df['Installs'].apply(lambda x: float(x))
df['Size'] = df['Size'].apply(lambda x: str(x).replace('Varies with device', 'NaN') if 'Varies with device' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: str(x).replace('M', '') if 'M' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: str(x).replace(',', '') if 'M' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: float(str(x).replace('k', '')) / 1000 if 'k' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: float(x))
df['Reviews'] = df['Reviews'].apply(lambda x: int(x))
df['Price'] = df['Price'].apply(lambda x: str(x).replace('$', '') if '$' in str(x) else str(x))
df['Price'] = df['Price'].apply(lambda x: float(x))
total = df.isnull().sum().sort_values(ascending=False)
percent = (df.isnull().sum()/df.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(6)
df.dropna(how ='any', inplace = True)
total = df.isnull().sum().sort_values(ascending=False)
percent = (df.isnull().sum()/df.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(6)
print(len(df))
10841 - 7021
df["Price"].describe()
x = df['Rating'].dropna()
z = df['Installs'][df.Installs!=0].dropna()
p = df['Reviews'][df.Reviews!=0].dropna()
t = df['Type'].dropna()
price = df['Price']
p = sns.pairplot(pd.DataFrame(list(zip(x, np.log(z), np.log10(p), t, price)),
columns=['Rating', 'Installs', 'Reviews', 'Type', 'Price']), hue='Type', palette="Set2")
print("\n", df['Category'].unique())
print(df["Genres"].unique())
from scipy import stats
sns.distplot(df["Price"], kde=False, fit=stats.norm);
df["Price"].describe()
dfmod = df
dfmod["Price"] = (df["Price"] - df["Price"].min()) / (df["Price"].max() - df["Price"].min())
dfmod["Price"].describe()
sns.distplot(dfmod["Price"], kde=False, fit=stats.norm);
%%capture
dfmod["Installs"] = (df["Installs"] - df["Installs"].min()) / (df["Installs"].max() - df["Installs"].min())
print(dfmod["Rating"].describe())
sns.distplot(df["Rating"], kde=False, fit=stats.norm);
sns.distplot(df["Installs"], rug=True, hist=False)
print(df["Rating"].describe())
print("\n", df['Rating'].unique())
sns.distplot(df["Rating"], hist=True)
dfmod.loc[(dfmod['Rating'] >= 0.0 ) & (dfmod['Rating'] <= 4.25 ), 'label_rating'] = '0 bad'
dfmod.loc[(dfmod['Rating'] > 4.25 ) & (dfmod['Rating'] <= 4.75 ), 'label_rating'] = '1 normal'
dfmod.loc[(dfmod['Rating'] > 4.75), 'label_rating'] = '2 good'
print(dfmod['label_rating'].unique())
dfmod["Rating"] = (df["Rating"] - df["Rating"].min()) / (df["Rating"].max() - df["Rating"].min())
sns.distplot(dfmod["Rating"], hist=True)
dfmod["Installs"] = (df["Installs"] - df["Installs"].min()) / (df["Installs"].max() - df["Installs"].min())
dfmod["Price"] = (df["Price"] - df["Price"].min()) / (df["Price"].max() - df["Price"].min())
dfmod["Reviews"] = (df["Reviews"] - df["Reviews"].min()) / (df["Reviews"].max() - df["Reviews"].min())
dfmod[['Rating', 'Installs', 'Price', 'Reviews', 'label_rating' ]]
#sns.dfmod[['Rating', 'Installs', 'Price', 'Reviews', 'label_rating' ]]
#sns.pairplot(data = dfmod[['Rating', 'Installs', 'Price', 'Reviews', 'label_rating' ]])
x = dfmod['Rating']
z = dfmod['Installs']
p = dfmod['Reviews'][df.Reviews!=0].dropna()
t = dfmod['label_rating'].dropna()
price = df['Price']
p = sns.pairplot(pd.DataFrame(list(zip(x, z, p, t, price)),
columns=['Rating', 'Installs', 'Reviews', 'label_rating', 'Price']), hue='label_rating', palette="Set2")
from sklearn.cluster import KMeans
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score, cross_val_predict, KFold
from sklearn import preprocessing, metrics
from sklearn.cluster import AgglomerativeClustering
from time import time
seed_initialization = [37,110,777]
n_jobs = 4
sample_size_ = 300
dataclust = dfmod[['Rating', 'Installs', 'Price', 'Reviews' ]];
table_results = []
for i in range(3,13):
for y in range(3):
t0 = time()
estimator = KMeans(init='k-means++', n_clusters=i, random_state = seed_initialization[y], n_jobs = n_jobs)
estimator.fit(dataclust)
print(time()-t0)
print("k: " + str(i))
print("seed: " + str(seed_initialization[y]))
db = metrics.homogeneity_score(dfmod['label_rating'], estimator.labels_)
si = metrics.silhouette_score(dataclust, estimator.labels_, metric='euclidean',sample_size=sample_size_)
result_estimator = ("kmeans", i, seed_initialization[y], db, si)  # tuple, not set: a set has no stable order
table_results.append(result_estimator)
estimator2 = AgglomerativeClustering(n_clusters=6, linkage='ward').fit(dataclust)
db2 = metrics.homogeneity_score(dfmod['label_rating'], estimator.labels_)
si2 = metrics.silhouette_score(dataclust, estimator.labels_, metric='euclidean',sample_size=sample_size_)
result_estimator = ("AgglomerativeClustering", 6, "-", db2, si2)
table_results.append(result_estimator)
result = pd.DataFrame(table_results, columns=["Algorithm", "K", "Seed", "DB", "Silhouette"])
result
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the dataset for the assignment
Step2: So, according to the description of the dataset as well as the descriptions of the columns, we have a dataset with information about the Google Play Store. We could apply many algorithms, since we have a great variety of tuples (more than 10k) and 13 features.
Step3: we could check that our dataset contains 10,841 tuples
Step4: and 13 columns/features
Step5: Where each column/feature represents
Step6: So, analyzing the number of 'missing' data we have
Step7: And we need to take action about this situation dropping all instances with missing values from our dataset
Step8: Checking the result of this operation we conclude with less number of instances
Step9: Findings
Step10: The feature 'Price' is one of the columns that needs some normalization. As we can see below, we have a mean of $1.17 per app and the data is very skewed. Below we plot a chart with the distribution of this feature
Step11: So as we can see below, we now have normalized price data in another DataFrame, following the same histogram shape. As with Price, there is other information that we need to check, like the number of installs, rating, reviews, and size.
Step12: For the 'Installs' feature we have categorized information, dividing the number of downloads into chunks, as we can see below
Step13: Let's see how the distribution of the rating values is presented. The ratings have a really high mean of 4.16 and a standard deviation of around 0.56. Notice that the minimum rating is 1 and that the first quartile is about 4; in other words, we have skewed data with a long left tail. We already normalized this feature.
Step14: If we are going to use the method for classification, we should classify based on a label. For this purpose we will build one from the distribution of the rating, and we'll propose the following
Step15: Now, after we categorize the new label, we are ready to normalize the rating data. Remember that we are normalizing the data to the 0-1 range.
Step16: We should also normalize reviews and price data
Step17: What the pair plot above shows us is how the mobile market works. The first plot that we noticed was the $rating x installs$ chart, in which the apps with the highest ratings have far fewer installs; the same behavior can be observed in the $rating x reviews$ chart. The third observation concerns the $rating x price$ chart, in which we can conclude that there is no 'good' (highly rated) software at a high price!
Step18: The k-Means algorithm in scikit-learn uses Lloyd's or Elkan's algorithm [http
Step19: 3.Post processing
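The model-selection loop above — fit K-Means over a sweep of k and score each fit — can be sketched on synthetic data where the answer is known. This is an illustrative sketch, not part of the assignment code; `make_blobs` generates the toy clusters, and the silhouette score should peak at the true number of blobs.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# four well-separated synthetic clusters
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.6, random_state=0)

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))  # the sweep should recover k = 4
```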
|
11,212
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize = [15, 5])
plt.title('My plot')
plt.xlabel('X values')
plt.ylabel('Count of values')
plt.xlim([0, 10])
plt.ylim([0, 5])
data_dict = {
'x': [0, 1, 1, 2, 1, 0, 0, 1, 2, 1, 1],
'y': [1, 1, 1, 1, 1, 0, 0, 0, 1, 2, 1]
}
df = pd.DataFrame(data=data_dict)
plt.title('My plot')
plt.xlabel('X values')
plt.ylabel('Count of values')
sns.countplot(df['x']);
data_dict = {
'x': [0, 1, 1, 2, 1, 0, 0, 1, 2, 1, 1],
'y': [1, 1, 1, 1, 1, 0, 0, 0, 1, 2, 1]
}
df = pd.DataFrame(data=data_dict)
percentage_dist = df['x'].value_counts(normalize=True).sort_index()
ax = sns.barplot(x=percentage_dist.index, y=percentage_dist.values)
ax.set_ylim([0,1]);
mu, sigma = 0, 0.1 # mean and standard deviation
data_dict = {
'x': np.random.normal(mu, sigma, 1000)
}
df = pd.DataFrame(data=data_dict)
plt.hist(df['x'], bins=np.arange(start=-0.4, stop=0.4, step=0.05));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chart Attributes
Step2: Axes - Set Range
Step3: Percentage of Each Label in Column of DataFrame
Step4: Count of (Continuous) Values in Column of DataFrame
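The normalized bar heights in Step3 come from `value_counts(normalize=True)`, which divides each label's count by the column length so the fractions sum to 1. A quick check on the same toy column used above:

```python
import pandas as pd

x = pd.Series([0, 1, 1, 2, 1, 0, 0, 1, 2, 1, 1])
dist = x.value_counts(normalize=True).sort_index()
print(dist)  # 0 -> 3/11, 1 -> 6/11, 2 -> 2/11
print(dist.sum())  # sums to 1 (up to float rounding)
```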
|
11,213
|
<ASSISTANT_TASK:>
Python Code:
import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT
REGION = "us-central1"
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(regex)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
title_dataset.source.value_counts()
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
MODEL_DIR = "./text_models"
DATA_DIR = "./data"
ls ./data/
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
titles_df.source.value_counts()
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
N_TRAIN = int(len(titles_df) * 0.95)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
sources_train.value_counts()
sources_valid.value_counts()
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2"
nnlm_module = KerasLayer(NNLM, output_shape=[50], input_shape=[], dtype=tf.string)  # assumed TODO solution
nnlm_module(tf.constant(["The dog is happy"]))  # sample sentence (assumed)
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string)  # assumed TODO solution
swivel_module(tf.constant(["The dog is happy"]))  # sample sentence (assumed)
def build_model(hub_module, name):
model = Sequential([
        hub_module,  # assumed TODO solution: the pre-trained TF-Hub embedding layer
Dense(16, activation='relu'),
Dense(N_CLASSES, activation='softmax')
], name=name)
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
X_train, Y_train = train_data
tf.random.set_seed(33)
model_dir = os.path.join(MODEL_DIR, model.name)
if tf.io.gfile.exists(model_dir):
tf.io.gfile.rmtree(model_dir)
history = model.fit(
X_train, Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping(), TensorBoard(model_dir)],
)
return history
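`EarlyStopping()` with default arguments stops training as soon as the monitored loss fails to improve; a rough pure-Python sketch of that logic (a simplification, not Keras's actual implementation):

```python
def early_stopping_fit(losses, patience=0):
    # stop once the loss fails to improve for more than `patience` epochs,
    # mimicking keras.callbacks.EarlyStopping with default settings
    best = float('inf')
    waited = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, waited = loss, 0
        else:
            waited += 1
            if waited > patience:
                return epoch  # epoch at which training stops
    return len(losses) - 1

print(early_stopping_fit([0.9, 0.7, 0.72, 0.71]))  # 2
```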
data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
nnlm_model = build_model(nnlm_module, 'nnlm')
nnlm_history = train_and_evaluate(data, val_data, nnlm_model)
history = nnlm_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
swivel_model = build_model(swivel_module, name='swivel')
swivel_history = train_and_evaluate(data, val_data, swivel_model)
history = swivel_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
OUTPUT_DIR = "./savedmodels"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, 'swivel')
os.environ['EXPORT_PATH'] = EXPORT_PATH
shutil.rmtree(EXPORT_PATH, ignore_errors=True)
tf.saved_model.save(swivel_model, EXPORT_PATH)
%%bash
# TODO 5
MODEL_NAME=title_model
VERSION_NAME=swivel
if [[ $(gcloud ai-platform models list --format='value(name)' | grep ^$MODEL_NAME$) ]]; then
echo "$MODEL_NAME already exists"
else
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --region=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep ^$VERSION_NAME$) ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
echo "Creating $MODEL_NAME:$VERSION_NAME"
# assumed TODO solution: framework/python values matching runtime 2.1; the
# SavedModel must first be copied to a GCS bucket ($BUCKET, assumed defined)
gcloud ai-platform versions create $VERSION_NAME \
    --model=$MODEL_NAME \
    --framework=tensorflow \
    --python-version=3.7 \
    --runtime-version=2.1 \
    --origin=gs://${BUCKET}/savedmodels/swivel \
    --staging-bucket=gs://${BUCKET} \
    --machine-type n1-standard-4 \
    --region=$REGION
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
%%writefile input.json
{# TODO}
!gcloud ai-platform predict \
--model title_model \
--json-instances input.json \
--version swivel \
--region=$REGION
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
Step9: Let's make sure we have roughly the same number of labels for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Step12: Let's write the sample dataset to disk.
Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step14: Loading the dataset
Step15: Let's look again at the number of examples per label to make sure we have a well-balanced dataset
Step16: Preparing the labels
Step17: Preparing the train/test splits
Step18: To be on the safe side, we verify that the train and test splits
Step19: Now let's create the features and labels we will feed our models with
Step20: NNLM Model
Step21: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence
Step22: Swivel Model
Step23: Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence
Step24: Building the models
Step25: Let's also wrap the training code into a train_and_evaluate function that
Step26: Training NNLM
Step27: Training Swivel
Step28: Swivel trains faster but achieves a lower validation accuracy, and requires more epochs to converge.
Step29: Then we can deploy the model using the gcloud CLI as before
Step30: Before we try our deployed model, let's inspect its signature to know what to send to the deployed API
Step31: Let's go ahead and hit our model
|
11,214
|
<ASSISTANT_TASK:>
Python Code:
import xlsxwriter
workbook = xlsxwriter.Workbook('hello.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write('A1', 'Hello world')
workbook.close()
expenses = (
['Rent', 1000],
['Gas', 100],
['Food', 300],
['Gym', 50],
)
workbook = xlsxwriter.Workbook('Expenses01.xlsx')
# worksheets are named Sheet1, Sheet2, ... by default, but you can give them custom names
worksheet = workbook.add_worksheet()
# start from the first cell! zero index
row = 0
col = 0
for item, cost in (expenses):
worksheet.write(row, col, item)
worksheet.write(row, col+1, cost)
row += 1
# use write() to enter data into a cell
worksheet.write(row, 0, 'Total')
worksheet.write(row, 1, '=SUM(B1:B4)')
workbook.close()  # the workbook must always be closed
# default sheet names are Sheet1, Sheet2, ...
worksheet1 = workbook.add_worksheet() # sheet1
worksheet2 = workbook.add_worksheet('Data')
worksheet3 = workbook.add_worksheet() # sheet3
workbook = xlsxwriter.Workbook("Expenses02.xlsx")
worksheet = workbook.add_worksheet("Sheet1")
# bold format
bold = workbook.add_format({"bold": True})
# number format for cells
money = workbook.add_format({"num_format": "$#,##0"})
# write the headers
worksheet.write('A1', 'Item', bold)
worksheet.write('B1', 'Cost', bold)
expenses = (
['Rent', 1000],
['Gas', 100],
['Food', 300],
['Gym', 50],
)
row = 1  # start at row 1 because row 0 holds the headers
col = 0
for item, cost in (expenses):
worksheet.write(row, col, item)
worksheet.write(row, col+1, cost, money)
    # write(row, col, value, [format])
row += 1
worksheet.write(row, 0, "Total", bold)
worksheet.write(row, 1, "=sum(B2:B5)", money)
workbook.close()
from datetime import datetime
workbook = xlsxwriter.Workbook("Expenses03.xlsx")
worksheet = workbook.add_worksheet()
bold = workbook.add_format({"bold":1})
money_format = workbook.add_format({"num_format": "$#,##0"})
date_format = workbook.add_format({"num_format": "mmmm d yyyy"})
worksheet.set_column(1, 1, 15)  # adjust the column width
worksheet.write("A1", "Item", bold)
worksheet.write("B1", "Date", bold)
worksheet.write("C1", "Cost", bold)
expenses = (
['Rent', '2013-01-13', 1000],
['Gas', '2013-01-14', 100],
['Food', '2013-01-16', 300],
['Gym', '2013-01-20', 50],
)
# Start from the first cell below the headers.
row = 1
col = 0
for item, date_str, cost in (expenses):
date = datetime.strptime(date_str, "%Y-%m-%d")
worksheet.write_string (row, col, item)
worksheet.write_datetime(row, col + 1, date, date_format )
worksheet.write_number (row, col + 2, cost, money_format)
row += 1
# Write a total using a formula.
worksheet.write(row, 0, 'Total', bold)
worksheet.write(row, 2, '=SUM(C2:C5)', money_format)
workbook.close()
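The loop above parses each date string with `strptime` before handing it to `write_datetime`; the same round trip in isolation:

```python
from datetime import datetime

# parse one of the notebook's date strings, then render it in the display format
date = datetime.strptime('2013-01-13', '%Y-%m-%d')
print(date.strftime('%B %d %Y'))  # January 13 2013
```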
import pandas as pd
df = pd.DataFrame({"Data": [10, 20, 30, 20, 15, 30, 45]})
df
## using xlsxwriter through pandas
df = pd.DataFrame({'Data': [10, 20, 30, 20, 15, 30, 45]})
writer = pd.ExcelWriter("pandas_simple.xlsx", engine="xlsxwriter")
df.to_excel(writer, sheet_name="Sheet1")
workbook = writer.book
worksheet = writer.sheets["Sheet1"]
# create a chart object
chart = workbook.add_chart({"type": 'column'})
# set the chart's data range from the dataframe
chart.add_series({"values": "=Sheet1!$B$2:$B$8"})
# insert the chart into the worksheet
worksheet.insert_chart("D2", chart)
writer.save()
# how would we modify the chart properties?
# conditional color formatting for the dataframe
worksheet.conditional_format('B2:B8', {'type': '3_color_scale'})
# this produces the multicoloured output shown below
df1 = pd.DataFrame({'Data': [11, 12, 13, 14]})
df2 = pd.DataFrame({'Data': [21, 22, 23, 24]})
df3 = pd.DataFrame({'Data': [31, 32, 33, 34]})
df4 = pd.DataFrame({'Data': [41, 42, 43, 44]})
# set up the writer
writer = pd.ExcelWriter('pandas_positioning.xlsx', engine='xlsxwriter')
# choose where each dataframe is written
df1.to_excel(writer, sheet_name='Sheet1')  # default position A1
df2.to_excel(writer, sheet_name='Sheet1', startcol=3)  # start at the 4th column (index 3)
df3.to_excel(writer, sheet_name='Sheet1', startrow=6)  # start at the 7th row
# can be written without header and index, but the start position must account for that
df4.to_excel(writer, sheet_name='Sheet1',
             startrow=7, startcol=4, header=False, index=False)
# close the writer
writer.save()
writer.save()
list_data = [10, 20, 30, 20, 15, 30, 45]
df = pd.DataFrame(list_data)
excel_file = 'column.xlsx'
sheet_name = 'Sheet1'
writer = pd.ExcelWriter(excel_file, engine='xlsxwriter')
df.to_excel(writer, sheet_name=sheet_name)
workbook = writer.book
worksheet = writer.sheets[sheet_name]
chart = workbook.add_chart({'type': 'column'})
chart.add_series({
'values': '=Sheet1!$B$2:$B$8',
    'gap': 2  # gap between bars
})
# configure the chart's y axis
chart.set_y_axis({'major_gridlines': {'visible': False}})
# turn off the chart legend
chart.set_legend({'position': 'none'})
# insert the chart into the worksheet
worksheet.insert_chart('D2', chart)  # must be 'D2'; lowercase 'd2' raises an error
writer.save()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: tutorial 1
Step2: <img src="https
Step3: Tutorial 2
Step4: Tutorial 3
Step5: write() method
Step6: <img src="https
Step7: <img src="https
Step8: <img src="https
|
11,215
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np

# Enter the specs of the detector
nep = 2.34e-12 # in Watts per root hz
BW = 10e6 # Bandwidth in Hz
gain = 0.75e4 # gain in V/A
responsivity = 0.5 # Amps per Watt (assume 800 nm)
pmin = nep * np.sqrt(BW)
volts_min = pmin * responsivity * gain
print("voltage generated by p_min:",volts_min)
scope_floor_factor = 0.010/volts_min
# the power has to be scope_floor_factor times larger in order to generate 10mV:
pmin * scope_floor_factor
nep = 1.21e-12
BW = 260e3
gain = 2.38e4
responsivity = 0.5
pmin = nep * np.sqrt(BW)
volts_min = pmin * responsivity * gain
print(volts_min)
scope_floor_factor = 0.010/volts_min
# resolvable power:
pmin * scope_floor_factor
nep = 0.2e-12
BW = 50e6
gain = 50000
responsivity = 25
pmin = nep * np.sqrt(BW)
volts_min = pmin * responsivity * gain
print(volts_min)
scope_floor_factor = 0.010/volts_min
# resolvable power:
pmin * scope_floor_factor
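The cells above repeat the same computation for each detector; it can be wrapped in a small helper (the 10 mV scope floor is taken from the discussion above):

```python
import math

def min_resolvable_power(nep, bandwidth_hz, responsivity, gain, scope_floor_v=0.010):
    # smallest optical power that still produces scope_floor_v at the output
    p_min = nep * math.sqrt(bandwidth_hz)   # noise-equivalent power at this bandwidth
    v_min = p_min * responsivity * gain     # voltage generated at p_min
    return p_min * (scope_floor_v / v_min)

print(min_resolvable_power(2.34e-12, 10e6, 0.5, 0.75e4))  # about 2.7 uW
```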
1.3e-6 * 50000 * 25 # watts times volts/amp times amps/watt gives volts:
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that we need at least 10 mV to even resolve a signal on the scope, so the NEP is only part of the story.
Step2: This setting (20dB) leaves plenty of bandwidth but can't resolve our signal (1.3uW). It's limited to double our signal.
Step3: On 30dB we start to hit the bandwidth limit but can resolve down to 0.8 uW (half our signal). Even so, we'd only get 10mV out of it.
Step4: This can resolve down to 8nW so our 1.3uW would be ~1000x larger.
|
11,216
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import collections
import mido
from commons import dgxdump
from commons.dumpdata import messages, songdata, regdata, regvalues
old_syx_messages = mido.read_syx_file('../data/syxout5.syx')
clear_syx_messages = mido.read_syx_file('../data/clear_bulk.txt')
o_dump = dgxdump.DgxDump(old_syx_messages)
c_dump = dgxdump.DgxDump(clear_syx_messages)
# songs slices
songslices = collections.OrderedDict([
('songs', slice(0x00, 0x01)),
('mystery', slice(0x01, 0x15D)),
('tracks', slice(0x15D, 0x167)),
('durations', slice(0x167, 0x17B)),
('trackdurations', slice(0x17B, 0x1F3)),
('presetstyle', slice(0x1F3, 0x22F)),
('beginningblocks', slice(0x22F, 0x24D)),
('nextblocks', slice(0x24D, 0x2CF)),
('startmarker', slice(0x2CF, 0x2D5)),
('blockdata', slice(0x2D5, 0x106D5)),
('endmarker', slice(0x106D5, None)),
])
EXPECTED_SIZE = 0x106DB
PRESETSTYLE = b'PresetStyle\0'*5
MARKER = b'PK0001'
def hex_string(data):
return " ".join("{:02X}".format(b) for b in data)
def bin_string(data):
return " ".join("{:08b}".format(b) for b in data)
def line_hex(data, head=None, tail=0):
if head is None:
head = len(data)
tailstart = len(data) - tail
if tailstart <= head:
return (hex_string(data))
else:
return ("{} .. {}".format(hex_string(data[:head]), hex_string(data[tailstart:])))
def song_section(dump, section):
return dump.song_data.data[songslices[section]]
for sec in songslices:
print(sec)
print(line_hex(song_section(o_dump, sec), 32, 4))
print(line_hex(song_section(c_dump, sec), 32, 4))
song_section(o_dump, 'mystery') == song_section(c_dump, 'mystery')
all(b==0 for b in song_section(c_dump, 'nextblocks'))
all(b==0 for b in song_section(c_dump, 'blockdata'))
bytes(song_section(c_dump, 'presetstyle'))
print(line_hex(o_dump.reg_data.data, 32, 4))
print(line_hex(c_dump.reg_data.data, 32, 4))
for bank in range(1, 8+1):
for button in range(1, 2+1):
print(bank, button)
print(line_hex(o_dump.reg_data.settings.get_setting(bank, button).data))
print(line_hex(c_dump.reg_data.settings.get_setting(bank, button).data))
for x in range(2, 7):
!diff -qs ../data/backup_experiment/cb1.txt ../data/backup_experiment/cb{x}.txt
!diff -qs ../data/backup_experiment/cb1.txt ../data/clear_bulk.txt
c2_syx_messages = mido.read_syx_file('../data/backup_experiment/cb1.txt')
c2_dump = dgxdump.DgxDump(c2_syx_messages)
c_dump.song_data.data == c2_dump.song_data.data
c_dump.reg_data.data == c2_dump.reg_data.data
for sec in songslices:
c_sec = song_section(c_dump, sec)
c2_sec = song_section(c2_dump, sec)
if c_sec != c2_sec:
print(sec)
print(line_hex(c_sec, 32, 4))
print(line_hex(c2_sec, 32, 4))
for n, (a, b) in enumerate(zip(c_dump.song_data.data, c2_dump.song_data.data)):
if a != b:
print("{0:02X}: {1:02X} {2:02X} ({1:03d} {2:03d})".format(n, a, b))
r1_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/1reg.syx'))
c2_dump.song_data.data == r1_dump.song_data.data
c2_dump.reg_data.data == r1_dump.reg_data.data
for bank in range(1, 8+1):
for button in range(1, 2+1):
if not all(x == 0 for x in r1_dump.reg_data.settings.get_setting(bank, button).data):
print(bank, button)
line_hex(r1_dump.reg_data.settings.get_setting(3, 2).data)
for bb in [(3, 2), (1, 1)]:
sets = r1_dump.reg_data.settings.get_setting(*bb)
print(line_hex(sets.data))
sets.print_settings()
sets.print_unusual()
r2_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/2reg.txt'))
sets = r2_dump.reg_data.settings.get_setting(2,2)
sets.print_settings()
sets.print_unusual()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The mystery section remains the same.
Step2: All the blocks are empty.
Step3: The 'PresetStyle' settings are empty, too.
Step4: Each of the registry settings are completely blank.
Step5: The only difference seems to be two bytes in the mystery section, at offsets 0x07 and 0x08.
Step6: I believe the only real way to get unrecorded settings is to reset the memory, which clears all the values to zero.
|
11,217
|
<ASSISTANT_TASK:>
Python Code:
BATCH_SIZE = 16
PROJECT_DIR = os.path.expanduser("~/nta/nupic.research/projects/gsc")
EXPERIMENT_NAME = "default_sparse_cnn"
CHECKPOINT_FILE = "/home/ec2-user/nta/results/experiments/gsc/default_sparse_cnn/RemoteProcessTrainable_0_2021-01-07_15-54-473wnu2slv/checkpoint_30/checkpoint"
USE_CUDA = torch.cuda.is_available()
# Load the experiment
import os
import sys
sys.path.insert(0, PROJECT_DIR)
from experiments import CONFIGS
config = CONFIGS[EXPERIMENT_NAME]
exp_cls = config["experiment_class"]
exp = exp_cls()
# Load dataloader with specified BATCH_SIZE.
dataset = exp_cls.load_dataset(config, train=True)
train_loader = DataLoader(
dataset=dataset,
batch_size=BATCH_SIZE,
shuffle=True,
num_workers=0,
sampler=None,
pin_memory=USE_CUDA,
drop_last=config.get("train_loader_drop_last", False),
)
# Path to checkpoint.
assert CHECKPOINT_FILE is not None, "You should specify a path to a checkpoint."
# Load model
config = CONFIGS[EXPERIMENT_NAME]
exp_cls = config["experiment_class"]
exp = exp_cls()
model = exp_cls.create_model(config, "cuda")
load_state_from_checkpoint(model=model, checkpoint_path=CHECKPOINT_FILE, device="cuda");
class SaveForwardOutput:
    """
    Forward hook for a torch.nn.Module. This will store the output from a module
    during the forward pass.

    :param name: name of the module (e.g. "classifier")
    :param verbose: whether to print something every forward pass
    """
def __init__(self, name=None, verbose=False):
self.name = name
self.verbose = verbose
self.output = None
def __call__(self, module, x, y):
        """
        Forward hook on a torch.nn.Module.

        :param module: module
        :param x: tuple of inputs
        :param y: output of module
        """
if self.verbose:
print(f"Forward hook of {self.name} ({module.__class__.__name__})")
# Save the output of the layer.
self.output = y
def calculate_on_counts(rep):
    """
    Calculates the number of times a unit is on for the given batch.

    :param rep: tensor of an intermediate representation
    :return: a 1D tensor with length equal to the total number of units
    """
rep = rep.flatten(start_dim=1)
on_mask = rep != 0
on_counts = on_mask.sum(axis=0)
return on_counts
def calculate_sparsities(rep):
    """
    Calculates the fraction of units off for each sample in the batch.

    :param rep: tensor of an intermediate representation
    :return: tensor of sparsities, one for each sample in the batch
    """
rep = rep.flatten(start_dim=1)
off_mask = rep == 0
sparsities = off_mask.sum(axis=1).float()
sparsities = sparsities / rep.shape[1]
return sparsities
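The same per-sample sparsity computation can be sketched without torch, using nested lists in place of tensors:

```python
def sparsity_fractions(batch):
    # fraction of zero-valued units in each flattened sample
    fracs = []
    for sample in batch:
        flat = [v for row in sample for v in row]  # flatten a 2D sample
        zeros = sum(1 for v in flat if v == 0)
        fracs.append(zeros / len(flat))
    return fracs

print(sparsity_fractions([[[0, 1], [0, 2]], [[0, 0], [0, 3]]]))  # [0.5, 0.75]
```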
forward_hooks = {}
tracked_modules = []
for n, m in model.named_modules():
# Skip over modules that have sub-modules.
# These sub-modules will be found recursively by 'named_modules'.
if len(list(m.children())) > 0:
continue
# TODO: We can filter which modules get tracked if needed.
forward_hook = SaveForwardOutput(name=n)
m.register_forward_hook(forward_hook)
forward_hooks[n] = forward_hook
tracked_modules.append(m)
total_params, on_params = count_nonzero_params(model)
print(f"Model density (on_params / total_params): {on_params:,} / {total_params:,} = {on_params/total_params:.2f}")
print(model)
# This will be a tensor of length equal to the total number of output units to the given layer.
# Each value will represent the frequency ("freq") of how often that unit is on throughout the whole dataset.
layer_on_freqs = defaultdict(int)
# This will be a list of sparsities measure for each output passed to the layer.
layer_sparsities = defaultdict(list)
# Loop through the whole dataset. See "Load the experiment" section
# above to use the validation dataloader if desired.
total_images = 0
model.eval()
for x, y in train_loader:
x, y = x.to("cuda"), y.to("cuda")
out = model(x)
for name, hook in forward_hooks.items():
# Retrieve ouput saved in the forward hook.
representation = hook.output
# Tally the on-counts for each unit.
on_counts = calculate_on_counts(representation)
layer_on_freqs[name] += on_counts
# Compute the level of sparsity for each output.
sparsities = calculate_sparsities(representation)
layer_sparsities[name].extend(sparsities.tolist())
total_images += len(x)
# Divide through by number of samples to find on-frequency for eash unit.
for layer, on_counts in layer_on_freqs.items():
layer_on_freqs[layer] = on_counts.float() / total_images
df = DataFrame(columns=["module type", "mu output sparsity", "sd output sparsity", "entropy", "dead neuron count", "total neurons", "fraction_dead"])
def log2_of_zeros(arr):
    """Intentionally leave zeros as they are when taking the log."""
    arr = arr.copy()
    arr[arr != 0] = np.log2(arr[arr != 0])
    return arr
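The per-layer entropy computed below can be sketched in plain Python (here using log base 2; zero frequencies contribute nothing, as in `log2_of_zeros`):

```python
import math

def entropy_bits(freqs):
    # -sum(p * log2(p)) / n, skipping zero entries
    total = 0.0
    for p in freqs:
        if p > 0:
            total += -p * math.log2(p)
    return total / len(freqs)

print(entropy_bits([0.5, 0.5, 0.0]))  # 1 bit spread over 3 units: 0.333...
```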
for (layer, sparsities), freqs, module in zip(layer_sparsities.items(), layer_on_freqs.values(), tracked_modules):
mu_sparsity = np.mean(sparsities)
sd_sparsity = np.std(sparsities)
module_type = module.__class__.__name__
print(f"Layer {layer} ({module_type})")
print(f" Average sparsity of output = {mu_sparsity:.3f} ± {sd_sparsity:.3f}")
freqs = freqs.cpu().numpy()
entropy = np.sum(-freqs * log2_of_zeros(freqs) / len(freqs))
print(f" Entropy of layer = {entropy}")
dead_count = (freqs == 0).sum()
total_neurons = len(freqs)
fraction_dead = dead_count / total_neurons
print(f" Fractions of dead neurons in layer = {fraction_dead:.3f} ({dead_count} out of {total_neurons})")
print()
df.at[layer] = (module_type, mu_sparsity, sd_sparsity, entropy, dead_count, total_neurons, fraction_dead)
df
print("Names of layers observed via hooks: (helpful for specifying 'layer' in the cell below.\n")
pprint(list(layer_on_freqs.keys()))
# Choose one layer to visualize the frequency of it's output units being on or off.
layer = "linear_kwinner"
on_freqs = layer_on_freqs[layer].cpu().numpy()
fig = sns.histplot(data=on_freqs, kde=True, stat="probability")
fig.set_xlabel("on-frequency")
fig.set_title(f"How often are units on for layer '{layer}'");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The the model from a checkpoint.
Step6: Helper class and method.
Step7: Add forward hooks to all layers of the model
Step8: A Brief Overview of the Model
Step9: Compute statistics of the output representations
Step11: Print summary of statistics
Step12: Visualize the distribution of how often each unit is on or off
|
11,218
|
<ASSISTANT_TASK:>
Python Code:
heart = sm.datasets.heart.load_pandas().data
heart.head(n=6)
heart.sort_values('age', ascending=False, inplace=True)
heart.head()
sum(heart.censors == 0)
heart.loc[(heart.censors == 1) & (heart.age < 45), 'age'].mean()
heart.groupby(['censors']).agg(['mean', 'std'])
import seaborn as sns
sns.lmplot(data=heart, x='age', y='survival', hue='censors')
pass
def flatten(list_of_lists):
    """Flatten a list of lists."""
for alist in list_of_lists:
for item in alist:
yield item
def flatmap(f, xs):
    """First map, then flatten result."""
return flatten(map(f, xs))
list(flatmap(lambda x: x.split(), ["hello world", "the quick dog"]))
def affine(x, A, b):
    """Affine transform."""
return A@x + b
def rev_affine(y, A, b):
    """Reverse affine transform."""
return np.linalg.solve(A, (y - b))
A = np.random.random((3,3,))
b = np.random.random(3)
x = np.random.random(3)
x
y = affine(x, A, b)
y
rev_affine(y, A, b)
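A quick round-trip check that `rev_affine` really inverts `affine` (using a hand-picked invertible matrix rather than random values):

```python
import numpy as np

# round trip: the inverse map recovers the original vector
M = np.array([[2.0, 0.0], [1.0, 3.0]])
c = np.array([1.0, -1.0])
v = np.array([0.5, 2.0])

w = M @ v + c                        # forward map, as in affine()
v_back = np.linalg.solve(M, w - c)   # inverse, as in rev_affine()
print(np.allclose(v_back, v))        # True
```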
dna = '''> A simulated DNA sequence.
TTAGGCAGTAACCCCGCGATAGGTAGAGCACGCAATCGTCAAGGCGTGCGGTAGGGCTTCCGTGTCTTACCCAAAGAAAC
GACGTAACGTTCCCCGGGCGGTTAAACCAAATCCACTTCACCAACGGCATAACGCGAAGCCCAAACTAAATCGCGCTCGA
GCGGACGCACATTCGCTAGGCTGTGTAGGGGCAGTCTCCGTTAAGGACGATTACCACGTGATGGTAGTTCGCAACATTGG
ACTGTCGGGAATTCCCGAAGGCACTTAAGCGGAGTCTTAGCGTACAGTAACGCAGTCCCGCGTGAACGACTGACAGATGA
'''
dna = ''.join(dna.split('\n')[1:])
dna
from collections import Counter
c = Counter(zip(dna[:-1], dna[1:]))
c
sum(c.values())
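The dinucleotide counts can be turned into transition probabilities, i.e. a first-order Markov model of the sequence; a sketch on a short toy string:

```python
from collections import Counter, defaultdict

seq = 'TTAGGC'
pair_counts = Counter(zip(seq[:-1], seq[1:]))

# convert dinucleotide counts into per-base transition probabilities
totals = defaultdict(int)
for (a, _), n in pair_counts.items():
    totals[a] += n
probs = {(a, b): n / totals[a] for (a, b), n in pair_counts.items()}
print(probs[('T', 'T')], probs[('T', 'A')])  # 0.5 0.5
```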
def sgd(b, x, y, max_iter, alpha):
n = x.shape[0]
for i in range(max_iter):
for j in range(n):
b[0] -= alpha * (2*(b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
            b[1] -= alpha * (2*x[j] * (b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
b[2] -= alpha * (2*x[j]**2 * (b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
return b
%%time
np.random.seed(12345)
n = 10000
x = np.linspace(0, 10, n)
y = 2*x**2 + 6*x + 3 + np.random.normal(0, 5, n)
k = 100
alpha = 0.00001
b0 = np.random.random(3)
b = sgd(b0, x, y, k, alpha)
yhat = b[0] + b[1]*x+ b[2]*x**2
idx = sorted(np.random.choice(n, 100))
plt.scatter(x[idx], y[idx])
plt.plot(x[idx], yhat[idx], c='red')
pass
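As a sanity check on the SGD estimates, the same quadratic can be fit in closed form with `np.polyfit`; the coefficients should land near the true values 2, 6, and 3:

```python
import numpy as np

np.random.seed(12345)
n = 10000
x_fit = np.linspace(0, 10, n)
y_fit = 2*x_fit**2 + 6*x_fit + 3 + np.random.normal(0, 5, n)

# closed-form least-squares fit of the quadratic, highest degree first
b2, b1, b0 = np.polyfit(x_fit, y_fit, deg=2)
print(round(b2, 1), round(b1, 1), round(b0, 1))
```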
from numba import jit
@jit(nopython = True)
def sgd_numba(b, x, y, max_iter, alpha):
n = x.shape[0]
for i in range(max_iter):
for j in range(n):
b[0] -= alpha * (2*(b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
            b[1] -= alpha * (2*x[j] * (b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
b[2] -= alpha * (2*x[j]**2 * (b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
return b
# Call once to trigger compilation
sgd_numba(b0, x, y, k, alpha);
%%time
np.random.seed(12345)
n = 10000
x = np.linspace(0, 10, n)
y = 2*x**2 + 6*x + 3 + np.random.normal(0, 5, n)
k = 100
alpha = 0.00001
b0 = np.random.random(3)
b = sgd_numba(b0, x, y, k, alpha)
yhat = b[0] + b[1]*x+ b[2]*x**2
idx = sorted(np.random.choice(n, 100))
plt.scatter(x[idx], y[idx])
plt.plot(x[idx], yhat[idx], c='red')
pass
%load_ext cython
%%cython -a
cimport cython
@cython.wraparound(False)
@cython.boundscheck(False)
def sgd_cython(double[:] b, double[:] x, double[:] y, int max_iter, double alpha):
cdef int n = x.shape[0]
cdef int i, j
for i in range(max_iter):
for j in range(n):
b[0] -= alpha * (2*(b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
            b[1] -= alpha * (2*x[j] * (b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
b[2] -= alpha * (2*x[j]**2 * (b[0] + b[1]*x[j] + b[2]*x[j]**2 - y[j]))
return b
%%time
np.random.seed(12345)
n = 10000
x = np.linspace(0, 10, n)
y = 2*x**2 + 6*x + 3 + np.random.normal(0, 5, n)
k = 100
alpha = 0.00001
b0 = np.random.random(3)
b = sgd_cython(b0, x, y, k, alpha)
yhat = b[0] + b[1]*x+ b[2]*x**2
idx = sorted(np.random.choice(n, 100))
plt.scatter(x[idx], y[idx])
plt.plot(x[idx], yhat[idx], c='red')
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Q2 (10 points)
Step6: Q3 (10 points)
Step7: Q4 (10 points)
Step8: Q5 (10 points)
|
11,219
|
<ASSISTANT_TASK:>
Python Code:
# from __future__ import unicode_literals, division
import time, datetime, requests, itchat
from itchat.content import *
import tagui as t
from time import localtime, strftime
# import pandas as pd
# Function
# input : Parcel ID, type: string
# Return: File name of screenshot png image containing parcel status results, type: string
def Enquire_Parcel(str_parcel_id):
# print('str_parcel_id : ', str_parcel_id)
result_filename = 'results/' + str(str_parcel_id).lstrip() + strftime("-%Y-%m-%d-%Hm%Mm%Ss", localtime()) + '.png'
# print('result_filename : ', result_filename)
t.init(visual_automation = True)
t.url('http://qexpress.co.nz/tracking.aspx?orderNumber=' + str(str_parcel_id).lstrip())
t.wait(0.5)
t.keyboard('[end]')
t.wait(0.5)
t.snap('page.png', result_filename)
t.wait(0.5)
t.close()
print('[ Enquie Parcel ] ID : {} | File Name : {}'.format(str_parcel_id, result_filename))
    return result_filename
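The result filename is just the parcel ID plus a timestamp; the same construction in isolation (using the sample ID from the test cell below):

```python
from time import localtime, strftime

parcel_id = 'DZ140053181NZ'  # sample parcel ID from the notebook
# e.g. results/DZ140053181NZ-2020-01-01-12m00m00s.png
filename = 'results/' + parcel_id + strftime("-%Y-%m-%d-%Hm%Mm%Ss", localtime()) + '.png'
print(filename)
```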
#######################################################################
### Optional Adhoc Test
### Function: Enquire_Parcel
#######################################################################
msg_parcel_id = 'DZ140053181NZ'
msg_filename = Enquire_Parcel(msg_parcel_id)
# itchat.auto_login(hotReload=True)  # hotReload=True: keep the login state after exit, so restarting within a certain time does not require rescanning the QR code
itchat.auto_login(enableCmdQR=-2)  # enableCmdQR=-2: display the QR code image in the terminal
# If you click Kernel -> Interrupt, then hot re-login is possible:
# itchat.auto_login(hotReload=True)
#######################################################################
### Optional Adhoc Test
### Send Enquire_Parcel status through WeChat
#######################################################################
# Locate the user by nickname
# friend = itchat.search_friends(name=u'IPA-Bot')
friend = itchat.search_friends(name=u'白黑')
for i in range(0, len(friend)):
print('NickName : %s' % friend[i]['NickName'])
print('Alias A-ID: %s' % friend[i]['Alias'])
print('RemarkName: %s' % friend[i]['RemarkName'])
print('UserName : %s' % friend[i]['UserName'])
# Send a picture
reply = itchat.send_image(msg_filename, friend[0]['UserName'])
print(reply['BaseResponse']['ErrMsg'])
# Auto-reply when receiving [TEXT, MAP, CARD, NOTE, SHARING] messages:
@itchat.msg_register([TEXT, MAP, CARD, NOTE, SHARING])  # text, location, business card, notification, sharing
def text_reply(msg):
print(u'[ Terminal Info ] Thank you! 谢谢亲[嘴唇]我已收到 I received: [ %s ] %s From: %s'
% (msg['Type'], msg['Text'], msg['FromUserName']))
itchat.send(u'Thank you! 谢谢亲[嘴唇]我已收到\nI received:\n[ %s ]\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])
######################################################################################################
# Parcel Enquiry module
######################################################################################################
if "Parcel" in msg['Text'] or "parcel" in msg['Text']: # Check parcel enquiry command keyword: Parcel
msg_filename = Enquire_Parcel(msg['Text'].replace('Parcel ','').replace('parcel ','')) # Extract Parcel ID from message
itchat.send_image(msg_filename, msg['FromUserName'])
######################################################################################################
# Auto-save received [PICTURE, RECORDING, ATTACHMENT, VIDEO] messages:
@itchat.msg_register([PICTURE, RECORDING, ATTACHMENT, VIDEO])  # picture, voice recording, file, video
def download_files(msg):
msg['Text'](msg['FileName'])
print(u'[ Terminal Info ] Thank you! 谢谢亲[嘴唇]我已收到 I received: [ %s ] %s From: %s'
% ({'Picture': 'img', 'Video': 'vid'}.get(msg['Type'], 'fil'), msg['FileName'], msg['FromUserName']))
itchat.send(u'Thank you! 谢谢亲[嘴唇]我已收到\nI received:', msg['FromUserName'])
return '@%s@%s' % ({'Picture': 'img', 'Video': 'vid'}.get(msg['Type'], 'fil'), msg['FileName'])
# Automatically accept new friend requests and send a greeting:
@itchat.msg_register(FRIENDS)
def add_friend(msg):
print(u'[ Terminal Info ] New Friend Request 新朋友的请求,自动通过验证添加加好友 From: %s' % msg['RecommendInfo']['UserName'])
    itchat.add_friend(**msg['Text'])  # this automatically registers the new friend, so no need to reload the contact list
itchat.send_msg(u'幸会幸会!Nice to meet you!', msg['RecommendInfo']['UserName'])
# In a group chat, auto-reply to text messages that @-mention this account:
@itchat.msg_register(TEXT, isGroupChat=True)
def text_reply(msg):
if msg['isAt']:
print(u'[ Terminal Info ] Group@Info 在群里收到@自己的文字信息: %s From: %s %s'
% (msg['Content'], msg['ActualNickName'], msg['FromUserName']))
itchat.send(u'@%s\u2005I received: %s' % (msg['ActualNickName'], msg['Content']), msg['FromUserName'])
itchat.run()
# Click Kernel -> Interrupt, then log out
itchat.logout()  # safe logout
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function
Step2:
Step3: * Log in web WeChat using QR code image / 用微信App扫QR码图片来自动登录
Step4:
Step5: * Interactive Conversation
Step6: Try the parcel enquiry bot by yourself
|
11,220
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
# Generate features matrix and target vector
X, y = make_classification(n_samples = 10000,
n_features = 3,
n_informative = 3,
n_redundant = 0,
n_classes = 2,
random_state = 1)
# Create logistic regression
logit = LogisticRegression()
# Cross-validate model using recall
cross_val_score(logit, X, y, scoring="recall")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate Features And Target Data
Step2: Create Logistic Regression
Step3: Cross-Validate Model Using Recall
|
11,221
|
<ASSISTANT_TASK:>
Python Code:
def quicksort():
pass
class Meta(type):
pass
class MyClass(metaclass=Meta):
pass
class MySubclass(MyClass):
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Everything in Python is represented by objects or by relations among objects. Every object has a value, a type, and an identity. The identity can be thought of as the object's address in memory, which never changes. The 'is' operator compares the identities of two objects. The type of an object is also unchangeable.<br>
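The claims above (identity compared by `is`, identity exposed as a memory address via `id()`, and an object's type being fixed) can be checked with a minimal sketch using plain built-in types:

```python
a = [1, 2, 3]
b = a          # b is bound to the very same object as a
c = [1, 2, 3]  # c is an equal but distinct object

print(a is b)          # True:  same identity
print(a is c)          # False: different identities
print(a == c)          # True:  equal values
print(id(a) == id(b))  # True:  id() exposes the identity ("address")
print(type(a))         # the type of an object cannot be reassigned
```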
|
11,222
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import networkx as nx
import numpy as np
import scipy as sp
import itertools
import matplotlib.pyplot as plt
import statsmodels.api as sm
%matplotlib inline
G = nx.Graph()
G.add_nodes_from(['A','B','C','D','E','F','G'])
G.add_edges_from([('A','B'),('A','C'),
('A','D'),('A','F'),
('B','E'),('C','E'),
('F','G')])
nx.draw_networkx(G, with_labels=True)
deg = nx.degree_centrality(G)
print(deg)
eig_c = nx.eigenvector_centrality_numpy(G)
toy_adj = nx.adjacency_matrix(G)
print(eig_c)
val,vec = np.linalg.eig(toy_adj.toarray())
print(val)
vec[:,0]
betw = nx.betweenness_centrality(G)
print(betw)
cent_scores = pd.DataFrame({'deg':deg,'eig_c':eig_c,'betw':betw})
print(cent_scores.corr())
cent_scores
G_trans = G.copy()
G_trans.add_edge('A','E')
G_trans.add_edge('F','D')
nx.draw_networkx(G_trans, with_labels=True)
print("Transitivity:")
print(nx.transitivity(G))
print(nx.transitivity(G_trans))
print("Triangles:")
print(nx.triangles(G))
print(nx.triangles(G_trans))
print("Clustering coefficient")
print(nx.clustering(G))
print(nx.clustering(G_trans))
print("Average Clustering")
print(nx.average_clustering(G))
print(nx.average_clustering(G_trans))
coms = nx.algorithms.community.centrality.girvan_newman(G)
i = 2
for com in itertools.islice(coms,4):
print(i, ' communities')
i+=1
print(tuple(c for c in com))
edges = []
with open('cosponsors.txt') as d:
for line in d:
edges.append(line.split())
dates = pd.read_csv('Dates.txt',sep='-',header=None)
dates.columns = ['year','month','day']
index_loc = np.where(dates.year==2004)
edges_04 = [edges[i] for i in index_loc[0]]
# Get nodes
senate = pd.read_csv('senate.csv')
senators = senate.loc[senate.congress==108,['id','party']]
# Create adjacency matrix
adj_mat = np.zeros([len(senators),len(senators)])
senators = pd.DataFrame(senators)
senators['adj_ind']=range(len(senators))
# Create Graph Object
senateG= nx.Graph()
senateG.add_nodes_from(senators.id)
party_dict = dict(zip(senators.id,senators.party))
nx.set_node_attributes(senateG, name='party',values=party_dict)
for bill in edges_04:
if bill[0] == "NA": continue
bill = [int(i) for i in bill]
if bill[0] not in list(senators.id): continue
combos = list(itertools.combinations(bill,2))
senateG.add_edges_from(combos)
for pair in combos:
i = senators.loc[senators.id == int(pair[0]), 'adj_ind']
j = senators.loc[senators.id == int(pair[1]), 'adj_ind']
adj_mat[i,j]+=1
adj_mat[j,i]+=1
for row in range(len(adj_mat)):
cols = np.where(adj_mat[row,:])[0]
i = senators.loc[senators.adj_ind==row,'id']
i = int(i)
for col in cols:
j = senators.loc[senators.adj_ind==col,'id']
j = int(j)
senateG[i][j]['bills']=adj_mat[row,col]
bill_dict = nx.get_edge_attributes(senateG,'bills')
elarge=[(i,j) for (i,j) in bill_dict if bill_dict[(i,j)] >40]
nx.draw_spring(senateG, edgelist = elarge,with_labels=True)
senateGt= nx.Graph()
senateGt.add_nodes_from(senateG.nodes)
senateGt.add_edges_from(elarge)
deg = senateGt.degree()
rem = [n[0] for n in deg if n[1]==0]
senateGt_all = senateGt.copy()
senateGt.remove_nodes_from(rem)
nx.draw_spring(senateGt,with_labels=True)
foo=pd.DataFrame({'tup':deg})
deg = senateGt.degree()
foo = pd.DataFrame(foo)
foo[['grp','deg']]=foo['tup'].apply(pd.Series)
foo.deg.plot.hist()
party = nx.get_node_attributes(senateG,'party')
dems = []
gop = []
for i in party:
if party[i]==100: dems.append(i)
else: gop.append(i)
pos = nx.spring_layout(senateGt)
pos_all = nx.circular_layout(senateG)
dem_dict={}
gop_dict={}
dem_lone = {}
gop_lone= {}
for n in dems:
if n in rem: dem_lone[n]=pos_all[n]
else:dem_dict[n] = pos[n]
for n in gop:
if n in rem: gop_lone[n]=pos_all[n]
else:gop_dict[n] = pos[n]
dems = list(set(dems)-set(rem))
gop = list(set(gop)-set(rem))
nx.draw_networkx_nodes(senateGt, pos=dem_dict, nodelist = dems,node_color='b',node_size = 100)
nx.draw_networkx_nodes(senateGt, pos=gop_dict, nodelist = gop,node_color='r', node_size = 100)
nx.draw_networkx_nodes(senateG, pos=dem_lone, nodelist = list(dem_lone.keys()),node_color='b',node_size = 200)
nx.draw_networkx_nodes(senateG, pos=gop_lone, nodelist = list(gop_lone.keys()),node_color='r', node_size = 200)
nx.draw_networkx_edges(senateGt,pos=pos, edgelist=elarge)
dems = list(set(dems)-set(rem))
gop = list(set(gop)-set(rem))
nx.draw_networkx_nodes(senateGt, pos=dem_dict, nodelist = dems,node_color='b',node_size = 100)
nx.draw_networkx_nodes(senateGt, pos=gop_dict, nodelist = gop,node_color='r', node_size = 100)
nx.draw_networkx_nodes(senateGt_all, pos=dem_lone, nodelist = list(dem_lone.keys()),node_color='b',node_size = 100)
nx.draw_networkx_nodes(senateGt_all, pos=gop_lone, nodelist = list(gop_lone.keys()),node_color='r', node_size = 100)
nx.draw_networkx_edges(senateGt,pos=pos, edgelist=elarge)
colors = greedy_modularity_communities(senateGt, weight = 'bills')
pos = nx.spring_layout(senateGt)
pos0={}
pos1={}
for n in colors[0]:
pos0[n] = pos[n]
for n in colors[1]:
pos1[n] = pos[n]
nx.draw_networkx_nodes(senateGt, pos=pos0, nodelist = colors[0],node_color='r')
nx.draw_networkx_nodes(senateGt, pos=pos1, nodelist = colors[1],node_color='b')
nx.draw_networkx_edges(senateGt,pos=pos, edgelist=elarge)
print('gop misclassification')
for i in colors[1]:
if i in dems: print(i,len(senateGt[i]))
print('dem misclassification')
for i in colors[0]:
if i in gop: print(i,len(senateGt[i]))
sh = pd.read_csv('SH.tab',sep='\t')
sh['dem']= sh.party==100
sh['dem']=sh.dem*1
model_data = sh.loc[
(sh.congress == 108) & (sh.chamber=='S'),
['ids','dem','pb','pa']
]
model_data['passed']=model_data.pb+model_data.pa
model_data.set_index('ids',inplace=True)
bet_cent = nx.betweenness_centrality(senateG,weight='bills')
bet_cent = pd.Series(bet_cent)
deg_cent = nx.degree_centrality(senateGt)
deg_cent = pd.Series(deg_cent)
model_data['between']=bet_cent
model_data['degree']=deg_cent
y =model_data.loc[:,'passed']
x =model_data.loc[:,['degree','dem']]
x['c'] = 1
ols_model1 = sm.OLS(y,x,missing='drop')
results = ols_model1.fit()
print(results.summary())
y =model_data.loc[:,'passed']
x =model_data.loc[:,['between','dem']]
x['c'] = 1
ols_model1 = sm.OLS(y,x,missing='drop')
results = ols_model1.fit()
print(results.summary())
# Some functions from the NetworkX package
import heapq
class MappedQueue(object):
    """The MappedQueue class implements an efficient minimum heap. The
    smallest element can be popped in O(1) time, new elements can be pushed
    in O(log n) time, and any element can be removed or updated in O(log n)
    time. The queue cannot contain duplicate elements and an attempt to push an
    element already in the queue will have no effect.

    MappedQueue complements the heapq package from the python standard
    library. While MappedQueue is designed for maximum compatibility with
    heapq, it has slightly different functionality.

    Examples
    --------
    A `MappedQueue` can be created empty or optionally given an array of
    initial elements. Calling `push()` will add an element and calling `pop()`
    will remove and return the smallest element.

    >>> q = MappedQueue([916, 50, 4609, 493, 237])
    >>> q.push(1310)
    True
    >>> x = [q.pop() for i in range(len(q.h))]
    >>> x
    [50, 237, 493, 916, 1310, 4609]

    Elements can also be updated or removed from anywhere in the queue.

    >>> q = MappedQueue([916, 50, 4609, 493, 237])
    >>> q.remove(493)
    >>> q.update(237, 1117)
    >>> x = [q.pop() for i in range(len(q.h))]
    >>> x
    [50, 916, 1117, 4609]

    References
    ----------
    .. [1] Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2001).
       Introduction to algorithms second edition.
    .. [2] Knuth, D. E. (1997). The art of computer programming (Vol. 3).
       Pearson Education.
    """
def __init__(self, data=[]):
        """Priority queue class with updatable priorities."""
self.h = list(data)
self.d = dict()
self._heapify()
def __len__(self):
return len(self.h)
def _heapify(self):
        """Restore heap invariant and recalculate map."""
heapq.heapify(self.h)
self.d = dict([(elt, pos) for pos, elt in enumerate(self.h)])
if len(self.h) != len(self.d):
raise AssertionError("Heap contains duplicate elements")
def push(self, elt):
        """Add an element to the queue."""
# If element is already in queue, do nothing
if elt in self.d:
return False
# Add element to heap and dict
pos = len(self.h)
self.h.append(elt)
self.d[elt] = pos
# Restore invariant by sifting down
self._siftdown(pos)
return True
def pop(self):
        """Remove and return the smallest element in the queue."""
# Remove smallest element
elt = self.h[0]
del self.d[elt]
# If elt is last item, remove and return
if len(self.h) == 1:
self.h.pop()
return elt
# Replace root with last element
last = self.h.pop()
self.h[0] = last
self.d[last] = 0
# Restore invariant by sifting up, then down
pos = self._siftup(0)
self._siftdown(pos)
# Return smallest element
return elt
def update(self, elt, new):
        """Replace an element in the queue with a new one."""
# Replace
pos = self.d[elt]
self.h[pos] = new
del self.d[elt]
self.d[new] = pos
# Restore invariant by sifting up, then down
pos = self._siftup(pos)
self._siftdown(pos)
def remove(self, elt):
        """Remove an element from the queue."""
# Find and remove element
try:
pos = self.d[elt]
del self.d[elt]
except KeyError:
# Not in queue
raise
# If elt is last item, remove and return
if pos == len(self.h) - 1:
self.h.pop()
return
# Replace elt with last element
last = self.h.pop()
self.h[pos] = last
self.d[last] = pos
# Restore invariant by sifting up, then down
pos = self._siftup(pos)
self._siftdown(pos)
def _siftup(self, pos):
        """Move element at pos down to a leaf by repeatedly moving the smaller
        child up."""
h, d = self.h, self.d
elt = h[pos]
# Continue until element is in a leaf
end_pos = len(h)
left_pos = (pos << 1) + 1
while left_pos < end_pos:
# Left child is guaranteed to exist by loop predicate
left = h[left_pos]
try:
right_pos = left_pos + 1
right = h[right_pos]
# Out-of-place, swap with left unless right is smaller
if right < left:
h[pos], h[right_pos] = right, elt
pos, right_pos = right_pos, pos
d[elt], d[right] = pos, right_pos
else:
h[pos], h[left_pos] = left, elt
pos, left_pos = left_pos, pos
d[elt], d[left] = pos, left_pos
except IndexError:
# Left leaf is the end of the heap, swap
h[pos], h[left_pos] = left, elt
pos, left_pos = left_pos, pos
d[elt], d[left] = pos, left_pos
# Update left_pos
left_pos = (pos << 1) + 1
return pos
def _siftdown(self, pos):
        """Restore invariant by repeatedly replacing out-of-place element with
        its parent."""
h, d = self.h, self.d
elt = h[pos]
# Continue until element is at root
while pos > 0:
parent_pos = (pos - 1) >> 1
parent = h[parent_pos]
if parent > elt:
# Swap out-of-place element with parent
h[parent_pos], h[pos] = elt, parent
parent_pos, pos = pos, parent_pos
d[elt] = pos
d[parent] = parent_pos
else:
# Invariant is satisfied
break
return pos
from __future__ import division
import networkx as nx
from networkx.algorithms.community.quality import modularity
def greedy_modularity_communities(G, weight=None):
    """Find communities in graph using Clauset-Newman-Moore greedy modularity
    maximization. This method currently supports the Graph class; this
    version also accepts an optional edge weight attribute.

    Greedy modularity maximization begins with each node in its own community
    and joins the pair of communities that most increases modularity until no
    such pair exists.

    Parameters
    ----------
    G : NetworkX graph

    Returns
    -------
    Yields sets of nodes, one for each community.

    Examples
    --------
    >>> from networkx.algorithms.community import greedy_modularity_communities
    >>> G = nx.karate_club_graph()
    >>> c = list(greedy_modularity_communities(G))
    >>> sorted(c[0])
    [8, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]

    References
    ----------
    .. [1] M. E. J Newman 'Networks: An Introduction', page 224
       Oxford University Press 2011.
    .. [2] Clauset, A., Newman, M. E., & Moore, C.
       "Finding community structure in very large networks."
       Physical Review E 70(6), 2004.
    """
# Count nodes and edges
N = len(G.nodes())
m = sum([d.get('weight', 1) for u, v, d in G.edges(data=True)])
q0 = 1.0 / (2.0*m)
# Map node labels to contiguous integers
label_for_node = dict((i, v) for i, v in enumerate(G.nodes()))
node_for_label = dict((label_for_node[i], i) for i in range(N))
# Calculate degrees
k_for_label = G.degree(G.nodes(), weight=weight)
k = [k_for_label[label_for_node[i]] for i in range(N)]
# Initialize community and merge lists
communities = dict((i, frozenset([i])) for i in range(N))
merges = []
# Initial modularity
partition = [[label_for_node[x] for x in c] for c in communities.values()]
q_cnm = modularity(G, partition)
# Initialize data structures
    # CNM Eq 8-9 (Eq 8 was missing a factor of 2, from A_ij + A_ji)
    # a[i]: fraction of edges within community i
    # dq_dict[i][j]: dQ for merging community i, j
    # dq_heap[i][n] : (-dq, i, j) for community i nth largest dQ
# H[n]: (-dq, i, j) for community with nth largest max_j(dQ_ij)
a = [k[i]*q0 for i in range(N)]
dq_dict = dict(
(i, dict(
(j, 2*q0 - 2*k[i]*k[j]*q0*q0)
for j in [
node_for_label[u]
for u in G.neighbors(label_for_node[i])]
if j != i))
for i in range(N))
dq_heap = [
MappedQueue([
(-dq, i, j)
for j, dq in dq_dict[i].items()])
for i in range(N)]
H = MappedQueue([
dq_heap[i].h[0]
for i in range(N)
if len(dq_heap[i]) > 0])
# Merge communities until we can't improve modularity
while len(H) > 1:
# Find best merge
# Remove from heap of row maxes
# Ties will be broken by choosing the pair with lowest min community id
try:
dq, i, j = H.pop()
except IndexError:
break
dq = -dq
# Remove best merge from row i heap
dq_heap[i].pop()
# Push new row max onto H
if len(dq_heap[i]) > 0:
H.push(dq_heap[i].h[0])
        # If this element was also at the root of row j, we need to remove the
        # duplicate entry from H
if dq_heap[j].h[0] == (-dq, j, i):
H.remove((-dq, j, i))
# Remove best merge from row j heap
dq_heap[j].remove((-dq, j, i))
# Push new row max onto H
if len(dq_heap[j]) > 0:
H.push(dq_heap[j].h[0])
else:
# Duplicate wasn't in H, just remove from row j heap
dq_heap[j].remove((-dq, j, i))
# Stop when change is non-positive
if dq <= 0:
break
# Perform merge
communities[j] = frozenset(communities[i] | communities[j])
del communities[i]
merges.append((i, j, dq))
# New modularity
q_cnm += dq
# Get list of communities connected to merged communities
i_set = set(dq_dict[i].keys())
j_set = set(dq_dict[j].keys())
all_set = (i_set | j_set) - set([i, j])
both_set = i_set & j_set
# Merge i into j and update dQ
for k in all_set:
# Calculate new dq value
if k in both_set:
dq_jk = dq_dict[j][k] + dq_dict[i][k]
elif k in j_set:
dq_jk = dq_dict[j][k] - 2.0*a[i]*a[k]
else:
# k in i_set
dq_jk = dq_dict[i][k] - 2.0*a[j]*a[k]
# Update rows j and k
for row, col in [(j, k), (k, j)]:
# Save old value for finding heap index
if k in j_set:
d_old = (-dq_dict[row][col], row, col)
else:
d_old = None
# Update dict for j,k only (i is removed below)
dq_dict[row][col] = dq_jk
# Save old max of per-row heap
if len(dq_heap[row]) > 0:
d_oldmax = dq_heap[row].h[0]
else:
d_oldmax = None
# Add/update heaps
d = (-dq_jk, row, col)
if d_old is None:
# We're creating a new nonzero element, add to heap
dq_heap[row].push(d)
else:
# Update existing element in per-row heap
dq_heap[row].update(d_old, d)
# Update heap of row maxes if necessary
if d_oldmax is None:
# No entries previously in this row, push new max
H.push(d)
else:
# We've updated an entry in this row, has the max changed?
if dq_heap[row].h[0] != d_oldmax:
H.update(d_oldmax, dq_heap[row].h[0])
# Remove row/col i from matrix
i_neighbors = dq_dict[i].keys()
for k in i_neighbors:
# Remove from dict
dq_old = dq_dict[k][i]
del dq_dict[k][i]
# Remove from heaps if we haven't already
if k != j:
# Remove both row and column
for row, col in [(k, i), (i, k)]:
# Check if replaced dq is row max
d_old = (-dq_old, row, col)
if dq_heap[row].h[0] == d_old:
# Update per-row heap and heap of row maxes
dq_heap[row].remove(d_old)
H.remove(d_old)
# Update row max
if len(dq_heap[row]) > 0:
H.push(dq_heap[row].h[0])
else:
# Only update per-row heap
dq_heap[row].remove(d_old)
del dq_dict[i]
# Mark row i as deleted, but keep placeholder
dq_heap[i] = MappedQueue()
# Merge i into j and update a
a[j] += a[i]
a[i] = 0
communities = [
frozenset([label_for_node[i] for i in c])
for c in communities.values()]
return sorted(communities, key=len, reverse=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Toy Network
Step2: Centrality
Step3: Eigenvector Centrality
Step4: Betweenness Centrality
Step5: Centrality Measures Are Different
Step6: Transitivity
Step7: Measure Transitivity
Step8: Clustering Coefficient
Step9: Community Detection
Step10: Real Network
Step11: Subset the Data
Step12: Subset the Data
Step13: Create the network (two ways)
Step14: Set edge weights for Network Object
Step15: Thresholding
Step16: Look at the network
Step17: Take out the singletons to get a clearer picture
Step18: Look at the degree distribution
Step19: Look at party in the network
Step20: Prepare the Visualization
Step21: Visualize the network by party
Step22: Do it again with a lower threshold
Step23: Modularity
Step24: Visualize the Communities
Step25: How did we do?
Step26: Pretty, but now what?
Step27: Merge in some network data
Step28: Degree is not significant
Step29: Betweeness is!
Step40: Questions?
|
11,223
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-0', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
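Boolean properties such as the counter-gradient flag take an unquoted `True` or `False`. A hedged sketch with a hypothetical stand-in for the notebook's `DOC` object — `True` here is an arbitrary placeholder, not the model's documented setting:

```python
# Illustration only; True is a placeholder, not the model's documented setting.
class _ExampleDoc:
    """Minimal stand-in for the notebook's pyesdoc DOC object (hypothetical)."""
    def __init__(self):
        self.properties = {}
        self._current_id = None

    def set_id(self, prop_id):
        self._current_id = prop_id

    def set_value(self, value):
        self.properties.setdefault(self._current_id, []).append(value)

doc = _ExampleDoc()
doc.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
doc.set_value(True)  # bare boolean, no quotes
```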
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
11,224
|
<ASSISTANT_TASK:>
Python Code:
def minimum_sum(n, k):
    if k % n == 0:
        return 0
    return 1

n = 3
k = 56
print(minimum_sum(n, k))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
11,225
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
import matplotlib.image as mpimg
from IPython.display import Image
from astropy.io import fits
import aplpy
#Disable astropy/aplpy logging
import logging
logger0 = logging.getLogger('astropy')
logger0.setLevel(logging.CRITICAL)
logger1 = logging.getLogger('aplpy')
logger1.setLevel(logging.CRITICAL)
from IPython.display import HTML
HTML('../style/code_toggle.html')
fig = plt.figure(figsize=(16, 7))
gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-model.fits', \
figure=fig, subplot=[0.0,0.1,0.35,0.8])
gc1.show_colorscale(vmin=-0.1, vmax=1.0, cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Sky Model')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-psf.fits', \
figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('KAT-7 PSF')
gc2.add_colorbar()
fig.canvas.draw()
fig = plt.figure(figsize=(16, 5))
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-model.fits')
skyModel = fh[0].data
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-psf.fits')
psf = fh[0].data
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-dirty.fits')
dirtyImg = fh[0].data
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits')
residualImg = fh[0].data
#convolve the sky model with the PSF
sampFunc = np.fft.fft2(psf) #sampling function
skyModelVis = np.fft.fft2(skyModel) #sky model visibilities
sampModelVis = sampFunc * skyModelVis #sampled sky model visibilities
convImg = np.fft.fftshift(np.fft.ifft2(sampModelVis)).real + residualImg #sky model convolved with PSF
gc1 = aplpy.FITSFigure(convImg, figure=fig, subplot=[0,0.0,0.30,1])
gc1.show_colorscale(vmin=-1., vmax=3.0, cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('PSF convolved with Sky Model')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure(dirtyImg, figure=fig, subplot=[0.33,0.0,0.30,1])
gc2.show_colorscale(vmin=-1., vmax=3.0, cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('Dirty')
gc2.add_colorbar()
gc3 = aplpy.FITSFigure(dirtyImg - convImg, figure=fig, subplot=[0.67,0.0,0.30,1])
gc3.show_colorscale(cmap='viridis')
gc3.hide_axis_labels()
gc3.hide_tick_labels()
plt.title('Difference')
gc3.add_colorbar()
fig.canvas.draw()
fig = plt.figure(figsize=(16, 7))
gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits', \
figure=fig, subplot=[0.1,0.1,0.35,0.8])
gc1.show_colorscale(vmin=-0.8, vmax=3., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Residual')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits', \
figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(vmin=-0.8, vmax=3., cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('Restored')
gc2.add_colorbar()
fig.canvas.draw()
def gauss2d(sigma):
    """Return a normalized 2d Gaussian function, sigma: size in pixels"""
    return lambda x, y: (1./(2.*np.pi*(sigma**2.))) * np.exp(-1. * ((x**2. + y**2.) / (2. * sigma**2.)))
imgSize = 512
xpos, ypos = np.mgrid[0:imgSize, 0:imgSize].astype(float)
xpos -= imgSize/2.
ypos -= imgSize/2.
sigmas = [64., 16., 4., 1.]
fig = plt.figure(figsize=(16, 7))
#Gaussian image-domain source
ax1 = plt.subplot2grid((2, 4), (0, 0))
gauss1 = gauss2d(sigmas[0])
ax1.imshow(gauss1(xpos, ypos))
ax1.axis('off')
plt.title('Sigma: %i'%int(sigmas[0]))
#Gaussian image-domain source
ax2 = plt.subplot2grid((2, 4), (0, 1))
gauss2 = gauss2d(sigmas[1])
ax2.imshow(gauss2(xpos, ypos))
ax2.axis('off')
plt.title('Sigma: %i'%int(sigmas[1]))
#Gaussian image-domain source
ax3 = plt.subplot2grid((2, 4), (0, 2))
gauss3 = gauss2d(sigmas[2])
ax3.imshow(gauss3(xpos, ypos))
ax3.axis('off')
plt.title('Sigma: %i'%int(sigmas[2]))
#Gaussian image-domain source
ax4 = plt.subplot2grid((2, 4), (0, 3))
gauss4 = gauss2d(sigmas[3])
ax4.imshow(gauss4(xpos, ypos))
ax4.axis('off')
plt.title('Sigma: %i'%int(sigmas[3]))
#plot the visibility flux distribution as a function of baseline length
ax5 = plt.subplot2grid((2, 4), (1, 0), colspan=4)
visGauss1 = np.abs( np.fft.fftshift( np.fft.fft2(gauss1(xpos, ypos))))
visGauss2 = np.abs( np.fft.fftshift( np.fft.fft2(gauss2(xpos, ypos))))
visGauss3 = np.abs( np.fft.fftshift( np.fft.fft2(gauss3(xpos, ypos))))
visGauss4 = np.abs( np.fft.fftshift( np.fft.fft2(gauss4(xpos, ypos))))
ax5.plot(visGauss1[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[0]))
ax5.plot(visGauss2[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[1]))
ax5.plot(visGauss3[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[2]))
ax5.plot(visGauss4[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[3]))
ax5.hlines(1., xmin=0, xmax=int(imgSize/2)-1, linestyles='dashed')
plt.legend()
plt.ylabel('Flux')
plt.xlabel('Baseline Length')
plt.xlim(0, int(imgSize/8)-1)
ax5.set_xticks([])
ax5.set_yticks([])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 6.1 Sky Models<a id='deconv
Step3: Left
Step4: Left
Step6: Left
|
11,226
|
<ASSISTANT_TASK:>
Python Code:
def nonPremCost(maxY, unitCostPrev, rPrev, prevDeduct, costBase, rBase, baseDeduct):
paidPrev = 0
paidBase = 0
# first, calculate the preventive services, suppose that 2 units per year
totalCostPrev = unitCostPrev * 2
coveredPrev = min(maxY, max(0, totalCostPrev - prevDeduct) * rPrev)
#print('coveredPrev=', coveredPrev)
coveredBase = maxY - coveredPrev
paidPrev = totalCostPrev - coveredPrev;
#print('paidPrev=', paidPrev)
# second, calculate the basic filling services, suppose that total cost is costBase
if coveredBase <= 0:
paidBase = costBase
else:
paidBase = costBase - min(coveredBase, rBase*max(0, costBase-baseDeduct))
extraCost = paidPrev + paidBase
return extraCost
unitCostPrev = 150 # cost estimate for each teeth cleaning
x = [50*i for i in range(41)] # vector for total base cost
# BlueCare lower $28.93
# nonPremCost(1000, unitCostPrev, 1, 100, costBase, 0.8, 100)
yB1 = [nonPremCost(1000, unitCostPrev, 1, 100, costBase, 0.8, 100) for costBase in x]
yB1t = [i+28.93*12 for i in yB1] # total out-of-pocket (absolute)
yB1a = [i/.7+28.93*12 for i in yB1] # adjusted to pre-tax cost
# BlueCare Higher $ 44.76
# nonPremCost(1000, unitCostPrev, 1, 0, costBase, 0.8, 60)
yB2 = [nonPremCost(1000, unitCostPrev, 1, 0, costBase, 0.8, 60) for costBase in x]
yB2t = [i+44.76*12 for i in yB2]
yB2a = [i/.7+44.76*12 for i in yB2]
# Metlife Option 1 $33.62
# nonPremCost(1000, unitCostPrev, 1, 0, costBase, 0.7, 75)
yM1 = [nonPremCost(1000, unitCostPrev, 1, 0, costBase, 0.7, 75) for costBase in x]
yM1t = [i+33.62*12 for i in yM1]
yM1a = [i/.7+33.62*12 for i in yM1]
# Metlife Option 2 $37.5
# nonPremCost(1500, unitCostPrev, 1, 0, costBase, 0.7, 50)
yM2 = [nonPremCost(1500, unitCostPrev, 1, 0, costBase, 0.7, 50) for costBase in x]
yM2t = [i+37.5*12 for i in yM2]
yM2a = [i/.7+37.5*12 for i in yM2]
# Metlife Option 3 $42.36
# nonPremCost(2000, unitCostPrev, 1, 0, costBase, 0.8, 25)
yM3 = [nonPremCost(2000, unitCostPrev, 1, 0, costBase, 0.8, 25) for costBase in x]
yM3t = [i+42.36*12 for i in yM3]
yM3a = [i/.7+42.36*12 for i in yM3]
# Guardian Option 1 $ 37.07
# nonPremCost(1000, unitCostPrev, 1, 50, costBase, 0.7, 0)
yG1 = [nonPremCost(1000, unitCostPrev, 1, 50, costBase, 0.7, 0) for costBase in x]
yG1t = [i+37.07*12 for i in yG1]
yG1a = [i/.7+37.07*12 for i in yG1]
# Guardian Option 2 $ 25.50
# nonPremCost(500, unitCostPrev, 0.8, 50, costBase, 0.5, 0)
yG2 = [nonPremCost(500, unitCostPrev, 0.8, 50, costBase, 0.5, 0) for costBase in x]
yG2t = [i+25.50*12 for i in yG2]
yG2a = [i/.7+25.50*12 for i in yG2]
# Metlife Option 4 $37.5
# nonPremCost(1500, unitCostPrev, 1, 0, costBase, 0.8, 50)
yMg = [nonPremCost(1500, unitCostPrev, 1, 0, costBase, 0.8, 50) for costBase in x]
yMgt = [i+37.5*12 for i in yMg]
yMga = [i/.7+37.5*12 for i in yMg]
y = [i+2*unitCostPrev for i in x]
plotChoice = 2
if plotChoice == 0:
plt.plot(x,yB1,'b--',x,yB2,'b^',x,yM1,'r--',x,yM2,'r^',x,yM3,'r*',x,yG1,'g--', x,yG2,'g^', x, yMg, 'ro')
plt.title('Extra Dental Cost when preventive cost: %s' %(unitCostPrev*2))
plt.xlabel('Estimate of basic restorative service cost')
plt.ylabel('Out-of-pocket copayment (w/o annual premium)')
elif plotChoice == 1:
plt.plot(x,y,'k', x,yB1t,'b--',x,yB2t,'b^',x,yM1t,'r--',x,yM2t,'r^',x,yM3t,'r*',x,yG1t,'g--', x,yG2t,'g^')
plt.title('Dental Cost in the bill when preventive cost: %s' %(unitCostPrev*2))
plt.xlabel('Estimate of basic restorative service cost')
plt.ylabel('Cost in bill (annual premium plus out-of-pocket copayment)')
elif plotChoice == 2:
plt.plot(x,y,'k',x,yB1a,'b--',x,yB2a,'b^',x,yM1a,'r--',x,yM2a,'r^',x,yM3a,'r*',x,yG1a,'g--', x,yG2a,'g^', x, yMga, 'ro')
plt.title('Adjusted Pre-Tax Dental Cost when preventive cost: %s' %(unitCostPrev*2))
plt.xlabel('Estimate of basic restorative service cost')
plt.ylabel('Adjusted pre-tax dental cost (annual premium plus out-of-pocket copayment)')
else:
    print('no figure generated')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here, we consider different dental insurance options which are available online for individual and group member plans.
|
11,227
|
<ASSISTANT_TASK:>
Python Code:
import os
import time
import sys
import requests
POP20_CC = ('CN IN US ID BR PK NG BD RU JP '
'MX PH VN ET EG DE IR TR CD FR').split()
BASE_URL = 'http://flupy.org/data/flags'
DEST_DIR = 'downloads/'
def save_flag(img, filename):
path = os.path.join(DEST_DIR, filename)
with open(path, 'wb') as fp:
fp.write(img)
def get_flag(cc):
url = '{}/{cc}/{cc}.gif'.format(BASE_URL, cc=cc.lower())
resp = requests.get(url)
return resp.content
def show(text):
    print(text, end=' ')  # the trailing newline in the output becomes a space
    sys.stdout.flush()  # Python normally flushes stdout only on newlines, so flush the buffer manually
def download_many(cc_list):
for cc in sorted(cc_list):
image = get_flag(cc)
show(cc)
save_flag(image, cc.lower() + '.gif')
return len(cc_list)
def main(download_many):
t0 = time.time()
count = download_many(POP20_CC)
elapsed = time.time() - t0
msg = '\n{} flags downloaded in {:.2f}s'
print(msg.format(count, elapsed))
main(download_many)
from concurrent import futures
MAX_WORKERS = 20
def download_one(cc):
image = get_flag(cc)
show(cc)
save_flag(image, cc.lower() + '.gif')
return cc
def download_many(cc_list):
workers = min(MAX_WORKERS, len(cc_list))
with futures.ThreadPoolExecutor(workers) as executor:
        # similar to the built-in map, except that download_one is called concurrently from multiple threads
        # map returns a generator, so it can be iterated to retrieve each function's return value
res = executor.map(download_one, sorted(cc_list))
return len(list(res))
main(download_many)
def download_many(cc_list):
cc_list = cc_list[:5]
    # max_workers is hard-coded to 3 so that the pending futures can be observed in the output
with futures.ThreadPoolExecutor(max_workers=3) as executor:
to_do = []
        # iterate over the country codes alphabetically, to make clear that output order matches input order
for cc in sorted(cc_list):
            # the submit method schedules the callable for execution and returns a future representing the pending operation
future = executor.submit(download_one, cc)
to_do.append(future)
msg = 'Scheduled for {}: {}'
print(msg.format(cc, future))
results = []
for future in futures.as_completed(to_do):
res = future.result()
msg = '{} result: {!r}'
print(msg.format(future, res))
results.append(res)
return len(results)
main(download_many)
from concurrent import futures
MAX_WORKERS = 20
def download_one(cc):
image = get_flag(cc)
show(cc)
save_flag(image, cc.lower() + '.gif')
return cc
def download_many(cc_list):
workers = min(MAX_WORKERS, len(cc_list))
with futures.ProcessPoolExecutor() as executor:
res = executor.map(download_one, sorted(cc_list))
return len(list(res))
main(download_many)
from time import sleep, strftime
from concurrent import futures
# print the arguments, prefixed with a timestamp
def display(*args):
print(strftime('[%H:%M:%S]'), end=' ')
print(*args)
# display a message when starting, sleep for n seconds, then display a message when done
# messages are indented with tabs, with the indent level determined by n
def loiter(n):
msg = '{}loiter({}): doing nothing for {}s...'
display(msg.format('\t'*n, n, n))
sleep(n)
msg = '{}loiter({}): done'
display(msg.format('\t'*n, n))
return n * 10
def main():
display('Script starting')
executor = futures.ThreadPoolExecutor(max_workers=3)
results = executor.map(loiter, range(5))
display('results:', results)
display('Waiting for individual results:')
    # the enumerate call in the for loop implicitly invokes next(results), which in turn
    # invokes _f.result() on the (internal) _f future representing the first task (loiter(0)).
    # the result method blocks until the future finishes, so each iteration of this loop
    # waits for the next result to become ready.
for i, result in enumerate(results):
display('result {}: {}'.format(i, result))
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note a detail in the code above: after the standard-library imports os, time and sys, a blank line separates the non-standard-library requests import
Step2: For convenience we reuse some functions from the first code block. Note that the download_one function above is really the body of the for loop in the first example; concurrent code is often refactored this way, turning the body of a sequential for loop into a function so it can be invoked concurrently
Step3: Note that in this example the call to future.result() never blocks, because the future is produced by the as_completed function.
Step4: For simple uses, the only notable difference between these two implementations of the Executor interface is that ThreadPoolExecutor.__init__ requires a max_workers argument specifying the number of threads in the pool, while in ProcessPoolExecutor that argument is optional and mostly unused; the default is the number of CPUs returned by os.cpu_count(). This makes sense: for CPU-bound processing there is no point in asking for more workers than CPUs, whereas for I/O-bound processing a single ThreadPoolExecutor instance can use 10, 100 or 1000 threads; the best number depends on what the threads do and on available memory
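The as_completed semantics described in these steps can be sketched with a toy example (the work function and its timings are made up for illustration; no network access is assumed):

```python
import time
from concurrent import futures

def work(n):
    # Simulate a task whose duration depends on its argument.
    time.sleep(n / 10.0)
    return n * 10

with futures.ThreadPoolExecutor(max_workers=3) as executor:
    to_do = [executor.submit(work, n) for n in (3, 1, 2)]
    # as_completed yields each future only once it has finished,
    # so the result() call below never blocks.
    results = [f.result() for f in futures.as_completed(to_do)]

# Futures arrive in completion order (shortest sleep first), not submission order.
print(sorted(results))  # [10, 20, 30]
```

Because all three futures start immediately on the three workers, the shortest sleep completes first, which is exactly the ordering behavior as_completed exposes.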
|
11,228
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# This is how we'll access astropy's units module:
import astropy.units as u
# astropy stores fundamental physical constants like
# the mass of the sun M_sun
from astropy.constants import M_sun
masses = u.Quantity([4.31e6*M_sun, 1*M_sun, 1.4*M_sun])
masses
# Newton's gravitational constant G,
# speed of light c, radius of sun R_sun
from astropy.constants import c, G, R_sun
# Convert the masses to units of grams using the `to` method
masses.to(u.gram)
r_s = 2*G*masses/c**2
print("r_s in centimeters: {0}".format(r_s.to(u.cm)))
print("r_s in solar radii: {0}".format(r_s/R_sun))
print("r_s in AU: {0}".format(r_s.to(u.AU)))
%matplotlib inline
import matplotlib.pyplot as plt
planet_names = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune']
semimajor_axes = u.Quantity([0.38, 0.72, 1, 1.5, 5.2, 9.5, 19.1, 30.0], unit=u.AU)
print("Any planets interior to r_s?: {0}".format(any(r_s[0] > semimajor_axes)))
fig, ax = plt.subplots()
ax.bar(range(len(semimajor_axes)), semimajor_axes.to(u.AU).value,
log=True, color='k')
ax.set_xticklabels(planet_names, ha='left')
ax.axhline(r_s[0].to(u.AU).value, color='r', label='$r_s$')
ax.legend()
ax.set_ylabel('Semimajor Axis [AU]');
radius_neutron_star = 10*u.km
print("how much bigger is neutron star's radius?: {0:.2f}x bigger"
.format(float(radius_neutron_star/r_s[2])))
planet_names = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune', 'Pluto']
semimajor_axes = u.Quantity([0.38, 0.72, 1, 1.5, 5.2, 9.5, 19.1, 30.0, 40], unit=u.AU)
semimajor_axes.to(u.cm)
T = (4 * np.pi**2 * semimajor_axes**3 / (G*M_sun))**0.5
T.to(u.year)
(semimajor_axes/c).to(u.hour)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: b) Retrieve constants from astropy's constants module
Step2: c) Compute $r_s$, convert to cm
Step3: d) $r_s$ in units of solar radii
Step4: e) $r_s$ in units of AU
Step5: f) Compare $r_s$ of Sag A* to orbits of solar system planets
Step6: g) How much bigger is neutron star radius than $r_s$?
Step7: 6) Planets
Step8: b-c) Orbital period, in years
Step9: d) Light travel time in hours, given
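As a sanity check on the Kepler's-third-law computation in this record, the period of a 1 AU orbit can be recomputed in plain Python with approximate SI constants (the values below are rounded stand-ins, not astropy's):

```python
import math

# Approximate SI constants (rounded stand-ins for astropy.constants)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def orbital_period_years(a_au):
    # Kepler's third law: T = 2*pi*sqrt(a^3 / (G*M))
    a = a_au * AU
    T_seconds = 2.0 * math.pi * math.sqrt(a**3 / (G * M_sun))
    return T_seconds / (365.25 * 24 * 3600)

# Earth's orbit (1 AU) should come out very close to 1 year.
print(round(orbital_period_years(1.0), 2))  # 1.0
```

The same formula applied to the semimajor-axis array in the record reproduces the periods computed with astropy quantities, up to the rounding of the constants.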
|
11,229
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib auto
import matplotlib.pyplot as plt
from ipywidgets import interact
from bmi_live.bmi_diffusion import BmiDiffusion
def run_model(n_steps=10):
model = BmiDiffusion()
model.initialize('../data/diffusion.yaml')
grid_shape = model.get_grid_shape(0)
initial_temp = model.get_value('plate_surface__temperature')
initial_temp[20] = 100.
model.set_value('plate_surface__temperature', initial_temp)
temp = [initial_temp.reshape(grid_shape)]
for _ in range(n_steps):
model.update()
new_temp = model.get_value('plate_surface__temperature')
temp.append(new_temp.reshape(grid_shape))
model.finalize()
return temp
temperature = run_model(50)
xmin, xmax = 0., temperature[0].shape[1]
ymin, ymax = 0., temperature[0].shape[0]
im = plt.imshow(temperature[0], vmax=100, extent=[xmin, xmax, ymin, ymax], origin='upper', cmap='gist_stern')
plt.title('Plate Temperature')
plt.colorbar().ax.set_ylabel('Temperature (K)')
def plot_temperature(time_index=0):
im.set_data(temperature[time_index])
interact(plot_temperature, time_index=(0, 50, 1))
def plot_temperature(temperature):
plt.clf()
xmin, xmax = 0., temperature.shape[1]
ymin, ymax = 0., temperature.shape[0]
plt.imshow(temperature, vmin=0., vmax=10., extent=[xmin, xmax, ymin, ymax], origin='upper', cmap='Reds')
plt.title('Plate Temperature')
# plt.colorbar().ax.set_ylabel('Temperature (K)')
class HeatRunner(object):
def __init__(self, heat):
self._heat = heat
self._cache = []
def run_and_plot(self, time_index):
for _ in range(len(self._cache), time_index):
self._heat.update()
            self._cache.append(
                self._heat.get_value('plate_surface__temperature').reshape(self._heat.get_grid_shape(0)))
plot_temperature(self._cache[time_index - 1])
heat = BmiDiffusion()
heat.initialize('../data/diffusion.yaml')
init_temperature = heat.get_value('plate_surface__temperature')
init_temperature.shape = heat.get_grid_shape(0)
init_temperature[2, 2] = 100.
heat.set_value('plate_surface__temperature', init_temperature)
runner = HeatRunner(heat)
interact(runner.run_and_plot, time_index=(0, 100, 1))
current_temperature = heat.get_value('plate_surface__temperature')
current_temperature.shape = heat.get_grid_shape(0)
current_temperature[5, 7] = 100.
heat.set_value('plate_surface__temperature', current_temperature)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define a function that runs the Diffusion model through its BMI for a fixed number of time steps. Store the temperture field from each time step in a list. Return the list.
Step2: Run the model for awhile.
Step3: Define a function to visualize the temperature field at a time step (defined by an index into the list of temperature fields).
Step4: Make an interactive plot of temperature over time.
Step5: Update the model interactively
Step6: We're going to use this a little later to interactively run and plot our model.
Step7: Instantiate the heat model and initialize it.
Step8: Set the initial temperature.
Step9: Run the model interactively over time. Note that as you move the slider, the model runs to the updated time.
|
11,230
|
<ASSISTANT_TASK:>
Python Code:
from rl import *
%psource PassiveTDAgent
from mdp import sequential_decision_environment
# Action Directions
north = (0, 1)
south = (0,-1)
west = (-1, 0)
east = (1, 0)
policy = {
(0, 2): east, (1, 2): east, (2, 2): east, (3, 2): None,
(0, 1): north, (2, 1): north, (3, 1): None,
(0, 0): north, (1, 0): west, (2, 0): west, (3, 0): west,
}
our_agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n))
from mdp import value_iteration
print(value_iteration(sequential_decision_environment))
for i in range(200):
run_single_trial(our_agent,sequential_decision_environment)
print(our_agent.U)
%matplotlib inline
import matplotlib.pyplot as plt
def graph_utility_estimates(agent_program, mdp, no_of_iterations, states_to_graph):
graphs = {state:[] for state in states_to_graph}
for iteration in range(1,no_of_iterations+1):
run_single_trial(agent_program, mdp)
for state in states_to_graph:
graphs[state].append((iteration, agent_program.U[state]))
for state, value in graphs.items():
state_x, state_y = zip(*value)
plt.plot(state_x, state_y, label=str(state))
plt.ylim([0,1.2])
plt.legend(loc='lower right')
plt.xlabel('Iterations')
plt.ylabel('U')
agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n))
graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2)])
graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2), (3,2)])
%psource QLearningAgent
q_agent = QLearningAgent(sequential_decision_environment, Ne=5, Rplus=2,
alpha=lambda n: 60./(59+n))
for i in range(200):
run_single_trial(q_agent,sequential_decision_environment)
q_agent.Q
from collections import defaultdict

U = defaultdict(lambda: -1000.)  # very large negative value for the comparison below
for state_action, value in q_agent.Q.items():
state, action = state_action
if U[state] < value:
U[state] = value
U
print(value_iteration(sequential_decision_environment))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CONTENTS
Step2: The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the call method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a policy(pi) and a mdp whose utility of states will be estimated. Let us import a GridMDP object from the mdp module. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting as gamma = 0.9.
Step3: Figure 17.1 (sequential_decision_environment) is a GridMDP object and is similar to the grid shown in Figure 21.1. The rewards in the terminal states are +1 and -1 and -0.04 in rest of the states. <img src="files/images/mdp.png"> Now we define a policy similar to Fig 21.1 in the book.
Step4: Let us create our object now. We also use the same alpha as given in the footnote of the book on page 837.
Step5: The rl module also has a simple implementation to simulate iterations. The function is called run_single_trial. Now we can try our implementation. We can also compare the utility estimates learned by our agent to those obtained via value iteration.
Step6: The values calculated by value iteration
Step7: Now the values estimated by our agent after 200 trials.
Step8: We can also explore how these estimates vary with time by using plots similar to Fig 21.5a. To do so we define a function to help us with the same. We will first enable matplotlib using the inline backend.
Step9: Here is a plot of state (2,2).
Step10: It is also possible to plot multiple states on the same plot.
Step11: ACTIVE REINFORCEMENT LEARNING
Step12: The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the call method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a mdp similar to the PassiveTDAgent.
Step13: Now to try out the q_agent we make use of the run_single_trial function in rl.py (which was also used above). Let us use 200 iterations.
Step14: Now let us see the Q Values. The keys are state-action pairs, where different actions correspond according to
Step15: The Utility U of each state is related to Q by the following equation.
Step16: Let us finally compare these estimates to value_iteration results.
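The relation U(s) = max_a Q(s, a) mentioned in these steps can be written compactly as a reduction over the Q dictionary (the Q values below are made-up toys, not the agent's learned values):

```python
# Toy Q table keyed by (state, action) pairs, mirroring the shape of q_agent.Q above.
Q = {
    ((0, 0), (0, 1)): -0.5,
    ((0, 0), (1, 0)): -0.2,
    ((1, 0), (0, 1)): 0.4,
    ((1, 0), (1, 0)): 0.1,
}

# U(s) = max over actions a of Q(s, a)
states = {s for (s, _) in Q}
U = {s: max(q for (s2, _), q in Q.items() if s2 == s) for s in states}
print(U[(0, 0)], U[(1, 0)])  # -0.2 0.4
```

This does the same job as the defaultdict loop in the record, but makes explicit that each state's utility is simply the best Q value over its actions.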
|
11,231
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
data = pd.DataFrame(data={'fruit': ["banana", "apple", "banana", "apple", "banana","apple", "banana",
"apple", "apple", "apple", "banana", "banana", "apple", "banana",],
'tasty': ["yes", "no", "yes", "yes", "yes", "yes", "yes",
"yes", "yes", "yes", "yes", "no", "no", "no"],
'size': ["large", "large", "large", "small", "large", "large", "large",
"small", "large", "large", "large", "large", "small", "small"]})
print(data)
from pgmpy.models import BayesianModel
model = BayesianModel([('fruit', 'tasty'), ('size', 'tasty')]) # fruit -> tasty <- size
from pgmpy.estimators import ParameterEstimator
pe = ParameterEstimator(model, data)
print("\n", pe.state_counts('fruit')) # unconditional
print("\n", pe.state_counts('tasty')) # conditional on fruit and size
from pgmpy.estimators import MaximumLikelihoodEstimator
mle = MaximumLikelihoodEstimator(model, data)
print(mle.estimate_cpd('fruit')) # unconditional
print(mle.estimate_cpd('tasty')) # conditional
# Calibrate all CPDs of `model` using MLE:
model.fit(data, estimator=MaximumLikelihoodEstimator)
from pgmpy.estimators import BayesianEstimator
est = BayesianEstimator(model, data)
print(est.estimate_cpd('tasty', prior_type='BDeu', equivalent_sample_size=10))
import numpy as np
import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import BayesianEstimator
# generate data
data = pd.DataFrame(np.random.randint(low=0, high=2, size=(5000, 4)), columns=['A', 'B', 'C', 'D'])
model = BayesianModel([('A', 'B'), ('A', 'C'), ('D', 'C'), ('B', 'D')])
model.fit(data, estimator=BayesianEstimator, prior_type="BDeu") # default equivalent_sample_size=5
for cpd in model.get_cpds():
print(cpd)
import pandas as pd
import numpy as np
from pgmpy.estimators import BDeuScore, K2Score, BicScore
from pgmpy.models import BayesianModel
# create random data sample with 3 variables, where Z is dependent on X, Y:
data = pd.DataFrame(np.random.randint(0, 4, size=(5000, 2)), columns=list('XY'))
data['Z'] = data['X'] + data['Y']
bdeu = BDeuScore(data, equivalent_sample_size=5)
k2 = K2Score(data)
bic = BicScore(data)
model1 = BayesianModel([('X', 'Z'), ('Y', 'Z')]) # X -> Z <- Y
model2 = BayesianModel([('X', 'Z'), ('X', 'Y')]) # Y <- X -> Z
print(bdeu.score(model1))
print(k2.score(model1))
print(bic.score(model1))
print(bdeu.score(model2))
print(k2.score(model2))
print(bic.score(model2))
print(bdeu.local_score('Z', parents=[]))
print(bdeu.local_score('Z', parents=['X']))
print(bdeu.local_score('Z', parents=['X', 'Y']))
from pgmpy.estimators import ExhaustiveSearch
es = ExhaustiveSearch(data, scoring_method=bic)
best_model = es.estimate()
print(best_model.edges())
print("\nAll DAGs by score:")
for score, dag in reversed(es.all_scores()):
print(score, dag.edges())
from pgmpy.estimators import HillClimbSearch
# create some data with dependencies
data = pd.DataFrame(np.random.randint(0, 3, size=(2500, 8)), columns=list('ABCDEFGH'))
data['A'] += data['B'] + data['C']
data['H'] = data['G'] - data['A']
hc = HillClimbSearch(data)
best_model = hc.estimate(scoring_method=BicScore(data))
print(best_model.edges())
from pgmpy.estimators import PC
from pgmpy.estimators.CITests import chi_square
data = pd.DataFrame(np.random.randint(0, 3, size=(2500, 8)), columns=list('ABCDEFGH'))
data['A'] += data['B'] + data['C']
data['H'] = data['G'] - data['A']
data['E'] *= data['F']
print(chi_square(X='B', Y='H', Z=[], data=data, significance_level=0.05)) # dependent
print(chi_square(X='B', Y='E', Z=[], data=data, significance_level=0.05)) # independent
print(chi_square(X='B', Y='H', Z=['A'], data=data, significance_level=0.05)) # independent
print(chi_square(X='A', Y='G', Z=[], data=data, significance_level=0.05)) # independent
print(chi_square(X='A', Y='G', Z=['H'], data=data, significance_level=0.05)) # dependent
est = PC(data)
skel, seperating_sets = est.build_skeleton(significance_level=0.01)
print("Undirected edges: ", skel.edges())
pdag = est.skeleton_to_pdag(skel, seperating_sets)
print("PDAG edges: ", pdag.edges())
model = pdag.to_dag()
print("DAG edges: ", model.edges())
print(est.estimate(significance_level=0.01).edges())
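The chi-square test used above can be computed by hand for the unconditional case — a sketch of Pearson's statistic on a 2×2 contingency table (pgmpy additionally derives the p-value and handles conditioning sets; `chi2_statistic` is a hypothetical helper):

```python
def chi2_statistic(table):
    """Pearson chi-square statistic for a 2D contingency table
    given as a list of rows of observed counts."""
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# A perfectly independent table gives statistic 0; a fully dependent one
# far exceeds the 0.05 critical value (3.84 at one degree of freedom).
print(chi2_statistic([[10, 10], [10, 10]]))       # → 0.0
print(chi2_statistic([[20, 0], [0, 20]]) > 3.84)  # → True
```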
from pgmpy.estimators import MmhcEstimator
from pgmpy.estimators import BDeuScore
data = pd.DataFrame(np.random.randint(0, 3, size=(2500, 8)), columns=list('ABCDEFGH'))
data['A'] += data['B'] + data['C']
data['H'] = data['G'] - data['A']
data['E'] *= data['F']
mmhc = MmhcEstimator(data)
skeleton = mmhc.mmpc()
print("Part 1) Skeleton: ", skeleton.edges())
# use hill climb search to orient the edges:
hc = HillClimbSearch(data)
model = hc.estimate(tabu_length=10, white_list=skeleton.to_directed().edges(), scoring_method=BDeuScore(data))
print("Part 2) Model: ", model.edges())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We know that the variables relate as follows
Step2: Parameter learning is the task to estimate the values of the conditional probability distributions (CPDs), for the variables fruit, size, and tasty.
Step3: We can see, for example, that as many apples as bananas were observed and that 5 large bananas were tasty, while only 1 was not.
Step4: mle.estimate_cpd(variable) computes the state counts and divides each cell by the (conditional) sample size. The mle.get_parameters()-method returns a list of CPDs for all variables of the model.
Step5: While very straightforward, the ML estimator has the problem of overfitting to the data. In the above CPD, the probability of a large banana being tasty is estimated at 0.833, because 5 out of 6 observed large bananas were tasty. Fine. But note that the probability of a small banana being tasty is estimated at 0.0, because we observed only one small banana and it happened to be not tasty. That alone should hardly make us certain that small bananas aren't tasty!
Step6: The estimated values in the CPDs are now more conservative. In particular, the estimate for a small banana being not tasty is now around 0.64 rather than 1.0. Setting equivalent_sample_size to 10 means that for each parent configuration, we add the equivalent of 10 uniform samples (here
Step7: Structure Learning
Step8: While the scores vary slightly, we can see that the correct model1 has a much higher score than model2.
Step9: Search strategies
Step10: Once more nodes are involved, one needs to switch to heuristic search. HillClimbSearch implements a greedy local search that starts from the DAG start (default
Step11: The search correctly identifies e.g. that B and C do not influence H directly, only through A, and of course that D, E, F are independent.
Step12: The CITests.py module in pgmpy implements a few possible conditional independence tests.
Step13: The estimate()-method provides a shorthand for the three steps above and directly returns a BayesianModel
Step14: PC PDAG construction is only guaranteed to work under the assumption that the identified set of independencies is faithful, i.e. there exists a DAG that exactly corresponds to it. Spurious dependencies in the data set can cause the reported independencies to violate faithfulness. It can happen that the estimated PDAG does not have any faithful completions (i.e. edge orientations that do not introduce new v-structures). In that case a warning is issued.
|
11,232
|
<ASSISTANT_TASK:>
Python Code:
# Use this cell to test the output
!curl http://localhost:5000/datetime/v1/echo -vk
# Here's the `limit` parameter definition, including
# - it's a `query` parameter
# - its name
# - its schema
print(show_component('https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml#/parameters/limit'))
# Use this cell to test your api
# hint: get the timezone from
!pip install pytz
import pytz
tz = pytz.timezone('Europe/Rome')
print(tz.zone)
# No timezones in Neverland :)
'Neverland' in pytz.all_timezones
### Exercise solution
from datetime import datetime
from connexion import problem
def get_echo(tz='UTC'):
if tz not in pytz.all_timezones:
return problem(
status=400,
title="Bad Timezone",
detail="The specified timezone is not valid",
ext={"valid_timezones": pytz.all_timezones}
)
d = datetime.now(tz=pytz.timezone(tz))
return {"timestamp": d.isoformat().replace('+00:00', 'Z')}
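The timestamp formatting in `get_echo` can be reproduced with the standard library alone — a sketch using `datetime.timezone.utc` in place of the pytz lookup (`utc_timestamp` is a hypothetical helper, not part of the exercise API):

```python
from datetime import datetime, timezone

def utc_timestamp():
    """ISO-8601 timestamp in UTC with a 'Z' suffix instead of '+00:00'."""
    return datetime.now(tz=timezone.utc).isoformat().replace('+00:00', 'Z')

ts = utc_timestamp()
print(ts.endswith('Z'))  # → True
```

The `.replace('+00:00', 'Z')` step matters because `isoformat()` on an aware UTC datetime emits the numeric offset, while many API consumers expect the compact `Z` designator.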
render_markdown(f'''
play a bit with the [Swagger UI]({api_server_url('ui')})
and try making a request!
''')
## TODO do we have enough time to show flask_testing?
# Check that default works
!curl http://localhost:5000/datetime/v1/echo -kv
# Test a valid timezone
!curl http://localhost:5000/datetime/v1/echo?tz=Europe/Rome -kv
# Test an invalid timezone
!curl http://localhost:5000/datetime/v1/echo?tz=Frittole -kv
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Request parameters
Step2: Exercise
Step3: Implement get_echo
Step4: Now run the spec in a terminal using
|
11,233
|
<ASSISTANT_TASK:>
Python Code:
reference_sf = graphlab.SFrame('data/sf_processed.sframe/')
pretrained_model = graphlab.load_model('data/imagenet_model')
nn_model = graphlab.load_model('data/nearest_dress_model')
reference_sf
def dress_similar(url):
img = graphlab.Image(url)
image_sf = graphlab.SFrame()
image_sf['image'] = [img]
image_sf['features'] = pretrained_model.extract_features(image_sf)
ans = nn_model.query(image_sf, k=5)
return ans
QUERY_URL = 'http://static.ddmcdn.com/gif/blue-dress.jpg'
Image(QUERY_URL)
def retrieve_image(nearest_neighbors_output, input_sframe):
joined = input_sframe.join(nearest_neighbors_output, on={'_id':'reference_label'})
sorted_sf = joined.sort('rank')
return sorted_sf['image']
images = retrieve_image(dress_similar(QUERY_URL), reference_sf)
images.show()
import graphlab as gl
# Replace with your path.
ps_state_path = 's3://<your-bucket-name>/predictive_service/ps'
# Set your AWS credentials.
gl.aws.set_credentials(<key>, <secret>)
# Create an EC2 config
ec2_config = gl.deploy.Ec2Config()
# Launch a predictive service
ps = gl.deploy.predictive_service.create(name = 'sklearn-predictive-service',
ec2_config = ec2_config, state_path = ps_state_path, num_hosts = 1)
import graphlab as gl
ps = gl.deploy.predictive_service.load('s3://gl-demo-usw2/predictive_service/demolab/ps-1.6')
ps
#ps.add('dress_similar', dress_similar)
ps.update('dress_similar', dress_similar)
ps.apply_changes()
ps.query('dress_similar', url=QUERY_URL)
import json
import requests
def restful_query(url):
# Query the service.
headers = {'content-type': 'application/json' }
payload = {'api_key':'b437e588-0f2b-45e1-81c8-ce3acfa81ade', 'data':{'url':url}}
end_point = 'http://demolab-one-six-2015364754.us-west-2.elb.amazonaws.com/query/dress_similar'
resp = requests.post(end_point,json.dumps(payload), headers=headers)
# Join with existing data.
ans = gl.SArray(json.loads(resp.content)['response']).unpack('')
return retrieve_image(ans, reference_sf)
restful_query('http://static.ddmcdn.com/gif/blue-dress.jpg').show()
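Under the hood, `nn_model.query` is a nearest-neighbour search over the extracted deep-feature vectors. A minimal standard-library sketch of the same idea (`knn_query` is a hypothetical stand-in; GraphLab uses an optimized index over much higher-dimensional features):

```python
def knn_query(reference, query, k):
    """Indices of the k reference vectors closest to `query`
    by (squared) Euclidean distance."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(vec, query)), idx)
        for idx, vec in enumerate(reference))
    return [idx for _, idx in dists[:k]]

reference = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [0.1, 0.0]]
print(knn_query(reference, [0.0, 0.1], k=2))  # → [0, 3]
```

Squared distance is used since it preserves the ranking and avoids the square root; for large reference sets one would switch to an approximate index rather than this brute-force scan.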
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Create a Predictive Service (One time) <a id='create'></a>
Step2: Load an already created service
Step3: Query the model <a id='query'></a>
Step4: Query via REST
|
11,234
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def basisNone(x, dmy1=None, dmy2=None, dmy3=None):
return x
def basisPoly(x, j, dmy1=None, dmy2=None):
return x ** j
def basisGauss(x, j, mu, s):
return np.exp(- (x-mu[j]) ** 2 / (2 * s ** 2))
def basisSigmoid(x, j, mu, s):
return 1.0 / (1 + np.exp(-(x - mu[j])/s))
basis = {
"Linear": basisNone,
"Polynomial": basisPoly,
"Gauss": basisGauss,
"Sigmoid": basisSigmoid,
}
x = np.linspace(-1, 1, 100)
mu = np.linspace(-1, 1, 10)
plt.figure(figsize=(16, 3))
for ikey, key in enumerate(basis):
plt.subplot(1, 4, ikey + 1)
plt.title(key + " Kernel")
for j in range(len(mu)):
plt.plot(x, [basis[key](tmpx, j, mu, 0.1) for tmpx in x])
def y(x, w, basisName, mu=None, s=None):
ret = w[0]
for index in range(1, len(w)):
ret += w[index] * basis[basisName](x, index, mu, s)
return ret
N = 30 # サンプル数
M = 5 # モデル次元
b = 2. # ノイズの精度
x_opt = np.linspace(-5, 5, N)
y_opt = 2 * np.sin(x_opt) + 3
y_obs = (y_opt + np.random.normal(0, np.sqrt(1.0/b), len(x_opt)))[:, np.newaxis]  # std = 1/sqrt(precision)
plt.plot(x_opt, y_obs, "k.", label="Observation")
plt.plot(x_opt, y_opt, "r-", linewidth=3, label="Truth")
plt.xlim(-5, 5)
plt.legend()
# 計画行列を求める
def makeDesignMatrix(x, basis, mu, s):
ret = np.zeros((len(x), len(mu)))
for i in range(len(x)):
for j in range(len(mu)):
            ret[i][j] = basis(x[i], j, mu, s)
return ret, ret.transpose()
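A plain-Python sketch of the same design-matrix construction with the Gaussian basis (equivalent in spirit to `makeDesignMatrix` above, minus numpy; `gauss_basis` and `design_matrix` are illustrative names):

```python
import math

def gauss_basis(x, mu, s):
    """Gaussian basis function centred at mu with width s."""
    return math.exp(-(x - mu) ** 2 / (2 * s ** 2))

def design_matrix(xs, mus, s):
    """Phi[i][j] = phi_j(x_i): one row per sample, one column per basis centre."""
    return [[gauss_basis(x, mu, s) for mu in mus] for x in xs]

phi = design_matrix([0.0, 0.5, 1.0], mus=[0.0, 1.0], s=0.5)
print(len(phi), len(phi[0]))  # → 3 2
print(phi[0][0])              # a basis centred at its own sample → 1.0
```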
M, key = 15, "Sigmoid"
alphas = []
betas = []
plt.figure(figsize=(16,8))
for index, size in enumerate(np.linspace(3, len(x_opt)-1, 10)):
# extract data
size = int(size)
indices = sorted(np.random.choice(range(len(x_opt)), size, False))
x_opt_tmp = x_opt[indices]
y_opt_tmp = y_opt[indices]
y_obs_tmp = y_obs[indices]
# generate Phi
mu = np.linspace(min(x_opt_tmp), max(x_opt_tmp), M)
s = 1.0
phi, phit = makeDesignMatrix(x_opt_tmp, basis[key], mu, s)
phit = phi.transpose()
    # estimate values including alpha, beta
    alpha, beta = 0.0001, 0.1
    # eigenvalues of Phi^T.Phi; scaled by the current beta inside the loop
    eigs0 = np.linalg.eigvalsh(phit.dot(phi))
    for i in range(10):
        sN = np.linalg.inv(alpha * np.eye(M) + beta * phit.dot(phi))
        mN = beta * sN.dot(phit).dot(y_obs_tmp)
        mNt = mN.transpose()
        gamma = sum([eig / (alpha + eig) for eig in beta * eigs0])
beta = 1 / (1/(size - gamma) * sum([(y_obs_tmp[i] - mNt.dot(phi[i, :]))**2 for i in range(size)]))
alpha = gamma / (mNt.dot(mN))
beta = np.clip(float(beta), 0, 10)
alpha = np.clip(float(alpha), 0, 10)
alphas.append(alpha)
betas.append(beta)
# Plot estimations and sigmas from estimated values
# alpha, beta, mN, SN
phi, phit = makeDesignMatrix(x_opt, basis[key], mu, s)
y_out = np.array([np.dot(mN.transpose(), phi[i, :]) for i in range(N)])
sigma = np.sqrt(np.array([1.0 / beta + phi[i, :].transpose().dot(sN).dot(phi[i, :]) for i in range(N)]))
# Plot estimations
plt.subplot(2, 5, index+1)
plt.plot(x_opt, y_opt, "k:")
plt.plot(x_opt_tmp, y_opt_tmp, "ko")
plt.plot(x_opt, y_out, "r-")
plt.fill_between(x_opt, y_out[:, 0] - sigma, y_out[:, 0] + sigma, facecolor="k", alpha=0.4)
plt.ylim([0, 6])
# Plot aplhas and betas
plt.figure(figsize=(16, 3))
plt.subplot(1, 2, 1)
plt.plot(alphas, "k-"); plt.title("alpha")
plt.subplot(1, 2, 2)
plt.plot(betas, "k-"); plt.title("beta")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 基底関数
Step2: 例題
Step3: ${\bf w}$ のベイズ推定
|
11,235
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
train_data,test_data = sales.random_split(.8,seed=0)
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, use the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
# if we want to multiply every price by 0.5 it's a simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
def simple_linear_regression(input_feature, output):
N = len(input_feature)
# compute the sum of input_feature and output
sum_input = input_feature.sum()
sum_output = output.sum()
# compute the product of the output and the input_feature and its sum
product = input_feature * output
product_sum = product.sum()
# compute the squared value of the input_feature and its sum
input_square = input_feature * input_feature
sum_input_square = input_square.sum()
# use the formula for the slope
slope = (product_sum - (sum_output * sum_input) * 1.0/N) / (sum_input_square - sum_input*sum_input*1.0/N)
# use the formula for the intercept
intercept = (sum_output - slope * sum_input) / N
return (intercept, slope)
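The closed-form estimator can be sanity-checked without GraphLab — a standard-library sketch of the same formulas on data lying exactly on the line y = 1 + x (`simple_lin_reg` is a hypothetical plain-Python analogue of the SArray version above):

```python
def simple_lin_reg(xs, ys):
    """Closed-form least-squares intercept and slope for one feature."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    slope = (sxy - sx * sy / n) / (sxx - sx * sx / n)
    intercept = (sy - slope * sx) / n
    return intercept, slope

xs = [0, 1, 2, 3, 4]
ys = [1 + x for x in xs]
print(simple_lin_reg(xs, ys))  # → (1.0, 1.0)
```

Because the points lie exactly on the line, the estimator must recover intercept 1 and slope 1 — the same check the notebook performs with `test_feature` and `test_output`.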
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = input_feature * slope + intercept
return predicted_values
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predictions = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residuals = output - predictions
# square the residuals and add them up
RSS = (residuals * residuals).sum()
return(RSS)
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept) * 1.0 / slope
return estimated_feature
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
br_intercept, br_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
# Compute RSS when using bedrooms on TEST data:
get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], br_intercept, br_slope)
# Compute RSS when using squarefeet on TEST data:
get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load house sales data
Step2: Split data into training and testing
Step3: Useful SFrame summary functions
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
Step8: Predicting Values
Step9: Now that we can calculate a prediction given the slope and intercept, let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Step10: Residual Sum of Squares
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Step13: Predict the squarefeet given price
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Step15: New Model
Step16: Test your Linear Regression Algorithm
|
11,236
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%pip install google-cloud-bigquery==1.25.0
# Importing necessary tensorflow library and printing the TF version.
import tensorflow as tf
print("Tensorflow version: ",tf.__version__)
import os
from google.cloud import bigquery
# Change with your own bucket and project below:
BUCKET = "<BUCKET>"
PROJECT = "<PROJECT>"
OUTDIR = "gs://{bucket}/taxifare/data".format(bucket=BUCKET)
os.environ['BUCKET'] = BUCKET
os.environ['OUTDIR'] = OUTDIR
os.environ['PROJECT'] = PROJECT
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset)
print("Dataset created")
except:
print("Dataset already exists")
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
%%bigquery
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
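The `ABS(MOD(FARM_FINGERPRINT(...), N)) = k` pattern above yields a deterministic, repeatable sample keyed on `pickup_datetime`: a row can never drift between the training and validation splits across runs. A sketch of the same idea in Python (`hashlib` stands in for FARM_FINGERPRINT, which has no standard-library equivalent, so the buckets differ but the repeatability property is the same):

```python
import hashlib

def split_bucket(key, n_buckets=1000):
    """Deterministic bucket in [0, n_buckets) from a stable hash of `key`."""
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return int(digest, 16) % n_buckets

# The same key always lands in the same bucket.
bucket = split_bucket('2015-01-01 12:30:00')
print(bucket == split_bucket('2015-01-01 12:30:00'))  # → True
```

Note that Python's built-in `hash()` would not work here: it is salted per process, so buckets would change between runs.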
%%bash
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
taxifare.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Please ignore any incompatibility warnings and errors.
Step2: Change the following cell as necessary
Step3: Create BigQuery tables
Step4: Let's create a table with 1 million examples.
Step5: Make the validation dataset be 1/10 the size of the training dataset.
Step6: Export the tables as CSV files
|
11,237
|
<ASSISTANT_TASK:>
Python Code:
# Create a SQLite database
# log in to your terminal, change to the folder where you want to keep the database file.
# type the following code and change xxxx to a name you want to give the database. Mine is called datarepo:
sqlite3 xxxx.db
.databases
# press control D to exit the database.
# Import relevant modules
import os
import sys
from sqlalchemy import create_engine
import pandas as pd
import datetime as dt
# Connect to the database
conn = create_engine('sqlite:///datarepo.db')
# Define full path to the directory containing all the datasets to be written
datapath = '/Users/xxxx/xxxx/dataset'
# a list to hold information
datfile = []
file_in_database = []
new_datfile = []
# store the existing txt and csv datasets in a list datfile
if os.path.isdir(datapath):
for i in os.listdir(datapath):
ipath = os.path.join(datapath, i)
if os.path.isfile(ipath):
if os.path.splitext(ipath)[1] == '.txt' or os.path.splitext(
ipath)[1] == '.csv' or os.path.splitext(
ipath)[1] == '.xlsx' or os.path.splitext(
ipath)[1] == '.xls':
datfile.append(ipath)
# read the existing files in the database and store the information in a list file_in_database
sql = "select tbl_name from sqlite_master where type='table'"
res = pd.read_sql(sql, conn)
for i in [j for k in res.values.tolist() for j in k]:
file_in_database.append(os.path.join(datapath, i))
# get the list of new datasets that have not been written to the database
for file in datfile:
if file not in file_in_database:
new_datfile.append(file)
# check whether list of new datasets is empty
if len(new_datfile) == 0:
print(
"The datasets in the source folder have all already been written to the database."
)
    sys.exit()
# otherwise, write the new datasets to the database
else:
for file in new_datfile:
#write excel dataset
if file.split(".")[-1] == 'xlsx' or file.split(".")[-1] == 'xls':
tab_name = file.split("/")[
-1] # strip the path and get the file name only
start = dt.datetime.now()
df = pd.read_excel(file)
df.to_sql(tab_name, conn, if_exists='append')
print('{} seconds: writing of {} to the database is completed.'.
format((dt.datetime.now() - start).seconds, tab_name))
#write csv or txt dataset
if file.split(".")[-1] == 'csv' or file.split(".")[-1] == 'txt':
tab_name = file.split("/")[
-1] # strip the path and get the file name only
start = dt.datetime.now()
chunksize = 1000 # you can increase the chunksize
counter = 0
index_start = 1
for df in pd.read_table(
# confirm the delimiter type (comma or tab) in your csv/txt file and change sep value accordingly
file, chunksize=chunksize, iterator=True, sep=",",
encoding='utf-8'):
df.index += index_start
counter += 1
print('{} seconds: completed {} rows'.format((
dt.datetime.now() - start).seconds, counter * chunksize))
df.to_sql(tab_name, conn, if_exists='append')
index_start = df.index[-1] + 1
from sqlalchemy import create_engine
import pandas as pd
# Connect to the database
conn = create_engine('sqlite:///datarepo.db')
# Get the list of current datasets in the database
def list_dataset():
try:
sql = "select tbl_name from sqlite_master where type='table'"
return pd.read_sql(sql, conn)
except Exception as err:
print(err)
#list_dataset()
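The `sqlite_master` query that `list_dataset` relies on can be exercised directly with the standard library's `sqlite3` module — an in-memory sketch without SQLAlchemy or pandas (`db` and the table names are illustrative):

```python
import sqlite3

# In-memory database standing in for datarepo.db.
db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE 'housing.csv' (id INTEGER, price REAL)")
db.execute("CREATE TABLE 'wine.csv' (id INTEGER, quality INTEGER)")

# Same catalogue query as list_dataset: every user table is a row
# in the built-in sqlite_master table.
rows = db.execute(
    "SELECT tbl_name FROM sqlite_master WHERE type='table'").fetchall()
tables = sorted(r[0] for r in rows)
print(tables)  # → ['housing.csv', 'wine.csv']
db.close()
```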
from sqlalchemy import create_engine
import pandas as pd
# Connect to the database
conn = create_engine('sqlite:///datarepo.db')
# View a dataset columns or fields
def view_dataset_cols(table_name):
try:
h = "'" + table_name + "'"
sql = (
"select sql from sqlite_master where type ='table' and tbl_name = %s"
% h)
x = pd.read_sql(sql, conn)
j = str(x.values.tolist())
k = j[:len(j) - 6]
return k.split("\\n\\t")[1:]
except Exception as err:
print(err)
#view_dataset_cols('winequality-red.csv')
from sqlalchemy import create_engine
import pandas as pd
# Connect to the database
conn = create_engine('sqlite:///datarepo.db')
# Load a dataset into a dataframe
def load_dataset(table_name):
try:
tabname = "'" + table_name + "'"
sql = " SELECT * FROM %s" % tabname
return pd.read_sql(sql, conn)
except Exception as err:
print(err)
# You can amend the SQL code to select only the needed fields from a dataset.
#j = load_dataset('housing.csv')
#j.head()
from sqlalchemy import create_engine
import pandas as pd
# Connect to the database
conn = create_engine('sqlite:///datarepo.db')
# Drop a dataset
def drop_dataset(table_name):
    try:
        tabname = "'" + table_name + "'"
        sql = "DROP TABLE %s" % tabname
        conn.execute(sql)  # execute the DDL directly; read_sql is for queries that return rows
        print('The dataset: ' + tabname + ' has been dropped from the database.')
    except Exception as err:
        print(err)
#drop_dataset('test.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Organize and manage datasets
Step2: Write the datasets into the SQLite file
Step3: List and view all the datasets in the database file
Step4: View a dataset fields or columns
Step5: Load a dataset into a dataframe directly from the database file
Step6: Drop any dataset from the database file
|
11,238
|
<ASSISTANT_TASK:>
Python Code:
## Functions
import sys
sys.path.append("../dev")
import bib_mri as FW
import numpy as np
import scipy as scipy
import scipy.misc as misc
import matplotlib as mpl
import matplotlib.pyplot as plt
from numpy import genfromtxt
import platform
%matplotlib inline
def sign_extract(seg, resols): #Function for shape signature extraction
splines = FW.get_spline(seg,smoothness)
sign_vect = np.array([]).reshape(0,points) #Initializing temporal signature vector
for resol in resols:
sign_vect = np.vstack((sign_vect, FW.get_profile(splines, n_samples=points, radius=resol)))
return sign_vect
def sign_fit(sig_ref, sig_fit): #Function for signature fitting
dif_curv = []
for shift in range(points):
dif_curv.append(np.abs(np.sum((sig_ref - np.roll(sig_fit[0],shift))**2)))
return np.apply_along_axis(np.roll, 1, sig_fit, np.argmin(dif_curv))
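`sign_fit` aligns one signature to another by trying every circular shift and keeping the one with the smallest squared difference. The same idea in plain Python (`best_shift` is a hypothetical standard-library analogue; `np.roll` becomes list slicing):

```python
def best_shift(ref, sig):
    """Circular shift of `sig` that minimises squared distance to `ref`.
    Returns the shift amount and the aligned sequence."""
    n = len(sig)
    def rolled(s, k):  # right circular shift by k, like np.roll
        return s[-k:] + s[:-k] if k else s
    errors = [sum((r - x) ** 2 for r, x in zip(ref, rolled(sig, k)))
              for k in range(n)]
    k = errors.index(min(errors))
    return k, rolled(sig, k)

ref = [0, 1, 2, 3]
sig = [2, 3, 0, 1]          # ref rotated by 2
shift, aligned = best_shift(ref, sig)
print(shift, aligned)        # → 2 [0, 1, 2, 3]
```

This is why the extra `fit_res` resolution is prepended to `resols`: all signatures are shifted by the single offset found at that resolution, so every resolution of a mask stays in phase.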
print "Python version: ", platform.python_version()
print "Numpy version: ", np.version.version
print "Scipy version: ", scipy.__version__
print "Matplotlib version: ", mpl.__version__
#Loading labeled segmentations
seg_label = genfromtxt('../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')
list_mask = seg_label[seg_label[:,1] == 0, 0][:20] #Extracting correct segmentations for mean signature
list_normal_mask = seg_label[seg_label[:,1] == 0, 0][20:30] #Extracting correct names for configuration
list_error_mask = seg_label[seg_label[:,1] == 1, 0][:10] #Extracting erroneous names for configuration
mask_correct = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_mask[0]))
mask_error = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_error_mask[0]))
plt.figure()
plt.axis('off')
plt.imshow(mask_correct,'gray',interpolation='none')
plt.title("Correct segmentation example")
plt.show()
plt.figure()
plt.axis('off')
plt.imshow(mask_error,'gray',interpolation='none')
plt.title("Erroneous segmentation example")
plt.show()
n_list = len(list_mask)
smoothness = 700 #Smoothness
degree = 1 #Spline degree
fit_res = 0.35
resols = np.arange(0.01,0.5,0.01) #Signature resolutions
resols = np.insert(resols,0,fit_res) #Insert resolution for signature fitting
points = 100 #Points of Spline reconstruction
refer_wat = np.empty((n_list,resols.shape[0],points)) #Initializing signature vector
for mask in xrange(n_list):
mask_p = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_mask[mask]))
refer_temp = sign_extract(mask_p, resols) #Function for shape signature extraction
refer_wat[mask] = refer_temp
if mask > 0: #Fitting curves using the first one as basis
prof_ref = refer_wat[0]
refer_wat[mask] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting
print "Signatures' vector size: ", refer_wat.shape
res_ex = 10
plt.figure()
plt.plot(refer_wat[:,res_ex,:].T)
plt.title("Signatures for res: %f"%(resols[res_ex]))
plt.show()
refer_wat_mean = np.mean(refer_wat,axis=0) #Finding mean signature per resolution
print "Mean signature size: ", refer_wat_mean.shape
plt.figure() #Plotting mean signature
plt.plot(refer_wat_mean[res_ex,:])
plt.title("Mean signature for res: %f"%(resols[res_ex]))
plt.show()
n_list = np.amax((len(list_normal_mask),len(list_error_mask)))
refer_wat_n = np.empty((n_list,resols.shape[0],points)) #Initializing correct signature vector
refer_wat_e = np.empty((n_list,resols.shape[0],points)) #Initializing error signature vector
for mask in xrange(n_list):
#Loading correct mask
mask_pn = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_normal_mask[mask]))
refer_temp_n = sign_extract(mask_pn, resols) #Function for shape signature extraction
refer_wat_n[mask] = sign_fit(refer_wat_mean[0], refer_temp_n) #Function for signature fitting
#Loading erroneous mask
mask_pe = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_error_mask[mask]))
refer_temp_e = sign_extract(mask_pe, resols) #Function for shape signature extraction
refer_wat_e[mask] = sign_fit(refer_wat_mean[0], refer_temp_e) #Function for signature fitting
print "Correct segmentations' vector: ", refer_wat_n.shape
print "Erroneous segmentations' vector: ", refer_wat_e.shape
plt.figure()
plt.plot(refer_wat_n[:,res_ex,:].T)
plt.title("Correct signatures for res: %f"%(resols[res_ex]))
plt.show()
plt.figure()
plt.plot(refer_wat_e[:,res_ex,:].T)
plt.title("Erroneous signatures for res: %f"%(resols[res_ex]))
plt.show()
rmse_nacum = np.sqrt(np.sum((refer_wat_mean - refer_wat_n)**2,axis=2)/(refer_wat_mean.shape[1]))
rmse_eacum = np.sqrt(np.sum((refer_wat_mean - refer_wat_e)**2,axis=2)/(refer_wat_mean.shape[1]))
dif_dis = rmse_eacum - rmse_nacum #Difference between erroneous signatures and correct signatures
in_max_res = np.argmax(np.mean(dif_dis,axis=0)) #Finding optimal resolution at maximum difference
opt_res = resols[in_max_res]
print "Optimal resolution for error detection: ", opt_res
perc_th = 0.3 #Percentage used to place the threshold between the two groups
correct_max = np.mean(rmse_nacum[:,in_max_res]) #Finding threshold for separate segmentations
error_min = np.mean(rmse_eacum[:,in_max_res])
th_res = perc_th*(error_min-correct_max)+correct_max
print "Threshold for separate segmentations: ", th_res
#### Plotting erroneous and correct segmentation signatures
ticksx_resols = ["%.2f" % el for el in np.arange(0.01,0.5,0.01)] #Labels for plot xticks
ticksx_resols = ticksx_resols[::6]
ticksx_index = np.arange(1,50,6)
figpr = plt.figure() #Plotting mean RMSE for correct segmentations
plt.boxplot(rmse_nacum[:,1:], showmeans=True) #Element 0 was introduced only for fitting;
#it is not used in the comparison.
plt.axhline(y=0, color='g', linestyle='--')
plt.axhline(y=th_res, color='r', linestyle='--')
plt.axvline(x=in_max_res, color='r', linestyle='--')
plt.xlabel('Resolutions', fontsize = 12, labelpad=-2)
plt.ylabel('RMSE correct signatures', fontsize = 12)
plt.xticks(ticksx_index, ticksx_resols)
plt.show()
figpr = plt.figure() #Plotting mean RMSE for erroneous segmentations
plt.boxplot(rmse_eacum[:,1:], showmeans=True)
plt.axhline(y=0, color='g', linestyle='--')
plt.axhline(y=th_res, color='r', linestyle='--')
plt.axvline(x=in_max_res, color='r', linestyle='--')
plt.xlabel('Resolutions', fontsize = 12, labelpad=-2)
plt.ylabel('RMSE error signatures', fontsize = 12)
plt.xticks(ticksx_index, ticksx_resols)
plt.show()
figpr = plt.figure() #Plotting difference for mean RMSE over all resolutions
plt.boxplot(dif_dis[:,1:], showmeans=True)
plt.axhline(y=0, color='g', linestyle='--')
plt.axvline(x=in_max_res, color='r', linestyle='--')
plt.xlabel('Resolutions', fontsize = 12, labelpad=-2)
plt.ylabel('Difference RMSE signatures', fontsize = 12)
plt.xticks(ticksx_index, ticksx_resols)
plt.show()
n_resols = [fit_res, opt_res] #Resolutions for fitting and comparison
#### Test dataset (Watershed)
#Loading labels
seg_label = genfromtxt('../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')
all_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0][30:],
seg_label[seg_label[:,1] == 1, 0][10:])) #Extracting erroneous and correct names
lab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1][30:],
seg_label[seg_label[:,1] == 1, 1][10:])) #Extracting erroneous and correct labels
refer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res])) #Mean signature with fitting
#and optimal resolution
refer_seg = np.empty((all_seg.shape[0],len(n_resols),points)) #Initializing correct signature vector
in_mask = 0
for mask in all_seg:
mask_ = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(mask))
refer_temp = sign_extract(mask_, n_resols) #Function for shape signature extraction
refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp) #Function for signature fitting
###### Uncomment this block to see each segmentation with true and predicted labels
#RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))
#plt.figure()
#plt.axis('off')
#plt.imshow(mask_,'gray',interpolation='none')
#plt.title("True label: {}, Predic. label: {}".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))
#plt.show()
in_mask += 1
#### Segmentation evaluation result over all segmentations
RMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))
pred_seg = RMSE > th_res #Apply threshold
comp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg)) #Compare method result with true labels
acc_w = np.sum(comp_seg)/(1.0*len(comp_seg))
print("Final accuracy on Watershed {} segmentations: {}".format(len(comp_seg), acc_w))
#### Test dataset (ROQS)
seg_label = genfromtxt('../dataset/Seg_ROQS/roqs_label.csv', delimiter=',').astype('uint8') #Loading labels
all_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0],
seg_label[seg_label[:,1] == 1, 0])) #Extracting erroneous and correct names
lab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1],
seg_label[seg_label[:,1] == 1, 1])) #Extracting erroneous and correct labels
refer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res])) #Mean signature with fitting
#and optimal resolution
refer_seg = np.empty((all_seg.shape[0],len(n_resols),points)) #Initializing correct signature vector
in_mask = 0
for mask in all_seg:
mask_ = np.load('../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(mask))
refer_temp = sign_extract(mask_, n_resols) #Function for shape signature extraction
refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp) #Function for signature fitting
###### Uncomment this block to see each segmentation with true and predicted labels
#RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))
#plt.figure()
#plt.axis('off')
#plt.imshow(mask_,'gray',interpolation='none')
#plt.title("True label: {}, Predic. label: {}".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))
#plt.show()
in_mask += 1
#### Segmentation evaluation result over all segmentations
RMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))
pred_seg = RMSE > th_res #Apply threshold
comp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg)) #Compare method result with true labels
acc_r = np.sum(comp_seg)/(1.0*len(comp_seg))
print("Final accuracy on ROQS {} segmentations: {}".format(len(comp_seg), acc_r))
#### Test dataset (Pixel-based)
seg_label = genfromtxt('../dataset/Seg_pixel/pixel_label.csv', delimiter=',').astype('uint8') #Loading labels
all_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0],
seg_label[seg_label[:,1] == 1, 0])) #Extracting erroneous and correct names
lab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1],
seg_label[seg_label[:,1] == 1, 1])) #Extracting erroneous and correct labels
refer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res])) #Mean signature with fitting
#and optimal resolution
refer_seg = np.empty((all_seg.shape[0],len(n_resols),points)) #Initializing correct signature vector
in_mask = 0
for mask in all_seg:
mask_ = np.load('../dataset/Seg_pixel/mask_pixe_{}.npy'.format(mask))
refer_temp = sign_extract(mask_, n_resols) #Function for shape signature extraction
refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp) #Function for signature fitting
###### Uncomment this block to see each segmentation with true and predicted labels
#RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))
#plt.figure()
#plt.axis('off')
#plt.imshow(mask_,'gray',interpolation='none')
#plt.title("True label: {}, Predic. label: {}".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))
#plt.show()
in_mask += 1
#### Segmentation evaluation result over all segmentations
RMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))
pred_seg = RMSE > th_res #Apply threshold
comp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg)) #Compare method result with true labels
acc_p = np.sum(comp_seg)/(1.0*len(comp_seg))
print("Final accuracy on pixel-based {} segmentations: {}".format(len(comp_seg), acc_p))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Shape signature for comparison
Step3: In order to get a representative correct signature, a mean signature per resolution was generated from 20 correct signatures. The mean was calculated at each point.
Step4: Signature configuration
Step5: The RMSE over the 10 correct segmentations was compared with the RMSE over the 10 erroneous segmentations. As expected, the RMSE for erroneous segmentations was greater than the RMSE for correct segmentations across all resolutions. In general this holds, but the optimal resolution guarantees the maximum difference between the two RMSE results
Step6: The greatest difference occurred at resolution 0.09. At this resolution, the threshold for separating erroneous and correct segmentations is established as 30% of the distance between the mean RMSE of the correct masks and the mean RMSE of the erroneous masks.
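The 30% threshold rule described in the steps above can be sketched as a standalone snippet. The two mean RMSE values below are made-up numbers for illustration; in the notebook they come from the correct/erroneous RMSE distributions at the optimal resolution (0.09):

```python
# Hypothetical example values -- the notebook derives these from rmse_nacum
# and rmse_eacum at the optimal resolution.
mean_rmse_correct = 0.8   # mean RMSE of the correct masks (assumed)
mean_rmse_error = 2.0     # mean RMSE of the erroneous masks (assumed)

# Threshold placed at 30% of the distance between the two means,
# measured from the correct-mask mean.
th_res = mean_rmse_correct + 0.3 * (mean_rmse_error - mean_rmse_correct)
print(round(th_res, 2))  # 1.16
```

Any mask whose RMSE against the mean correct signature exceeds `th_res` would then be flagged as an erroneous segmentation.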
|
11,239
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
DF = pd.read_csv('../data/alternative.tsv', sep='\t')
DF
DF['Berri1'].plot() # plot easier
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What entities (tables) can you define?
|
11,240
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
def trapz(f, a, b, N):
h = (b - a)/N
i = np.arange(1,N)
c = h*(0.5*f(a)+f(b)*0.5+f(a+i*h).sum())
return c
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
a = integrate.quad(f,0,1)
b = integrate.quad(g,0,np.pi)
print (a)
print (b)
assert True # leave this cell to grade the previous one
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Trapezoidal rule
Step2: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
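The comparison asked for in Step 2 can be sketched as follows; `trapz` repeats the implementation from the code cell, and the absolute difference against `scipy.integrate.quad` serves as a simple error estimate:

```python
import numpy as np
from scipy import integrate

def trapz(f, a, b, N):
    # Composite trapezoidal rule with N sub-intervals.
    h = (b - a) / N
    i = np.arange(1, N)
    return h * (0.5 * f(a) + 0.5 * f(b) + f(a + i * h).sum())

f = lambda x: x ** 2
I_trap = trapz(f, 0, 1, 1000)
I_quad, quad_err = integrate.quad(f, 0, 1)  # quad returns (value, error estimate)

print(abs(I_trap - I_quad))  # difference of the trapezoid result from quad, ~1.7e-7
```

As expected, the trapezoidal approximation with 1000 sub-intervals agrees with `quad` to roughly seven decimal places for this smooth integrand.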
|
11,241
|
<ASSISTANT_TASK:>
Python Code:
from menpowidgets.tools import (LogoWidget, ListWidget, SlicingCommandWidget, ColourSelectionWidget,
IndexButtonsWidget, IndexSliderWidget, ZoomOneScaleWidget, ZoomTwoScalesWidget,
ImageOptionsWidget, LineOptionsWidget, MarkerOptionsWidget, NumberingOptionsWidget,
AxesLimitsWidget, AxesTicksWidget, AxesOptionsWidget, LegendOptionsWidget,
GridOptionsWidget, HOGOptionsWidget, DSIFTOptionsWidget, DaisyOptionsWidget,
LBPOptionsWidget, IGOOptionsWidget)
from menpowidgets.style import map_styles_to_hex_colours
from menpo.visualize import print_dynamic
def render_function(change):
print(change['new'])
wid = LogoWidget(scale=0.6, style='info')
wid
list_cmd = [0, 1, 2]
wid = ListWidget(list_cmd, mode='int', description='List:', render_function=render_function, example_visible=True)
wid
wid.style(box_style='danger', font_size=15)
wid.set_widget_state([20, 16], allow_callback=False)
# Initial options
slice_cmd = {'command': ':3',
'length': 10}
# Create widget
wid = SlicingCommandWidget(slice_cmd, description='Command:', render_function=render_function,
example_visible=True, orientation='horizontal')
# Display widget
wid
wid.style(border_visible=True, border_style='dashed', font_weight='bold')
wid.set_widget_state({'command': ':40', 'length': 40}, allow_callback=True)
wid = ColourSelectionWidget([[255, 38, 31], 'blue', 'green'], labels=['a', 'b', 'c'],
render_function=render_function)
# Set styling
wid.style(box_style='warning', apply_to_all_style='info', label_colour='black',
label_background_colour=map_styles_to_hex_colours('info', background=True), font_weight='bold')
# Display widget
wid
wid.set_colours(['red', 'orange', 'pink'], allow_callback=True)
wid.set_widget_state(['red'], None)
# Initial options
index = {'min': 0,
'max': 100,
'step': 1,
'index': 10}
# Crete widget
wid = IndexSliderWidget(index, description='Index: ', render_function=render_function, continuous_update=False)
# Set styling
wid.style(box_style='danger', slider_handle_colour=map_styles_to_hex_colours('danger'),
slider_bar_colour=map_styles_to_hex_colours('danger'))
# Display widget
wid
wid.set_widget_state({'min': 10, 'max': 500, 'step': 2, 'index': 50}, allow_callback=True)
index = {'min': 0, 'max': 100, 'step': 1, 'index': 10}
wid = IndexButtonsWidget(index, render_function=render_function, loop_enabled=False, text_editable=True)
wid
wid.style(box_style='danger', plus_style='success', minus_style='danger', text_colour='blue',
text_background_colour=map_styles_to_hex_colours('info', background=True))
wid.set_widget_state({'min': 20, 'max': 500, 'step': 2, 'index': 50}, loop_enabled=True, text_editable=True,
allow_callback=True)
# Initial options
zoom_options = {'min': 0.1,
'max': 4.,
'step': 0.05,
'zoom': 1.}
# Create widget
wid = ZoomOneScaleWidget(zoom_options, render_function=render_function)
# Set styling
wid.style(box_style='danger')
wid.zoom_slider.background_color = map_styles_to_hex_colours('info')
wid.zoom_slider.slider_color = map_styles_to_hex_colours('danger')
# Display widget
wid
wid.set_widget_state({'zoom': 0.5, 'min': 0., 'max': 4., 'step': 0.2}, allow_callback=True)
# Initial options
zoom_options = {'min': 0.1,
'max': 4.,
'step': 0.1,
'zoom': [1., 1.],
'lock_aspect_ratio': False}
# Create widget
wid = ZoomTwoScalesWidget(zoom_options, render_function=render_function, continuous_update=True)
# Set styling
wid.style(box_style='danger')
# Display widget
wid
zoom_options = {'min': 0.5, 'max': 10., 'step': 0.3, 'zoom': [2., 3.]}
wid.set_widget_state(zoom_options, allow_callback=True)
# Initial options
image_options = {'alpha': 1.,
'interpolation': 'bilinear',
'cmap_name': None}
# Create widget
wid = ImageOptionsWidget(image_options, render_function=render_function)
# Set styling
wid.style(box_style='success', padding=10, border_visible=True, border_radius=45)
# Display widget
wid
wid.set_widget_state({'alpha': 0.8, 'interpolation': 'none', 'cmap_name': 'gray'}, allow_callback=True)
# Initial options
line_options = {'render_lines': True,
'line_width': 1,
'line_colour': ['blue', 'red'],
'line_style': '-'}
# Create widget
wid = LineOptionsWidget(line_options, render_function=render_function,
labels=['menpo', 'widgets'])
# Set styling
wid.style(box_style='danger', padding=6)
# Display widget
wid
wid.set_widget_state({'render_lines': False, 'line_width': 5, 'line_colour': ['purple'], 'line_style': '--'},
allow_callback=True, labels=None)
# Initial options
marker_options = {'render_markers': True,
'marker_size': 20,
'marker_face_colour': ['red', 'green'],
'marker_edge_colour': ['black', 'blue'],
'marker_style': 'o',
'marker_edge_width': 1}
# Create widget
wid = MarkerOptionsWidget(marker_options, render_function=render_function,
labels=['a', 'b'])
# Set styling
wid.style(box_style='info', padding=6)
# Display widget
wid
wid.set_widget_state({'render_markers': True, 'marker_size': 20, 'marker_face_colour': ['red'],
'marker_edge_colour': ['black'], 'marker_style': 'o', 'marker_edge_width': 1},
labels=None, allow_callback=True)
# Initial options
numbers_options = {'render_numbering': True,
'numbers_font_name': 'serif',
'numbers_font_size': 10,
'numbers_font_style': 'normal',
'numbers_font_weight': 'normal',
'numbers_font_colour': ['black'],
'numbers_horizontal_align': 'center',
'numbers_vertical_align': 'bottom'}
# Create widget
wid = NumberingOptionsWidget(numbers_options, render_function=render_function)
# Set styling
wid.style(box_style='success', border_visible=True, border_colour='black', border_style='solid', border_width=1,
border_radius=0, padding=10, margin=10)
# Display widget
wid
wid.set_widget_state({'render_numbering': True, 'numbers_font_name': 'serif', 'numbers_font_size': 10,
'numbers_font_style': 'normal', 'numbers_font_weight': 'normal',
'numbers_font_colour': ['green'], 'numbers_horizontal_align': 'center',
'numbers_vertical_align': 'bottom'}, allow_callback=True)
# Create widget
wid = AxesLimitsWidget(axes_x_limits=[0, 10], axes_y_limits=0.1, render_function=render_function)
# Set styling
wid.style(box_style='danger')
# Display widget
wid
wid.set_widget_state([-200, 200], None, allow_callback=True)
# Initial options
axes_ticks = {'x': [],
'y': [10., 20., 30.]}
# Create widget
wid = AxesTicksWidget(axes_ticks, render_function=render_function)
# Set styling
wid.style(box_style='danger')
# Display widget
wid
wid.set_widget_state({'x': list(range(5)), 'y': None}, allow_callback=True)
# Initial options
axes_options = {'render_axes': True,
'axes_font_name': 'serif',
'axes_font_size': 10,
'axes_font_style': 'normal',
'axes_font_weight': 'normal',
'axes_x_limits': None,
'axes_y_limits': None,
'axes_x_ticks': [0, 100],
'axes_y_ticks': None}
# Create widget
wid = AxesOptionsWidget(axes_options, render_function=render_function)
# Set styling
wid.style(box_style='warning', padding=6, border_visible=True, border_colour=map_styles_to_hex_colours('warning'))
# Display widget
wid
axes_options = {'render_axes': True, 'axes_font_name': 'serif',
'axes_font_size': 10, 'axes_font_style': 'normal', 'axes_font_weight': 'normal',
'axes_x_limits': [0., 0.05], 'axes_y_limits': 0.1, 'axes_x_ticks': [0, 100], 'axes_y_ticks': None}
wid.set_widget_state(axes_options, allow_callback=True)
# Initial options
legend_options = {'render_legend': True,
'legend_title': '',
'legend_font_name': 'serif',
'legend_font_style': 'normal',
'legend_font_size': 10,
'legend_font_weight': 'normal',
'legend_marker_scale': 1.,
'legend_location': 2,
'legend_bbox_to_anchor': (1.05, 1.),
'legend_border_axes_pad': 1.,
'legend_n_columns': 1,
'legend_horizontal_spacing': 1.,
'legend_vertical_spacing': 1.,
'legend_border': True,
'legend_border_padding': 0.5,
'legend_shadow': False,
'legend_rounded_corners': True}
# Create widget
wid = LegendOptionsWidget(legend_options, render_function=render_function)
# Set styling
wid.style(border_visible=True, font_size=15)
# Display widget
wid
legend_options = {'render_legend': True, 'legend_title': 'asd', 'legend_font_name': 'sans-serif',
'legend_font_style': 'normal', 'legend_font_size': 60, 'legend_font_weight': 'normal',
'legend_marker_scale': 2., 'legend_location': 7, 'legend_bbox_to_anchor': (1.05, 1.),
'legend_border_axes_pad': 1., 'legend_n_columns': 2, 'legend_horizontal_spacing': 3.,
'legend_vertical_spacing': 7., 'legend_border': False,
'legend_border_padding': 0.5, 'legend_shadow': True, 'legend_rounded_corners': True}
wid.set_widget_state(legend_options, allow_callback=True)
# Initial options
grid_options = {'render_grid': True,
'grid_line_width': 1,
'grid_line_style': '-'}
# Create widget
wid = GridOptionsWidget(grid_options, render_function=render_function)
# Set styling
wid.style(box_style='warning')
# Display widget
wid
wid.set_widget_state({'render_grid': True, 'grid_line_width': 10, 'grid_line_style': ':'})
# Initial options
hog_options = {'mode': 'dense',
'algorithm': 'dalaltriggs',
'num_bins': 9,
'cell_size': 8,
'block_size': 2,
'signed_gradient': True,
'l2_norm_clip': 0.2,
'window_height': 1,
'window_width': 1,
'window_unit': 'blocks',
'window_step_vertical': 1,
'window_step_horizontal': 1,
'window_step_unit': 'pixels',
'padding': True}
# Create widget
wid = HOGOptionsWidget(hog_options, render_function=render_function)
# Set styling
wid.style('info')
# Display widget
wid
# Initial options
dsift_options = {'window_step_horizontal': 1,
'window_step_vertical': 1,
'num_bins_horizontal': 2,
'num_bins_vertical': 2,
'num_or_bins': 9,
'cell_size_horizontal': 6,
'cell_size_vertical': 6,
'fast': True}
# Create widget
wid = DSIFTOptionsWidget(dsift_options, render_function=render_function)
# Set styling
wid.style('success')
# Display widget
wid
# Initial options
daisy_options = {'step': 1,
'radius': 15,
'rings': 2,
'histograms': 2,
'orientations': 8,
'normalization': 'l1',
'sigmas': None,
'ring_radii': None}
# Create widget
wid = DaisyOptionsWidget(daisy_options, render_function=render_function)
# Set styling
wid.style('danger')
# Display widget
wid
# Initial options
lbp_options = {'radius': list(range(1, 5)),
'samples': [8] * 4,
'mapping_type': 'u2',
'window_step_vertical': 1,
'window_step_horizontal': 1,
'window_step_unit': 'pixels',
'padding': True}
# Create widget
wid = LBPOptionsWidget(lbp_options, render_function=render_function)
# Set styling
wid.style(box_style='warning')
# Display widget
wid
wid = IGOOptionsWidget({'double_angles': True}, render_function=render_function)
wid
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us also define a generic print function that will be the callback trigger when the selected_values trait of all the widgets changes.
Step2: <a name="sec
Step3: <a name="sec
Step4: Note that you need to press Enter in order to pass a new value into the textbox. Also, try typing a wrong command, such as
Step5: The state of the widget can be updated with the set_widget_state() method. Note that since allow_callback=False, nothing gets printed after running the command, even though selected_values is updated.
Step6: Similar to the list widget, MenpoWidgets has a widget for defining a command for slicing a list (or numpy.array). Commands can have any vald Python syntax, such as
Step7: Note that by defining a single int number, then an ipywidget.IntSlider appears that allows to select the index. Similarly, by inserting any slicing command with a constant step, then an ipywidgets.IntRangeSlider appears. The sliders are disabled when inserting a slicing command with non-constant step. The placement of the sliders with respect to the textbox is controlled by the orientation argument.
Step8: To update the widget's state, you need to pass in a new dict of options, as
Step9: <a name="sec
Step10: The Apply to all button sets the currently selected colour to all the labels.
Step11: In case there is only one label, defined either with a list of length 1 or by setting labels=None, then the drop-down menu to select object does not appear. For example, let's update the state of the widget
Step12: <a name="sec
Step13: As with all widgets, the state can be updated as
Step14: An index selection widget where the selection can be performed with -/+ (previous/next) buttons can be created as
Step15: Note that since text_editable is True, you can actually edit the index directly from the textbox. Additionally, by setting loop_enabled=True means that by pressing '+' when the textbox is at the last index, it takes you to the minimum index.
Step16: Let's also update its state with a new set of options
Step17: <a name="sec
Step18: and its state can be updated as
Step19: The second one defines two zoom values that are intended to control the height and width of a figure.
Step20: Note that the sliders can be linkedd in order to preserve the aspect ratio of the figure. The state can be updated as
Step21: <a name="sec
Step22: The widget can be updated with a new dict of options as
Step23: <a name="sec
Step24: The Render lines tick box also controls the visibility of the rest of the options. So by updating the state with render_lines=False, the options disappear.
Step25: <a name="sec
Step26: <a name="sec
Step27: Of course the state of the widget can be updated as
Step28: <a name="sec
Step29: Note that the percentage mode is accompanied by a ListWidget that expects a single float, whereas the range mode invokes a ListWidget that expects two float numbers. The state of the widget can be changed as
Step30: On the other hand, AxesTicksWidget has two functionalities per axis
Step31: The state can be updated as
Step32: The AxesOptionsWidget involves the AxesLimitsWidget and AxesTicksWidget widgets and also allows the selection of font-related options. As always, the initial options are provided in a dict
Step33: The state of the widget can be updated as
Step34: <a name="sec
Step35: <a name="sec
Step36: <a name="sec
|
11,242
|
<ASSISTANT_TASK:>
Python Code:
# important stuff:
import os
import pandas as pd
import numpy as np
import statsmodels.tools.numdiff as smnd
import scipy
# Graphics
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import rc
rc('text', usetex=True)
rc('text.latex', preamble=r'\usepackage{cmbright}')
rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})
# Magic function to make matplotlib inline;
# other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline.
# There is a bug, so uncomment if it works.
%config InlineBackend.figure_formats = {'png', 'retina'}
# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 2,
'axes.labelsize': 18,
'axes.titlesize': 18,
'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', rc=rc)
sns.set_style("dark")
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['legend.fontsize'] = 14
n = 50 # number of data points
x = np.linspace(-10, 10, n)
yerr = np.abs(np.random.normal(0, 2, n))
y = np.linspace(5, -5, n) + np.random.normal(0, yerr, n)
plt.scatter(x, y)
# bayes model fitting:
def log_prior(theta):
beta = theta
return -1.5 * np.log(1 + beta ** 2)
def log_likelihood(beta, x, y, yerr):
sigma = yerr
y_model = beta * x
return -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (y - y_model) ** 2 / sigma ** 2)
def log_posterior(theta, x, y, yerr):
return log_prior(theta) + log_likelihood(theta, x, y, yerr)
def neg_log_prob_free(theta, x, y, yerr):
return -log_posterior(theta, x, y, yerr)
# calculate probability of free model:
res = scipy.optimize.minimize(neg_log_prob_free, 0, args=(x, y, yerr), method='Powell')
plt.scatter(x, y)
plt.plot(x, x*res.x, '-', color='g')
print('The probability of this model is {0:.2g}'.format(np.exp(log_posterior(res.x, x, y, yerr))))
print('The optimized probability is {0:.4g}x'.format(np.float64(res.x)))
# bayes model fitting:
def log_likelihood_fixed(x, y, yerr):
sigma = yerr
y_model = -1/2*x
return -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (y - y_model) ** 2 / sigma ** 2)
def log_posterior_fixed(x, y, yerr):
return log_likelihood_fixed(x, y, yerr)
plt.scatter(x, y)
plt.plot(x, -0.5*x, '-', color='purple')
print('The probability of this model is {0:.2g}'.format(np.exp(log_posterior_fixed(x, y, yerr))))
def model_selection(X, Y, Yerr, **kwargs):
guess = kwargs.pop('guess', -0.5)
# calculate probability of free model:
res = scipy.optimize.minimize(neg_log_prob_free, guess, args=(X, Y, Yerr), method='Powell')
# Compute error bars
second_derivative = scipy.misc.derivative(log_posterior, res.x, dx=1.0, n=2, args=(X, Y, Yerr), order=3)
cov_free = -1/second_derivative
alpha_free = np.float64(res.x)
log_free = log_posterior(alpha_free, X, Y, Yerr)
# log goodness of fit for fixed models
log_MAP = log_posterior_fixed(X, Y, Yerr)
good_fit = log_free - log_MAP
# occam factor - only the free model has a penalty
log_occam_factor =(-np.log(2 * np.pi) + np.log(cov_free)) / 2 + log_prior(alpha_free)
# give more standing to simpler models. but just a little bit!
lg = log_free - log_MAP + log_occam_factor - 2
return lg
model_selection(x, y, yerr)
n = 50 # number of data points
x = np.linspace(-10, 10, n)
yerr = np.abs(np.random.normal(0, 2, n))
y = x*-0.55 + np.random.normal(0, yerr, n)
plt.scatter(x, y)
model_selection(x, y, yerr)
def simulate_many_odds_ratios(n):
"""Given a number `n` of data points, simulate 1,000 data points drawn from a null model and an
alternative model and compare the odds ratio for each."""
iters = 1000
lg1 = np.zeros(iters)
lg2 = np.zeros(iters)
for i in range(iters):
x = np.linspace(-10, 10, n)
yerr = np.abs(np.random.normal(0, 2, n))
# simulate two models: only one matches the fixed model
y1 = -0.5*x + np.random.normal(0, yerr, n)
y2 = -0.46*x + np.random.normal(0, yerr, n)
lg1[i] = model_selection(x, y1, yerr)
m2 = model_selection(x, y2, yerr)
# Truncate OR for ease of plotting
if m2 < 10:
lg2[i] = m2
else:
lg2[i] = 10
return lg1, lg2
def make_figures(n):
lg1, lg2 = simulate_many_odds_ratios(n)
lg1 = np.sort(lg1)
lg2 = np.sort(lg2)
fifty_point1 = lg1[int(np.floor(len(lg1)/2))]
fifty_point2 = lg2[int(np.floor(len(lg2)/2))]
fig, ax = plt.subplots(ncols=2, figsize=(15, 7), sharey=True)
fig.suptitle('Log Odds Ratio for n={0} data points'.format(n), fontsize=20)
sns.kdeplot(lg1, label='slope=-0.5', ax=ax[0], cumulative=False)
ax[0].axvline(x=fifty_point1, ls='--', color='k')
ax[0].set_title('Data drawn from null model')
ax[0].set_ylabel('Density')
sns.kdeplot(lg2, label='slope=-0.46', ax=ax[1], cumulative=False)
ax[1].axvline(x=fifty_point2, ls='--', color='k')
ax[1].set_title('Data drawn from alternative model')
fig.text(0.5, 0.04, 'Log Odds Ratio', ha='center', size=18)
return fig, ax
fig, ax = make_figures(n=5)
fig, ax = make_figures(n=50)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating synthetic data
Step2: Line fitting using Bayes' theorem
Step3: Specificity is necessary for credibility. Let's show that by optimizing the posterior function, we can fit a line.
Step4: We can see that the model is very close to the model we drew the data from. It works!
Step5: We can see that the probability of this model is very similar to the probability of the alternative model we fit above. How can we pick which one to use?
Step6: We performed the Odds Ratio calculation on logarithmic space, so negative values show that the simpler (fixed slope) model is preferred, whereas if the values are positive and large, the free-slope model is preferred.
Step7: Different datasets will prefer different models
Step9: Indeed, the answer changed sign. Odds Ratios, p-values and everything else should always be interpreted conservatively. I prefer odds ratios that are very large, larger than 1,000 before stating that one model is definitively preferred. Otherwise, I tend to prefer the simpler model.
Step10: Here we can see that with five data points, the odds ratio will tend to prefer the simpler model. We do not have too much information---why request the extra information? Note that for the second dataset in some cases the deviations are great enough that the alternative model is strongly preferred (right panel, extra bump at 10). However, this is rare.
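Since model_selection returns the odds ratio in natural-log space (the posteriors above use np.log), converting a log odds ratio back to a posterior probability for the free model is straightforward. The value of `lg` below is a made-up example:

```python
import numpy as np

lg = 2.5  # hypothetical log odds ratio in favour of the free-slope model
odds = np.exp(lg)           # ~12.2 : 1 odds in favour of the free model
p_free = odds / (1 + odds)  # posterior probability of the free model, ~0.92

print(p_free)
```

A negative `lg` would give `p_free < 0.5`, i.e. a preference for the simpler fixed-slope model.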
|
11,243
|
<ASSISTANT_TASK:>
Python Code:
!pip install google-cloud-bigquery
!pip install google-cloud-bigquery-storage
!pip install pandas-gbq
# Reservation package needed to setup flex slots for flat-rate pricing
!pip install google-cloud-bigquery-reservation
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "UPDATE TO YOUR PROJECT ID"
REGION = 'US'
DATA_SET_ID = 'bqml_kmeans' # Ensure you first create a data set in BigQuery
!gcloud config set project $PROJECT_ID
# If you have not built the Data Set, the following command will build it for you
# !bq mk --location=$REGION --dataset $PROJECT_ID:$DATA_SET_ID
from google.cloud import bigquery
import numpy as np
import pandas as pd
import pandas_gbq
import matplotlib.pyplot as plt
pd.set_option('display.float_format', lambda x: '%.3f' % x) # used to display float format
client = bigquery.Client(project=PROJECT_ID)
# We start with GA360 data, and will eventually build synthetic CRM as an example.
# This block is the first step, just working with GA360
ga360_only_view = 'GA360_View'
shared_dataset_ref = client.dataset(DATA_SET_ID)
ga360_view_ref = shared_dataset_ref.table(ga360_only_view)
ga360_view = bigquery.Table(ga360_view_ref)
ga360_query = '''
SELECT
fullVisitorID,
ABS(farm_fingerprint(fullVisitorID)) AS Hashed_fullVisitorID, # This will be used to generate random data.
MAX(device.operatingSystem) AS OS, # We can aggregate this because an OS is tied to a fullVisitorID.
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Apparel' THEN 1 ELSE 0 END) AS Apparel,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Office' THEN 1 ELSE 0 END) AS Office,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Electronics' THEN 1 ELSE 0 END) AS Electronics,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Limited Supply' THEN 1 ELSE 0 END) AS LimitedSupply,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Accessories' THEN 1 ELSE 0 END) AS Accessories,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Shop by Brand' THEN 1 ELSE 0 END) AS ShopByBrand,
SUM (CASE
WHEN REGEXP_EXTRACT (v2ProductCategory,
r'^(?:(?:.*?)Home/)(.*?)/')
= 'Bags' THEN 1 ELSE 0 END) AS Bags,
ROUND (SUM (productPrice/1000000),2) AS productPrice_USD
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_*`,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
WHERE
_TABLE_SUFFIX BETWEEN '20160801'
AND '20160831'
AND geoNetwork.country = 'United States'
AND type = 'EVENT'
GROUP BY
1,
2
'''
ga360_view.view_query = ga360_query.format(PROJECT_ID)
ga360_view = client.create_table(ga360_view) # API request
print(f"Successfully created view at {ga360_view.full_table_id}")
# Show a sample of GA360 data
ga360_query_df = f'''
SELECT * FROM {ga360_view.full_table_id.replace(":", ".")} LIMIT 5
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(ga360_query_df, job_config=job_config) #API Request
df_ga360 = query_job.result()
df_ga360 = df_ga360.to_dataframe()
df_ga360
# Create synthetic CRM data in SQL
CRM_only_view = 'CRM_View'
shared_dataset_ref = client.dataset(DATA_SET_ID)
CRM_view_ref = shared_dataset_ref.table(CRM_only_view)
CRM_view = bigquery.Table(CRM_view_ref)
# Query below works by hashing the fullVisitorID, which creates a random distribution.
# We use modulo to artificially split gender and hhi distribution.
CRM_query = '''
SELECT
fullVisitorID,
IF
(MOD(Hashed_fullVisitorID,2) = 0,
"M",
"F") AS gender,
CASE
WHEN MOD(Hashed_fullVisitorID,10) = 0 THEN 55000
WHEN MOD(Hashed_fullVisitorID,10) < 3 THEN 65000
WHEN MOD(Hashed_fullVisitorID,10) < 7 THEN 75000
WHEN MOD(Hashed_fullVisitorID,10) < 9 THEN 85000
WHEN MOD(Hashed_fullVisitorID,10) = 9 THEN 95000
ELSE
Hashed_fullVisitorID
END
AS hhi
FROM (
SELECT
fullVisitorID,
ABS(farm_fingerprint(fullVisitorID)) AS Hashed_fullVisitorID,
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_*`,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
WHERE
_TABLE_SUFFIX BETWEEN '20160801'
AND '20160831'
AND geoNetwork.country = 'United States'
AND type = 'EVENT'
GROUP BY
1,
2)
'''
CRM_view.view_query = CRM_query.format(PROJECT_ID)
CRM_view = client.create_table(CRM_view) # API request
print(f"Successfully created view at {CRM_view.full_table_id}")
# See an output of the synthetic CRM data
CRM_query_df = f'''
SELECT * FROM {CRM_view.full_table_id.replace(":", ".")} LIMIT 5
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(CRM_query_df, job_config=job_config) #API Request
df_CRM = query_job.result()
df_CRM = df_CRM.to_dataframe()
df_CRM
# Build a final view, which joins GA360 data with CRM data
final_data_view = 'Final_View'
shared_dataset_ref = client.dataset(DATA_SET_ID)
final_view_ref = shared_dataset_ref.table(final_data_view)
final_view = bigquery.Table(final_view_ref)
final_data_query = f'''
SELECT
g.*,
c.* EXCEPT(fullVisitorId)
FROM {ga360_view.full_table_id.replace(":", ".")} g
JOIN {CRM_view.full_table_id.replace(":", ".")} c
ON g.fullVisitorId = c.fullVisitorId
'''
final_view.view_query = final_data_query  # already an f-string; no .format() needed
final_view = client.create_table(final_view) # API request
print(f"Successfully created view at {final_view.full_table_id}")
# Show final data used prior to modeling
sql_demo = f'''
SELECT * FROM {final_view.full_table_id.replace(":", ".")} LIMIT 5
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(sql_demo, job_config=job_config) #API Request
df_demo = query_job.result()
df_demo = df_demo.to_dataframe()
df_demo
def makeModel(n_Clusters, Model_Name):
sql =f'''
CREATE OR REPLACE MODEL `{PROJECT_ID}.{DATA_SET_ID}.{Model_Name}`
OPTIONS(model_type='kmeans',
kmeans_init_method = 'KMEANS++',
num_clusters={n_Clusters}) AS
SELECT * except(fullVisitorID, Hashed_fullVisitorID) FROM `{final_view.full_table_id.replace(":", ".")}`
'''
job_config = bigquery.QueryJobConfig()
client.query(sql, job_config=job_config) # Make an API request.
# Let's start with a simple test to ensure everything works.
# After running makeModel(), allow a few minutes for training to complete.
model_test_name = "test"
makeModel(3, model_test_name)
# After training is completed, you can either check in the UI, or you can interact with it using list_models().
for model in client.list_models(DATA_SET_ID):
print(model)
# Define upper and lower bound for k, then build individual models for each.
# After running this loop, look at the UI to see several model objects that exist.
low_k = 3
high_k = 15
model_prefix_name = 'kmeans_clusters_'
lst = list(range(low_k, high_k + 1))  # build list to iterate through k values
for k in lst:
model_name = model_prefix_name + str(k)
makeModel(k, model_name)
print(f"Model started: {model_name}")
# list all current models
models = client.list_models(DATA_SET_ID) # Make an API request.
print("Listing current models:")
for model in models:
full_model_id = f"{model.dataset_id}.{model.model_id}"
print(full_model_id)
# Remove our sample model from BigQuery, so we only have remaining models from our previous loop
model_id = DATA_SET_ID+"."+model_test_name
client.delete_model(model_id) # Make an API request.
print(f"Deleted model '{model_id}'")
# This will create a dataframe with each model name, the Davies Bouldin Index, and Loss.
# It will be used for the elbow method and to help determine optimal K
df = pd.DataFrame(columns=['davies_bouldin_index', 'mean_squared_distance'])
models = client.list_models(DATA_SET_ID) # Make an API request.
for model in models:
full_model_id = f"{model.dataset_id}.{model.model_id}"
sql =f'''
SELECT
davies_bouldin_index,
mean_squared_distance
FROM ML.EVALUATE(MODEL `{full_model_id}`)
'''
job_config = bigquery.QueryJobConfig()
# Start the query, passing in the extra configuration.
query_job = client.query(sql, job_config=job_config) # Make an API request.
df_temp = query_job.to_dataframe() # Wait for the job to complete.
df_temp['model_name'] = model.model_id
df = pd.concat([df, df_temp], axis=0)
# This will modify the dataframe above, produce a new field with 'n_clusters', and will sort for graphing
df['n_clusters'] = df['model_name'].str.split('_').map(lambda x: x[2])
df['n_clusters'] = df['n_clusters'].apply(pd.to_numeric)
df = df.sort_values(by='n_clusters', ascending=True)
df
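A slightly more robust variant of the k extraction above: take the text after the last underscore with `rsplit`, so a model prefix that itself contains extra underscores still parses (the names here are illustrative).

```python
# Parse k from model names by splitting on the *last* underscore.
names = ['kmeans_clusters_3', 'kmeans_clusters_15']
ks = [int(n.rsplit('_', 1)[-1]) for n in names]
print(ks)  # -> [3, 15]
```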
df.plot.line(x='n_clusters', y=['davies_bouldin_index', 'mean_squared_distance'])
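Instead of eyeballing the elbow plot, the candidate k can also be picked programmatically as the model with the lowest Davies-Bouldin index. A minimal sketch with made-up evaluation numbers:

```python
import pandas as pd

# Made-up evaluation numbers purely for illustration.
eval_df = pd.DataFrame({
    'n_clusters': [3, 4, 5, 6],
    'davies_bouldin_index': [1.9, 1.6, 1.4, 1.5],
})
# Lowest Davies-Bouldin index = best-separated clustering among candidates.
best = eval_df.loc[eval_df['davies_bouldin_index'].idxmin()]
print(int(best['n_clusters']))  # -> 5
```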
model_to_use = 'kmeans_clusters_5' # User can edit this
final_model = DATA_SET_ID+'.'+model_to_use
sql_get_attributes = f'''
SELECT
centroid_id,
feature,
categorical_value
FROM
ML.CENTROIDS(MODEL {final_model})
WHERE
feature IN ('OS','gender')
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(sql_get_attributes, job_config=job_config) #API Request
df_attributes = query_job.result()
df_attributes = df_attributes.to_dataframe()
df_attributes.head()
# get numerical information about clusters
sql_get_numerical_attributes = f'''
WITH T AS (
SELECT
centroid_id,
ARRAY_AGG(STRUCT(feature AS name,
ROUND(numerical_value,1) AS value)
ORDER BY centroid_id)
AS cluster
FROM ML.CENTROIDS(MODEL {final_model})
GROUP BY centroid_id
),
Users AS(
SELECT
centroid_id,
COUNT(*) AS Total_Users
FROM(
SELECT
* EXCEPT(nearest_centroids_distance)
FROM
ML.PREDICT(MODEL {final_model},
(
SELECT
*
FROM
{final_view.full_table_id.replace(":", ".")}
)))
GROUP BY centroid_id
)
SELECT
centroid_id,
Total_Users,
(SELECT value from unnest(cluster) WHERE name = 'Apparel') AS Apparel,
(SELECT value from unnest(cluster) WHERE name = 'Office') AS Office,
(SELECT value from unnest(cluster) WHERE name = 'Electronics') AS Electronics,
(SELECT value from unnest(cluster) WHERE name = 'LimitedSupply') AS LimitedSupply,
(SELECT value from unnest(cluster) WHERE name = 'Accessories') AS Accessories,
(SELECT value from unnest(cluster) WHERE name = 'ShopByBrand') AS ShopByBrand,
(SELECT value from unnest(cluster) WHERE name = 'Bags') AS Bags,
(SELECT value from unnest(cluster) WHERE name = 'productPrice_USD') AS productPrice_USD,
(SELECT value from unnest(cluster) WHERE name = 'hhi') AS hhi
FROM T LEFT JOIN Users USING(centroid_id)
ORDER BY centroid_id ASC
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(sql_get_numerical_attributes, job_config=job_config) #API Request
df_numerical_attributes = query_job.result()
df_numerical_attributes = df_numerical_attributes.to_dataframe()
df_numerical_attributes.head()
sql_score = f'''
SELECT * EXCEPT(nearest_centroids_distance)
FROM
ML.PREDICT(MODEL {final_model},
(
SELECT
*
FROM
{final_view.full_table_id.replace(":", ".")}
LIMIT 1))
'''
job_config = bigquery.QueryJobConfig()
# Start the query
query_job = client.query(sql_score, job_config=job_config) #API Request
df_score = query_job.result()
df_score = df_score.to_dataframe()
df_score
# Are you sure you want to do this? This is to delete all models
models = client.list_models(DATA_SET_ID) # Make an API request.
for model in models:
full_model_id = f"{model.dataset_id}.{model.model_id}"
client.delete_model(full_model_id) # Make an API request.
print(f"Deleted: {full_model_id}")
# Are you sure you want to do this? This is to delete all tables and views
tables = client.list_tables(DATA_SET_ID) # Make an API request.
for table in tables:
full_table_id = f"{table.dataset_id}.{table.table_id}"
client.delete_table(full_table_id) # Make an API request.
print(f"Deleted: {full_table_id}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up your Google Cloud Platform project
Step2: Import libraries and define constants
Step3: Data exploration and preparation
Step4: Build a final view to use as training data for clustering
Step5: Create our initial model
Step6: Work towards creating a better model
Step7: Select optimal k
Step8: The code below assumes we've used the naming convention originally created in this notebook, and the k value occurs after the 2nd underscore. If you've changed the model_prefix_name variable, then this code might break.
Step9: Note - when you run this notebook, you will get different results, due to random cluster initialization. If you'd like to consistently return the same cluster for reach run, you may explicitly select your initialization through hyperparameter selection (https
Step10: In addition to the output above, I'll note a few insights we get from our clusters.
Step11: Clean up
|
11,244
|
<ASSISTANT_TASK:>
Python Code:
from xml.etree import ElementTree
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
print(tree)
from xml.etree import ElementTree
import pprint
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
for node in tree.iter():
print(node.tag)
from xml.etree import ElementTree
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
for node in tree.iter('outline'):
name = node.attrib.get('text')
url = node.attrib.get('xmlUrl')
if name and url:
print(' %s' % name)
print(' %s' % url)
else:
print(name)
from xml.etree import ElementTree
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
for node in tree.findall('.//outline'):
url = node.attrib.get('xmlUrl')
if url:
print(url)
from xml.etree import ElementTree
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
for node in tree.findall('.//outline/outline'):
url = node.attrib.get('xmlUrl')
print(url)
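The depth behaviour of the two search paths can be seen on a tiny inline document: `.//` matches at any depth, while an explicit `./a/a` path pins the match to a fixed level.

```python
from xml.etree.ElementTree import XML

# Three nested <a> elements under <root>.
tree = XML('<root><a><a><a/></a></a></root>')
print(len(tree.findall('.//a')))   # -> 3 (matches at any depth)
print(len(tree.findall('./a/a')))  # -> 1 (exactly two levels down)
```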
from xml.etree import ElementTree
with open('data.xml', 'rt') as f:
tree = ElementTree.parse(f)
node = tree.find('./with_attributes')
print(node.tag)
for name, value in sorted(node.attrib.items()):
print(' %-4s = "%s"' % (name, value))
from xml.etree import ElementTree
with open('data.xml', 'rt') as f:
tree = ElementTree.parse(f)
for path in ['./child', './child_with_tail']:
node = tree.find(path)
print(node.tag)
print(' child node text:', node.text)
print(' and tail text :', node.tail)
from xml.etree import ElementTree
with open('data.xml', 'rt') as f:
tree = ElementTree.parse(f)
node = tree.find('entity_expansion')
print(node.tag)
print(' in attribute:', node.attrib['attribute'])
print(' in text :', node.text.strip())
from xml.etree.ElementTree import iterparse
depth = 0
prefix_width = 8
prefix_dots = '.' * prefix_width
line_template = ''.join([
'{prefix:<0.{prefix_len}}',
'{event:<8}',
'{suffix:<{suffix_len}} ',
'{node.tag:<12} ',
'{node_id}',
])
EVENT_NAMES = ['start', 'end', 'start-ns', 'end-ns']
for (event, node) in iterparse('podcasts.opml', EVENT_NAMES):
if event == 'end':
depth -= 1
prefix_len = depth * 2
print(line_template.format(
prefix=prefix_dots,
prefix_len=prefix_len,
suffix='',
suffix_len=(prefix_width - prefix_len),
node=node,
node_id=id(node),
event=event,
))
if event == 'start':
depth += 1
import csv
from xml.etree.ElementTree import iterparse
import sys
writer = csv.writer(sys.stdout, quoting=csv.QUOTE_NONNUMERIC)
group_name = ''
parsing = iterparse('podcasts.opml', events=['start'])
for (event, node) in parsing:
if node.tag != 'outline':
# Ignore anything not part of the outline
continue
if not node.attrib.get('xmlUrl'):
# Remember the current group
group_name = node.attrib['text']
else:
# Output a podcast entry
writer.writerow(
(group_name, node.attrib['text'],
node.attrib['xmlUrl'],
node.attrib.get('htmlUrl', ''))
)
from xml.etree.ElementTree import XML
def show_node(node):
print(node.tag)
if node.text is not None and node.text.strip():
print(' text: "%s"' % node.text)
if node.tail is not None and node.tail.strip():
print(' tail: "%s"' % node.tail)
for name, value in sorted(node.attrib.items()):
print(' %-4s = "%s"' % (name, value))
for child in node:
show_node(child)
parsed = XML('''
<root>
<group>
<child id="a">This is child "a".</child>
<child id="b">This is child "b".</child>
</group>
<group>
<child id="c">This is child "c".</child>
</group>
</root>
''')
print('parsed =', parsed)
for elem in parsed:
show_node(elem)
from xml.etree.ElementTree import XMLID
tree, id_map = XMLID('''
<root>
<group>
<child id="a">This is child "a".</child>
<child id="b">This is child "b".</child>
</group>
<group>
<child id="c">This is child "c".</child>
</group>
</root>
''')
for key, value in sorted(id_map.items()):
print('%s = %s' % (key, value))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Traversing the parsed tree
Step2: To print only the names and feed URLs for the podcasts, leaving out the data in the header section, iterate over only the outline nodes and print the text and xmlUrl attributes by looking up the values in the attrib dictionary
Step3: Finding Nodes in a Documents
Step4: It is possible to take advantage of the fact that the outline nodes are only nested two levels deep. Changing the search path to .//outline/outline means the loop will process only the second level of outline nodes.
Step5: Parsed Node Attributes
Step6: The text content of the nodes is available, along with the tail text, which comes after the end of a close tag.
Step7: XML entity references embedded in the document are converted to the appropriate characters before values are returned.
Step8: Watching Events While Parsing
Step9: The event style of processing is more natural for some operations, such as converting XML input to some other format. This technique can be used to convert the list of podcasts from the earlier examples from an XML file to a CSV file, so they can be loaded into a spreadsheet or database application.
Step10: Parsing Strings
Step11: For structured XML that uses the id attribute to identify unique nodes of interest, XMLID() is a convenient way to access the parse results.
|
11,245
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import IFrame
IFrame(
'https://www.sunfrog.com/Geek-Tech/First-solve-the-problem-Then-write-the-code.html',
width=800,
height=350,
)
from IPython.display import IFrame
IFrame(
'https://docs.mongodb.com/manual/reference/geojson/',
width=800,
height=350,
)
# a module for importing values so I do not have to expose them in this Jupyter notebook
from meta import dockerized_mongo_path
# the '!' preceding the command allows me to access the shell from the Jupyter notebook
# in which I am writing this blog post
# ./expect-up-daemon calls a /usr/bin/expect script
# in the $dockerized_mongo_path to bring up the dockerized MongoDB
!cd $dockerized_mongo_path && ./expect-up-daemon
from pymongo import MongoClient
client = MongoClient()
db = client.app
collection_names = sorted(db.collection_names())
print(collection_names)
collections = accidents, citations = [db.get_collection(collection)
for collection
in collection_names]
info = [{collection_name: format(collection.count(), ',')}
for collection_name, collection
in zip(collection_names, collections)]
print("document counts")
for item in info:
print(item)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Tasks
Step3: Verify that the database is running and responding to the pymongo driver.
|
11,246
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import os
import sys
sys.path.append(os.path.join(os.getcwd(), "src"))
import util.io as mio
import util.plotting as mplot
from model.conversationDataframe import ConversationDataframe
from stats.wordsCountStats import WordsCountStats
from stats.iConvStats import IConvStats
from util import statsUtil
%matplotlib notebook
sns.set_context("paper")
%load_ext autoreload
%autoreload 2
# filepath to your conversation file. You can use my conversations examples for testing
FILEPATH = "src/resources/unittest/test_plotting.txt"
# load conversation
conv = ConversationDataframe(FILEPATH)
conv.loadMessages()
msgs = conv.messages
msgs.head()
# load sender stats
sender_stats = conv.stats.generateStats(IConvStats.STATS_NAME_WORDCOUNT, ngram_range=(1,1))
sender_word_count = sender_stats.wordsCount
sender_word_count.head(3)
total_word_count = sender_stats.getWordsCount()
total_word_count.sort_values(ascending=False)[:5]
sender_stats.getWordsCount(sender='s1').sort_values(ascending=False)[:5]
unbalances_word_count = (sender_word_count.loc['s1']-sender_word_count.loc['s2']).abs()
unbalances_word_count.sort_values(ascending=False)[:5]
sender_stats.getWordsUsedJustBy('s2', 's1').head(5)
# load month stats
month_stats = conv.stats.generateStats(IConvStats.STATS_NAME_WORDCOUNT, groupByColumns=['year', 'month'])
month_word_count = month_stats.wordsCount
month_word_count.head()
#filepath = conv.statsFolder + '\\' + 'wordCount.txt'
#mio.printDataFrameToFile(stats, filepath)
tot_month_word_count = month_stats.getWordsCount(['17','your','youth'])
tot_month_word_count.head()
mplot.plotWordsCount(month_stats, ['17','your','youth'])
mplot.plotWordsCount(month_stats, ['17','your','youth'], sender='s1')
# get data to plot (target words)
target_words = ['your','youth']
stats_to_plot = statsUtil.transformStats(month_word_count[target_words], 'word', 'val')
mplot._genericFactorPlot(stats_to_plot, 'month', 'val', {}, "Word count", "count", col='year', row='word')
# load hours stats
hour_stats = conv.stats.generateStats(IConvStats.STATS_NAME_WORDCOUNT, groupByColumns=['hour'])
hour_word_count = hour_stats.wordsCount
hour_word_count.head(3)
# plot total for target words
mplot.plotWordsCount(hour_stats, ['17','your','youth'])
# get data to plot (target words)
target_words = ['17','your','youth']
stats_to_plot = statsUtil.transformStats(hour_word_count[target_words], 'word', 'val')
mplot._genericFactorPlot(stats_to_plot, 'hour', 'val', {}, "", "", row='word')
# load month stats
month_stats = conv.stats.generateStats(IConvStats.STATS_NAME_WORDCOUNT, groupByColumns=['year', 'month'])
month_word_count = month_stats.wordsCount
total_month_word_trend = month_stats._computeWordsTrend(month_word_count)
total_month_word_trend.head()
change_threshold = 20
total_month_word_trend[(total_month_word_trend>change_threshold)|(total_month_word_trend<(-change_threshold))]\
.dropna(axis=1, how='all').dropna(axis=0, how='all')
total_month_word_trend.apply(lambda x:np.std(x), axis=0).sort_values(ascending=False)[:5]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Word Count by Sender
Step2: Top N Words
Step3: Top Words by Sender
Step4: Top Unbalanced Words
Step5: Words Used Just By
Step6: Word Count by Month
Step7: Total
Step8: By Sender
Step9: Word Count by Hour
Step10: Total
Step11: By Sender
Step12: Word Trend [WIP]
Step13: Total
Step14: Words with highest variability
|
11,247
|
<ASSISTANT_TASK:>
Python Code:
import bitstring
from_hex = bitstring.BitArray('0x0001b3')
from_bin = bitstring.BitArray('0b001100')
from_oct = bitstring.BitArray('0o34100')
from_int = bitstring.BitArray(uint=45, length=8)
from_int_bigendian = bitstring.BitArray(intbe=-32768, length=16)
from_float = bitstring.BitArray(float=10.3, length=32) # floats must be 32 or 64 bits long
from_float_littleendian = bitstring.BitArray(floatle=-273.15, length=64) # floats must be 32 or 64 bits long
from_raw = bitstring.BitArray(bytes=b'\x00\x01\x02\xff', length=28)
from_bool = bitstring.BitArray(True)
from_hex, from_bin, from_oct, from_int, from_int_bigendian, from_float, from_float_littleendian, from_raw, from_bool
from_int.bin, from_int.bytes, from_int.uint, from_int.int, from_int.hex, from_float_littleendian.floatbe
len(from_bool), len(from_float)
from_int[-4:].bin, from_int[:4].int
from_hex + from_int
seven_bits = bitstring.BitArray(int=32, length=7)
len(seven_bits), seven_bits.bin
seven_bits.tobytes() # tobytes() ensures padding to byte-alignment
packed = bitstring.pack('bool, int:7, floatbe:32', True, -32, -273.15)
len(packed), packed.bin, packed.bytes
packed.unpack('bool, int:7, floatbe:32')
list(packed.find('0b0000')), list(packed.findall('0b0000'))
packed.replace('0b0000', '0b1111')
packed.bin
packed.startswith('0b111'), packed.endswith('0b111')
a = bitstring.BitArray('0b00001111')
b = bitstring.BitArray('0b01010101')
(a).bin, (~a).bin, (a<<2).bin, (a>>2).bin, (a*2).bin, (a&b).bin, (a|b).bin, (a^b).bin
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basics
Step2: Converting to a different format
Step3: Lengths, slicing and joining
Step4: Non-byte aligned binary data
Step5: Packing and unpacking
Step6: Finding and replacing
Step7: Bit-operations
|
11,248
|
<ASSISTANT_TASK:>
Python Code:
# Perform standard imports
import spacy
nlp = spacy.load('en_core_web_sm')
# Import the displaCy library
from spacy import displacy
doc = nlp(u'Over the last quarter Apple sold nearly 20 thousand iPods for a profit of $6 million. '
u'By contrast, Sony sold only 7 thousand Walkman music players.')
displacy.render(doc, style='ent', jupyter=True)
for sent in doc.sents:
displacy.render(nlp(sent.text), style='ent', jupyter=True)
doc2 = nlp(u'Over the last quarter Apple sold nearly 20 thousand iPods for a profit of $6 million. '
u'By contrast, my kids sold a lot of lemonade.')
for sent in doc2.sents:
displacy.render(nlp(sent.text), style='ent', jupyter=True)
for sent in doc2.sents:
docx = nlp(sent.text)
if docx.ents:
displacy.render(docx, style='ent', jupyter=True)
else:
print(docx.text)
options = {'ents': ['ORG', 'PRODUCT']}
displacy.render(doc, style='ent', jupyter=True, options=options)
colors = {'ORG': 'linear-gradient(90deg, #aa9cfc, #fc9ce7)', 'PRODUCT': 'radial-gradient(yellow, green)'}
options = {'ents': ['ORG', 'PRODUCT'], 'colors':colors}
displacy.render(doc, style='ent', jupyter=True, options=options)
displacy.serve(doc, style='ent', options=options)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Viewing Sentences Line by Line
Step2: NOTE
Step3: WORKAROUND
Step4: Viewing Specific Entities
Step5: Customizing Colors and Effects
Step6: For more on applying CSS background colors and gradients, visit https
|
11,249
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import scanpy.api as sc
sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
sc.settings.set_figure_params(dpi=70) # dots (pixels) per inch determine size of inline figures
sc.logging.print_versions()
adata = sc.read_10x_h5('./data/1M_neurons_neuron20k.h5')
adata.var_names_make_unique()
adata
sc.pp.recipe_zheng17(adata)
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.louvain(adata)
sc.tl.paga(adata)
sc.pl.paga_compare(adata, edges=True, threshold=0.05)
anno = pd.read_csv('/Users/alexwolf/Dropbox/1M/louvain.csv.gz', compression='gzip', header=None, index_col=0)
anno.columns = ['louvain_ref']
adata.obs['louvain_ref'] = anno.loc[adata.obs.index]['louvain_ref'].astype(str)
sc.pl.umap(adata, color=['louvain_ref'], legend_loc='on data')
adata.write('./write/subsampled.h5ad')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run standard preprocessing steps, see here.
Step2: Now compare this with the reference clustering of PAGA preprint, Suppl. Fig. 12, available from here.
|
11,250
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
# Data for plotting
deg = np.arange(0.0, 90.01, 0.01)
def deg2dist(deg): return 10.29 * np.cos(np.pi / 180 * deg)
dist = deg2dist(deg)
# Note that using plt.subplots below is equivalent to using
# fig = plt.figure and then ax = fig.add_subplot(111)
fig, ax = plt.subplots()
ax.plot(deg, dist)
ax.set(xlabel='degrees latitude', ylabel='Distance (m)',
       title='Distance between lines of longitude 1/3 arc second apart')
ax.grid()
fig.savefig("degrees_and_dist.png")
plt.show()
print("Notable Latitudes")
print("49th parallel (US-Canada border): ", deg2dist(49))
print("25.9 deg N (Brownsville, TX): ", deg2dist(25.9))
print("35.0 deg (LA): ", deg2dist(35.0))
print("40.7 deg (NYC): ", deg2dist(40.7))
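For reference, the 10.29 m constant in `deg2dist` is roughly the length of 1/3 arc second of longitude at the equator. A quick back-of-the-envelope check (40075 km is the commonly quoted equatorial circumference, so this is approximate):

```python
# Approximate derivation of the meters-per-1/3-arc-second constant.
circumference_m = 40075.0 * 1000   # equatorial circumference, ~40075 km
m_per_degree = circumference_m / 360
m_per_third_arcsec = m_per_degree / (3600 * 3)  # 3600 arcsec/deg, thirds
print(round(m_per_third_arcsec, 2))  # -> 10.31
```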
g = -9.81  # acceleration due to gravity, m/s^2
drag_c = .6  # drag coefficient of human body
cross_a = .68  # cross-sectional area of human body, m^2
mass = 80  # kg
frict_c = .03  # coefficient of friction
import math
def acceleration_due_to_wind(v):
return -v**2 * (1.225 * drag_c * cross_a) / (2 * mass)
def acceleration_due_to_slope_no_friction(theta):
return g * math.sin(theta)
def equivalent_slope(v):
return math.tan(math.asin(acceleration_due_to_wind(v) / g))
t = np.arange(0.0, 15.0, 0.01)
s = [equivalent_slope(v) for v in t]
plt.plot(t, s)
plt.xlabel('Speed (m/s)')
plt.ylabel('Equivalent Slope')
plt.title('Comparing deceleration due to wind to accelerations due to upward slopes')
plt.show()
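A quick numeric sanity check of the plot above, reusing the same constants: at 10 m/s, air drag should feel like a gentle uphill grade of about 3%.

```python
import math

# Same constants as defined above.
g, drag_c, cross_a, mass = -9.81, 0.6, 0.68, 80
v = 10  # m/s
# Deceleration from drag, then the uphill slope with the same effect.
a = -v**2 * (1.225 * drag_c * cross_a) / (2 * mass)
slope = math.tan(math.asin(a / g))
print(round(slope, 3))  # -> 0.032
```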
t = np.arange(0.0, 15.0, 0.01)
s = [equivalent_slope(v) for v in t]
t = t * 2.237
plt.plot(t, s)
plt.xlabel('Speed (mph)')
plt.ylabel('Equivalent Slope')
plt.title('Comparing deceleration due to wind to accelerations due to upward slopes')
plt.show()
t = np.arange(0.0, 15.0, 0.01)
s = [acceleration_due_to_wind(v)/(g) for v in t]
t = t * 2.237
plt.plot(t, s)
plt.xlabel('Speed (mph)')
plt.ylabel('Equivalent Friction coefficient')
plt.title('Comparing deceleration due to wind to accelerations due to friction coefficients on flat')
plt.show()
data = {
'north': 33.7874,
'west': -84.4203,
'south': 33.7677,
'east': -84.3812,
}
import wget
from scan_product_links import urls
import math
import os
import urllib.error  # needed for the HTTPError handler below
import zipfile       # needed to extract the downloaded archives
us_urls = urls("elevationproductslinks/13secondplots.csv")
mx_ca_urls = urls("elevationproductslinks/1secondplots.csv")
def download_coords(data, country='United States'):
# TODO check if request is gucci
# TODO Remove the following block of code in production
if country == 'United States':
path_suffix = '_13'
useful_urls = us_urls
else:
path_suffix = '_1'
useful_urls = mx_ca_urls
for lat in range(
math.ceil(float(data['south'])), math.ceil(float(data['north'])) + 1
# Eg N 87.7 to N 86.
):
for lng in range(
math.floor(float(data['west'])), math.floor(float(data['east'])) + 1
):
fname = ('grd' + ('n' if lat>0 else 's')
+ str(abs(math.ceil(lat))).zfill(2)
+ ('e' if lng>=0 else 'w')
+ str(abs(math.floor(lng))).zfill(3))
database_path = ('elevationdata/'
+ fname
+ path_suffix + '/w001001.adf'
)
if not os.path.exists(database_path):
try:
print("downloading " + useful_urls[(lat, lng)] + "\n")
wget.download(useful_urls[(lat, lng)])
print("\n")
file_name = useful_urls[(lat, lng)].split('/')[-1]
archive = zipfile.ZipFile(file_name)
for file in archive.namelist():
if file.startswith("grd" + fname[3:] + path_suffix + "/"):
archive.extract(file, "elevationdata")
os.remove(file_name)
except (urllib.error.HTTPError):
print("Could not download data for", (lat, lng))
except KeyError:
print("Thing not found in urls:", (lat, lng))
download_coords(data)
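The grid-file naming convention used in `download_coords` can be isolated into a small helper for illustration (`tile_name` is a hypothetical function, not part of the codebase):

```python
import math

def tile_name(lat, lng):
    # 'grd' + hemisphere letter + zero-padded ceil(lat)
    # + 'e'/'w' + zero-padded floor(|lng|), matching the code above.
    return ('grd' + ('n' if lat > 0 else 's')
            + str(abs(math.ceil(lat))).zfill(2)
            + ('e' if lng >= 0 else 'w')
            + str(abs(math.floor(lng))).zfill(3))

print(tile_name(33.78, -84.42))  # -> grdn34w085
```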
mapsize = (
data['west'],
data['south'],
data['east'],
data['north']
)
import subprocess
import math
import osmapi
import os.path
import pickle
def get_map_data(mapsize):
mapfilepath = 'maps/map'+str(mapsize)+'.dat'
# TODO Allow spanning countries
# (west, south, east, north), string
    api_link = osmapi.OsmApi(#username='<username>',
                             #password='<password>'#,
#api="api06.dev.openstreetmap.org"
)
try:
if os.path.exists(mapfilepath):
print('loading local map...')
with open(mapfilepath, 'rb') as f:
map_data = pickle.load(f)
else:
print('requesting map...')
map_data = api_link.Map(mapsize[0], mapsize[1],
mapsize[2], mapsize[3])
with open(mapfilepath, 'wb') as f:
pickle.dump(map_data, f) # TODO delete this entire try block tbh
except IOError as e:
print("Couldn't write map data!", e.errorno, e.strerror)
# except Error as e: # osmapi.OsmApi.MaximumRetryLimitReachedError: #TODO: handle errors
# print(e.errorno, e.strerror)
# print("Could not get map data!")
# return False, [], [], [], [], [], []
return map_data
get_map_data(mapsize)
map_data = get_map_data(mapsize)
# Todo add to node ititialziation
class Node:
def __init__(self, node_id, lat, lng, is_stoplight, adj):
self.node_id = node_id
self.lat = lat
self.lng = lng
self.is_stoplight = is_stoplight
self.adj = adj
self.edge_coords = None
self.edge_elevations = []
self.edge_work = []
def __lt__(self, other):
return False
def __gt__(self, other):
return False
def create_adj_node_ptrs(self):
self.adj_node_ptrs = list(nodes[adj_node_id] for adj_node_id in self.adj)
# TODO support bridges
data['allow_bridges'] = False
data['banned_highway_types'] = [
'motorway', 'trunk', 'service', 'steps', 'footway', 'pedestrian', 'sidewalk', 'path'
]
# Also may include primary, secondary
# Highway type descriptions https://wiki.openstreetmap.org/wiki/Key:highway
# motorway: interstate
# trunk: mostly grade-separated state/us highways, always with medians
# primary: major roads
# footway: exclusively pedestrians
# sidewalk: always on the side of a road
# pedestrian: pedestrian oriented path, but not sidewalk
# steps: stairsteps
# path: trail
# bridleway: horse trail
from collections import defaultdict
def map_to_graph(map_data):
banned_types = set(data['banned_highway_types'])
graph = defaultdict(set)
for entry in map_data:
if (entry['type'] == 'way'
and 'data' in entry.keys()
and 'tag' in entry['data'].keys()
and 'highway' in entry['data']['tag'].keys()
):
highway_type = entry['data']['tag']['highway']
else: # Is not labelled highway
continue
if (True
and highway_type not in banned_types
and (data['allow_bridges'] or ('bridge' not in entry['data']['tag']))
):
road_nodes = entry['data']['nd']
for i in range(len(road_nodes) - 1):
graph[road_nodes[i]].add(road_nodes[i+1])
graph[road_nodes[i+1]].add(road_nodes[i])
return graph
adj_list = map_to_graph(map_data)
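On a toy 'way', the adjacency build links consecutive node ids in both directions:

```python
from collections import defaultdict

graph = defaultdict(set)
road_nodes = [1, 2, 3]  # one toy 'way' with three consecutive nodes
for i in range(len(road_nodes) - 1):
    graph[road_nodes[i]].add(road_nodes[i + 1])
    graph[road_nodes[i + 1]].add(road_nodes[i])
print(sorted(graph[2]))  # -> [1, 3]
```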
def get_node_entries(target_nodes, map_data):
for item in map_data:
item_id = item["data"]["id"]
if item_id in target_nodes:
yield (item_id, item)
node_lat_lng = []
datapts_per_degree = 10800
def create_node_list_with_elevations(adj_list, map_data):
nodes = dict()
node_heights, node_latlons = dict(), dict()
stoplights = set()
for node_id, node_info in get_node_entries(adj_list.keys(), map_data):
nodes[node_id] = Node(
node_id,
lat=float(node_info['data']['lat']),
lng=float(node_info['data']['lon']),
is_stoplight =
('tag' in node_info['data'] and 'highway' in node_info['data']['tag']
and node_info['data']['tag']['highway'] == 'traffic_signals'
),
adj = list(adj_list[node_id])
)
return nodes
nodes = create_node_list_with_elevations(adj_list, map_data)
for node_id, node in nodes.items():
node.create_adj_node_ptrs()
import numpy as np
datapts_per_degree = 10800
def add_edges_return_queries(nodes):
large_query = set()
for node_id, node in nodes.items():
edge_coords = []
for adj_node in node.adj_node_ptrs:
# Degrees aren't squares, so this isn't super valid, but it's not important.
dist_in_degs = np.sqrt((node.lat - adj_node.lat)**2 + (node.lng - adj_node.lng)**2)
n_steps = max(int(dist_in_degs * datapts_per_degree), 2)
lat_steps = np.linspace(node.lat, adj_node.lat, num=n_steps, endpoint=True)
lng_steps = np.linspace(node.lng, adj_node.lng, num=n_steps, endpoint=True)
coords = list(zip(lat_steps, lng_steps))
edge_coords.append(coords)
large_query.update(coords)
node.edge_coords = edge_coords
return large_query
large_query = add_edges_return_queries(nodes)
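The edge-sampling logic in miniature: points are interpolated along an edge at roughly one per elevation-grid cell (10800 samples per degree matches the 1/3 arc-second grid). Coordinates here are illustrative.

```python
import numpy as np

datapts_per_degree = 10800  # one sample per 1/3 arc second
lat1, lng1 = 33.770, -84.400
lat2, lng2 = 33.771, -84.400
dist_in_degs = np.sqrt((lat1 - lat2)**2 + (lng1 - lng2)**2)
n_steps = max(int(dist_in_degs * datapts_per_degree), 2)
coords = list(zip(np.linspace(lat1, lat2, num=n_steps, endpoint=True),
                  np.linspace(lng1, lng2, num=n_steps, endpoint=True)))
print(n_steps)  # -> 10
```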
# Make sure each node's edges start with the same coordinates
def test_edge_coords_start(nodes):
for node_id, node in nodes.items():
first = (node.lat, node.lng)
for edge in node.edge_coords:
assert edge[0] == first
test_edge_coords_start(nodes)
# Make sure each node's edges end with the same coordinates
# as its adjacent node's start with
def test_edge_coords_end(nodes):
for node_id, node in nodes.items():
assert len(node.edge_coords) == len(node.adj) == len(node.adj_node_ptrs)
for i in range(len(node.adj_node_ptrs)):
assert (node.adj_node_ptrs[i].lat, node.adj_node_ptrs[i].lng) == node.edge_coords[i][-1]
test_edge_coords_end(nodes)
import math
import subprocess
from collections import defaultdict
def build_query_text(large_query, country="United States"):
queries = defaultdict(str)
latlng_order = defaultdict(list)
for lat_lng in large_query:
lat, lng = lat_lng
fname = ('grd' + ('n' if lat>0 else 's')
+ str(abs(math.ceil(lat))).zfill(2)
+ ('e' if lng>=0 else 'w') # lng = 0 block is all east I guess
+ str(abs(math.floor(lng))).zfill(3)
)
s = str(lng) + ' ' + str(lat) + '\n'
queries[fname] += s
latlng_order[fname].append(lat_lng)
return queries, latlng_order
def query_elevations(queries, latlng_order, country="United States"):
points = []
elevations = []
for fname in queries.keys():
if country == 'United States': # TODO deal with AK
database_path = 'elevationdata/' + fname + '_13/w001001.adf'
        elif country in ('Mexico', 'Canada') or country is None:
            # TODO deal with country == None which would be sorta weird
            database_path = 'elevationdata/' + fname + '_1/w001001.adf'
proc = subprocess.Popen(
['gdallocationinfo', database_path, '-valonly', '-geoloc'],
stdin=subprocess.PIPE, stdout=subprocess.PIPE,
universal_newlines=True
)
output, err = proc.communicate(queries[fname])
elevations += [float(s) for s in output.splitlines()]
points += latlng_order[fname]
if len(points) != len(elevations):
raise Exception("Error querying points: " + str(len(points)) + " points, " + str(len(elevations)) + " elevations")
ret = dict()
for i in range(len(points)):
ret[points[i]] = elevations[i]
return ret
queries, latlng_order = build_query_text(large_query, country="United States")
elevations = query_elevations(queries, latlng_order, country="United States")
def set_node_elevations(nodes, elevations):
for node_id, node in nodes.items():
for edge in node.edge_coords:
elevation_list = []
for coord_pair in edge:
elevation_list.append(elevations[coord_pair])
node.edge_elevations.append(elevation_list)
set_node_elevations(nodes, elevations)
deviations = []
high_deviations = []
low_deviations = []
for node_id, node in nodes.items():
for i, edge in enumerate(node.edge_elevations):
bottom_end = min(edge[0], edge[-1])
top_end = max(edge[0], edge[-1])
top = max(edge)
bottom = min(edge)
deviations.append(max(top-top_end, bottom_end-bottom))
high_deviations.append(top-top_end)
low_deviations.append(bottom_end-bottom)
print(0, len(deviations))
print(.01, sum([i >= .01 for i in deviations]))
print(.1, sum([i >= .1 for i in deviations]))
print(.3, sum([i >= .3 for i in deviations]))
print(1, sum([i >= 1 for i in deviations]))
print(3, sum([i >= 3 for i in deviations]))
print(5, sum([i >= 5 for i in deviations]))
edges_to_scan = 0
for node_id, node in nodes.items():
    for i, edge in enumerate(node.edge_elevations):
        bottom_end = min(edge[0], edge[-1])
        top_end = max(edge[0], edge[-1])
        top = max(edge)
        bottom = min(edge)
        edges_to_scan += 1  # count each directed edge once, not len(edges) times
print(edges_to_scan/4) # over 4 rather than 2 since we only have to look at "opposite" nodes from the ones we split
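The division by 4 above folds together two factors: the adjacency list is directed (every physical road appears once per direction), and only "opposite" endpoints of split edges need scanning. A toy sketch of the direction double-count, using a hypothetical two-node graph:

```python
# Toy directed graph: one physical road between A and B is stored as two
# directed edges (A->B and B->A), so raw edge counts are double the road count.
adj = {'A': ['B'], 'B': ['A']}

directed_edges = sum(len(neighbors) for neighbors in adj.values())
physical_roads = directed_edges // 2

assert directed_edges == 2
assert physical_roads == 1
```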
data['mass'] = 80 #kg
data['init_speed'] = 1.0
data['use_stoplights'] = True
def prep_graph(nodes):
init_energy = .5 *data['mass'] * data['init_speed'] ** 2
for node_id, node in nodes.items():
node.energy = init_energy
node.speed = data['init_speed']
node.prev_node = None
node.next_nodes = set()
node.path_start = node
# TODO add to some other method
node.elevation = node.edge_elevations[0][0]
prep_graph(nodes)
sorted_nodes = sorted(nodes.values(), key=lambda n: -n.elevation)
g = -9.81 # acceleration due to gravity, m/s^2
data['drag_c'] = .6 #drag coefficient of human body
data['cross_a'] = .68 #Cross-sectional area of human body
data['mass'] = 80 #kg
data['frict_c'] = .03 #Coefficient of friction
import math
def prep_data_constants(data):
data['c1'] = (1.225 * data['drag_c'] * data['cross_a']) / (2 * data['mass'])
data['c2'] = g * data['frict_c']
def new_velocity(v0, dh, dist): # for small changes in V; dist is horizontal dist
if v0 == 0:
return 0
theta = math.atan2(dh, dist)
# Original implementation
# a = ((g * math.sin(theta))
# - (1.225 * drag_c * cross_a * v0 ** 2) / (2 * mass)
# + (g * frict_c * math.cos(theta))
# )
# Prematurely optimized (tm) implementation
a = ((g * math.sin(theta))
- v0 ** 2 * data['c1']
+ math.cos(theta) * data['c2']
)
# Total Acceleration = grav, air resistance, rolling friction resistance
# Assumes final velocity causes about the amount of air resistance as
    # initial velocity
vel_sqr = 2 * a * math.sqrt(dist**2 + dh**2) + v0 ** 2
if vel_sqr > 0:
return math.sqrt(vel_sqr)
else:
return 0
prep_data_constants(data)
new_velocity(1.0, -2, 30)
new_velocity(1.0, -1, 30)
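The update inside `new_velocity` is the constant-acceleration identity v1² = v0² + 2·a·s applied across the whole edge, with a coming from the gravity/drag/friction balance. The sketch below restates that single step standalone; the constants are hypothetical restatements of the notebook's assumed values (mass = 80 kg, drag_c = .6, cross_a = .68, frict_c = .03), not canonical:

```python
import math

# Standalone restatement of the single-step update inside new_velocity.
g = -9.81                                # acceleration due to gravity, m/s^2
c1 = (1.225 * 0.6 * 0.68) / (2 * 80)     # lumped air-drag coefficient
c2 = g * 0.03                            # lumped rolling-friction term

def step_velocity(v0, dh, dist):
    """One application of v1^2 = v0^2 + 2*a*s along the slope."""
    theta = math.atan2(dh, dist)
    a = g * math.sin(theta) - v0 ** 2 * c1 + math.cos(theta) * c2
    vel_sqr = 2 * a * math.sqrt(dist ** 2 + dh ** 2) + v0 ** 2
    return math.sqrt(vel_sqr) if vel_sqr > 0 else 0.0

# Rolling downhill speeds the rider up; a gentle uphill from 1 m/s stalls out.
assert step_velocity(1.0, -2, 30) > 1.0
assert step_velocity(1.0, 2, 30) == 0.0
```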
def new_velocity(v0, dh, dist, integrations=1): # for small changes in V; dist is horizontal dist
if v0 == 0:
return 0
theta = math.atan2(dh, dist)
# Original implementation
# a = ((g * math.sin(theta))
# - (1.225 * drag_c * cross_a * v0 ** 2) / (2 * mass)
# + (g * frict_c * math.cos(theta))
# )
# Prematurely optimized (tm) implementation
v = v0
dist_per_i = dist/integrations
dh_per_i = dh/integrations
for i in range(integrations):
a = ((g * math.sin(theta))
- v ** 2 * data['c1']
+ math.cos(theta) * data['c2']
)
# Total Acceleration = grav, air resistance, rolling friction resistance
# Assumes final velocity causes about the amount of air resistance as
        # initial velocity
vel_sqr = 2 * a * math.sqrt(dist_per_i**2 + dh_per_i**2) + v ** 2
if vel_sqr > 0:
v = math.sqrt(vel_sqr)
else:
return 0
return v
def new_velocity_no_friction(v0, dh, dist): # for small changes in V; dist is horizontal dist
    # NOTE: despite the name, this variant drops the air-drag term (c1) but
    # still applies the rolling-friction term (c2).
    integrations=1
if v0 == 0:
return 0
theta = math.atan2(dh, dist)
# Original implementation
# a = ((g * math.sin(theta))
# - (1.225 * drag_c * cross_a * v0 ** 2) / (2 * mass)
# + (g * frict_c * math.cos(theta))
# )
# Prematurely optimized (tm) implementation
v = v0
dist_per_i = dist/integrations
dh_per_i = dh/integrations
for i in range(integrations):
a = ((g * math.sin(theta))
+ math.cos(theta) * data['c2']
)
# Total Acceleration = grav, air resistance, rolling friction resistance
# Assumes final velocity causes about the amount of air resistance as
        # initial velocity
vel_sqr = 2 * a * math.sqrt(dist_per_i**2 + dh_per_i**2) + v ** 2
if vel_sqr > 0:
v = math.sqrt(vel_sqr)
else:
return 0
return v
new_velocity_no_friction(1.0, -1, 30)
new_velocity(1.0, -1, 30, 1)
new_velocity(1.0, -1, 30, 10)
new_velocity(1.0, -1, 30, 100)
new_velocity(1.0, -1, 30, 1000)
new_velocity(1.0, -1, 30, 10000)
new_velocity_no_friction(1.0, -2, 30)
new_velocity(1.0, -2, 30, 1)
new_velocity(1.0, -2, 30, 10)
new_velocity(1.0, -2, 30, 100)
new_velocity(1.0, -2, 30, 1000)
new_velocity(1.0, -2, 30, 10000)
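The ladder of calls above is probing convergence as the edge gets subdivided more finely. The same experiment can be compressed into a standalone sketch (the constants again restate the notebook's assumed values, so treat them as assumptions):

```python
import math

g = -9.81
c1 = (1.225 * 0.6 * 0.68) / (2 * 80)
c2 = g * 0.03

def velocity_n_steps(v0, dh, dist, n):
    """new_velocity's loop, rewritten standalone: n sub-steps along the edge."""
    v = v0
    theta = math.atan2(dh, dist)  # slope angle is constant along the edge
    for _ in range(n):
        a = g * math.sin(theta) - v ** 2 * c1 + math.cos(theta) * c2
        vel_sqr = 2 * a * math.sqrt((dist / n) ** 2 + (dh / n) ** 2) + v ** 2
        if vel_sqr <= 0:
            return 0.0
        v = math.sqrt(vel_sqr)
    return v

coarse = velocity_n_steps(1.0, -2, 30, 1)
fine = velocity_n_steps(1.0, -2, 30, 1000)
finer = velocity_n_steps(1.0, -2, 30, 10000)
# Successive refinements agree ever more closely with each other.
assert abs(finer - fine) < abs(fine - coarse)
```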
def latlong_dist(lat1_raw, lon1_raw, lat2_raw, lon2_raw):
lat1 = math.radians(float(lat1_raw))
lon1 = math.radians(float(lon1_raw))
lat2 = math.radians(float(lat2_raw))
lon2 = math.radians(float(lon2_raw))
# approximate radius of earth in m
R = 6373000.0
dlon = lon2 - lon1
dlat = lat2 - lat1
a = (math.sin(dlat / 2)**2 + math.cos(lat1) * math.cos(lat2)
* math.sin(dlon / 2)**2)
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
distance = R * c
return distance
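A quick sanity check on the haversine formula above: along a meridian (dlon = 0) it reduces exactly to R·Δlat, so one degree of latitude should land near 111.2 km. This re-derives the same arithmetic inline so the check stands alone:

```python
import math

# Along a meridian (dlon = 0) the haversine formula reduces exactly to
# R * dlat, so one degree of latitude should come out near 111.2 km.
R = 6373000.0  # same approximate Earth radius as latlong_dist
lat1, lon1, lat2, lon2 = map(math.radians, (0.0, 0.0, 1.0, 0.0))
a = (math.sin((lat2 - lat1) / 2) ** 2
     + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
distance = R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

assert abs(distance - R * math.radians(1.0)) < 1.0   # ~111,230 m either way
```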
data['approx_frict_c'] = .03
def calculate_work(dist, dh): # work done by gravity and rolling friction
theta = math.atan2(dh, dist)
a = ((g * math.sin(theta))
+ math.cos(theta) * g * data['approx_frict_c']
)
real_dist = math.sqrt(dist**2 + dh**2)
return real_dist * a * data['mass']
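On flat ground (dh = 0) the expression in `calculate_work` collapses to W = m·g·μ·d, i.e. pure rolling-friction work (negative, since g is negative here). A standalone check using the notebook's assumed mass and friction coefficient:

```python
import math

# With dh = 0 the acceleration term is just g * frict_c, so the edge work
# collapses to m * g * mu * d (negative: friction always costs energy here).
g = -9.81
mass, mu, dist = 80, 0.03, 100.0   # notebook's assumed mass and friction

theta = math.atan2(0.0, dist)               # flat ground: theta == 0
a = g * math.sin(theta) + math.cos(theta) * g * mu
work = dist * a * mass

assert abs(work - mass * g * mu * dist) < 1e-6
assert work < 0
```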
def find_work_all_edges(sorted_nodes):
for node in sorted_nodes:
node.edge_work = []
for i in range(len(node.adj)):
edge_coords = node.edge_coords[i]
edge_elevations = node.edge_elevations[i]
work = 0
horiz_dist = latlong_dist(edge_coords[0][0], edge_coords[0][1],
edge_coords[1][0], edge_coords[1][1])
for j in range(len(edge_coords) - 1):
dh = edge_elevations[j+1] - edge_elevations[j]
# horiz dist is actually same for each part of an edge
# horiz_dist = latlong_dist(edge_coords[j][0], edge_coords[j][1], edge_coords[j+1][0], edge_coords[j+1][1])
work += calculate_work(horiz_dist, dh)
node.edge_work.append(work)
find_work_all_edges(sorted_nodes)
import numpy as np
import pylab as pl
from matplotlib import collections as mc
# lines = [[(0, 1), (1, 1)], [(2, 3), (3, 3)], [(1, 2), (1, 3)]]
# c = np.array([(1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1)])
def graph_paths(sorted_nodes):
lines = []
colors = []
max_lat = -90
min_lat = 90
max_lng = -180
min_lng = 180
done = set()
for node in sorted_nodes:
max_lat = max(node.lat, max_lat)
min_lat = min(node.lat, min_lat)
max_lng = max(node.lng, max_lng)
min_lng = min(node.lng, min_lng)
for adj in node.next_nodes:
if ((adj.lng,adj.lat), (node.lng, node.lat)) not in done:
lines.append([(node.lng, node.lat),(adj.lng,adj.lat)])
done.add(((node.lng, node.lat),(adj.lng,adj.lat)))
colors = []
for line in lines:
beginning, end = line
x1, y1 = beginning
x2, y2 = end
angle = math.atan2(x2-x1, y2-y1)
colors.append((math.cos(angle) * .5 + .5, math.sin(angle) * .5 + .5, 0, 1))
lc = mc.LineCollection(lines, colors=colors, linewidths=1)
fig, ax = pl.subplots(figsize=(16,10))
ax.add_collection(lc)
# ax.autoscale()
# ax.margins(0.001)
ax.set_xlim(min_lng, max_lng)
ax.set_ylim(min_lat, max_lat)
    pl.show()
import numpy as np
import pylab as pl
from matplotlib import collections as mc
# lines = [[(0, 1), (1, 1)], [(2, 3), (3, 3)], [(1, 2), (1, 3)]]
# c = np.array([(1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1)])
def generate_compass():
lines = []
for x in [-1, -.5, 0, .5, 1]:
for y in [-1, -.5, 0, .5, 1]:
if (x, y) != (0,0):
lines.append([(0,0), ((x,y))])
colors = []
for line in lines:
beginning, end = line
x1, y1 = beginning
x2, y2 = end
angle = math.atan2(x2-x1, y2-y1)
colors.append((math.cos(angle) * .5 + .5, math.sin(angle) * .5 + .5, 0, 1))
lc = mc.LineCollection(lines, colors=colors, linewidths=1)
fig, ax = pl.subplots()
ax.add_collection(lc)
ax.autoscale()
# ax.margins(0.001)
    pl.show()
generate_compass()
# Adding this for reference
# class Node:
# def __init__(self, node_id, lat, lng, is_stoplight, adj):
# self.node_id = node_id
# self.lat = lat
# self.lng = lng
# self.is_stoplight = is_stoplight
# self.adj = adj
# self.edge_coords = None
# self.edge_elevations = []
# def create_adj_node_ptrs(self):
# self.adj_node_ptrs = list(nodes[adj_node_id] for adj_node_id in self.adj)
# node.speed = data['init_speed']
# node.prev_node = None
# node.path_start = node
# # TODO add to some other method
# node.elevation = node.edge_elevations[0][0]
# g = -9.81 # acceleration due to gravity, m/s^2
# data['drag_c'] = .6 #drag coefficient of human body
# data['cross_a'] = .68 #Cross-sectional area of human body
# data['mass'] = 80 #kg
# data['frict_c'] = .03 #Coefficient of friction
# def prep_data_constants(data):
# data['c1'] = (1.225 * data['drag_c'] * data['cross_a']) / (2 * data['mass'])
# data['c2'] = g * data['frict_c']
from heapq import heappush, heappop
def algo_1(sorted_nodes):
edges_explored = 0
for top_node in sorted_nodes:
        if top_node.prev_node is not None: # Already part of a path
continue
        need_to_explore = set([top_node])
        # node_id acts as a tie-breaker so the heap never falls back to
        # comparing Node objects when two elevations are equal
        heap = [(-top_node.elevation, top_node.node_id, top_node)]
        while need_to_explore:
            _, _, node = heappop(heap)
            if node not in need_to_explore:
                continue
            need_to_explore.remove(node)
            node_energy = node.energy
            for i in range(len(node.adj)):
                adj = node.adj_node_ptrs[i]
                edge_work = node.edge_work[i]
                # For air resistance version, check first if
                # (node.speed > adj.speed or node.elevation > adj.elevation)
                # then ride down nodes
                if edge_work + node_energy > adj.energy:
                    adj.energy = edge_work + node_energy
                    if adj.prev_node is not None:
                        prev = adj.prev_node
                        next_nodes = prev.next_nodes
                        next_nodes.remove(adj)
                    adj.prev_node = node
                    node.next_nodes.add(adj)
                    adj.path_start = top_node
                    need_to_explore.add(adj)
                    heappush(heap, (-adj.elevation, adj.node_id, adj))
                edges_explored += 1
return edges_explored
prep_graph(nodes) # reset graph
algo_1(sorted_nodes)
graph_paths(sorted_nodes)
def test_nodes(sorted_nodes):
for node in sorted_nodes:
for adj in node.adj_node_ptrs:
if adj in node.next_nodes:
assert adj.prev_node == node
else:
assert adj.prev_node != node
test_nodes(sorted_nodes)
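A side note on the heap pattern used in `algo_1`: `heapq` compares tuple entries left to right, so when two priorities tie it falls back to comparing the next element. If that next element is an object with no `__lt__` (like a plain graph node), the comparison raises `TypeError`. Including a unique, comparable tie-breaker field avoids this entirely — a minimal illustration with a hypothetical `Thing` class:

```python
import heapq
import itertools

class Thing:
    pass  # deliberately no __lt__, like a plain graph-node object

counter = itertools.count()   # unique, monotonically increasing tie-breaker
heap = []
for priority in (1.0, 1.0, 0.5):
    # (priority, tie-breaker, payload): ties on priority fall through to the
    # integer tie-breaker, so Thing instances are never compared directly
    heapq.heappush(heap, (priority, next(counter), Thing()))

priority, _, payload = heapq.heappop(heap)
assert priority == 0.5           # lowest priority pops first
assert isinstance(payload, Thing)
```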
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So how should we query points?
Step2: Solving for theta and finding the equivalent slope of the angle...
Step3: It's actually pretty bad lol
Step4: Next, we query OpenStreetMap for the map data!
Step5: We need to turn the map into a usable graph structure now!
Step6: Possible optimization here
Step7: How nice! Now we need to decide what kind of edges we need to split on due to elevation differences. Let's test some stuff out.
Step8: Note that all values are even since our graph is directed, so there are edges going in each direction. Here's a question
Step9: I've seen better, but it's not worth doing anything about now. Road graphs are always kind of sparse, so I imagine this would increase somewhat linearly with edge total. Programming splitting sounds like a pain, though. I'll do it later (tm)
Step10: Here comes that big dramatic |V|log|V| sort that all CS classes have been preparing me for!!
Step11: Well that was quick. Let's get to the «physics»
Step12: Let's see what happens when we try to "integrate" this more precisely.
Step13: Attempt #1
|
11,251
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)}
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split, 0], 2)
trainY
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Intializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
11,252
|
<ASSISTANT_TASK:>
Python Code:
pairs = [(4, 4), (3, 4), (7, 16), (6, 8)]
time_signatures = [abjad.TimeSignature(_) for _ in pairs]
durations = [_.duration for _ in time_signatures]
time_signature_total = sum(durations)
counts = [1, 2, -3, 4]
denominator = 16
talea = rmakers.Talea(counts, denominator)
talea_index = 0
talea[:10]
events = []
accumulated_duration = abjad.Duration(0)
while accumulated_duration < time_signature_total:
duration = talea[talea_index]
if 0 < duration:
pitch = abjad.NamedPitch("c'")
else:
pitch = None
duration = abs(duration)
if time_signature_total < (duration + accumulated_duration):
duration = time_signature_total - accumulated_duration
events_ = abjad.LeafMaker()([pitch], [duration])
events.extend(events_)
accumulated_duration += duration
talea_index += 1
staff = abjad.Staff(events)
abjad.show(staff)
selections = abjad.mutate.split(staff[:], time_signatures, cyclic=True)
for time_signature, selection in zip(time_signatures, selections):
first_leaf = abjad.get.leaf(selection, 0)
abjad.attach(time_signature, first_leaf)
abjad.show(staff)
measure_selections = abjad.select(staff).leaves().group_by_measure()
for time_signature, measure_selection in zip(time_signatures, measure_selections):
abjad.Meter.rewrite_meter(measure_selection, time_signature)
abjad.show(staff)
string = "d' fs' a' d'' g' ef'"
strings = string.split()
pitches = abjad.CyclicTuple(strings)
plts = abjad.select(staff).logical_ties(pitched=True)
for i, plt in enumerate(plts):
pitch = pitches[i]
for note in plt:
note.written_pitch = pitch
abjad.show(staff)
for selection in abjad.select(staff).runs():
print(selection)
for selection in abjad.select(staff).runs():
articulation = abjad.Articulation("tenuto")
abjad.attach(articulation, selection[0])
if 3 <= len(selection):
abjad.hairpin("p < f", selection)
else:
dynamic = abjad.Dynamic("ppp")
abjad.attach(dynamic, selection[0])
abjad.override(staff).dynamic_line_spanner.staff_padding = 4
abjad.show(staff)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can ask our talea for as many durations as we want. (Taleas output nonreduced fractions instead of durations. This is to allow talea output to model either durations or time signatures, depending on the application.) We include some negative values, which we will later interpret as rests. We can ask our talea for ten durations like this
Step2: Let's use our talea to make notes and rests, stopping when the duration of the accumulated notes and rests sums to that of the four time signatures defined above
Step3: To attach the four time signatures defined above, we must split our notes and rests at measure boundaries. Then we can attach a time signature to the first note or rest in each of the four selections that result
Step4: Then we group our notes and rests by measure, and metrically respell each group
Step5: Pitching notes
Step6: Then we loop through pitched logical ties, pitching notes as we go
Step7: Attaching articulations and dynamics
Step8: We can use Abjad's run selector to loop through the runs in our music, attaching articulations and dynamics along the way
|
11,253
|
<ASSISTANT_TASK:>
Python Code:
a = 99
if a % 9 == 0:
if a % 11 == 0:
print '!!!'
elif a % 10 ==0:
print '???'
else:
print 'hahaha'
x = 7
if (5 > x) and (x < 10):
print 'oh!'
a = []
if a:
print 'right'
else:
print "empty"
5 == 6
5 != 6
a = 0
if not a == 0:
print 'haha'
count = 1
while count <=3:
print count
count += 1
print count
print 'a'
count = 1
while True:
print count
count += 1
if count > 3:
break
var = 7
while var > 0:
var -= 1
if var == 5:
continue
print 'Current variable value :', var
print 'Good bye!'
nums = [1, 3, 5]
position = 0
while position < len(nums):
number = nums[position]
if number % 2 == 0:
print 'even number'
break
position += 1
else:
print 'no even number'
nums = [1, 3, 2, 5, 4, 6, 7, 9, 8]
pos = 0
while pos < len(nums):
print nums[pos],
pos += 1
a = [1, 2, 3, 4]
b = a
print a == b
a = [1, 2, 3, 4, 5]
for i in a:
print i
nums = [1, 3, 2, 5, 4, 6, 7, 8, 9]
for val in nums:
print val,
print
for i in nums:
print i,
print
for num in nums:
print num,
str = 'hello world'
for char in str:
print char
capitals = {'korea' : 'seoul', 'japan' : 'tokyo', 'usa' : 'Washington D.C'}
for country in capitals:
print country, capitals[country]
print '-' * 20
for key in capitals.keys():
print key
print '-' * 20
for val in capitals.values():
print val
print '-' * 20
for k, v in capitals.items():
print k, v
nums = [1, 3, 2, 5, 4, 6, 7, 9, 8]
for val in nums:
print val,
for i, _ in enumerate(nums):
print 'value at index', i
a = [1, 2, 3, 4]
b = [2, 3, 4]
for i in a:
for j in b:
print i, j
nums = range(1, 101)
for i in nums:
if i % 2 == 0 or i % 11 ==0:
print i,
a = [22, 1, 3, 4, 7, 98, 21, 55, 87, 99, 19, 20, 45]
minimum = a[0]
for i in a[1:]:
if minimum > i:
minimum = i
print minimum
a = [22, 1, 3, 4, 7, 98, 21, 55, 87, 99, 19, 20, 45]
sum1 = 0
for i in a:
sum1 = sum1 + i
print float(sum1)/len(a)
nums = [1, 2, 3, 4, 5, 6]
even_nums = []
for i in nums:
if i % 2 == 0:
even_nums.append(i)
print even_nums
nums = [1, 2, 3, 4, 5, 6]
even_nums2 = [i for i in nums if i % 2 == 0]
even_nums3 = [i + 3 for i in nums if i % 2 == 0]
print even_nums2, even_nums3
nums2 = [i for i in nums]
nums3 = [i**2 for i in nums]
print nums2
print nums3
nums4 = [i for i in nums if i < 4]
nums5 = [i**2 if i < 4 else i * 2 for i in nums]
print nums4
print nums5
rows = range(1, 4)
cols = range(1, 3)
for row in rows:
for col in cols:
print row, col
print '-' * 20
cells = [(row, col) for row in rows for col in cols]
for row, col in cells:
print row, col
words = ['apple', 'banana', 'chicago', 'do', 'elephant']
dict1 = {}
for w in words:
dict1[w] = len(w)
print dict1
words = ['apple', 'banana', 'chicago', 'do', 'elephant']
words_dict = {w: len(w) for w in words}
print words_dict
Celsius = [39.2, 36.5, 37.3, 37.8]
F = [1.8 * c + 32 for c in Celsius]
print F
str1 = 'Today is very nice and I want to go out for dinner'
temp = [ch for ch in str1 if ch == ' ']
print len(temp)
str1 = 'Today is very nice and I want to go out for dinner'
vowels = 'aeiou'
temp = [ch for ch in str1 if not ch in vowels]
print ''.join(temp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: for
Step2: if & for 연습문제
Step3: comprehension
Step4: practice
|
11,254
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
    """Model with one feature:
    - Predict a passenger survived if they are female."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female':
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
    """Model with two features:
    - Predict a passenger survived if they are female.
    - Predict a passenger survived if they are male and younger than 10."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
        if passenger['Sex'] == 'female' or (passenger['Sex'] == 'male' and passenger['Age'] < 10):
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'SibSp', ["Sex == 'male'", "Age < 10"])
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female' and passenger['SibSp'] < 3 and passenger['Parch'] < 3:
predictions.append(1)
elif passenger['Sex'] == 'male' and passenger['Age'] < 10 and passenger['SibSp'] < 3:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
|
11,255
|
<ASSISTANT_TASK:>
Python Code:
import wget
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
# Import the dataset
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/spam/spam_dataset.csv'
dataset = wget.download(data_url)
dataset = pd.read_csv(dataset, sep=",")
# Convert dataframe to numpy array and split
# data into input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-1].astype(float)
y = npArray[:,-1]
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
from scipy.stats import uniform
from scipy.stats import norm
from sklearn.grid_search import RandomizedSearchCV
from sklearn import metrics
# Designate distributions to sample hyperparameters from
n_estimators = np.random.uniform(25, 45, 5).astype(int)
max_features = np.random.normal(20, 10, 5).astype(int)
hyperparameters = {'n_estimators': list(n_estimators),
'max_features': list(max_features)}
print hyperparameters
from sklearn.ensemble import RandomForestClassifier
# Run randomized search
randomCV = RandomizedSearchCV(RandomForestClassifier(), param_distributions=hyperparameters, n_iter=10)
randomCV.fit(XTrain, yTrain)
# Identify optimal hyperparameter values
best_n_estim = randomCV.best_params_['n_estimators']
best_max_features = randomCV.best_params_['max_features']
print("The best performing n_estimators value is: {:5.1f}".format(best_n_estim))
print("The best performing max_features value is: {:5.1f}".format(best_max_features))
# Train classifier using optimal hyperparameter values
# We could have also gotten this model out from randomCV.best_estimator_
clfRDF = RandomForestClassifier(n_estimators=best_n_estim,
max_features=best_max_features)
clfRDF.fit(XTrain, yTrain)
RF_predictions = clfRDF.predict(XTest)
print (metrics.classification_report(yTest, RF_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions),2))
from sklearn.svm import SVC
# Search for good hyperparameter values
# Specify values to grid search over
g_range = 2. ** np.arange(-15, 5, step=5)
C_range = 2. ** np.arange(-5, 15, step=5)
hyperparameters = [{'gamma': g_range,
'C': C_range}]
print hyperparameters
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
# Grid search using cross-validation
grid = GridSearchCV(SVC(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)
bestG = grid.best_params_['gamma']
bestC = grid.best_params_['C']
# Train SVM and output predictions
rbfSVM = SVC(kernel='rbf', C=bestC, gamma=bestG)
rbfSVM.fit(XTrain, yTrain)
SVM_predictions = rbfSVM.predict(XTest)
print metrics.classification_report(yTest, SVM_predictions)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, SVM_predictions),2)
from sklearn.linear_model import LogisticRegression
# Search for good hyperparameter values
# Specify values to grid search over
penalty = ["l1", "l2"]
C_range = np.arange(0.1, 1.1, 0.1)
hyperparameters = [{'penalty': penalty,
'C': C_range}]
# Grid search using cross-validation
grid = GridSearchCV(LogisticRegression(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)
bestPenalty = grid.best_params_['penalty']
bestC = grid.best_params_['C']
print bestPenalty
print bestC
# Train model and output predictions
classifier_logistic = LogisticRegression(penalty=bestPenalty, C=bestC)
classifier_logistic_fit = classifier_logistic.fit(XTrain, yTrain)
logistic_predictions = classifier_logistic_fit.predict(XTest)
print(metrics.classification_report(yTest, logistic_predictions))
print("Overall Accuracy:", round(metrics.accuracy_score(yTest, logistic_predictions), 2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introducing randomized search
Step2: We then run the random search
Step3: Either grid search or randomized search is probably fine for tuning random forests.
Step4: How does this compare an untuned SVM? What about an SVM with especially badly tuned hyperparams?
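As a rough illustration of that question, here is a minimal sketch on synthetic data (not the notebook's `XTrain`/`yTrain`) contrasting a default SVC with one given deliberately poor `C` and `gamma` values taken from the extremes of the grid above:

```python
# Hedged sketch: compare a default SVC with a badly tuned one on
# synthetic data generated by make_classification (an assumption here,
# standing in for the notebook's real training data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
XTr, XTe, yTr, yTe = train_test_split(X, y, random_state=0)

# Untuned: scikit-learn's defaults are often a reasonable baseline.
default_acc = accuracy_score(yTe, SVC().fit(XTr, yTr).predict(XTe))

# Badly tuned: a tiny gamma combined with a tiny C tends to underfit badly,
# collapsing the RBF decision function toward a near-constant prediction.
bad_acc = accuracy_score(
    yTe,
    SVC(kernel='rbf', C=2.**-5, gamma=2.**-15).fit(XTr, yTr).predict(XTe))

print("default SVC accuracy:   ", round(default_acc, 2))
print("badly tuned SVC accuracy:", round(bad_acc, 2))
```

On data like this the defaults usually land well above the deliberately mis-tuned model, which is the gap that grid search is meant to close from the other direction.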
|
11,256
|
<ASSISTANT_TASK:>
Python Code:
import sys
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
import seaborn as sns
import numpy as np
%matplotlib inline
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
MaleFemale = "/Users/kavyabeheraj/Desktop/Current Classes/Data Bootcamp/Male_Female_Sample_Org_Output.csv"
# Sample organizational output for males and females
df = pd.read_csv(MaleFemale)
df
male = pd.read_csv(MaleFemale).head(14)
female = pd.read_csv(MaleFemale).tail(14)
mlmean = male.head(7) # Reads the first seven lines of the dataframe
mlmean = mlmean[["Label","Mean"]].set_index("Label") # Slices only two columns and sets the index to be "Label"
mlmean = mlmean.rename(index={"Academic Behavior" : "AcaBeh",
"Academic Self-efficacy" : "AcaEf",
"Growth Mindset" : "Growth",
"Interpersonal Skills" : "Intp",
"Problem Solving" : "Prob",
"SELF-ADVOCACY" : "SelfAd",
"BELONGING" : "Belong"},
columns={"Mean" : "Male Mean"})
mlmean
mlpp = male.tail(7) # Reads the last seven lines of the dataframe
mlpp = mlpp[["Label","Mean"]].set_index("Label") # Slices only two columns and sets the index to be "Label"
mlpp = mlpp.rename(index={"Academic Behavior Percent Perfect" : "AcaBeh",
"Academic Self-efficacy Percent Perfect" : "AcaEf",
"Growth Mindset Percent Perfect" : "Growth",
"Interpersonal Skills Percent Perfect" : "Intp",
"Problem Solving Percent Perfect" : "Prob",
"SELF ADVOCACY PERCENT PERFECT" : "SelfAd",
"BELONGING PERCENT PERFECT ge 4" : "Belong"},
columns={"Mean" : "Male Percent Positive"})
mlpp
fmmean = female.head(7) # Reads the first seven lines of the dataframe
fmmean = fmmean[["Label","Mean"]].set_index("Label") # Slices only two columns and sets the index to be "Label"
fmmean = fmmean.rename(index={"Academic Behavior" : "AcaBeh",
"Academic Self-efficacy" : "AcaEf",
"Growth Mindset" : "Growth",
"Interpersonal Skills" : "Intp",
"Problem Solving" : "Prob",
"SELF-ADVOCACY" : "SelfAd",
"BELONGING" : "Belong"},
columns={"Mean" : "Female Mean"})
fmmean
fmpp = female.tail(7)
fmpp = fmpp[["Label","Mean"]].set_index("Label")
fmpp = fmpp.rename(index={"Academic Behavior Percent Perfect" : "AcaBeh",
"Academic Self-efficacy Percent Perfect" : "AcaEf",
"Growth Mindset Percent Perfect" : "Growth",
"Interpersonal Skills Percent Perfect" : "Intp",
"Problem Solving Percent Perfect" : "Prob",
"SELF ADVOCACY PERCENT PERFECT" : "SelfAd",
"BELONGING PERCENT PERFECT ge 4" : "Belong"},
columns={"Mean" : "Female Percent Positive"})
fmpp
mean = pd.concat([mlmean, fmmean], axis=1)
mean
ppos = pd.concat([mlpp, fmpp], axis=1)
ppos
meanppos = pd.concat([mlpp, fmpp, mlmean, fmmean], axis=1)
meanppos
mean.plot.barh(figsize = (10,7))
df2 = "/Users/kavyabeheraj/Desktop/Current Classes/Data Bootcamp/Network_Summary_Gender.csv"
network = pd.read_csv(df2)
network
mlnet = network.tail(7)
mlnet = mlnet[["label","mean", "percentPositive"]].set_index("label")
mlnet = mlnet.rename(index={"Academic Behavior" : "AcaBeh",
"Academic Self-efficacy" : "AcaEf",
"Growth Mindset" : "Growth",
"Interpersonal Skills" : "Intp",
"Problem Solving" : "Prob",
"Self-Advocacy" : "SelfAd",
"Belonging" : "Belong"},
columns={"mean" : "Male Mean, Network",
"percentPositive" : "Male Percent Positive, Network"})
mlnet
fmnet = network[7:14]
fmnet = fmnet[["label","mean", "percentPositive"]].set_index("label")
fmnet = fmnet.rename(index={"Academic Behavior" : "AcaBeh",
"Academic Self-efficacy" : "AcaEf",
"Growth Mindset" : "Growth",
"Interpersonal Skills" : "Intp",
"Problem Solving" : "Prob",
"Self-Advocacy" : "SelfAd",
"Belonging" : "Belong"},
columns={"mean" : "Female Mean, Network", "percentPositive" : "Female Percent Positive, Network"})
fmnet
output = pd.concat([meanppos, fmnet, mlnet ], axis=1)
output
mean_output = output[["Male Mean",
"Female Mean",
"Male Mean, Network",
"Female Mean, Network"]]
mean_output
pp_output = output[["Male Percent Positive",
"Male Percent Positive, Network",
"Female Percent Positive",
"Female Percent Positive, Network"]]
pp_output
mean_net = output[["Male Mean, Network",
"Female Mean, Network"]]
mean_net
ppos_net = output[["Male Percent Positive, Network",
"Female Percent Positive, Network"]]
ppos_net
plt.style.use('seaborn-pastel')
ax = ppos_net.plot(linestyle='-', marker='o', colormap = "Accent")
ppos.plot(kind='bar', colormap = "Pastel2",
ax=ax,
figsize = (10,7))
ax.set_ylim(0, 0.8)
ax.set_title("Percent Positive Male and Female SEL Outcomes, Organization vs. Network")
plt.style.use('seaborn-pastel')
ax = mean_net.plot(linestyle='-', marker='o', colormap = "Accent")
mean.plot(kind='bar', colormap = "Pastel2",
ax=ax,
figsize = (10,7))
ax.set_ylim(0, 5)
ax.set_title("Mean Male and Female SEL Outcomes, Organization vs. Network")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 | Background
Step2: 3.1 | Slicing the Organizational Data
Step3: 3.2 | Joining the Organizational Data
Step4: 4 | Network Summary Data
Step5: 5 | Slicing the Final Output
Step6: 6 | Visualizing the Data
Step7: From the data above, we can see that this organization has a greater percentage of students who meet or exceed requirements for the 7 SEL indicators, except for Problem-Solving.
|
11,257
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)
sess = tf.Session()
print(sess.run([node1, node2]))
node3 = tf.add(node1, node2)
print(node3)
print(sess.run(node3))
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
print(sess.run(adder_node, {a: 3, b: 4.5}))
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))
add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b: 4.5}))
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
init = tf.global_variables_initializer()
sess.run(init)
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
print(sess.run([W, b]))
# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
sess.run(train, {x: x_train, y: y_train})
# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
g = tf.Graph()
with g.as_default():
a = tf.placeholder(tf.float32, name="node1")
b = tf.placeholder(tf.float32, name="node2")
c = a + b
tf.summary.FileWriter("logs", g).close()
#from this notebook's directory run > tensorboard --logdir=logs
#then open TensorBoard at: http://localhost:6006/#graphs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computational Graph
Step2: Session
Step3: More complicated computations can be performed by combining Tensor nodes with Operation nodes. Use the tf.add node to mathematically add node1 and node2
Step4: Placeholders
Step5: Make the graph more complex by adding another operation
Step6: Variables
Step7: Initialize all variables with a special operation. Until this point, they are uninitialized
Step8: Because x is a placeholder, linear_model can be evaluated for several x values simultaneously
Step9: Loss Function
Step10: The values of W and b need to be updated in order to get a perfect fit. We can manually figure out what they should be in order to get the right y output (with a loss of 0)
Step11: tf.train API
Step12: The above values are the final model parameters which minimize the loss function!
Step13: TensorBoard Graphs
|
11,258
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
tf.__version__
# To generate GIFs
!pip install imageio
!pip install git+https://github.com/tensorflow/docs
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
from IPython import display
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 7, 7, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 14, 14, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, 28, 28, 1)
return model
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(1))
return model
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print (decision)
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# You will reuse this seed overtime (so it's easier)
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as you go
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
seed)
# Save the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
seed)
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
train(train_dataset, EPOCHS)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
anim_file = 'dcgan.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import tensorflow_docs.vis.embed as embed
embed.embed_file(anim_file)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Deep Convolutional Generative Adversarial Network (DCGAN)
Step2: Load and prepare the dataset
Step3: Create the models
Step4: Use the (as yet untrained) generator to create an image.
Step5: The discriminator
Step6: Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images and negative values for fake images.
Step7: Define the loss functions and optimizers
Step8: Discriminator loss
Step9: Generator loss
Step10: The discriminator and generator optimizers are different since you will train the two networks separately.
Step11: Save checkpoints
Step12: Define the training loop
Step13: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is computed for each model, and the gradients are used to update the generator and the discriminator.
Step14: Generate and save images
Step15: Train the model
Step16: Restore the latest checkpoint.
Step17: Create a GIF
Step18: Use imageio to create an animated GIF from the images saved during training.
|
11,259
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@test {"skip": true}
!pip install tensorflow-lattice pydot
import tensorflow as tf
import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
train_size = int(len(df) * 0.8)
train_dataframe = df[:train_size]
test_dataframe = df[train_size:]
df.head()
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
# This is the vocab list and mapping we will use for the 'thal' categorical
# feature.
thal_vocab_list = ['normal', 'fixed', 'reversible']
thal_map = {category: i for i, category in enumerate(thal_vocab_list)}
# Custom function for converting thal categories to buckets
def convert_thal_features(thal_features):
# Note that two examples in the test set are already converted.
return np.array([
thal_map[feature] if feature in thal_vocab_list else feature
for feature in thal_features
])
# Custom function for extracting each feature.
def extract_features(dataframe,
label_name='target',
feature_names=feature_names):
features = []
for feature_name in feature_names:
if feature_name == 'thal':
features.append(
convert_thal_features(dataframe[feature_name].values).astype(float))
else:
features.append(dataframe[feature_name].values.astype(float))
labels = dataframe[label_name].values.astype(float)
return features, labels
train_xs, train_ys = extract_features(train_dataframe)
test_xs, test_ys = extract_features(test_dataframe)
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
# Our lattice models may have predictions above 1.0 due to numerical errors.
# We can subtract this small epsilon value from our output_max to make sure we
# do not predict values outside of our label bound.
numerical_error_epsilon = 1e-5
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['age']],
num_keypoints=5,
clip_max=100),
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(train_xs[feature_name_indices['cp']]),
np.max(train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['trestbps']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['thalach']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['oldpeak']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['ca']], num_keypoints=4),
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
tfl.premade_lib.set_categorical_monotonicities(feature_configs)
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs[:5],
use_bias=True,
# We must set the output min and max to that of the label.
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=10,
output_initialization=np.linspace(min_label, max_label, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
train_xs[:5],
train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
    verbose=False)
print('Test Set Evaluation...')
print(linear_model.evaluate(test_xs[:5], test_ys))
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs[:5],
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
train_xs[:5],
train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(test_xs[:5], test_ys))
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(test_xs, test_ys))
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(test_xs, test_ys))
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(feature_configs)
for feature_config in rtl_layer_feature_configs:
feature_config.lattice_size = 2
feature_config.unimodality = 'none'
feature_config.reflects_trust_in = None
feature_config.dominates = None
feature_config.regularizer_configs = None
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
rtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=rtl_layer_feature_configs,
lattices='rtl_layer',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config. Note that we do not have to specify the lattices by calling
# a helper function (like before with random) because the RTL Layer will take
# care of that for us.
rtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
rtl_layer_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR')
rtl_layer_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
rtl_layer_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(rtl_layer_ensemble_model.evaluate(test_xs, test_ys))
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
train_xs,
train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(test_xs, test_ys))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TF Lattice premade models
Step2: Import the required packages:
Step3: Download the UCI Statlog (Heart) dataset:
Step4: Extract the features and labels and convert them to tensors:
Step5: Set the default values used for training in this guide:
Step6: Feature configs
Step7: Define our feature configs
Step8: Next, we need to make sure monotonicity is set correctly for features that use a custom vocabulary (such as 'thal' above).
Step9: Calibrated linear model
Step10: Now, as with any other tf.keras.Model, we compile the model and fit it to our data.
Step11: After training the model, we can evaluate it on the test set.
Step12: Calibrated lattice model
Step13: As before, we compile, fit, and evaluate our model.
Step14: Calibrated lattice ensemble model
Step15: As before, we compile, fit, and evaluate our model.
Step16: Random lattice ensemble
Step17: As before, we compile, fit, and evaluate our model.
Step18: RTL layer random lattice ensemble
Step19: As before, we compile, fit, and evaluate our model.
Step20: Crystals lattice ensemble
Step21: As before, we compile, fit, and evaluate our model.
|
11,260
|
<ASSISTANT_TASK:>
Python Code:
set([1,2,3,4])
# Translate here. Add as many new cells as you like.
sum(range(5))
# One liner here :)
sum(map(lambda x: x * x, range(5)))
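An equivalent one-liner, shown here as an illustrative alternative, uses a generator expression instead of `map` with a `lambda`, which reads closer to the mathematical summation notation:

```python
# Sum of squares of 0..4 via a generator expression (no lambda needed).
total = sum(x * x for x in range(5))
print(total)  # 30
```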
# Config environment for code examples.
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 17, 12
g = nx.scale_free_graph(40).to_undirected()
nx.draw_networkx(g)
g = nx.gnp_random_graph(40, 0.1)
nx.draw_networkx(g)
def degree_distribution(g):
    """
    :param g: networkx.Graph
    :returns ?: whatever you need to plot the degree distribution.
    """
    pass
# Here you can either script or write a function to plot the degree distributions.
# Use as many cells as you need.
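One possible sketch of the exercise above — this is an assumption about the intended solution, not the official one. It only relies on `g` exposing networkx's `Graph.degree()` API (an iterable of `(node, degree)` pairs), and returns data ready for plotting:

```python
from collections import Counter

def degree_distribution(g):
    # Collect the degree of every node; works for any object whose
    # .degree() yields (node, degree) pairs, as networkx graphs do.
    degrees = [d for _, d in g.degree()]
    counts = Counter(degrees)
    n = len(degrees)
    ks = sorted(counts)
    # Return degrees k and their empirical probabilities P(k),
    # suitable for e.g. plt.loglog(ks, pks, 'o').
    return ks, [counts[k] / n for k in ks]
```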
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some common symbols we will see in this class
Step2: Everyone knows how to do a summation in Python right?
Step3: What if we need to apply an operation to each element in the summation first?
Step4: Ok down to business
Step6: Measures of centrality
|
11,261
|
<ASSISTANT_TASK:>
Python Code:
# Author: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
import matplotlib.pyplot as plt
import numpy as np
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname)
# For simplicity we will only consider the first 10 epochs
events = mne.read_events(event_fname)[:10]
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = -0.2, 0.5
baseline = (None, 0)
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(auditory_l=1, auditory_r=2, visual_l=3, visual_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=True, reject=reject)
# The following parameters will scale the channels and times plotting
# friendly. The info columns 'epoch' and 'time' will be used as hierarchical
# index whereas the condition is treated as categorial data. Note that
# this is optional. By passing None you could also print out all nesting
# factors in a long table style commonly used for analyzing repeated measure
# designs.
index, scale_time, scalings = ['epoch', 'time'], 1e3, dict(grad=1e13)
df = epochs.to_data_frame(picks=None, scalings=scalings, scale_time=scale_time,
index=index)
# Create MEG channel selector and drop EOG channel.
meg_chs = [c for c in df.columns if 'MEG' in c]
df.pop('EOG 061') # this works just like with a list.
# Pandas is using a MultiIndex or hierarchical index to handle higher
# dimensionality while at the same time representing data in a flat 2d manner.
print(df.index.names, df.index.levels)
# Inspecting the index object unveils that 'epoch', 'time' are used
# for subsetting data. We can take advantage of that by using the
# .ix attribute, where in this case the first position indexes the MultiIndex
# and the second the columns, that is, channels.
# Plot some channels across the first three epochs
xticks, sel = np.arange(3, 600, 120), meg_chs[:15]
df.ix[:3, sel].plot(xticks=xticks)
mne.viz.tight_layout()
# slice the time starting at t0 in epoch 2 and ending 500ms after
# the baseline in epoch 3. Note that the second part of the tuple
# represents time in milliseconds from stimulus onset.
df.ix[(1, 0):(3, 500), sel].plot(xticks=xticks)
mne.viz.tight_layout()
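Note that the `.ix` indexer used above was removed in pandas 1.0. A minimal sketch of the equivalent `(epoch, time)` range selection with the modern `.loc` accessor, on a toy MultiIndex stand-in for the epochs DataFrame (the column and index values here are purely illustrative):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the epochs DataFrame: an ('epoch', 'time') MultiIndex.
idx = pd.MultiIndex.from_product([[1, 2, 3], [0, 250, 500]],
                                 names=["epoch", "time"])
toy = pd.DataFrame({"MEG 1332": np.arange(9.0)}, index=idx)

# .ix-style slicing with .loc: epoch 1 at t=0 through epoch 3 at t=500.
# Both endpoints are inclusive because the index is lexsorted.
sel = toy.loc[(1, 0):(3, 500), ["MEG 1332"]]
```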
# Note: For convenience the index was converted from floating point values
# to integer values. To restore the original values you can e.g. say
# df['times'] = np.tile(epochs.times, len(epochs))
# We now reset the index of the DataFrame to expose some Pandas
# pivoting functionality. To simplify the groupby operation we
# drop the indices to treat epoch and time as categorical factors.
df = df.reset_index()
# The ensuing DataFrame then is split into subsets reflecting a crossing
# between condition and trial number. The idea is that we can broadcast
# operations into each cell simultaneously.
factors = ['condition', 'epoch']
sel = factors + ['MEG 1332', 'MEG 1342']
grouped = df[sel].groupby(factors)
# To make the plot labels more readable let's edit the values of 'condition'.
df.condition = df.condition.apply(lambda name: name + ' ')
# Now we compare the mean of two channels response across conditions.
grouped.mean().plot(kind='bar', stacked=True, title='Mean MEG Response',
color=['steelblue', 'orange'])
mne.viz.tight_layout()
# We can even accomplish more complicated tasks in a few lines calling
# apply method and passing a function. Assume we wanted to know the time
# slice of the maximum response for each condition.
max_latency = grouped[sel[2]].apply(lambda x: df.time[x.argmax()])
print(max_latency)
plt.figure()
max_latency.plot(kind='barh', title='Latency of Maximum Response',
color=['steelblue'])
mne.viz.tight_layout()
# Finally, we will again remove the index to create a proper data table that
# can be used with statistical packages like statsmodels or R.
final_df = max_latency.reset_index()
final_df = final_df.rename(columns={0: sel[2]})  # as the index is oblivious of names.
# The index is now written into regular columns so it can be used as factor.
print(final_df)
plt.show()
# To save as csv file, uncomment the next line.
# final_df.to_csv('my_epochs.csv')
# Note. Data Frames can be easily concatenated, e.g., across subjects.
# E.g. say:
#
# import pandas as pd
# group = pd.concat([df_1, df_2])
# group['subject'] = np.r_[np.ones(len(df_1)), np.ones(len(df_2)) + 1]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Export DataFrame
Step2: Explore Pandas MultiIndex
|
11,262
|
<ASSISTANT_TASK:>
Python Code:
print(type(12))
print(type('python'))
class A:
pass
print(type(A))
print(type.__doc__)
class A:
pass
# which is actually equivalent to
B = type('A', (), {})
print(A.__name__ == B.__name__)
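As a small illustration of the same mechanism, the third argument to `type` can supply attributes and methods alike; the class and method names below are made up for the example:

```python
# type(name, bases, attrs) builds a class object at runtime;
# the attrs dict becomes the class namespace.
Point = type("Point", (), {"x": 0,
                           "shifted": lambda self, dx: self.x + dx})
p = Point()
print(Point.__name__, p.x, p.shifted(3))  # Point 0 3
```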
class Enum:
def __new__(cls, value):
print(cls, value)
return value
def __init__(self):
print("Will not be called!")
e = Enum(1)
class MetaEnum(type):
def __new__(metaclass, name, base, attrs):
print("Metaclass: {}\nName: {}\nParents: {}\nAttributes: {}".format(metaclass, name, base, attrs))
return super().__new__(metaclass, name, base, attrs)
class Enum(metaclass=MetaEnum):
    # In Python 2.7 the metaclass was declared via the __metaclass__ variable;
    # [PEP 3115](https://www.python.org/dev/peps/pep-3115/)
    # changed the syntax to class Cls(metaclass=Meta) from Python 3.0 onwards
test = 0
type(Enum)
# Define a new dict subclass that checks, on every assignment
# dict[k] = v, whether the key k has already been used
class _EnumDict(dict):
def __init__(self):
super().__init__()
self.members = []
def __setitem__(self, k, v):
if k in self.members:
raise TypeError("Attempted to reuse key: '{}'".format(k))
else:
self.members.append(k)
super().__setitem__(k, v)
class MetaEnum(type):
@classmethod
def __prepare__(metaclass, cls, bases):
return _EnumDict()
def __new__(metaclass, name, base, attrs):
return super().__new__(metaclass, name, base, attrs)
class Enum(metaclass=MetaEnum):
pass
class Color(Enum):
try:
red = 1
red = 2
    except TypeError:  # why is 'as err:' not used here?
print("TypeError catched")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This code shows that the class object A is produced by type(); in other words, type can also be used to produce new objects, and what it produces are class objects, so it is the class of all class objects:
Step2: The class-definition syntax is in fact translated into type(name, bases, dict), where the name parameter is the class name, bases is the tuple of inherited parent classes, and dict holds the class attributes and methods:
Step3: In theory this is what metaclasses are about, but in practice the class syntax is clearly more convenient and sensible. The practical point of a metaclass is to subclass type to build a new metaclass that performs specific operations to produce class objects with particular behavior. Seen this way, it is essentially no different from an ordinary class object, except that it inherits from type.
Step4: Normally, when defining __new__ yourself, you create an instance of cls by calling the parent class's __new__ method; likewise, when defining a metaclass, you invoke the type usage mentioned above (because the metaclass inherits from type):
Step5: If we now look at the class of Enum, it is no longer type but its metaclass MetaEnum:
Step6: Besides the __new__ method, PEP 3115 also defines the __prepare__ attribute, which sets up the initial namespace (i.e., the third argument to type). Taking enum.Enum as an example again, if we want to forbid reuse of attribute names within an enum type, we can constrain the class's behavior through a metaclass:
|
11,263
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
11,264
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc')
print(b.get_dataset(kind='lc', check_visible=False))
print(b.get_parameter(qualifier='times'))
print(b.get_parameter(qualifier='fluxes'))
print(b.get_parameter(qualifier='sigmas'))
print(b.get_parameter(qualifier='compute_times'))
print(b.get_parameter(qualifier='compute_phases', context='dataset'))
print(b.get_parameter(qualifier='phases_t0'))
print(b.get_parameter(qualifier='ld_mode', component='primary'))
b.set_value('ld_mode', component='primary', value='lookup')
print(b.get_parameter(qualifier='ld_func', component='primary'))
print(b.get_parameter(qualifier='ld_coeffs_source', component='primary'))
b.set_value('ld_mode', component='primary', value='manual')
print(b.get_parameter(qualifier='ld_coeffs', component='primary'))
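With ld_mode='manual', the coefficients above parameterize the chosen ld_func. For intuition, the quadratic law is just a polynomial in (1 - mu); a minimal sketch independent of PHOEBE (the coefficient values below are illustrative, not fitted):

```python
def quad_limb_darkening(mu, u1, u2):
    # Quadratic law: I(mu)/I(1) = 1 - u1*(1 - mu) - u2*(1 - mu)**2
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

# Illustrative (not fitted) coefficients
u1, u2 = 0.5, 0.2
print(quad_limb_darkening(1.0, u1, u2))  # disk center: 1.0
print(quad_limb_darkening(0.0, u1, u2))  # limb: 1 - u1 - u2
```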
print(b.get_parameter(qualifier='passband'))
print(b.get_parameter(qualifier='intens_weighting'))
print(b.get_parameter(qualifier='pblum_mode'))
b.set_value('pblum_mode', value='component-coupled')
print(b.get_parameter(qualifier='pblum_component'))
b.set_value('pblum_mode', value='dataset-coupled')
print(b.get_parameter(qualifier='pblum_dataset'))
b.set_value('pblum_mode', value='decoupled')
print(b.get_parameter(qualifier='pblum', component='primary'))
print(b.get_parameter(qualifier='l3_mode'))
b.set_value('l3_mode', value='flux')
print(b.get_parameter(qualifier='l3'))
b.set_value('l3_mode', value='fraction')
print(b.get_parameter(qualifier='l3_frac'))
print(b.get_compute())
print(b.get_parameter(qualifier='irrad_method'))
print(b.get_parameter(qualifier='boosting_method'))
print(b.get_parameter(qualifier='atm', component='primary'))
b.set_value('times', phoebe.linspace(0,1,101))
b.run_compute()
print(b.filter(context='model').twigs)
print(b.get_parameter(qualifier='times', kind='lc', context='model'))
print(b.get_parameter(qualifier='fluxes', kind='lc', context='model'))
afig, mplfig = b.plot(show=True)
afig, mplfig = b.plot(x='phases', show=True)
print(b.filter(qualifier='period').components)
afig, mplfig = b.plot(x='phases:binary', show=True)
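Phase-folding against a chosen period is plain modular arithmetic; a minimal sketch of the idea, independent of PHOEBE's internals (PHOEBE's own phasing conventions may differ in detail):

```python
def fold_phases(times, period, t0=0.0):
    # Map each time to an orbital phase in [-0.5, 0.5) relative to epoch t0
    return [(((t - t0) / period + 0.5) % 1.0) - 0.5 for t in times]

print(fold_phases([0.0, 0.25, 0.5, 1.25], period=1.0))  # [0.0, 0.25, -0.5, 0.25]
```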
b.add_dataset('mesh', times=[0], dataset='mesh01')
print(b.get_parameter(qualifier='columns').choices)
b.set_value('columns', value=['intensities@lc01',
'abs_intensities@lc01',
'normal_intensities@lc01',
'abs_normal_intensities@lc01',
'pblum_ext@lc01',
'boost_factors@lc01'])
b.run_compute()
print(b.get_model().datasets)
print(b.filter(dataset='lc01', kind='mesh', context='model').twigs)
afig, mplfig = b.filter(kind='mesh').plot(fc='intensities', ec='None', show=True)
print(b.get_parameter(qualifier='pblum_ext',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
print(b.get_parameter(qualifier='abs_normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
print(b.get_parameter(qualifier='normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
print(b.get_parameter(qualifier='abs_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
print(b.get_parameter(qualifier='intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
print(b.get_parameter(qualifier='boost_factors',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Dataset Parameters
Step3: times
Step4: fluxes
Step5: sigmas
Step6: compute_times / compute_phases
Step7: NOTE
Step8: ld_mode
Step9: ld_func
Step10: ld_coeffs_source
Step11: ld_coeffs
Step12: passband
Step13: intens_weighting
Step14: pblum_mode
Step15: pblum_component
Step16: pblum_dataset
Step17: pblum
Step18: l3_mode
Step19: l3
Step20: l3_frac
Step21: Compute Options
Step22: irrad_method
Step23: For more details on irradiation, see the Irradiation tutorial
Step24: For more details on boosting, see the Beaming and Boosting example script
Step25: For more details on atmospheres, see the Atmospheres & Passbands tutorial
Step26: Plotting
Step27: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
Step28: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
Step29: Mesh Fields
Step30: These new columns are stored with the lc's dataset tag, but with the 'mesh' dataset-kind.
Step31: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the mesh dataset).
Step32: Now let's look at each of the available fields.
Step33: pblum_ext is the extrinsic passband luminosity of the entire star/mesh - this is a single value (unlike most of the parameters in the mesh) and does not have per-element values.
Step34: abs_normal_intensities are the absolute normal intensities per-element.
Step35: normal_intensities are the relative normal intensities per-element.
Step36: abs_intensities are the projected absolute intensities (towards the observer) per-element.
Step37: intensities are the projected relative intensities (towards the observer) per-element.
|
11,265
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
rc('figure', figsize=(13,6))
def plot_lc(time, flux, c=None, ylim=(0.9865, 1.0025), ax=None, alpha=1):
if ax is None:
fig, ax = subplots()
else:
fig = None
ax.plot(time, flux, c=c, alpha=alpha)
ax.autoscale(axis='x', tight=True)
setp(ax, xlabel='Time [d]', ylabel='Flux', xlim=time[[0,-1]], ylim=ylim)
if fig is not None:
fig.tight_layout()
return ax
from pytransit import RoadRunnerModel, LDTkLDModel
from ldtk import sdss_g, sdss_i, sdss_z
time = linspace(-0.05, 0.05, 1500)
ldm = LDTkLDModel(teff=(5500, 150), logg=(4.5, 0.1), z=(0.0, 0.1), pbs=[sdss_i], frozen=True)
tm = RoadRunnerModel(ldm)
tm.set_data(time)
flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
plot_lc(time, flux1);
ldm = LDTkLDModel([sdss_g, sdss_z], (5500, 150), (4.5, 0.1), (0.0, 0.1), frozen=True)
lcids = zeros(time.size, int)
lcids[time.size//2:] = 1
tm = RoadRunnerModel(ldm)
tm.set_data(time, lcids=lcids, pbids=[0,1])
flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
plot_lc(time, flux1, ylim=(0.986, 1.0025));
ldm.frozen = False
flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
ax = plot_lc(time, flux1);
for i in range(10):
flux1 = tm.evaluate(k=0.1, ldc=[None], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
ax = plot_lc(time, flux1, ax=ax, c='C0', alpha=0.25);
setp(ax, ylim=(0.986, 1.0025))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the model
Step2: Example 1
Step3: and given to the RoadRunnnerModel as any other limb darkening model.
Step4: after which the transit model evaluation goes as usual
Step5: Example 2
Step6: Thawing the model
|
11,266
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# To check the data in the Terminal/Konsole
%ls data/bike_data/
data_path = 'data/bike_data/hour.csv'
rides = pd.read_csv(data_path)
# # An alternative historical/time-series dataset to visualize
# data_path_watch = 'data/watch_multisensor_data/2_Year_Data_Basis_Watch/2-year_data/Basis_Watch_Data.csv'
# watch = pd.read_csv(data_path_watch)
rides.head()
rides[:10]
rides[:24*10].plot(x='dteday', y='cnt')
# watch[:1000].plot()
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
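pd.get_dummies expands each categorical column into one 0/1 indicator column per category. A pure-Python sketch of the same idea (column names and values below are illustrative):

```python
def one_hot(values, prefix):
    # Expand a categorical column into 0/1 indicator columns,
    # mirroring what pd.get_dummies does for a single column
    categories = sorted(set(values))
    columns = ['%s_%s' % (prefix, c) for c in categories]
    rows = [[1 if v == c else 0 for c in categories] for v in values]
    return columns, rows

cols, rows = one_hot([1, 2, 1, 4], 'season')
print(cols)  # ['season_1', 'season_2', 'season_4']
print(rows)  # [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```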
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
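The round trip matters here: because the (mean, std) pairs are stored in scaled_features, predictions can later be converted back to real counts. A minimal sketch of the standardize/un-standardize pair (pure Python; sample standard deviation, as in pandas' .std()):

```python
def standardize(xs):
    # Scale to zero mean / unit standard deviation, returning (mean, std)
    # so the scaling can be inverted later
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return [(x - mean) / std for x in xs], (mean, std)

def unstandardize(zs, mean, std):
    # Invert the scaling: z*std + mean recovers the original values
    return [z * std + mean for z in zs]

zs, (mean, std) = standardize([1.0, 2.0, 3.0, 4.0])
print([round(x, 9) for x in unstandardize(zs, mean, std)])  # [1.0, 2.0, 3.0, 4.0]
```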
# Hold out approximately the last 21 days (21 days * 24 hourly records) as the test set
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features_txn, test_targets_txm = test_data.drop(target_fields, axis=1), test_data[target_fields]
test_features_txn.shape, test_targets_txm.shape
# Hold out the last 60 days for validation set
# txn: t is time/row (num of records) and n is space/col (input feature space dims)
# txm: t is time/row (num of records) and m is space/col (output feature space dims)
train_features_txn, train_targets_txm = features[:-60*24], targets[:-60*24]
valid_features_txn, valid_targets_txm = features[-60*24:], targets[-60*24:]
train_features_txn.shape, train_targets_txm.shape, valid_features_txn.shape, valid_targets_txm.shape
class NN(object):
# n: num_input_units in input layer,
# h: num_hidden_units in the hidden layer,
# m: num_out_units in the output layer, and
# lr: learning_rate
def __init__(self, n, h, m, lr):
# Initialize parameters: weights and biases
self.w_nxh = np.random.normal(loc=0.0, scale=n**(-0.5), size=(n, h))
self.b_1xh = np.zeros(shape=(1, h))
self.w_hxm = np.random.normal(loc=0.0, scale=h**(-0.5), size=(h, m))
self.b_1xm = np.zeros(shape=(1, m))
self.lr = lr
# Train to update NN parameters (w, b) in each epoch using NN hyper-parameters
def train(self, X_txn, Y_txm):
''' Train the network on batch of features (X_txn) and targets (Y_txm).
Arguments
---------
features: X_txn is a 2D array, each row is one data record (t), each column is a feature (n)
txn: ixj, rowxcol, and hxw
targets: Y_txm is a 2D array as well.
'''
dw_nxh = np.zeros_like(self.w_nxh)
db_1xh = np.zeros_like(self.b_1xh)
dw_hxm = np.zeros_like(self.w_hxm)
db_1xm = np.zeros_like(self.b_1xm)
for each_X, each_Y in zip(X_txn, Y_txm):
#### Implement the forward pass here ####
### Forward pass ###
x_1xn = np.array(each_X, ndmin=2) # [[each]]
y_1xm = np.array(each_Y, ndmin=2) # [[each]]
# TODO: Hidden layer - Replace these values with your calculations.
h_in_1xh = (x_1xn @ self.w_nxh) + self.b_1xh # signals into hidden layer
h_out_1xh = np.tanh(h_in_1xh)
# TODO: Output layer - Replace these values with your calculations.
out_logits_1xm = (h_out_1xh @ self.w_hxm) + self.b_1xm # signals into final output layer
y_pred_1xm = np.tanh(out_logits_1xm) # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
dy_1xm = y_pred_1xm - y_1xm # Output layer error: difference between actual target and desired output.
# TODO: Output error - Replace this value with your calculations.
dout_logits_1xm = dy_1xm * (1-(np.tanh(out_logits_1xm)**2)) # dtanh= (1-(np.tanh(x))**2)
dh_out_1xh = dout_logits_1xm @ self.w_hxm.T
# TODO: Calculate the hidden layer's contribution to the error
dh_in_1xh = dh_out_1xh * (1-(np.tanh(h_in_1xh)**2))
# No gradient w.r.t. the input is needed here, since there is no layer below the input layer
# TODO: Backpropagated error terms - Replace these values with your calculations.
db_1xm += dout_logits_1xm
dw_hxm += (dout_logits_1xm.T @ h_out_1xh).T
db_1xh += dh_in_1xh
dw_nxh += (dh_in_1xh.T @ x_1xn).T
# TODO: Update the NN parameters (w, b) in each epoch of training
self.w_hxm -= self.lr * dw_hxm # update hidden-to-output weights with gradient descent step
self.b_1xm -= self.lr * db_1xm # output units/neurons/cells/nodes
self.w_nxh -= self.lr * dw_nxh # update input-to-hidden weights with gradient descent step
self.b_1xh -= self.lr * db_1xh # hidden units/cells/neurons/nodes
def run(self, X_txn):
''' Run a forward pass through the network with input features
Arguments
---------
features: X_txn is a 2D array of records (t as row) and their features (n as col)
'''
#### Implement the forward pass here ####
### Forward pass ###
x_txn = X_txn
# TODO: Hidden layer - Replace these values with your calculations.
h_in_txh = (x_txn @ self.w_nxh) + self.b_1xh # signals into hidden layer
h_out_txh = np.tanh(h_in_txh)
# TODO: Output layer - Replace these values with your calculations.
out_logits_txm = (h_out_txh @ self.w_hxm) + self.b_1xm # signals into final output layer
y_pred_txm = np.tanh(out_logits_txm) # signals from final output layer
return y_pred_txm
# Mean Squared-Error(MSE)
def MSE(Y_pred_1xt, Y_1xt):
return np.mean((Y_pred_1xt-Y_1xt)**2)
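The backward pass above relies on the identity d/dx tanh(x) = 1 - tanh(x)**2. A quick sanity check of that identity against a central finite difference (pure Python, illustrative point x=0.7):

```python
import math

def dtanh(x):
    # Analytic derivative used in the backward pass: d/dx tanh(x) = 1 - tanh(x)**2
    return 1.0 - math.tanh(x) ** 2

# Compare against a central finite difference at an arbitrary point
x, h = 0.7, 1e-6
numeric = (math.tanh(x + h) - math.tanh(x - h)) / (2 * h)
print(abs(dtanh(x) - numeric))  # tiny (finite-difference error only)
```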
### Set the hyperparameters here ###
num_epochs = 100 # updating NN parameters (w, b)
learning_rate = 2 * 1/train_features_txn.shape[0] # train_features = x_txn, t: number of recorded samples/records
hidden_nodes = 5
output_nodes = 1 # y_tx1
input_nodes = train_features_txn.shape[1] # x_txn
# Building the NN by instantiating the NN class
nn = NN(h=hidden_nodes, lr=learning_rate, m=output_nodes, n=input_nodes)
# Training-validating the NN - learning process
losses_tx2 = {'train':[], 'valid':[]}
for each_epoch in range(num_epochs):
# # Go through a random minibatch of 128 records from the training data set
# random_minibatch = np.random.choice(train_features_txn.index, size=128)
# x_txn, y_txm = train_features_txn.ix[random_minibatch].values, train_targets_txm.ix[random_minibatch]['cnt']
# Go through the full batch of records in the training data set
x_txn, y_tx1 = train_features_txn.values, train_targets_txm['cnt']
nn.train(X_txn=x_txn, Y_txm=y_tx1)
# Printing out the training progress
train_loss_1x1_value = MSE(Y_pred_1xt=nn.run(X_txn=train_features_txn).T, Y_1xt=train_targets_txm['cnt'].values)
valid_loss_1x1_value = MSE(Y_pred_1xt=nn.run(X_txn=valid_features_txn).T, Y_1xt=valid_targets_txm['cnt'].values)
print('each_epoch:', each_epoch, 'num_epochs:', num_epochs,
'train_loss:', train_loss_1x1_value, 'valid_loss:', valid_loss_1x1_value)
losses_tx2['train'].append(train_loss_1x1_value)
losses_tx2['valid'].append(valid_loss_1x1_value)
plt.plot(losses_tx2['train'], label='Train loss')
plt.plot(losses_tx2['valid'], label='Valid loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions_tx1 = nn.run(test_features_txn).T*std + mean
ax.plot(predictions_tx1[0], label='Prediction')
ax.plot((test_targets_txm['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions_tx1))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index, 'dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load, explore, and prepare the dataset
Step2: Downloading/ checking out the bike sharing dataset
Step3: Dummy variables
Step4: Batch normalization for standardizing the dataset using mean and variance/ scaling the data
Step5: Splitting the dataset into training, validation, and testing sets
Step6: Dividing the entire training data into training and validation sets to avoid overfitting and underfitting during the training
Step7: How to build the NN
Step8: Training the NN
Step9: Test predictions
|
11,267
|
<ASSISTANT_TASK:>
Python Code:
labVersion = 'cs190_week5_v_1_2'
import matplotlib.pyplot as plt
import numpy as np
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
"""Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
def create2DGaussian(mn, sigma, cov, n):
"""Randomly sample points from a two-dimensional Gaussian distribution."""
np.random.seed(142)
return np.random.multivariate_normal(np.array([mn, mn]), np.array([[sigma, cov], [cov, sigma]]), n)
dataRandom = create2DGaussian(mn=50, sigma=1, cov=0, n=100)
# generate layout and plot data
fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2))
ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')
ax.set_xlim(45, 54.5), ax.set_ylim(45, 54.5)
plt.scatter(dataRandom[:,0], dataRandom[:,1], s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
dataCorrelated = create2DGaussian(mn=50, sigma=1, cov=.9, n=100)
# generate layout and plot data
fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2))
ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')
ax.set_xlim(45.5, 54.5), ax.set_ylim(45.5, 54.5)
plt.scatter(dataCorrelated[:,0], dataCorrelated[:,1], s=14**2, c='#d6ebf2',
edgecolors='#8cbfd0', alpha=0.75)
pass
# TODO: Replace <FILL IN> with appropriate code
correlatedData = sc.parallelize(dataCorrelated)
meanCorrelated = correlatedData.sum() / correlatedData.count()
correlatedDataZeroMean = correlatedData.map(lambda x : x - meanCorrelated)
print meanCorrelated
print correlatedData.take(1)
print correlatedDataZeroMean.take(1)
# TEST Interpreting PCA (1a)
from test_helper import Test
Test.assertTrue(np.allclose(meanCorrelated, [49.95739037, 49.97180477]),
'incorrect value for meanCorrelated')
Test.assertTrue(np.allclose(correlatedDataZeroMean.take(1)[0], [-0.28561917, 0.10351492]),
'incorrect value for correlatedDataZeroMean')
# TODO: Replace <FILL IN> with appropriate code
# Compute the covariance matrix using outer products and correlatedDataZeroMean
correlatedCov = correlatedDataZeroMean.map(lambda x : np.outer(x, x)).reduce(lambda x, y: x + y) / correlatedData.count()
print correlatedCov
# TEST Sample covariance matrix (1b)
covResult = [[ 0.99558386, 0.90148989], [0.90148989, 1.08607497]]
Test.assertTrue(np.allclose(covResult, correlatedCov), 'incorrect value for correlatedCov')
# TODO: Replace <FILL IN> with appropriate code
def estimateCovariance(data):
"""Compute the covariance matrix for a given rdd.
Note:
The multi-dimensional covariance array should be calculated using outer products. Don't
forget to normalize the data by first subtracting the mean.
Args:
data (RDD of np.ndarray): An `RDD` consisting of NumPy arrays.
Returns:
np.ndarray: A multi-dimensional array where the number of rows and columns both equal the
length of the arrays in the input `RDD`.
"""
# First obtain the zero mean data
meanCorrelated = (data.sum() / data.count())
zeroMeanCorrelatedData = data.map(lambda x : x - meanCorrelated)
# Covariance matrix
covMatrix = zeroMeanCorrelatedData.map(lambda x : np.outer(x, x)).reduce(lambda x, y: x + y) / data.count()
return covMatrix
correlatedCovAuto= estimateCovariance(correlatedData)
print correlatedCovAuto
# TEST Covariance function (1c)
correctCov = [[ 0.99558386, 0.90148989], [0.90148989, 1.08607497]]
Test.assertTrue(np.allclose(correctCov, correlatedCovAuto),
'incorrect value for correlatedCovAuto')
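The outer-product formulation used above can be checked on a tiny example without Spark or NumPy: each mean-centered row's outer product with itself is accumulated and divided by n (the data values below are illustrative):

```python
def covariance(data):
    # Sample covariance as the sum of outer products of mean-centered rows,
    # normalized by n (matching estimateCovariance above)
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[0.0] * d for _ in range(d)]
    for row in centered:
        for i in range(d):
            for j in range(d):
                cov[i][j] += row[i] * row[j] / n
    return cov

print(covariance([[1.0, 2.0], [3.0, 6.0], [5.0, 10.0]]))  # [[8/3, 16/3], [16/3, 32/3]]
```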
# TODO: Replace <FILL IN> with appropriate code
from numpy.linalg import eigh
# Calculate the eigenvalues and eigenvectors from correlatedCovAuto
eigVals, eigVecs = eigh(correlatedCovAuto)
print 'eigenvalues: {0}'.format(eigVals)
print '\neigenvectors: \n{0}'.format(eigVecs)
# Use np.argsort to find the top eigenvector based on the largest eigenvalue
# (eigh returns eigenvalues in ascending order, so the last index is the largest)
inds = np.argsort(eigVals)
print "inds", inds
topComponent = eigVecs[:, inds[-1]]
print '\ntop principal component: {0}'.format(topComponent)
# TEST Eigendecomposition (1d)
def checkBasis(vectors, correct):
return np.allclose(vectors, correct) or np.allclose(np.negative(vectors), correct)
Test.assertTrue(checkBasis(topComponent, [0.68915649, 0.72461254]),
'incorrect value for topComponent')
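Eigendecomposition is one way to get the top component; power iteration is another that makes the "dominant direction" intuition concrete. A pure-Python sketch on an illustrative 2x2 symmetric matrix:

```python
def power_iteration(A, iters=200):
    # Repeatedly multiply by A and renormalize; this converges to the
    # eigenvector with the largest-magnitude eigenvalue (the top component)
    v = [1.0, 0.0]
    for _ in range(iters):
        w = [A[0][0] * v[0] + A[0][1] * v[1],
             A[1][0] * v[0] + A[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    # Rayleigh quotient v.T A v gives the corresponding eigenvalue
    Av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
    eigval = v[0] * Av[0] + v[1] * Av[1]
    return eigval, v

val, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(round(val, 6), [round(x, 6) for x in vec])  # 3.0 [0.707107, 0.707107]
```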
# TODO: Replace <FILL IN> with appropriate code
# Use the topComponent and the data from correlatedData to generate PCA scores
correlatedDataScores = correlatedData.map(lambda x : x.dot(topComponent))
print 'one-dimensional data (first three):\n{0}'.format(np.asarray(correlatedDataScores.take(3)))
# TEST PCA Scores (1e)
firstThree = [70.51682806, 69.30622356, 71.13588168]
Test.assertTrue(checkBasis(correlatedDataScores.take(3), firstThree),
'incorrect value for correlatedDataScores')
# TODO: Replace <FILL IN> with appropriate code
def pca(data, k=2):
"""Computes the top `k` principal components, corresponding scores, and all eigenvalues.
Note:
All eigenvalues should be returned in sorted order (largest to smallest). `eigh` returns
each eigenvector as a column. This function should also return eigenvectors as columns.
Args:
data (RDD of np.ndarray): An `RDD` consisting of NumPy arrays.
k (int): The number of principal components to return.
Returns:
tuple of (np.ndarray, RDD of np.ndarray, np.ndarray): A tuple of (eigenvectors, `RDD` of
scores, eigenvalues). Eigenvectors is a multi-dimensional array where the number of
rows equals the length of the arrays in the input `RDD` and the number of columns equals
`k`. The `RDD` of scores has the same number of rows as `data` and consists of arrays
of length `k`. Eigenvalues is an array of length d (the number of features).
"""
# Correlated Covariance matrix
covMatrix = estimateCovariance(data)
#print "covMatrix", covMatrix
# Eigen decomposition
eigVals, eigVecs = eigh(covMatrix)
#print "eigVecs", eigVecs
# Sort eigenvalue indices from largest to smallest
inds = np.argsort(eigVals)[::-1]
# Top `k` eigenvectors as columns, in order of decreasing eigenvalue
k_top = eigVecs[:, inds[:k]]
# Project each point onto the top `k` components to get its scores
k_scores = data.map(lambda x: x.dot(k_top))
# Return the `k` principal components, `k` scores, and all eigenvalues (sorted descending)
return k_top, k_scores, eigVals[inds]
# Run pca on correlatedData with k = 2
topComponentsCorrelated, correlatedDataScoresAuto, eigenvaluesCorrelated = pca(correlatedData)
# Note that the 1st principal component is in the first column
print 'topComponentsCorrelated: \n{0}'.format(topComponentsCorrelated)
print ('\ncorrelatedDataScoresAuto (first three): \n{0}'
.format('\n'.join(map(str, correlatedDataScoresAuto.take(3)))))
print '\neigenvaluesCorrelated: \n{0}'.format(eigenvaluesCorrelated)
# Create a higher dimensional test set
pcaTestData = sc.parallelize([np.arange(x, x + 4) for x in np.arange(0, 20, 4)])
componentsTest, testScores, eigenvaluesTest = pca(pcaTestData, 3)
print '\npcaTestData: \n{0}'.format(np.array(pcaTestData.collect()))
print '\ncomponentsTest: \n{0}'.format(componentsTest)
print ('\ntestScores (first three): \n{0}'
.format('\n'.join(map(str, testScores.take(3)))))
print '\neigenvaluesTest: \n{0}'.format(eigenvaluesTest)
# TEST PCA Function (2a)
Test.assertTrue(checkBasis(topComponentsCorrelated.T,
[[0.68915649, 0.72461254], [-0.72461254, 0.68915649]]),
'incorrect value for topComponentsCorrelated')
firstThreeCorrelated = [[70.51682806, 69.30622356, 71.13588168], [1.48305648, 1.5888655, 1.86710679]]
Test.assertTrue(np.allclose(firstThreeCorrelated,
np.vstack(np.abs(correlatedDataScoresAuto.take(3))).T),
'incorrect value for firstThreeCorrelated')
Test.assertTrue(np.allclose(eigenvaluesCorrelated, [1.94345403, 0.13820481]),
'incorrect values for eigenvaluesCorrelated')
topComponentsCorrelatedK1, correlatedDataScoresK1, eigenvaluesCorrelatedK1 = pca(correlatedData, 1)
Test.assertTrue(checkBasis(topComponentsCorrelatedK1.T, [0.68915649, 0.72461254]),
'incorrect value for components when k=1')
Test.assertTrue(np.allclose([70.51682806, 69.30622356, 71.13588168],
np.vstack(np.abs(correlatedDataScoresK1.take(3))).T),
'incorrect value for scores when k=1')
Test.assertTrue(np.allclose(eigenvaluesCorrelatedK1, [1.94345403, 0.13820481]),
'incorrect values for eigenvalues when k=1')
Test.assertTrue(checkBasis(componentsTest.T[0], [ .5, .5, .5, .5]),
'incorrect value for componentsTest')
Test.assertTrue(np.allclose(np.abs(testScores.first()[0]), 3.),
'incorrect value for testScores')
Test.assertTrue(np.allclose(eigenvaluesTest, [ 128, 0, 0, 0 ]), 'incorrect value for eigenvaluesTest')
# TODO: Replace <FILL IN> with appropriate code
randomData = sc.parallelize(dataRandom)
# Use pca on randomData
topComponentsRandom, randomDataScoresAuto, eigenvaluesRandom = pca(randomData)
print 'topComponentsRandom: \n{0}'.format(topComponentsRandom)
print ('\nrandomDataScoresAuto (first three): \n{0}'
.format('\n'.join(map(str, randomDataScoresAuto.take(3)))))
print '\neigenvaluesRandom: \n{0}'.format(eigenvaluesRandom)
# TEST PCA on `dataRandom` (2b)
Test.assertTrue(checkBasis(topComponentsRandom.T,
[[-0.2522559 , 0.96766056], [-0.96766056, -0.2522559]]),
'incorrect value for topComponentsRandom')
firstThreeRandom = [[36.61068572, 35.97314295, 35.59836628],
[61.3489929 , 62.08813671, 60.61390415]]
Test.assertTrue(np.allclose(firstThreeRandom, np.vstack(np.abs(randomDataScoresAuto.take(3))).T),
'incorrect value for randomDataScoresAuto')
Test.assertTrue(np.allclose(eigenvaluesRandom, [1.4204546, 0.99521397]),
'incorrect value for eigenvaluesRandom')
def projectPointsAndGetLines(data, components, xRange):
Project original data onto first component and get line details for top two components.
topComponent= components[:, 0]
slope1, slope2 = components[1, :2] / components[0, :2]
means = data.mean()[:2]
demeaned = data.map(lambda v: v - means)
projected = demeaned.map(lambda v: (v.dot(topComponent) /
topComponent.dot(topComponent)) * topComponent)
remeaned = projected.map(lambda v: v + means)
x1,x2 = zip(*remeaned.collect())
lineStartP1X1, lineStartP1X2 = means - np.asarray([xRange, xRange * slope1])
lineEndP1X1, lineEndP1X2 = means + np.asarray([xRange, xRange * slope1])
lineStartP2X1, lineStartP2X2 = means - np.asarray([xRange, xRange * slope2])
lineEndP2X1, lineEndP2X2 = means + np.asarray([xRange, xRange * slope2])
return ((x1, x2), ([lineStartP1X1, lineEndP1X1], [lineStartP1X2, lineEndP1X2]),
([lineStartP2X1, lineEndP2X1], [lineStartP2X2, lineEndP2X2]))
((x1, x2), (line1X1, line1X2), (line2X1, line2X2)) = \
projectPointsAndGetLines(correlatedData, topComponentsCorrelated, 5)
# generate layout and plot data
fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2), figsize=(7, 7))
ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')
ax.set_xlim(45.5, 54.5), ax.set_ylim(45.5, 54.5)
plt.plot(line1X1, line1X2, linewidth=3.0, c='#8cbfd0', linestyle='--')
plt.plot(line2X1, line2X2, linewidth=3.0, c='#d6ebf2', linestyle='--')
plt.scatter(dataCorrelated[:,0], dataCorrelated[:,1], s=14**2, c='#d6ebf2',
edgecolors='#8cbfd0', alpha=0.75)
plt.scatter(x1, x2, s=14**2, c='#62c162', alpha=.75)
pass
((x1, x2), (line1X1, line1X2), (line2X1, line2X2)) = \
projectPointsAndGetLines(randomData, topComponentsRandom, 5)
# generate layout and plot data
fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2), figsize=(7, 7))
ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')
ax.set_xlim(45.5, 54.5), ax.set_ylim(45.5, 54.5)
plt.plot(line1X1, line1X2, linewidth=3.0, c='#8cbfd0', linestyle='--')
plt.plot(line2X1, line2X2, linewidth=3.0, c='#d6ebf2', linestyle='--')
plt.scatter(dataRandom[:,0], dataRandom[:,1], s=14**2, c='#d6ebf2',
edgecolors='#8cbfd0', alpha=0.75)
plt.scatter(x1, x2, s=14**2, c='#62c162', alpha=.75)
pass
from mpl_toolkits.mplot3d import Axes3D
m = 100
mu = np.array([50, 50, 50])
r1_2 = 0.9
r1_3 = 0.7
r2_3 = 0.1
sigma1 = 5
sigma2 = 20
sigma3 = 20
c = np.array([[sigma1 ** 2, r1_2 * sigma1 * sigma2, r1_3 * sigma1 * sigma3],
[r1_2 * sigma1 * sigma2, sigma2 ** 2, r2_3 * sigma2 * sigma3],
[r1_3 * sigma1 * sigma3, r2_3 * sigma2 * sigma3, sigma3 ** 2]])
np.random.seed(142)
dataThreeD = np.random.multivariate_normal(mu, c, m)
from matplotlib.colors import ListedColormap, Normalize
from matplotlib.cm import get_cmap
norm = Normalize()
cmap = get_cmap("Blues")
clrs = cmap(np.array(norm(dataThreeD[:,2])))[:,0:3]
fig = plt.figure(figsize=(11, 6))
ax = fig.add_subplot(121, projection='3d')
ax.azim=-100
ax.scatter(dataThreeD[:,0], dataThreeD[:,1], dataThreeD[:,2], c=clrs, s=14**2)
xx, yy = np.meshgrid(np.arange(-15, 10, 1), np.arange(-50, 30, 1))
normal = np.array([0.96981815, -0.188338, -0.15485978])
z = (-normal[0] * xx - normal[1] * yy) * 1. / normal[2]
xx = xx + 50
yy = yy + 50
z = z + 50
ax.set_zlim((-20, 120)), ax.set_ylim((-20, 100)), ax.set_xlim((30, 75))
ax.plot_surface(xx, yy, z, alpha=.10)
ax = fig.add_subplot(122, projection='3d')
ax.azim=10
ax.elev=20
#ax.dist=8
ax.scatter(dataThreeD[:,0], dataThreeD[:,1], dataThreeD[:,2], c=clrs, s=14**2)
ax.set_zlim((-20, 120)), ax.set_ylim((-20, 100)), ax.set_xlim((30, 75))
ax.plot_surface(xx, yy, z, alpha=.1)
plt.tight_layout()
pass
# TODO: Replace <FILL IN> with appropriate code
threeDData = sc.parallelize(dataThreeD)
componentsThreeD, threeDScores, eigenvaluesThreeD = pca(threeDData)
print 'componentsThreeD: \n{0}'.format(componentsThreeD)
print ('\nthreeDScores (first three): \n{0}'
.format('\n'.join(map(str, threeDScores.take(3)))))
print '\neigenvaluesThreeD: \n{0}'.format(eigenvaluesThreeD)
# TEST 3D to 2D (2c)
Test.assertEquals(componentsThreeD.shape, (3, 2), 'incorrect shape for componentsThreeD')
Test.assertTrue(np.allclose(np.sum(eigenvaluesThreeD), 969.796443367),
'incorrect value for eigenvaluesThreeD')
Test.assertTrue(np.allclose(np.abs(np.sum(componentsThreeD)), 1.77238943258),
'incorrect value for componentsThreeD')
Test.assertTrue(np.allclose(np.abs(np.sum(threeDScores.take(3))), 237.782834092),
'incorrect value for threeDScores')
scoresThreeD = np.asarray(threeDScores.collect())
# generate layout and plot data
fig, ax = preparePlot(np.arange(20, 150, 20), np.arange(-40, 110, 20))
ax.set_xlabel(r'New $x_1$ values'), ax.set_ylabel(r'New $x_2$ values')
ax.set_xlim(5, 150), ax.set_ylim(-45, 50)
plt.scatter(scoresThreeD[:,0], scoresThreeD[:,1], s=14**2, c=clrs, edgecolors='#8cbfd0', alpha=0.75)
pass
# TODO: Replace <FILL IN> with appropriate code
def varianceExplained(data, k=1):
"""Calculate the fraction of variance explained by the top `k` eigenvectors.
Args:
data (RDD of np.ndarray): An RDD that contains NumPy arrays which store the
features for an observation.
k: The number of principal components to consider.
Returns:
float: A number between 0 and 1 representing the percentage of variance explained
by the top `k` eigenvectors.
"""
components, scores, eigenvalues = pca(data, k)
# Sort eigenvalues largest-to-smallest so the top `k` can be summed directly;
# the explained fraction is their sum over the sum of all eigenvalues
topEigenvalues = np.sort(eigenvalues)[::-1][:k]
return topEigenvalues.sum() / eigenvalues.sum()
varianceRandom1 = varianceExplained(randomData, 1)
varianceCorrelated1 = varianceExplained(correlatedData, 1)
varianceRandom2 = varianceExplained(randomData, 2)
varianceCorrelated2 = varianceExplained(correlatedData, 2)
varianceThreeD2 = varianceExplained(threeDData, 2)
print ('Percentage of variance explained by the first component of randomData: {0:.1f}%'
.format(varianceRandom1 * 100))
print ('Percentage of variance explained by both components of randomData: {0:.1f}%'
.format(varianceRandom2 * 100))
print ('\nPercentage of variance explained by the first component of correlatedData: {0:.1f}%'.
format(varianceCorrelated1 * 100))
print ('Percentage of variance explained by both components of correlatedData: {0:.1f}%'
.format(varianceCorrelated2 * 100))
print ('\nPercentage of variance explained by the first two components of threeDData: {0:.1f}%'
.format(varianceThreeD2 * 100))
# TEST Variance explained (2d)
Test.assertTrue(np.allclose(varianceRandom1, 0.588017172066), 'incorrect value for varianceRandom1')
Test.assertTrue(np.allclose(varianceCorrelated1, 0.933608329586),
'incorrect value for varianceCorrelated1')
Test.assertTrue(np.allclose(varianceRandom2, 1.0), 'incorrect value for varianceRandom2')
Test.assertTrue(np.allclose(varianceCorrelated2, 1.0), 'incorrect value for varianceCorrelated2')
Test.assertTrue(np.allclose(varianceThreeD2, 0.993967356912), 'incorrect value for varianceThreeD2')
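For intuition, here is a minimal NumPy-only sketch of the same variance-explained computation on a local array, without Spark. `variance_explained_local` is a hypothetical helper written for this illustration, not part of the lab's `pca` API.

```python
import numpy as np

def variance_explained_local(data, k=1):
    # data: (n, d) array of observations. Center the data, eigendecompose
    # the sample covariance, and return the top-k eigenvalue fraction.
    centered = data - data.mean(axis=0)
    cov = centered.T.dot(centered) / data.shape[0]
    eigenvalues = np.linalg.eigvalsh(cov)[::-1]  # sort descending
    return eigenvalues[:k].sum() / eigenvalues.sum()

rng = np.random.RandomState(0)
base = rng.randn(500)
# two highly correlated columns: one component should dominate
data = np.column_stack([base, base * 0.9 + 0.1 * rng.randn(500)])
print(variance_explained_local(data, 1))  # close to 1
```

Using all `d` components always yields exactly 1.0, since the numerator then equals the denominator.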
import os
baseDir = os.path.join('data')
inputPath = os.path.join('cs190', 'neuro.txt')
inputFile = os.path.join(baseDir, inputPath)
lines = sc.textFile(inputFile)
print lines.first()[0:100]
# Check that everything loaded properly
assert len(lines.first()) == 1397
assert lines.count() == 46460
# TODO: Replace <FILL IN> with appropriate code
def parse(line):
    """Parse the raw data into a (`tuple`, `np.ndarray`) pair.

    Note:
        You should store the pixel coordinates as a tuple of two ints and the elements of the
        pixel intensity time series as an np.ndarray of floats.

    Args:
        line (str): A string representing an observation. Elements are separated by spaces. The
            first two elements represent the coordinates of the pixel, and the rest of the
            elements represent the pixel intensity over time.

    Returns:
        tuple of tuple, np.ndarray: A (coordinate, pixel intensity array) `tuple` where
            coordinate is a `tuple` containing two values and the pixel intensity is stored in
            a NumPy array which contains 240 values.
    """
    myline = line.split(' ')
    myline = [float(x) for x in myline]
    return ((int(myline[0]), int(myline[1])), np.array(myline[2:]))
rawData = lines.map(parse)
rawData.cache()
entry = rawData.first()
print 'Length of movie is {0} seconds'.format(len(entry[1]))
print 'Number of pixels in movie is {0:,}'.format(rawData.count())
print ('\nFirst entry of rawData (with only the first five values of the NumPy array):\n({0}, {1})'
.format(entry[0], entry[1][:5]))
# TEST Parse the data (3b)
Test.assertTrue(isinstance(entry[0], tuple), "entry's key should be a tuple")
Test.assertEquals(len(entry), 2, 'entry should have a key and a value')
Test.assertTrue(isinstance(entry[0][1], int), 'coordinate tuple should contain ints')
Test.assertEquals(len(entry[0]), 2, "entry's key should have two values")
Test.assertTrue(isinstance(entry[1], np.ndarray), "entry's value should be an np.ndarray")
Test.assertTrue(isinstance(entry[1][0], np.float), 'the np.ndarray should consist of np.float values')
Test.assertEquals(entry[0], (0, 0), 'incorrect key for entry')
Test.assertEquals(entry[1].size, 240, 'incorrect length of entry array')
Test.assertTrue(np.allclose(np.sum(entry[1]), 24683.5), 'incorrect values in entry array')
# TODO: Replace <FILL IN> with appropriate code
#mn = rawData.min()
mn = rawData.map(lambda (x, y) : min(y)).min()
mx = rawData.map(lambda (x, y) : max(y)).max()
#print mn, mx
# TEST Min and max fluorescence (3c)
Test.assertTrue(np.allclose(mn, 100.6), 'incorrect value for mn')
Test.assertTrue(np.allclose(mx, 940.8), 'incorrect value for mx')
example = rawData.filter(lambda (k, v): np.std(v) > 100).values().first()
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 300, 50), np.arange(300, 800, 100))
ax.set_xlabel(r'time'), ax.set_ylabel(r'fluorescence')
ax.set_xlim(-20, 270), ax.set_ylim(270, 730)
plt.plot(range(len(example)), example, c='#8cbfd0', linewidth='3.0')
pass
# TODO: Replace <FILL IN> with appropriate code
def rescale(ts):
    """Take a np.ndarray and return the standardized array by subtracting and dividing by the mean.

    Note:
        You should first subtract the mean and then divide by the mean.

    Args:
        ts (np.ndarray): Time series data (`np.float`) representing pixel intensity.

    Returns:
        np.ndarray: The time series adjusted by subtracting the mean and dividing by the mean.
    """
    ts_mean = ts.mean()
    return (ts - ts_mean) / ts_mean
scaledData = rawData.mapValues(lambda v: rescale(v))
mnScaled = scaledData.map(lambda (k, v): v).map(lambda v: min(v)).min()
mxScaled = scaledData.map(lambda (k, v): v).map(lambda v: max(v)).max()
print mnScaled, mxScaled
# TEST Fractional signal change (3d)
Test.assertTrue(isinstance(scaledData.first()[1], np.ndarray), 'incorrect type returned by rescale')
Test.assertTrue(np.allclose(mnScaled, -0.27151288), 'incorrect value for mnScaled')
Test.assertTrue(np.allclose(mxScaled, 0.90544876), 'incorrect value for mxScaled')
example = scaledData.filter(lambda (k, v): np.std(v) > 0.1).values().first()
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 300, 50), np.arange(-.1, .6, .1))
ax.set_xlabel(r'time'), ax.set_ylabel(r'fluorescence')
ax.set_xlim(-20, 260), ax.set_ylim(-.12, .52)
plt.plot(range(len(example)), example, c='#8cbfd0', linewidth='3.0')
pass
# TODO: Replace <FILL IN> with appropriate code
# Run pca using scaledData
componentsScaled, scaledScores, eigenvaluesScaled = pca(scaledData, k=3)
# TEST PCA on the scaled data (3e)
Test.assertEquals(componentsScaled.shape, (240, 3), 'incorrect shape for componentsScaled')
Test.assertTrue(np.allclose(np.abs(np.sum(componentsScaled[:5, :])), 0.283150995232),
'incorrect value for componentsScaled')
Test.assertTrue(np.allclose(np.abs(np.sum(scaledScores.take(3))), 0.0285507449251),
'incorrect value for scaledScores')
Test.assertTrue(np.allclose(np.sum(eigenvaluesScaled[:5]), 0.206987501564),
'incorrect value for eigenvaluesScaled')
import matplotlib.cm as cm
scoresScaled = np.vstack(scaledScores.collect())
imageOneScaled = scoresScaled[:,0].reshape(230, 202).T
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)
ax.grid(False)
ax.set_title('Top Principal Component', color='#888888')
image = plt.imshow(imageOneScaled,interpolation='nearest', aspect='auto', cmap=cm.gray)
pass
imageTwoScaled = scoresScaled[:,1].reshape(230, 202).T
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)
ax.grid(False)
ax.set_title('Second Principal Component', color='#888888')
image = plt.imshow(imageTwoScaled,interpolation='nearest', aspect='auto', cmap=cm.gray)
pass
# Adapted from python-thunder's Colorize.transform where cmap='polar'.
# Check out the library at: https://github.com/thunder-project/thunder and
# http://thunder-project.org/
def polarTransform(scale, img):
    """Convert points from cartesian to polar coordinates and map to colors."""
    from matplotlib.colors import hsv_to_rgb

    img = np.asarray(img)
    dims = img.shape

    phi = ((np.arctan2(-img[0], -img[1]) + np.pi/2) % (np.pi*2)) / (2 * np.pi)
    rho = np.sqrt(img[0]**2 + img[1]**2)
    saturation = np.ones((dims[1], dims[2]))

    out = hsv_to_rgb(np.dstack((phi, saturation, scale * rho)))
    return np.clip(out * scale, 0, 1)
# Show the polar mapping from principal component coordinates to colors.
x1AbsMax = np.max(np.abs(imageOneScaled))
x2AbsMax = np.max(np.abs(imageTwoScaled))
numOfPixels = 300
x1Vals = np.arange(-x1AbsMax, x1AbsMax, (2 * x1AbsMax) / numOfPixels)
x2Vals = np.arange(x2AbsMax, -x2AbsMax, -(2 * x2AbsMax) / numOfPixels)
x2Vals.shape = (numOfPixels, 1)
x1Data = np.tile(x1Vals, (numOfPixels, 1))
x2Data = np.tile(x2Vals, (1, numOfPixels))
# Try changing the first parameter to lower values
polarMap = polarTransform(2.0, [x1Data, x2Data])
gridRange = np.arange(0, numOfPixels + 25, 25)
fig, ax = preparePlot(gridRange, gridRange, figsize=(9.0, 7.2), hideLabels=True)
image = plt.imshow(polarMap, interpolation='nearest', aspect='auto')
ax.set_xlabel('Principal component one'), ax.set_ylabel('Principal component two')
gridMarks = (2 * gridRange / float(numOfPixels) - 1.0)
x1Marks = x1AbsMax * gridMarks
x2Marks = -x2AbsMax * gridMarks
ax.get_xaxis().set_ticklabels(map(lambda x: '{0:.1f}'.format(x), x1Marks))
ax.get_yaxis().set_ticklabels(map(lambda x: '{0:.1f}'.format(x), x2Marks))
pass
# Use the same transformation on the image data
# Try changing the first parameter to lower values
brainmap = polarTransform(2.0, [imageOneScaled, imageTwoScaled])
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)
ax.grid(False)
image = plt.imshow(brainmap,interpolation='nearest', aspect='auto')
pass
# TODO: Replace <FILL IN> with appropriate code
vector = np.array([0., 1., 2., 3., 4., 5.])
# Create a multi-dimensional array that when multiplied (using .dot) against vector, results in
# a two element array where the first element is the sum of the 0, 2, and 4 indexed elements of
# vector and the second element is the sum of the 1, 3, and 5 indexed elements of vector.
# This should be a 2 row by 6 column array
sumEveryOther = np.array([[1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]])
# Create a multi-dimensional array that when multiplied (using .dot) against vector, results in a
# three element array where the first element is the sum of the 0 and 3 indexed elements of vector,
# the second element is the sum of the 1 and 4 indexed elements of vector, and the third element is
# the sum of the 2 and 5 indexed elements of vector.
# This should be a 3 row by 6 column array
sumEveryThird = np.array([[1, 0, 0, 1, 0, 0], [0, 1, 0, 0, 1, 0], [0, 0, 1, 0, 0, 1]])
# Create a multi-dimensional array that can be used to sum the first three elements of vector and
# the last three elements of vector, which returns a two element array with those values when dotted
# with vector.
# This should be a 2 row by 6 column array
sumByThree = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]])
# Create a multi-dimensional array that sums the first two elements, second two elements, and
# last two elements of vector, which returns a three element array with those values when dotted
# with vector.
# This should be a 3 row by 6 column array
sumByTwo = np.array([[1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 1, 1]])
print 'sumEveryOther.dot(vector):\t{0}'.format(sumEveryOther.dot(vector))
print 'sumEveryThird.dot(vector):\t{0}'.format(sumEveryThird.dot(vector))
print '\nsumByThree.dot(vector):\t{0}'.format(sumByThree.dot(vector))
print 'sumByTwo.dot(vector): \t{0}'.format(sumByTwo.dot(vector))
# TEST Aggregation using arrays (4a)
Test.assertEquals(sumEveryOther.shape, (2, 6), 'incorrect shape for sumEveryOther')
Test.assertEquals(sumEveryThird.shape, (3, 6), 'incorrect shape for sumEveryThird')
Test.assertTrue(np.allclose(sumEveryOther.dot(vector), [6, 9]), 'incorrect value for sumEveryOther')
Test.assertTrue(np.allclose(sumEveryThird.dot(vector), [3, 5, 7]),
'incorrect value for sumEveryThird')
Test.assertEquals(sumByThree.shape, (2, 6), 'incorrect shape for sumByThree')
Test.assertEquals(sumByTwo.shape, (3, 6), 'incorrect shape for sumByTwo')
Test.assertTrue(np.allclose(sumByThree.dot(vector), [3, 12]), 'incorrect value for sumByThree')
Test.assertTrue(np.allclose(sumByTwo.dot(vector), [1, 5, 9]), 'incorrect value for sumByTwo')
# Reference for what to recreate
print 'sumEveryOther: \n{0}'.format(sumEveryOther)
print '\nsumEveryThird: \n{0}'.format(sumEveryThird)
# TODO: Replace <FILL IN> with appropriate code
# Use np.tile and np.eye to recreate the arrays
sumEveryOtherTile = np.tile(np.eye(2), 3)
sumEveryThirdTile = np.tile(np.eye(3), 2)
print sumEveryOtherTile
print 'sumEveryOtherTile.dot(vector): {0}'.format(sumEveryOtherTile.dot(vector))
print '\n', sumEveryThirdTile
print 'sumEveryThirdTile.dot(vector): {0}'.format(sumEveryThirdTile.dot(vector))
# TEST Recreate with `np.tile` and `np.eye` (4b)
Test.assertEquals(sumEveryOtherTile.shape, (2, 6), 'incorrect shape for sumEveryOtherTile')
Test.assertEquals(sumEveryThirdTile.shape, (3, 6), 'incorrect shape for sumEveryThirdTile')
Test.assertTrue(np.allclose(sumEveryOtherTile.dot(vector), [6, 9]),
'incorrect value for sumEveryOtherTile')
Test.assertTrue(np.allclose(sumEveryThirdTile.dot(vector), [3, 5, 7]),
'incorrect value for sumEveryThirdTile')
# Reference for what to recreate
print 'sumByThree: \n{0}'.format(sumByThree)
print '\nsumByTwo: \n{0}'.format(sumByTwo)
# TODO: Replace <FILL IN> with appropriate code
# Use np.kron, np.eye, and np.ones to recreate the arrays
sumByThreeKron = np.kron(np.eye(2), np.ones(shape=(1,3)))
sumByTwoKron = np.kron(np.eye(3), np.ones(shape=(1,2)))
print sumByThreeKron
print 'sumByThreeKron.dot(vector): {0}'.format(sumByThreeKron.dot(vector))
print '\n', sumByTwoKron
print 'sumByTwoKron.dot(vector): {0}'.format(sumByTwoKron.dot(vector))
# TEST Recreate with `np.kron` (4c)
Test.assertEquals(sumByThreeKron.shape, (2, 6), 'incorrect shape for sumByThreeKron')
Test.assertEquals(sumByTwoKron.shape, (3, 6), 'incorrect shape for sumByTwoKron')
Test.assertTrue(np.allclose(sumByThreeKron.dot(vector), [3, 12]),
'incorrect value for sumByThreeKron')
Test.assertTrue(np.allclose(sumByTwoKron.dot(vector), [1, 5, 9]),
'incorrect value for sumByTwoKron')
# TODO: Replace <FILL IN> with appropriate code
# Create a multi-dimensional array to perform the aggregation
# Row i selects second i of each of the 12 direction blocks. (The previous
# attempt, np.tile([[0]*19+[1]]*20, 12), made all 20 rows identical and
# could not pass the test below.)
T = np.tile(np.eye(20), 12)
print T
# Transform scaledData using T. Make sure to retain the keys.
timeData = scaledData.map(lambda (x, y) : (x, T.dot(y)))
timeData.cache()
print timeData.count()
print timeData.first()
timeDataFirst = timeData.values().first()
print timeDataFirst[:2]
# TEST Aggregate by time (4d)
Test.assertEquals(T.shape, (20, 240), 'incorrect shape for T')
timeDataFirst = timeData.values().first()
timeDataFifth = timeData.values().take(5)[4]
Test.assertEquals(timeData.count(), 46460, 'incorrect length of timeData')
Test.assertEquals(timeDataFirst.size, 20, 'incorrect value length of timeData')
Test.assertEquals(timeData.keys().first(), (0, 0), 'incorrect keys in timeData')
Test.assertTrue(np.allclose(timeDataFirst[:2], [0.00802155, 0.00607693]),
'incorrect values in timeData')
Test.assertTrue(np.allclose(timeDataFifth[-2:],[-0.00636676, -0.0179427]),
'incorrect values in timeData')
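The selector-matrix idea can be sanity-checked locally. This sketch uses a made-up toy series, not the neuro data: 240 bins arranged as 12 direction trials of 20 seconds each, collapsed to 20 per-second sums by a `np.tile(np.eye(20), 12)` aggregator.

```python
import numpy as np

# Row i of T has ones at columns i, i+20, ..., i+220, so T.dot(series)
# sums the response at second i across all 12 direction trials.
T = np.tile(np.eye(20), 12)
series = np.arange(240, dtype=float)  # toy stand-in for one pixel's series
aggregated = T.dot(series)
print(aggregated.shape)  # (20,)
print(aggregated[0])     # 0 + 20 + 40 + ... + 220 = 1320.0
```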
# TODO: Replace <FILL IN> with appropriate code
componentsTime, timeScores, eigenvaluesTime = pca(timeData, k=3)
print 'componentsTime: (first five) \n{0}'.format(componentsTime[:5,:])
print ('\ntimeScores (first three): \n{0}'
.format('\n'.join(map(str, timeScores.take(3)))))
print '\neigenvaluesTime: (first five) \n{0}'.format(eigenvaluesTime[:5])
# TEST Obtain a compact representation (4e)
Test.assertEquals(componentsTime.shape, (20, 3), 'incorrect shape for componentsTime')
Test.assertTrue(np.allclose(np.abs(np.sum(componentsTime[:5, :])), 2.37299020),
'incorrect value for componentsTime')
Test.assertTrue(np.allclose(np.abs(np.sum(timeScores.take(3))), 0.0213119114),
'incorrect value for timeScores')
Test.assertTrue(np.allclose(np.sum(eigenvaluesTime[:5]), 0.844764792),
'incorrect value for eigenvaluesTime')
scoresTime = np.vstack(timeScores.collect())
imageOneTime = scoresTime[:,0].reshape(230, 202).T
imageTwoTime = scoresTime[:,1].reshape(230, 202).T
brainmap = polarTransform(3, [imageOneTime, imageTwoTime])
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)
ax.grid(False)
image = plt.imshow(brainmap,interpolation='nearest', aspect='auto')
pass
# TODO: Replace <FILL IN> with appropriate code
# Create a multi-dimensional array to perform the aggregation
D = np.kron(np.eye(12), np.ones(shape=(1,20)))
# Transform scaledData using D. Make sure to retain the keys.
directionData = scaledData.map(lambda (x, y) : (x, D.dot(y)))
directionData.cache()
print directionData.count()
print directionData.first()
# TEST Aggregate by direction (4f)
Test.assertEquals(D.shape, (12, 240), 'incorrect shape for D')
directionDataFirst = directionData.values().first()
directionDataFifth = directionData.values().take(5)[4]
Test.assertEquals(directionData.count(), 46460, 'incorrect length of directionData')
Test.assertEquals(directionDataFirst.size, 12, 'incorrect value length of directionData')
Test.assertEquals(directionData.keys().first(), (0, 0), 'incorrect keys in directionData')
Test.assertTrue(np.allclose(directionDataFirst[:2], [ 0.03346365, 0.03638058]),
'incorrect values in directionData')
Test.assertTrue(np.allclose(directionDataFifth[:2], [ 0.01479147, -0.02090099]),
'incorrect values in directionData')
# TODO: Replace <FILL IN> with appropriate code
componentsDirection, directionScores, eigenvaluesDirection = pca(directionData, k=3)
print 'componentsDirection: (first five) \n{0}'.format(componentsDirection[:5,:])
print ('\ndirectionScores (first three): \n{0}'
.format('\n'.join(map(str, directionScores.take(3)))))
print '\neigenvaluesDirection: (first five) \n{0}'.format(eigenvaluesDirection[:5])
# TEST Compact representation of direction data (4g)
Test.assertEquals(componentsDirection.shape, (12, 3), 'incorrect shape for componentsDirection')
Test.assertTrue(np.allclose(np.abs(np.sum(componentsDirection[:5, :])), 1.080232069),
'incorrect value for componentsDirection')
Test.assertTrue(np.allclose(np.abs(np.sum(directionScores.take(3))), 0.10993162084),
'incorrect value for directionScores')
Test.assertTrue(np.allclose(np.sum(eigenvaluesDirection[:5]), 2.0089720377),
'incorrect value for eigenvaluesDirection')
scoresDirection = np.vstack(directionScores.collect())
imageOneDirection = scoresDirection[:,0].reshape(230, 202).T
imageTwoDirection = scoresDirection[:,1].reshape(230, 202).T
brainmap = polarTransform(2, [imageOneDirection, imageTwoDirection])
# with thunder: Colorize(cmap='polar', scale=2).transform([imageOneDirection, imageTwoDirection])
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(0, 10, 1), figsize=(9.0, 7.2), hideLabels=True)
ax.grid(False)
image = plt.imshow(brainmap, interpolation='nearest', aspect='auto')
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Part 1
Step4: (1a) Interpreting PCA
Step5: (1b) Sample covariance matrix
Step7: (1c) Covariance Function
Step8: (1d) Eigendecomposition
Step9: (1e) PCA scores
Step11: Part 2
Step12: (2b) PCA on dataRandom
Step14: Visualization 2
Step15: Visualization 3
Step16: (2c) 3D to 2D
Step17: Visualization 4
Step19: (2d) Variance explained
Step20: Part 3
Step22: (3b) Parse the data
Step23: (3c) Min and max fluorescence
Step24: Visualization 5
Step26: (3d) Fractional signal change
Step27: Visualization 6
Step28: (3e) PCA on the scaled data
Step29: Visualization 7
Step31: Visualization 8
Step32: Part 4
Step33: (4b) Recreate with np.tile and np.eye
Step34: (4c) Recreate with np.kron
Step35: (4d) Aggregate by time As we discussed in Part (4a), we would like to incorporate knowledge of our experimental setup into our analysis. To do this, we'll first study the temporal aspects of neural response, by aggregating our features by time. In other words, we want to see how different pixels (and the underlying neurons captured in these pixels) react in each of the 20 seconds after a new visual pattern is displayed, regardless of what the pattern is. Hence, instead of working with the 240 features individually, we'll aggregate the original features into 20 new features, where the first new feature captures the pixel response one second after a visual pattern appears, the second new feature is the response after two seconds, and so on.
Step36: (4e) Obtain a compact representation
Step37: Visualization 9
Step38: (4f) Aggregate by direction
Step39: (4g) Compact representation of direction data
Step40: Visualization 10
|
11,268
|
<ASSISTANT_TASK:>
Python Code:
import surprise
data = surprise.Dataset.load_builtin('ml-100k')
df = pd.DataFrame(data.raw_ratings, columns=["user", "item", "rate", "id"])
del df["id"]
df.head(10)
df_table = df.set_index(["user", "item"]).unstack()
df_table.fillna("").ix[212:222, 808:817]
plt.imshow(df_table)
plt.grid(False)
plt.xlabel("item")
plt.ylabel("user")
plt.title("Rate Matrix")
plt.show()
data.split(n_folds=3)
bsl_options = {'method': 'als',
'n_epochs': 5,
'reg_u': 12,
'reg_i': 5
}
algo = surprise.BaselineOnly(bsl_options=bsl_options)
np.random.seed(0)
acc = np.zeros(3)
for i, (trainset, testset) in enumerate(data.folds()):
    algo.train(trainset)
    predictions = algo.test(testset)
    acc[i] = surprise.accuracy.fcp(predictions, verbose=True)
acc.mean()
surprise.evaluate(algo, data)
sim_options = {'name': 'msd'}
algo = surprise.KNNBasic(sim_options=sim_options)
surprise.evaluate(algo, data)
sim_options = {'name': 'cosine'}
algo = surprise.KNNBasic(sim_options=sim_options)
surprise.evaluate(algo, data)
sim_options = {'name': 'pearson'}
algo = surprise.KNNBasic(sim_options=sim_options)
surprise.evaluate(algo, data)
sim_options = {'name': 'pearson_baseline'}
algo = surprise.KNNBasic(sim_options=sim_options)
surprise.evaluate(algo, data)
sim_options = {'name': 'pearson_baseline'}
algo = surprise.KNNWithMeans(sim_options=sim_options)
surprise.evaluate(algo, data)
sim_options = {'name': 'pearson_baseline'}
algo = surprise.KNNBaseline(sim_options=sim_options)
surprise.evaluate(algo, data)
algo = surprise.SVD(n_factors=200, n_epochs=30)
surprise.evaluate(algo, data)
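As a minimal illustration of the latent-factor idea behind `surprise.SVD`, here is a plain truncated SVD on a tiny dense toy matrix. The ratings are made up, and real rating matrices are sparse with missing entries, which surprise's SVD handles internally via regularized gradient descent rather than a direct decomposition.

```python
import numpy as np

# 4 users x 3 items; the first two users and the last two users have
# similar tastes, so a rank-2 factorization should capture the structure.
R = np.array([[5., 4., 1.],
              [4., 5., 1.],
              [1., 1., 5.],
              [1., 2., 4.]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
# low-rank reconstruction: user factors times item factors
R_hat = U[:, :k].dot(np.diag(s[:k])).dot(Vt[:k, :])
print(np.round(R_hat, 1))
```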
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Rating data
Step2: This data can be converted into a DataFrame as follows.
Step3: Here, the user column is the user ID, the item column is the item ID, and the rate column is the rating. That is, user 196 gave movie 242 a rating of 3.
Step4: Looking at just a part of this rating matrix shows that it is a sparse matrix, with rating data present at only some positions.
Step5: Visualizing the empty cells of the rating matrix in white and the scores in black gives the following.
Step6: Recommender system algorithms
Step7: Recommendation performance evaluation criteria
Step8: Collaborative Filter
Step9: Latent Factor model
|
11,269
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sys
from IPython.display import display, clear_output
sys.path.insert(0, 'helpers')
from efunctions import * # load my helper function(s) to save pdf figures, etc.
from hc3 import load_data, get_sessions
from hmmlearn import hmm # see https://github.com/ckemere/hmmlearn
import klabtools as klab
import seqtools as sq
import importlib
importlib.reload(sq) # reload module here only while prototyping...
importlib.reload(klab) # reload module here only while prototyping...
%matplotlib inline
sns.set(rc={'figure.figsize': (12, 4),'lines.linewidth': 1.5})
sns.set_style("white")
datadirs = ['/home/etienne/Dropbox/neoReader/Data',
'C:/etienne/Dropbox/neoReader/Data',
'/Users/etienne/Dropbox/neoReader/Data']
fileroot = next( (dir for dir in datadirs if os.path.isdir(dir)), None)
animal = 'gor01'; month,day = (6,7); session = '16-40-19' # 91 units
spikes = load_data(fileroot=fileroot, datatype='spikes',animal=animal, session=session, month=month, day=day, fs=32552, verbose=False)
eeg = load_data(fileroot=fileroot, datatype='eeg', animal=animal, session=session, month=month, day=day,channels=[0,1,2], fs=1252, starttime=0, verbose=False)
posdf = load_data(fileroot=fileroot, datatype='pos',animal=animal, session=session, month=month, day=day, verbose=False)
speed = klab.get_smooth_speed(posdf,fs=60,th=8,cutoff=0.5,showfig=False,verbose=False)
## bin ALL spikes
ds = 0.125 # bin spikes into 125 ms bins (theta-cycle inspired)
binned_spikes_all = klab.bin_spikes(spikes.data, ds=ds, fs=spikes.samprate, verbose=True)
## identify boundaries for running (active) epochs and then bin those observations into separate sequences:
runbdries = klab.get_boundaries_from_bins(eeg.samprate,bins=speed.active_bins,bins_fs=60)
binned_spikes_bvr = klab.bin_spikes(spikes.data, fs=spikes.samprate, boundaries=runbdries, boundaries_fs=eeg.samprate, ds=ds)
## stack data for hmmlearn:
seq_stk_bvr = sq.data_stack(binned_spikes_bvr, verbose=True)
seq_stk_all = sq.data_stack(binned_spikes_all, verbose=True)
## split data into train, test, and validation sets:
tr_b,vl_b,ts_b = sq.data_split(seq_stk_bvr, tr=60, vl=20, ts=20, randomseed = 0, verbose=False)
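A minimal sketch of what a 60/20/20 per-sequence split might look like. This assumes `sq.data_split` partitions whole sequences at random; `split_indices` is a hypothetical stand-in for illustration, not the library function.

```python
import numpy as np

def split_indices(n_sequences, tr=60, vl=20, ts=20, seed=0):
    # shuffle sequence indices once, then cut into contiguous chunks
    rng = np.random.RandomState(seed)
    order = rng.permutation(n_sequences)
    n_tr = n_sequences * tr // 100
    n_vl = n_sequences * vl // 100
    return order[:n_tr], order[n_tr:n_tr + n_vl], order[n_tr + n_vl:]

tr_idx, vl_idx, ts_idx = split_indices(100)
print(len(tr_idx), len(vl_idx), len(ts_idx))  # 60 20 20
```

Splitting by whole sequences (rather than by individual time bins) keeps each behavioral epoch intact, so the held-out likelihoods measure generalization to unseen runs.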
Smax = 40
S = np.arange(start=5,step=1,stop=Smax+1)
tr_ll = []
vl_ll = []
ts_ll = []
for num_states in S:
    clear_output(wait=True)
    print('Training and evaluating {}-state hmm'.format(num_states))
    sys.stdout.flush()
    myhmm = sq.hmm_train(tr_b, num_states=num_states, n_iter=30, verbose=False)
    tr_ll.append( (np.array(list(sq.hmm_eval(myhmm, tr_b)))/tr_b.sequence_lengths ).mean())
    vl_ll.append( (np.array(list(sq.hmm_eval(myhmm, vl_b)))/vl_b.sequence_lengths ).mean())
    ts_ll.append( (np.array(list(sq.hmm_eval(myhmm, ts_b)))/ts_b.sequence_lengths ).mean())
clear_output(wait=True)
print('Done!')
sys.stdout.flush()
num_states = 35
fig = plt.figure(1, figsize=(12, 4))
ax = fig.add_subplot(111)
ax.annotate('plateau at approx ' + str(num_states), xy=(num_states, -38.5), xycoords='data',
xytext=(-140, -30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",
connectionstyle="angle3,angleA=0,angleB=-90"),
)
ax.plot(S, tr_ll, lw=1.5, label='train')
ax.plot(S, vl_ll, lw=1.5, label='validation')
ax.plot(S, ts_ll, lw=1.5, label='test')
ax.legend(loc=2)
ax.set_xlabel('number of states')
ax.set_ylabel('normalized (to single time bin) log likelihood')
ax.axhspan(-38.5, -37.5, facecolor='0.75', alpha=0.25)
ax.set_xlim([5, S[-1]])
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
    vth = ii+1
    state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, verbose=False)
    ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
    #ax.set_xlabel('position bin')
    ax.set_ylabel('state')
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title('learned place fields; RUN > ' + str(vth), y=1.02)
    ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.axis('tight')
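Ordering states by where their position tuning peaks, as the `peakorder` index does in these plots, can be sketched on toy data. The array values here are random placeholders, not the real state-position matrix.

```python
import numpy as np

rng = np.random.RandomState(1)
state_pos = rng.rand(35, 50)  # hypothetical 35 states x 50 position bins

# sort states by the position bin of their maximum, so sequential place
# fields render as a diagonal band in the matshow image
peakorder = np.argsort(state_pos.argmax(axis=1))
ordered = state_pos[peakorder, :]
print(ordered.argmax(axis=1)[:5])  # peak positions are non-decreasing
```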
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
    num_states = 5 + ii*5
    state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
    ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
    #ax.set_xlabel('position bin')
    ax.set_ylabel('state')
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states), y=1.02)
    ax.axis('tight')
saveFigure('posterfigs/numstates.pdf')
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
    ds = (ii+1)*0.03125
    state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
    ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
    #ax.set_xlabel('position bin')
    ax.set_ylabel('state')
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
    ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.axis('tight')
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
    ds = (ii+1)*0.0625
    state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
    ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
    #ax.set_xlabel('position bin')
    ax.set_ylabel('state')
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
    ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
    ax.axis('tight')
from placefieldviz import hmmplacefieldposviz
num_states = 25
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
saveFigure('posterfigs/expsplit.pdf')
from placefieldviz import hmmplacefieldposviz
num_states = 45
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
from placefieldviz import hmmplacefieldposviz
num_states = 100
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
import matplotlib.patches as patches  # Rectangle patches are used below, before the later import cell
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax2.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
8, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax2.add_patch(
patches.Rectangle(
(41, 0), # (x,y)
11, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
ax3.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
14, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.add_patch(
patches.Rectangle(
(35, 0), # (x,y)
15, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
import matplotlib.patches as patches
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[stateorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[stateorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 13], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 7], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.plot([35, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([41, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax2.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
8, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax2.add_patch(
patches.Rectangle(
(41, 0), # (x,y)
11, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.matshow(state_pos_2[stateorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 13], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 7], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.plot([35, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([41, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
ax3.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
14, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.add_patch(
patches.Rectangle(
(35, 0), # (x,y)
15, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
fig.suptitle('State ordering not by peak location, but by the state transition probability matrix', y=1.08, fontsize=14)
saveFigure('posterfigs/zigzag.pdf')
state_pos_b[state_pos_b < np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 0
state_pos_b[state_pos_b == np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 1
state_pos_1[state_pos_1 < np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 0
state_pos_1[state_pos_1 == np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 1
state_pos_2[state_pos_2 < np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 0
state_pos_2[state_pos_2 == np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 1
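The per-matrix max-thresholding above can be written in a single step with `np.argmax`; a small self-contained sketch on toy data (the helper name `binarize_rows` is illustrative, not from the notebook):

```python
import numpy as np

def binarize_rows(state_pos):
    # Put a 1 at each row's peak column and 0 everywhere else.
    onehot = np.zeros_like(state_pos)
    onehot[np.arange(state_pos.shape[0]), np.argmax(state_pos, axis=1)] = 1
    return onehot

toy = np.array([[0.1, 0.9, 0.3],
                [0.7, 0.2, 0.1]])
peaks = binarize_rows(toy)
```

Note one small behavioral difference: `np.argmax` keeps only the first peak when a row has ties, whereas the comparison-based version above marks every tied maximum.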
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Find most appropriate number of states using cross validation
Step3: Remarks
Step4: Remarks
Step5: Remarks
Step6: Remarks
Step7: Remarks
Step8: Remarks
Step9: then we start to see the emergence of the S-shaped place field progressions again, indicating that the reward locations are overexpressed by several different states.
Step10: With enough expressiveness in the number of states, we see the S-shaped curve reappear, which suggests an overexpression of the reward locations, which is consistent with what we see with place cells in animals.
|
11,270
|
<ASSISTANT_TASK:>
Python Code:
# import libraries
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
from matplotlib import pyplot as plt
# load the data with pandas
dataset = pd.read_csv('dataset.csv', header=None)
dataset = np.array(dataset)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.show()
def calculate_initial_centers(dataset, k):
    """
    Initialize the starting centroids arbitrarily
    Arguments:
    dataset -- Data set - [m,n]
    k -- Number of desired centroids
    Returns:
    centroids -- List of the computed centroids - [k,n]
    """
#### CODE HERE ####
centroid = []
c = 0
while (c < k):
x = np.array(np.random.uniform(min(dataset[:,0]),max(dataset[:,0])))
y = np.array(np.random.uniform(min(dataset[:,1]),max(dataset[:,1])))
centroid.append([x,y])
c += 1
centroids = np.array(centroid)
### END OF CODE ###
return centroids
k = 3
centroids = calculate_initial_centers(dataset, k)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100)
plt.show()
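The initialization loop above draws one coordinate pair per iteration; it can be vectorized by sampling all centroids at once from the per-column bounds of the data. A sketch on toy data (the name `calculate_initial_centers_vec` is illustrative):

```python
import numpy as np

def calculate_initial_centers_vec(data, k):
    # One uniform draw per centroid per feature, bounded by the
    # per-column min/max of the data (the bounding box).
    low = data.min(axis=0)
    high = data.max(axis=0)
    return np.random.uniform(low, high, size=(k, data.shape[1]))

toy_data = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
cents = calculate_initial_centers_vec(toy_data, 3)
```

This also generalizes automatically to datasets with more than two features, which the explicit x/y loop does not.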
import math
def euclidean_distance(a, b):
    """
    Compute the Euclidean distance between points a and b
    Arguments:
    a -- A point in space - [1,n]
    b -- A point in space - [1,n]
    Returns:
    distance -- Euclidean distance between the points
    """
#### CODE HERE ####
#s = 0
#for i in range(len(a)):
# diff = (a[i] - b[i])
# s += diff*diff
#distance = math.sqrt(s)
#distance = math.sqrt(sum([((a[i] - b[i])**2) for i in range(len(a))]))
distance = math.sqrt(sum((a-b)**2))
### END OF CODE ###
return distance
a = np.array([1, 5, 9])
b = np.array([3, 7, 8])
if (euclidean_distance(a,b) == 3):
    print("Distance computed correctly!")
else:
    print("Distance function is incorrect")
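For reference, the hand-rolled distance agrees with `np.linalg.norm`, which is the idiomatic NumPy spelling of the same computation:

```python
import numpy as np

p = np.array([1.0, 5.0, 9.0])
q = np.array([3.0, 7.0, 8.0])

# Explicit formula: sqrt of the sum of squared coordinate differences.
dist_manual = np.sqrt(np.sum((p - q) ** 2))
# Built-in equivalent: Euclidean (L2) norm of the difference vector.
dist_norm = np.linalg.norm(p - q)
```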
def nearest_centroid(a, centroids):
    """
    Compute the index of the centroid closest to point a
    Arguments:
    a -- A point in space - [1,n]
    centroids -- List of centroids - [k,n]
    Returns:
    nearest_index -- Index of the nearest centroid
    """
#### CODE HERE ####
dist = float("inf")
k = len(centroids)
for i in range(k):
d = euclidean_distance(a, centroids[i])
if d < dist:
nindex = i
dist = d
nearest_index = nindex
### END OF CODE ###
return nearest_index
# Pick a random point from the dataset
index = np.random.randint(dataset.shape[0])
a = dataset[index,:]
# Use the function to find the nearest centroid
idx_nearest_centroid = nearest_centroid(a, centroids)
# Plot the data ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], s=10)
# Plot the randomly chosen point in a different color
plt.scatter(a[0], a[1], c='magenta', s=30)
# Plot the centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
# Plot the nearest centroid in a different color
plt.scatter(centroids[idx_nearest_centroid,0],
            centroids[idx_nearest_centroid,1],
            marker='^', c='springgreen', s=100)
# Draw a line from the chosen point to the selected centroid
plt.plot([a[0], centroids[idx_nearest_centroid,0]], [a[1], centroids[idx_nearest_centroid,1]], c='orange')
plt.annotate('CENTROID', (centroids[idx_nearest_centroid,0], centroids[idx_nearest_centroid,1],))
plt.show()
def all_nearest_centroids(dataset, centroids):
    """
    Compute the index of the nearest centroid for each
    point in the dataset
    Arguments:
    dataset -- Data set - [m,n]
    centroids -- List of centroids - [k,n]
    Returns:
    nearest_indexes -- Indices of the nearest centroids - [m,1]
    """
#### CODE HERE ####
nearest_indexes = [nearest_centroid(dataset[i], centroids) for i in range(len(dataset))]
### END OF CODE ###
return nearest_indexes
nearest_indexes = all_nearest_centroids(dataset, centroids)
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
plt.show()
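The point-by-point loop in `all_nearest_centroids` can be replaced by one broadcasted distance matrix followed by `argmin`; a self-contained sketch (the helper name `assign_clusters` is illustrative):

```python
import numpy as np

def assign_clusters(data, cents):
    # (m, 1, n) - (1, k, n) broadcasts to (m, k, n); taking the norm over
    # the last axis yields an (m, k) distance matrix, and argmin along
    # axis 1 picks the closest centroid for every point at once.
    dists = np.linalg.norm(data[:, None, :] - cents[None, :, :], axis=2)
    return dists.argmin(axis=1)

pts = np.array([[0.0, 0.0], [10.0, 10.0], [9.0, 9.0]])
cts = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = assign_clusters(pts, cts)
```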
def inertia(dataset, centroids, nearest_indexes):
    """
    Sum of the squared distances of the samples to the
    center of their nearest cluster.
    Arguments:
    dataset -- Data set - [m,n]
    centroids -- List of centroids - [k,n]
    nearest_indexes -- Indices of the nearest centroids - [m,1]
    Returns:
    inertia -- Total sum of the squared distances between
    the data points of a cluster and their centroid
    """
    #### CODE HERE ####
    # Note: the original version summed distances to the hard-coded test
    # fixture (tmp_data/tmp_centroide) instead of using the arguments.
    inertia = sum(euclidean_distance(dataset[i], centroids[nearest_indexes[i]])**2
                  for i in range(len(dataset)))
    ### END OF CODE ###
    return inertia
tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]])
tmp_centroide = np.array([[2,3,4]])
tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)
if inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:
    print("Inertia computed correctly!")
else:
    print("Inertia function is incorrect!")
# Use the function to check the inertia of your clusters
inertia(dataset, centroids, nearest_indexes)
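Inertia can likewise be computed without a Python loop, by indexing each point's own centroid and summing the squared residuals; a sketch on the same toy values as the test above (the name `inertia_vec` is illustrative):

```python
import numpy as np

def inertia_vec(data, cents, labels):
    # data[i] is compared against cents[labels[i]] for every i at once.
    diffs = data - cents[np.asarray(labels)]
    return float(np.sum(diffs ** 2))

pts = np.array([[1.0, 2.0, 3.0], [3.0, 6.0, 5.0], [4.0, 5.0, 6.0]])
cts = np.array([[2.0, 3.0, 4.0]])
total = inertia_vec(pts, cts, [0, 0, 0])
```

For this fixture the value is 26, matching the notebook's own correctness check.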
def update_centroids(dataset, centroids, nearest_indexes):
    """
    Update the centroids
    Arguments:
    dataset -- Data set - [m,n]
    centroids -- List of centroids - [k,n]
    nearest_indexes -- Indices of the nearest centroids - [m,1]
    Returns:
    centroids -- List of updated centroids - [k,n]
    """
    #### CODE HERE ####
    # Move each centroid to the mean of the points assigned to it;
    # centroids with no assigned points keep their previous position.
    nearest_indexes = np.asarray(nearest_indexes)
    for i in range(len(centroids)):
        assigned = dataset[nearest_indexes == i]
        if len(assigned) > 0:
            centroids[i] = assigned.mean(axis=0)
    ### END OF CODE ###
    return centroids
nearest_indexes = all_nearest_centroids(dataset, centroids)
# Plot the clusters ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
# Plot the centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
for data in dataframe:
plt.plot([centroid[0], data[0]], [centroid[1], data[1]],
c='lightgray', alpha=0.3)
plt.show()
centroids = update_centroids(dataset, centroids, nearest_indexes)
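The update step itself is just a per-cluster mean. A toy demonstration of the rule, independent of the notebook's variables (the name `update_centroids_demo` is illustrative):

```python
import numpy as np

def update_centroids_demo(data, cents, labels):
    labels = np.asarray(labels)
    new_cents = cents.copy().astype(float)
    for i in range(len(cents)):
        members = data[labels == i]
        if len(members) > 0:  # empty clusters keep their old position
            new_cents[i] = members.mean(axis=0)
    return new_cents

pts = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 10.0]])
cts = np.array([[1.0, 1.0], [8.0, 8.0]])
moved = update_centroids_demo(pts, cts, [0, 0, 1])
```

Centroid 0 moves to the mean of its two members, (1, 0); centroid 1 moves onto its single member, (10, 10).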
class KMeans():
    def __init__(self, n_clusters=8, max_iter=300):
        self.n_clusters = n_clusters
        self.max_iter = max_iter

    def fit(self, X):
        # Initialize the centroids
        self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters)
        # Compute the cluster of each sample
        self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
        # Compute the initial inertia
        old_inertia = inertia(X, self.cluster_centers_, self.labels_)
        for index in range(self.max_iter):
            #### CODE HERE ####
            self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_)
            self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
            self.inertia_ = inertia(X, self.cluster_centers_, self.labels_)
            # Stop once the update no longer improves the inertia
            if self.inertia_ == old_inertia:
                break
            old_inertia = self.inertia_
            ### END OF CODE ###
        return self

    def predict(self, X):
        return all_nearest_centroids(X, self.cluster_centers_)
kmeans = KMeans(n_clusters=k)
kmeans.fit(dataset)
print("Inércia = ", kmeans.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:,0],
kmeans.cluster_centers_[:,1], marker='^', c='red', s=100)
plt.show()
#### CODE HERE ####
# source: https://stackabuse.com/k-means-clustering-with-scikit-learn/
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
plt.scatter(dataset[:,0],dataset[:,1], label='True Position')
kmeans = KMeans(n_clusters=k)
kmeans.fit(dataset)
print("Inércia = ", kmeans.inertia_)
#print(kmeans.cluster_centers_)
#print(kmeans.labels_)
plt.scatter(dataset[:,0],dataset[:,1], c=kmeans.labels_, cmap='Set3')
plt.scatter(kmeans.cluster_centers_[:,0] ,kmeans.cluster_centers_[:,1], marker='^', color='black', s=100)
#### CODE HERE ####
#### CODE HERE ####
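For the elbow-method placeholder above, one possible sketch: run k-means for a range of k and record the final inertia, which drops sharply until k matches the number of natural clusters and flattens afterwards. The `simple_kmeans` helper below is a minimal self-contained stand-in, not the notebook's class or scikit-learn:

```python
import numpy as np

def simple_kmeans(data, k, iters=20, seed=0):
    # Minimal Lloyd's algorithm: init on random data points, then
    # alternate assignment and mean-update for a fixed iteration budget.
    rng = np.random.RandomState(seed)
    cents = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - cents[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for i in range(k):
            if np.any(labels == i):
                cents[i] = data[labels == i].mean(axis=0)
    d = np.linalg.norm(data[:, None, :] - cents[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

# Two well-separated synthetic blobs; the elbow should appear at k = 2.
blob = np.vstack([np.random.RandomState(1).randn(20, 2),
                  np.random.RandomState(2).randn(20, 2) + 8.0])
inertias = [simple_kmeans(blob, k) for k in range(1, 5)]
```

Plotting `inertias` against `range(1, 5)` (e.g. with `plt.plot`) makes the bend at k = 2 visible.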
from sklearn import metrics
url="http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
data=pd.read_csv(url, header=None)
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
metrics.homogeneity_score(labels_true, labels_pred)
metrics.completeness_score(labels_true, labels_pred)
#2
#3
#4
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1. Implement the K-means algorithm
Step3: Test the function you created and visualize the computed centroids.
Step5: 1.2 Define the clusters
Step6: Test the function you created.
Step8: 1.2.2 Compute the nearest centroid
Step9: Test the function you created
Step11: 1.2.3 Compute the nearest centroid for each point in the dataset
Step12: Test the function you created by visualizing the resulting clusters.
Step14: 1.3 Evaluation metric
Step15: Test the coded function by running the code below.
Step17: 1.4 Update the clusters
Step18: Visualize the resulting clusters
Step19: Run the update function and visualize the resulting clusters again
Step20: 2. K-means
Step21: Check the result of the algorithm below!
Step22: 2.2 Compare with the Scikit-Learn algorithm
Step23: 3. Elbow method
Step24: 4. Real dataset
|
11,271
|
<ASSISTANT_TASK:>
Python Code:
version = '2020-08-25'
import logging
import os
import posixpath
import urllib.parse
import urllib.request
import re
import zipfile
import pickle
import urllib
import shutil
import datetime
import numpy as np
import pandas as pd
import utm # for transforming geoinformation in the utm format
import requests
import fake_useragent
from string import Template
from IPython.display import display
import xlrd
import bs4
import bng_to_latlon
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
# for visualizing locations on maps
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.io import shapereader
import geopandas
import shapely
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
datefmt='%d %b %Y %H:%M:%S'
)
logger = logging.getLogger()
# Create input, intermediate and output folders if they don't exist.
# If the paths are relative, the correspoding folders will be created
# inside the current working directory.
input_directory_path = os.path.join('input', 'original_data')
intermediate_directory_path = 'intermediate'
output_directory_path = os.path.join('output', 'renewable_power_plants')
os.makedirs(input_directory_path, exist_ok=True)
os.makedirs(intermediate_directory_path, exist_ok=True)
os.makedirs(output_directory_path, exist_ok=True)
# Create the folder to which the Eurostat files with data at the level
# of the whole EU/Europe are going to be downloaded
eurostat_eu_directory_path = os.path.join('input', 'eurostat_eu')
os.makedirs(eurostat_eu_directory_path, exist_ok=True)
# Define the path of the file with the list of sources.
source_list_filepath = os.path.join('input', 'sources.csv')
# Import the utility functions and classes from the util package
import util.helper
from util.visualizer import visualize_points
download_from = 'original_sources'
#download_from = 'opsd_server'
source_df = pd.read_csv(source_list_filepath)
uk_main_page = 'https://www.gov.uk/government/publications/renewable-energy-planning-database-monthly-extract'
current_link = util.helper.get_beis_link(uk_main_page)
current_filename = current_link.split('/')[-1]
source_df.loc[(source_df['country'] == 'UK') & (source_df['source'] == 'BEIS'), 'url'] = current_link
source_df.loc[(source_df['country'] == 'UK') & (source_df['source'] == 'BEIS'), 'filename'] = current_filename
source_df.to_csv(source_list_filepath, index=False, header=True)
source_df.fillna('')
import util.downloader
from util.downloader import Downloader
downloader = Downloader(version, input_directory_path, source_list_filepath, download_from)
#import importlib
#importlib.reload(util.nuts_converter)
#importlib.reload(util.downloader)
#from util.downloader import Downloader
#downloader = Downloader(version, input_directory_path, source_list_filepath, download_from)
from util.nuts_converter import NUTSConverter
nuts_converter = NUTSConverter(downloader, eurostat_eu_directory_path)
# Get column translation list
columnnames = pd.read_csv(os.path.join('input', 'column_translation_list.csv'))
columnnames.head(2)
# Get value translation list
valuenames = pd.read_csv(os.path.join('input', 'value_translation_list.csv'))
valuenames.head(2)
# Define the lists of source names
downloader = Downloader(version, input_directory_path, source_list_filepath, download_from)
tsos = ['50Hertz', 'Amprion', 'TenneT', 'TransnetBW']
datasets = ['50Hertz', 'Amprion', 'TenneT', 'TransnetBW','bnetza','bnetza_pv','bnetza_pv_historic']
# Download the files and get the local file paths indexed by source names
filepaths = downloader.download_data_for_country('DE')
# Remove the Eurostat NUTS file as it's a geoinformation source
DE_postcode2nuts_filepath = filepaths.pop('Eurostat')
# Open all data sets before processing.
filenames = {}
for source in filepaths:
filepath = filepaths[source]
print(source, filepath)
if os.path.splitext(filepath)[1] != '.xlsx' and zipfile.is_zipfile(filepath):
filenames[source] = zipfile.ZipFile(filepath)
else:
filenames[source] = filepath
# Read TSO data from the zip files
dfs = {}
basenames_by_tso = {
'50Hertz': '50Hertz Transmission GmbH EEG-Zahlungen Stammdaten 2019',
'Amprion': 'Amprion GmbH EEG-Zahlungen Anlagenstammdaten 2019',
'TenneT': 'TenneT TSO GmbH Anlagenstammdaten 2019',
'TransnetBW': 'TransnetBW GmbH Anlagenstammdaten 2019',
}
for tso in tsos:
filename = basenames_by_tso[tso]+'.csv'
print('Reading', filename)
#print(filenames[tso].namelist())
dfs[tso] = pd.read_csv(
filenames[tso].open(filename),
sep=';',
thousands='.',
decimal=',',
# Headers have to have the same order for all TSOs. Therefore just define headers here.
# Remove the following three lines if for next version, headers should be read out initially
# to then check if order is the same everywhere.
names=['EEG-Anlagenschlüssel', 'MASTR_Nr_EEG','Netzbetreiber Betriebsnummer','Netzbetreiber Name',
'Strasse_flurstueck','PLZ','Ort / Gemarkung','Gemeindeschlüssel','Bundesland',
'Installierte Leistung','Energieträger','Spannungsebene','Leistungsmessung','Regelbarkeit',
'Inbetriebnahme','Außerbetriebnahme','Netzzugang','Netzabgang'],
header=None,
skiprows=1,
parse_dates=[14, 15, 16, 17], #[11, 12, 13, 14]
#infer_datetime_format=True,
date_parser = lambda x: pd.to_datetime(x, errors='coerce', format='%d.%m.%Y'),
encoding='iso-8859-1',
dayfirst=True,
low_memory=False
)
print('Done reading ' + filename)
for filename in filenames.values():
if(isinstance(filename, zipfile.ZipFile)):
#print(filename)
filename.close()
# define the date parser
def date_parser(x):
    if type(x) == str:
        # %d.%m.%Y, e.g. 31.12.2015 (the original used the invalid codes %D.%M.%Y)
        return datetime.datetime.strptime(x, '%d.%m.%Y')
    elif type(x) == float and pd.isnull(x):
        return pd.NaT
def inspect(x):
try:
converted = datetime.datetime.strptime(x, '%d.%m.%Y')
return False
except:
return True
# Read BNetzA register
print('Reading bnetza: '+filenames['bnetza'])
dfs['bnetza'] = pd.read_excel(filenames['bnetza'],
sheet_name='Gesamtübersicht',
header=0,
converters={'4.9 Postleit-zahl': str, 'Gemeinde-Schlüssel': str}
)
skiprows = {'bnetza_pv_historic': 10, 'bnetza_pv': 9}
for dataset in ['bnetza_pv', 'bnetza_pv_historic']:
print(dataset)
print('Reading ' + dataset + ': ' + filenames[dataset])
xls_handle = pd.ExcelFile(filenames[dataset])
print('Concatenating all '+dataset+' sheets into one dataframe')
dfs[dataset] = pd.concat(
(xls_handle.parse(
sheet,
skiprows=skiprows[dataset],
converters={'Anlage \nPLZ': str}
) for sheet in xls_handle.sheet_names),
sort=True
)
# Make sure that the column `Inbetriebnahme-datum *)` (commissioning date) in the bnetza_pv set is datetime.
mask = dfs['bnetza_pv']['Inbetriebnahme-datum *)'].apply(lambda x: type(x) == int)
dfs['bnetza_pv']['Inbetriebnahme-datum *)'] = pd.to_datetime(dfs['bnetza_pv']['Inbetriebnahme-datum *)'],
errors='coerce',
dayfirst=True,
infer_datetime_format=True)
dfs['bnetza_pv']['Inbetriebnahme-datum *)'] = dfs['bnetza_pv']['Inbetriebnahme-datum *)'].apply(
lambda x: x.to_datetime64()
)
dfs['bnetza_pv_historic'] = dfs['bnetza_pv_historic'].drop(['Unnamed: 7'], axis=1)
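The `errors='coerce'` pattern used above turns unparsable entries into `NaT` instead of raising, so mixed-quality date columns can be cleaned in one pass; a minimal illustration on toy values:

```python
import pandas as pd

raw = pd.Series(['31.12.2015', '01.06.2020', 'not a date'])
# Invalid strings become NaT rather than raising a ValueError.
parsed = pd.to_datetime(raw, format='%d.%m.%Y', errors='coerce')
n_bad = int(parsed.isna().sum())
```

Counting the resulting `NaT` values (`n_bad` here) is a quick sanity check on how much of a column failed to parse.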
pickle.dump( dfs, open( "intermediate/temp_dfs_DE_after_reading.pickle", "wb" ) )
dfs = pickle.load( open( "intermediate/temp_dfs_DE_after_reading.pickle", "rb" ) )
# Choose the translation terms for Germany, create dictionary and show dictionary
columnnames = pd.read_csv(os.path.join('input', 'column_translation_list.csv'))
idx_DE = columnnames[columnnames['country'] == 'DE'].index
column_dict_DE = columnnames.loc[idx_DE].set_index('original_name')['opsd_name'].to_dict()
column_dict_DE
# Start the column translation process for each original data source
print('Translation...')
for dataset in dfs:
# Remove newlines and any other duplicate whitespaces in column names:
dfs[dataset] = dfs[dataset].rename(columns={col: re.sub(r"\s+", ' ', col) for col in dfs[dataset].columns})
# Do column name translations
print(dataset)
#print(list(dfs[dataset].columns))
dfs[dataset].rename(columns=column_dict_DE, inplace=True)
#print(list(dfs[dataset].columns).index('decommissioning_date'))
#print('--------------------------------------------')
print('done.')
# Add data source names to the DataFrames
for tso in tsos:
dfs[tso]['data_source'] = tso
dfs[tso]['tso'] = tso
dfs['bnetza']['data_source'] = 'BNetzA'
dfs['bnetza_pv']['data_source'] = 'BNetzA_PV'
dfs['bnetza_pv_historic']['data_source'] = 'BNetzA_PV_historic'
# Add for the BNetzA PV data the energy source level 2
dfs['bnetza_pv']['energy_source_level_2'] = 'Photovoltaics'
dfs['bnetza_pv_historic']['energy_source_level_2'] = 'Photovoltaics'
# Select those columns of the original data which are utilised further
dfs['bnetza'] = dfs['bnetza'].loc[:, ('commissioning_date', 'decommissioning_date',
'notification_reason', 'energy_source_level_2',
'electrical_capacity_kW', 'thermal_capacity_kW',
'voltage_level', 'dso', 'eeg_id', 'bnetza_id',
'federal_state', 'postcode', 'municipality_code',
'municipality', 'address', 'address_number',
'utm_zone', 'utm_east', 'utm_north',
'data_source')]
for dataset in datasets: print(dataset+':'); display(dfs[dataset].tail(2))
# Merge DataFrames of each original source into a common DataFrame DE_renewables
dfs_list = []
for dataset in datasets:
dfs_list.append(dfs[dataset])
DE_renewables = pd.concat(dfs_list, sort=True)
DE_renewables.head(2)
DE_renewables.reset_index(drop=True, inplace=True)
DE_renewables.head(2)
# Choose the translation terms for Germany, create dictionary and show dictionary
valuenames = pd.read_csv(os.path.join('input', 'value_translation_list.csv'))
idx_DE = valuenames[valuenames['country'] == 'DE'].index
value_dict_DE = valuenames.loc[idx_DE].set_index('original_name')['opsd_name'].to_dict()
value_dict_DE
print('replacing...')
# Replace all original value names by the OPSD value names.
# Running time: some minutes.
DE_renewables.replace(value_dict_DE, inplace=True)
print('Done!')
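`DataFrame.replace` with a dict, as used above, maps every exactly-matching cell across all columns in one call; a small illustration of the pattern (toy dictionary, not the actual OPSD translation list):

```python
import pandas as pd

frame = pd.DataFrame({'quelle': ['Solare Strahlungsenergie', 'Windenergie'],
                      'ort': ['Berlin', 'Hamburg']})
translation = {'Solare Strahlungsenergie': 'Photovoltaics',
               'Windenergie': 'Onshore wind'}
# Cells not present in the dict (the 'ort' column) are left untouched.
frame = frame.replace(translation)
```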
DE_renewables['postcode'] = DE_renewables['postcode'].apply(pd.to_numeric, errors='ignore')
# Create dictionary in order to assign energy_source to its subtype
energy_source_dict_DE = valuenames.loc[idx_DE].set_index(
'opsd_name')['energy_source_level_2'].to_dict()
# Column energy_source partly contains energy source level 3 and technology information,
# thus this column is copied to new column technology...
DE_renewables['technology'] = DE_renewables['energy_source_level_2']
# ...and the energy source level 2 values are replaced by the higher level classification
DE_renewables['energy_source_level_2'].replace(energy_source_dict_DE, inplace=True)
# Choose energy source level 2 entries where energy_source is "Bioenergy" in order to
# separate Bioenergy subtypes to "energy_source_level_3" and subtypes for the rest to "technology"
idx_DE_Bioenergy = DE_renewables[DE_renewables['energy_source_level_2'] == 'Bioenergy'].index
# Assign technology to energy source level 3 for all entries where energy source level 2 is
# Bioenergy and delete those entries from technology
DE_renewables.loc[idx_DE_Bioenergy, 'energy_source_level_3'] = DE_renewables.loc[idx_DE_Bioenergy, 'technology']
# Use a single .loc assignment; the original chained indexing
# (.loc[idx]['technology'] = ...) silently wrote to a copy.
DE_renewables.loc[idx_DE_Bioenergy, 'technology'] = np.nan
# Assign energy source level 1 to the dataframe
DE_renewables['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of the energy types present in the frame
energy_columns = ['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']
DE_renewables[energy_columns].drop_duplicates().sort_values(by='energy_source_level_2')
drop_mask = DE_renewables['energy_source_level_2'].isin(['Other fossil fuels', 'Storage'])
DE_renewables.drop(DE_renewables.index[drop_mask], axis=0, inplace=True)
# Electrical capacity per energy source level 2 (in MW)
DE_renewables.groupby(['energy_source_level_2'])['electrical_capacity_kW'].sum() / 1000
# kW to MW
DE_renewables[['electrical_capacity_kW', 'thermal_capacity_kW']] /= 1000
# adapt column name
DE_renewables.rename(columns={'electrical_capacity_kW': 'electrical_capacity',
'thermal_capacity_kW': 'thermal_capacity'}, inplace=True)
# Read generated postcode/location file
postcode = pd.read_csv(os.path.join('input', 'de_tso_postcode_full.csv'))
# Drop possible duplicates in postcodes
postcode.drop_duplicates('postcode', keep='last', inplace=True)
# Show first entries
postcode.head(2)
# Take postcode and longitude/latitude information
postcode = postcode[['postcode', 'lon', 'lat']]
# Cast DE_renewables['postcode'] to int64 in order to do the natural join of the dataframes
DE_renewables['postcode'] = pd.to_numeric(DE_renewables['postcode'], errors='coerce')
# Join two dataframes
DE_renewables = DE_renewables.merge(postcode, on=['postcode'], how='left')
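When left-joining on postcode as above, passing `indicator=True` makes it easy to count how many rows failed to find coordinates; a sketch with toy frames (hypothetical postcodes, not the OPSD data):

```python
import pandas as pd

plants = pd.DataFrame({'postcode': [10115, 20095, 99999]})
geo = pd.DataFrame({'postcode': [10115, 20095],
                    'lat': [52.53, 53.55],
                    'lon': [13.38, 9.99]})
# indicator=True adds a '_merge' column flagging unmatched left rows.
joined = plants.merge(geo, on='postcode', how='left', indicator=True)
n_unmatched = int((joined['_merge'] == 'left_only').sum())
```

The `_merge` column can be dropped once the coverage has been checked.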
DE_renewables.groupby(['utm_zone'])['utm_zone'].count()
# Find entries with 32 value at the beginning
idx_32 = (DE_renewables['utm_east'].astype(str).str[:2] == '32')
idx_notnull = DE_renewables['utm_east'].notnull()
# Remove 32 from utm_east entries
DE_renewables.loc[idx_32, 'utm_east'] = DE_renewables.loc[idx_32,
'utm_east'].astype(str).str[2:].astype(float)
def convert_to_latlon(utm_east, utm_north, utm_zone):
try:
return utm.to_latlon(utm_east, utm_north, utm_zone, 'U')
except:
return ''
DE_renewables['latlon'] = DE_renewables.loc[idx_notnull, ['utm_east', 'utm_north', 'utm_zone']].apply(
lambda x: convert_to_latlon(x[0], x[1], x[2]), axis=1).astype(str)
lat = []
lon = []
for row in DE_renewables['latlon']:
try:
# Split tuple format into the column lat and lon
row = row.lstrip('(').rstrip(')')
parts = row.split(',')
if(len(parts)<2):
raise Exception('This is not a proper tuple. So go to exception block.')
lat.append(parts[0])
lon.append(parts[1])
except:
# set NaN
lat.append(np.NaN)
lon.append(np.NaN)
DE_renewables['latitude'] = pd.to_numeric(lat)
DE_renewables['longitude'] = pd.to_numeric(lon)
# Add new values to DataFrame lon and lat
DE_renewables['lat'] = DE_renewables[['lat', 'latitude']].apply(
lambda x: x[1] if pd.isnull(x[0]) else x[0],
axis=1)
DE_renewables['lon'] = DE_renewables[['lon', 'longitude']].apply(
lambda x: x[1] if pd.isnull(x[0]) else x[0],
axis=1)
#DE_renewables[DE_renewables['data_source'] == '50Hertz'].to_excel('test.xlsx')
print('Missing coordinates ', DE_renewables.lat.isnull().sum())
display(
DE_renewables[DE_renewables.lat.isnull()].groupby(
['energy_source_level_2','data_source']
)['data_source'].count()
)
print('Share of missing coordinates (note that NaN can mean it\'s all fine):')
DE_renewables[DE_renewables.lat.isnull()].groupby(
['energy_source_level_2','data_source']
)['data_source'].count() / DE_renewables.groupby(
['energy_source_level_2','data_source']
)['data_source'].count()
# Drop the latlon column (which contains both latitude and longitude) and the helper columns
DE_renewables.drop(['latlon', 'longitude', 'latitude'], axis=1, inplace=True)
pickle.dump(DE_renewables, open( "intermediate/temp_dfs_DE_before_cleaning.pickle", "wb" ) )
DE_renewables = pickle.load( open( "intermediate/temp_dfs_DE_before_cleaning.pickle", "rb" ) )
# Remove out-of-range dates
# Keep only values between 1900 and 2100 to rule out outliers / wrong values.
# Also, Excel doesn't support dates before 1900.
mask = ((DE_renewables['commissioning_date']>pd.Timestamp('1900')) &
(DE_renewables['commissioning_date']<pd.Timestamp('2100')))
DE_renewables = DE_renewables[mask]
DE_renewables['municipality_code'] = DE_renewables['municipality_code'].astype(str)
# Remove spaces from municipality code
DE_renewables['municipality_code'] = DE_renewables['municipality_code'].str.replace(' ', '', regex=False)
DE_renewables['municipality_code'] = pd.to_numeric(DE_renewables['municipality_code'], errors='coerce', downcast='integer')
# Merge address and address_number
to_string = lambda x: str(x) if not pd.isnull(x) else ''
DE_renewables['address'] = DE_renewables['address'].map(to_string) + ' ' + DE_renewables['address_number'].map(to_string)
# Make sure that the column has no whitespaces at the beginning and the end
DE_renewables['address'] = DE_renewables['address'].str.strip()
# Remove the column with address numbers as it is not needed anymore
del DE_renewables['address_number']
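The address merge above can also be done without `map`, by filling missing values first; a sketch with hypothetical sample values:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'address':        ['Hauptstr.', np.nan],
                   'address_number': ['12', np.nan]})

# fillna('') plays the role of the to_string lambda above;
# str.strip() removes the stray space left when a part is missing
df['address'] = (df['address'].fillna('').astype(str) + ' '
                 + df['address_number'].fillna('').astype(str)).str.strip()
```

Rows where both parts are missing end up as empty strings rather than `'nan nan'`, matching the behaviour of the `to_string` lambda.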
# Set up a temporary postcode column as a string column for joining with the appropriate NUTS correspondence table
DE_renewables['postcode_str'] = DE_renewables['postcode'].astype(str).str[:-2]
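The `.astype(str).str[:-2]` step relies on a non-obvious detail: a numeric column containing NaN is stored as float, so casting to string yields values like `'10115.0'`, and slicing off the last two characters removes the `.0` suffix. A minimal illustration (note that NaN rows end up as the string `'n'`, which will not match any real postcode in the join):

```python
import numpy as np
import pandas as pd

postcodes = pd.Series([10115.0, np.nan])  # float dtype because of the NaN

# '10115.0' -> '10115'; 'nan' -> 'n'
as_text = postcodes.astype(str).str[:-2]
```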
DE_renewables = nuts_converter.add_nuts_information(DE_renewables, 'DE', DE_postcode2nuts_filepath,
postcode_column='postcode_str',
how=['postcode', 'municipality_code', 'municipality', 'latlon'])
# Drop the temporary column
DE_renewables.drop('postcode_str', axis='columns', inplace=True)
# Report the number of facilities whose NUTS codes were successfully determined
determined = DE_renewables['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', DE_renewables.shape[0], 'facilities in DE.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = DE_renewables['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', DE_renewables.shape[0], 'facilities in DE.')
visualize_points(DE_renewables['lat'],
DE_renewables['lon'],
'Germany',
categories=DE_renewables['energy_source_level_2']
)
DE_renewables.to_pickle('intermediate/DE_renewables.pickle')
del DE_renewables
# Download the data for Denmark
filepaths = downloader.download_data_for_country('DK')
print(filepaths)
def read_dk_wind_turbines(filepath, sheet_name):
    """Read the data on Danish wind turbines from the sheet of the
    given name in the file at the given path, and return it as a
    pandas DataFrame."""
book = xlrd.open_workbook(filepath)
sheet = book.sheet_by_name(sheet_name)
# Since the column names are in two rows, not one,
# collect them in two parts. The first part is
# fixed and contains column names.
header = []
for i in range(0, 16):
# Make sure that strings 1) do not contain the newline sign
# and 2) have no trailing blank spaces.
column_name = sheet.cell_value(17, i).replace("\n", "").strip()
header = header + [column_name]
# The second part is variable. It consists of two subparts:
# 1) previous years (type float)
# 2) the past months of the current year (type date)
# Reading previous years as column names
i = 16
cell = sheet.cell(16, i)
while cell.ctype == xlrd.XL_CELL_NUMBER:
column_name = str(int(cell.value))
header = header + [column_name]
i = i + 1
cell = sheet.cell(16, i)
# Reading the months of the current year as column names
while cell.ctype == xlrd.XL_CELL_DATE:
year, month, _, _, _, _ = xlrd.xldate_as_tuple(cell.value, book.datemode)
column_name = str("{}-{}".format(year, month))
header = header + [column_name]
i = i + 1
cell = sheet.cell(16, i)
# Add the final column for the total of the current year
header += ['{}-total'.format(header[-1].split('-')[0])]
# Skip the first 17 rows in the sheet. The rest contains the data.
df = pd.read_excel(filepath,
sheet_name=sheet_name,
skiprows=17,
skipfooter=3
)
#
#df.drop(df.columns[len(df.columns)-1], axis=1, inplace=True)
# Set the column names.
df.columns = header
return df
# Get wind turbines data
wind_turbines_sheet_name = 'IkkeAfmeldte-Existing turbines'
DK_wind_filepath = filepaths['Energistyrelsen']
DK_wind_df = read_dk_wind_turbines(DK_wind_filepath,
wind_turbines_sheet_name
)
# Get photovoltaic data
DK_solar_filepath = filepaths['Energinet']
DK_solar_df = pd.read_excel(DK_solar_filepath,
sheet_name='Data',
skiprows=[0],
converters={'Postnr': str}
)
# Remove duplicates
DK_wind_df.drop_duplicates(inplace=True)
DK_solar_df.drop_duplicates(inplace=True)
# Choose the translation terms for Denmark, create dictionary and show dictionary
idx_DK = columnnames[columnnames['country'] == 'DK'].index
column_dict_DK = columnnames.loc[idx_DK].set_index('original_name')['opsd_name'].to_dict()
# Windows has problems reading the csv entry for east and north (DK).
# The reason might be the difference when opening the csv between linux and
# windows.
column_dict_DK_temp = {}
for k, v in column_dict_DK.items():
column_dict_DK_temp[k] = v
if v == 'utm_east' or v == 'utm_north':
# merge 2 lines to 1
new_key = ''.join(k.splitlines())
column_dict_DK_temp[new_key] = v
column_dict_DK = column_dict_DK_temp
column_dict_DK
# Replace column names based on column_dict_DK
DK_wind_df.rename(columns=column_dict_DK, inplace=True)
DK_solar_df.rename(columns=column_dict_DK, inplace=True)
# Add names of the data sources to the DataFrames
DK_wind_df['data_source'] = 'Energistyrelsen'
DK_solar_df['data_source'] = 'Energinet.dk'
# Add energy source level 2 and technology for each of the two DataFrames
DK_wind_df['energy_source_level_2'] = 'Wind'
DK_solar_df['energy_source_level_2'] = 'Solar'
DK_solar_df['technology'] = 'Photovoltaics'
# Dates equal to 1970-01-01 (the Unix epoch) are likely placeholders; treat them as missing
mask = DK_solar_df['commissioning_date'] == '1970-01-01'
DK_solar_df.loc[mask, 'commissioning_date'] = np.nan
# Choose the translation terms for Denmark, create dictionary and show dictionary
idx_DK = valuenames[valuenames['country'] == 'DK'].index
value_dict_DK = valuenames.loc[idx_DK].set_index('original_name')['opsd_name'].to_dict()
# Replace all original value names by the OPSD value names
DK_wind_df.replace(value_dict_DK, inplace=True)
DK_solar_df.replace(value_dict_DK, inplace=True)
# Index for all values with utm information
idx_notnull = DK_wind_df['utm_east'].notnull()
# Convert from UTM values to latitude and longitude coordinates
DK_wind_df['lonlat'] = DK_wind_df.loc[idx_notnull, ['utm_east', 'utm_north']
].apply(lambda x: utm.to_latlon(x[0],
x[1],
32,
'U'), axis=1).astype(str)
# Split latitude and longitude in two columns
lat = []
lon = []
for row in DK_wind_df['lonlat']:
try:
# Split tuple format
# into the column lat and lon
row = row.lstrip('(').rstrip(')')
lat.append(row.split(',')[0])
lon.append(row.split(',')[1])
    except Exception:
        # Set NaN when the tuple could not be split
lat.append(np.NaN)
lon.append(np.NaN)
DK_wind_df['lat'] = pd.to_numeric(lat)
DK_wind_df['lon'] = pd.to_numeric(lon)
# Drop the lonlat column that contains both latitude and longitude
DK_wind_df.drop('lonlat', axis=1, inplace=True)
# Get geo-information
zip_DK_geo = zipfile.ZipFile(filepaths['Geonames'])
# Read generated postcode/location file
DK_geo = pd.read_csv(zip_DK_geo.open('DK.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
DK_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
DK_geo.drop_duplicates('postcode', keep='last', inplace=True)
DK_geo['postcode'] = DK_geo['postcode'].astype(str)
# Add longitude/latitude information assigned by postcode (for Energinet.dk data)
DK_solar_df = DK_solar_df.merge(DK_geo[['postcode', 'lon', 'lat']],
on=['postcode'],
how='left')
# Show number of units with missing coordinates separated by wind and solar
print('Missing Coordinates DK_wind', DK_wind_df.lat.isnull().sum(), 'out of', len(DK_wind_df.index))
print('Missing Coordinates DK_solar', DK_solar_df.lat.isnull().sum(), 'out of', len(DK_solar_df.index))
zip_DK_geo.close()
# Merge DataFrames for wind and solar into DK_renewables
dataframes = [DK_wind_df, DK_solar_df]
DK_renewables = pd.concat(dataframes, sort=False)
DK_renewables = DK_renewables.reset_index()
# Assign energy source level 1 to the dataframe
DK_renewables['energy_source_level_1'] = 'Renewable energy'
# Merge the address and address-number columns into one
to_string = lambda x: str(x) if not pd.isnull(x) else ""
DK_renewables['address'] = DK_renewables['address'].map(to_string) + " " + DK_renewables['address_number'].map(to_string)
# Make sure that the column has no whitespaces at the beginning or the end
DK_renewables['address'] = DK_renewables['address'].str.strip()
# Assign NUTS codes
DK_postcode2nuts = filepaths['Eurostat']
DK_renewables = nuts_converter.add_nuts_information(DK_renewables, 'DK', DK_postcode2nuts,
how=['latlon', 'postcode', 'municipality_code', 'municipality_name'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = DK_renewables['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', DK_renewables.shape[0], 'facilities in DK.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = DK_renewables['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', DK_renewables.shape[0], 'facilities in DK.')
DK_renewables[DK_renewables['nuts_1_region'].isnull()][['municipality', 'municipality_code', 'lat', 'lon']]
# Select those columns of the original data which are utilised further
columns_of_interest = ['commissioning_date', 'energy_source_level_1', 'energy_source_level_2',
'technology', 'electrical_capacity_kW', 'dso', 'gsrn_id', 'postcode',
'municipality_code', 'municipality', 'address',
'utm_east', 'utm_north', 'lon', 'lat', 'nuts_1_region', 'nuts_2_region', 'nuts_3_region',
'hub_height', 'rotor_diameter', 'manufacturer', 'model', 'data_source']
# Clean DataFrame from columns other than specified above
DK_renewables = DK_renewables.loc[:, columns_of_interest]
DK_renewables.reset_index(drop=True, inplace=True)
# Remove duplicates
DK_renewables.drop_duplicates(inplace=True)
DK_renewables.reset_index(drop=True, inplace=True)
# kW to MW
DK_renewables['electrical_capacity_kW'] /= 1000
# adapt column name
DK_renewables.rename(columns={'electrical_capacity_kW': 'electrical_capacity'},
inplace=True)
visualize_points(DK_renewables['lat'],
DK_renewables['lon'],
'Denmark',
categories=DK_renewables['energy_source_level_2']
)
DK_renewables.to_pickle('intermediate/DK_renewables.pickle')
del DK_renewables
# Download the data
filepaths = downloader.download_data_for_country('FR')
# Show the local paths
filepaths
# Load the data
FR_re_filepath = filepaths['ODRE']
FR_re_df = pd.read_csv(FR_re_filepath,
sep=';',
parse_dates=['dateRaccordement', 'dateDeraccordement',
'dateMiseEnService', 'dateDebutVersion'],
infer_datetime_format=True)
# Make sure that the column dateDeraccordement is datetime
FR_re_df['dateDeraccordement'] = pd.to_datetime(FR_re_df['dateDeraccordement'], errors='coerce')
# Choose the translation terms for France, create dictionary and show it
columnnames = pd.read_csv(os.path.join('input', 'column_translation_list.csv'))
idx_FR = columnnames[(columnnames['country'] == 'FR') & (columnnames['data_source'] == 'ODRE')].index
column_dict_FR = columnnames.loc[idx_FR].set_index('original_name')['opsd_name'].to_dict()
column_dict_FR
# Translate column names
FR_re_df.rename(columns=column_dict_FR, inplace=True)
# Keep only the columns specified in the translation dictionary as we'll need only them
columns_to_keep = list(column_dict_FR.values())
FR_re_df = FR_re_df.loc[:, columns_to_keep]
FR_re_df.reset_index(drop=True, inplace=True)
# Show a pair of rows
FR_re_df.head(2)
FR_re_df['data_source'] = 'Open Data Réseaux Énergies'
FR_re_df['as_of_year'] = 2018 # Year for which the dataset has been compiled by the data source
# Choose the translation terms for France, create a dictionary and show it
valuenames = pd.read_csv(os.path.join('input', 'value_translation_list.csv'))
idx_FR = valuenames[(valuenames['country'] == 'FR') & (valuenames['data_source'] == 'ODRE')].index
value_dict_FR = valuenames.loc[idx_FR].set_index('original_name')['opsd_name'].to_dict()
value_dict_FR
# Replace all original value names by the OPSD value names
FR_re_df.replace(value_dict_FR, inplace=True)
no_name_aliases = ['Agrégation des installations de moins de 36KW', 'Confidentiel', 'confidentiel']
no_name_mask = FR_re_df['site_name'].isin(no_name_aliases)
FR_re_df.loc[no_name_mask, 'site_name'] = np.nan
mask = (FR_re_df['commissioning_date'].dt.year <= 1900) &\
((FR_re_df['technology'].isin(['Photovoltaics', 'Onshore']) |\
(FR_re_df['energy_source_level_2'] == 'Solar')))
FR_re_df.loc[mask, 'commissioning_date'] = np.nan
#for x in FR_re_df[FR_re_df['commissioning_date'].dt.year <= 1980]['technology']:
# print(x)
# Check the columns
FR_re_df.isnull().all()
# Check the rows
print('Is there any row containing only null values?')
FR_re_df.isnull().all(axis=1).any()
FR_re_df[['energy_source_level_2', 'technology']].drop_duplicates()
# Define the mask for selecting rows with unusable info on electrical capacity
ec_mask = (FR_re_df['electrical_capacity'] == 0) | (FR_re_df['electrical_capacity'].isna())
# Define the mask for selecting the rows with non-renewable energy_source_level_2
non_renewable_esl2 = ['Non-renewable thermal', 'Non-hydraulic storage', 'Nuclear']
non_renewable_esl2_mask = FR_re_df['energy_source_level_2'].isin(non_renewable_esl2)
# Define the mask to select the rows with non-renewable technology
non_renewable_technologies = ['Steam turbine', 'Combustion cogeneration', 'Combustion engine',
'Combined cycle', 'Pumped storage', 'Piston motor', 'Nuclear fission']
non_renewable_technology_mask = FR_re_df['technology'].isin(non_renewable_technologies)
# Define the mask to select the rows without specified energy type and technology
other_mask = (FR_re_df['energy_source_level_2'] == 'Other') & \
((FR_re_df['technology'] == 'Other') | (pd.isnull(FR_re_df['technology'])))
# Combine the masks
drop_mask = ec_mask | non_renewable_esl2_mask | non_renewable_technology_mask | other_mask
# Show how many rows are going to be dropped
print('Dropping', drop_mask.sum(), 'rows out of', FR_re_df.shape[0])
# Keep all the rows not selected by the drop mask
keep_mask = ~drop_mask
FR_re_df = FR_re_df[keep_mask].reset_index(drop=True)
# Show some rows
print("A sample of the kept data:")
FR_re_df.sample(5)
FR_re_df[['energy_source_level_2', 'technology']].drop_duplicates()
# Make sure that the proper string is used to indicate other or unspecified technology
FR_re_df['technology'].replace('Other', 'Other or unspecified technology', inplace=True)
# Define a function that will deal with other cases
def standardize(row):
level_2 = row['energy_source_level_2']
technology = row['technology']
if level_2 in ['Marine', 'Geothermal', 'Bioenergy']:
technology = np.nan
elif level_2 in ['Solar', 'Hydro', 'Other'] and pd.isna(technology):
technology = 'Other or unspecified technology'
elif level_2 == 'Wind' and (pd.isna(technology) or technology == 'Other or unspecified technology'):
technology = 'Onshore'
if level_2 == 'Hydro' and technology in ['Lake', 'Closed']:
technology = 'Other or unspecified technology'
elif level_2 == 'Solar' and technology == 'Thermodynamic':
technology = 'Other or unspecified technology'
elif level_2 == 'Other' and technology == 'Photovoltaics':
level_2 = 'Solar'
return [level_2, technology]
# Apply the rules coded in function standardize
FR_re_df[['energy_source_level_2', 'technology']] = FR_re_df.apply(standardize, axis=1, result_type='expand')
# Show the existing level 2 types and technologies
FR_re_df[['energy_source_level_2', 'technology']].drop_duplicates()
FR_re_df[['energy_source_level_2', 'energy_source_level_3']].drop_duplicates()
index = (pd.isna(FR_re_df['energy_source_level_3']) & \
(FR_re_df['energy_source_level_2'] == 'Bioenergy'))
FR_re_df.loc[index, 'energy_source_level_3'] = 'Other or unspecified'
index = FR_re_df['energy_source_level_3'] == 'Wood'
FR_re_df.loc[index, 'energy_source_level_3'] = 'Biomass and biogas'
# Assign energy_source_level_1 to the dataframe
FR_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy
energy_columns = ['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']
FR_re_df[energy_columns].drop_duplicates()
# Get the local path of the downloaded georeferencing data
FR_geo_filepath = filepaths['Opendatasoft']
# Read INSEE Code Data
FR_geo = pd.read_csv(FR_geo_filepath,
sep=';',
header=0,
converters={'Code_postal': str})
# Drop possible duplicates of the same INSEE code
FR_geo.drop_duplicates('INSEE_COM', keep='last', inplace=True)
# create columns for latitude/longitude
lat = []
lon = []
# split in latitude/longitude
for row in FR_geo['Geo Point']:
try:
# Split tuple format
# into the column lat and lon
row = row.lstrip('(').rstrip(')')
lat.append(row.split(',')[0])
lon.append(row.split(',')[1])
    except Exception:
        # set NaN
lat.append(np.NaN)
lon.append(np.NaN)
# add these columns to the INSEE DataFrame
FR_geo['lat'] = pd.to_numeric(lat)
FR_geo['lon'] = pd.to_numeric(lon)
# Column names of merge key have to be named identically
FR_re_df.rename(columns={'municipality_code': 'INSEE_COM'}, inplace=True)
# Merge longitude and latitude columns by the Code INSEE
FR_re_df = FR_re_df.merge(FR_geo[['INSEE_COM', 'lat', 'lon']],
on=['INSEE_COM'],
how='left')
# Translate Code INSEE column back to municipality_code
FR_re_df.rename(columns={'INSEE_COM': 'municipality_code'}, inplace=True)
#import importlib
#importlib.reload(util.nuts_converter)
#from util.nuts_converter import NUTSConverter
#nuts_converter = NUTSConverter(downloader, eurostat_eu_directory_path)
FR_postcode2nuts_path = filepaths['Eurostat']
FR_re_df = nuts_converter.add_nuts_information(FR_re_df, 'FR', FR_postcode2nuts_path,
lau_name_type='NATIONAL',
closest_approximation=True,
how=['municipality_code', 'latlon'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = FR_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', FR_re_df.shape[0], 'facilities in FR.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = FR_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', FR_re_df.shape[0], 'facilities in FR.')
# Check the facilities without NUTS classification
no_nuts = FR_re_df['nuts_1_region'].isnull()
# Find the masks where some information for finding the proper NUTS code is present
lat_or_lon_present = ~(FR_re_df['lat'].isna() & FR_re_df['lon'].isna())
municipality_code_present = ~(FR_re_df['municipality_code'].isnull())
municipality_name_present = ~(FR_re_df['municipality'].isnull())
# Show the cases where NUTS classification failed even though it shouldn't have
print('1. No NUTS code but latitude/longitude info present')
problematic_lat_lon = FR_re_df[no_nuts & lat_or_lon_present][['lat', 'lon']]
display(problematic_lat_lon)
print('2. No NUTS code but municipality code info present')
problematic_municipality_codes = FR_re_df[no_nuts & municipality_code_present]['municipality_code'].unique()
display(problematic_municipality_codes)
print('3. No NUTS code but municipality name info present')
problematic_municipality_names = FR_re_df[no_nuts & municipality_name_present]['municipality'].unique()
display(problematic_municipality_names)
# Check if any problematic code is actually present in the translation table
present_any = False
for code in problematic_municipality_codes:
mask = nuts_converter.municipality2nuts_df['municipality_code'].str.match(code)
present_any = present_any or mask.any()
print(present_any)
# Print only the names of those problematic municipalities, which appear in the translation table only once.
for name in problematic_municipality_names:
mask = nuts_converter.municipality2nuts_df['municipality'].str.match(name)
if mask.sum() == 1:
print(name)
FR_re_df['electrical_capacity'] = FR_re_df['electrical_capacity'] / 1000
# Load the data
FR_re_filepath = filepaths['gouv.fr']
FR_re_df_old = pd.read_excel(FR_re_filepath,
                             sheet_name='Commune',
                             encoding='UTF8',
                             thousands='.',
                             decimal=',',
                             header=[3, 4],
                             skipfooter=9,  # skip the summary rows
                             index_col=[0, 1],  # required for MultiIndex
                             converters={'Code officiel géographique': str})
FR_re_df_old.tail()
# Rearrange data
FR_re_df_old.index.rename(['insee_com', 'municipality'], inplace=True)
FR_re_df_old.columns.rename(['energy_source_level_2', None], inplace=True)
FR_re_df_old = (FR_re_df_old
.stack(level='energy_source_level_2', dropna=False)
.reset_index(drop=False))
# Choose the translation terms for France, create dictionary and show dictionary
idx_FR = columnnames[(columnnames['country'] == 'FR') & (columnnames['data_source'] == 'gouv.fr')].index
column_dict_FR = columnnames.loc[idx_FR].set_index('original_name')['opsd_name'].to_dict()
column_dict_FR
# Translate columnnames
FR_re_df_old.rename(columns=column_dict_FR, inplace=True)
# Drop all rows that contain NA
FR_re_df_old = FR_re_df_old.dropna()
FR_re_df_old.head(10)
FR_re_df_old['data_source'] = 'Ministry for the Ecological and Inclusive Transition'
FR_re_df_old['as_of_year'] = 2017 # Year for which the dataset has been compiled by the data source
# Choose the translation terms for France, create dictionary and show dictionary
idx_FR = valuenames[(valuenames['country'] == 'FR') & (valuenames['data_source'] == 'gouv.fr')].index
value_dict_FR = valuenames.loc[idx_FR].set_index('original_name')['opsd_name'].to_dict()
value_dict_FR
# Replace all original value names by the OPSD value names
FR_re_df_old.replace(value_dict_FR, inplace=True)
energy_source_dict_FR = valuenames.loc[idx_FR].set_index(
'opsd_name')['energy_source_level_2'].to_dict()
display(energy_source_dict_FR)
display(FR_re_df_old[['energy_source_level_2']].drop_duplicates())
(FR_re_df_old['energy_source_level_2'].replace(energy_source_dict_FR).unique())
# Create dictionary in order to assign energy_source to its subtype
energy_source_dict_FR = valuenames.loc[idx_FR].set_index(
'opsd_name')['energy_source_level_2'].to_dict()
# Column energy_source partly contains subtype information, thus this column is copied
# to new column for energy_source_subtype.
FR_re_df_old['technology'] = FR_re_df_old['energy_source_level_2']
# Only Photovoltaics should be kept as technology. Hydro should be changed to 'Other or unspecified technology',
# Geothermal to NaN, and Wind to Onshore.
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_translation_dictionary = {
'Solar' : 'Photovoltaics',
'Wind': 'Onshore',
'Hydro': 'Other or unspecified technology',
'Geothermal': np.nan
}
FR_re_df_old['technology'].replace(technology_translation_dictionary, inplace=True)
# The energy source subtype values in the energy_source column are replaced by
# the higher level classification
FR_re_df_old['energy_source_level_2'].replace(energy_source_dict_FR, inplace=True)
# Assign energy_source_level_1 to the dataframe
FR_re_df_old['energy_source_level_1'] = 'Renewable energy'
FR_re_df_old.reset_index(drop=True, inplace=True)
# Choose energy source level 2 entries where energy source level 2 is Bioenergy in order to
# separate Bioenergy subtypes to energy source level 3 and subtypes for the rest to technology
idx_FR_Bioenergy = FR_re_df_old[FR_re_df_old['energy_source_level_2'] == 'Bioenergy'].index
# Assign technology to energy source level 3 for all entries where energy source level 2 is
# Bioenergy and delete those entries from technology
FR_re_df_old[['energy_source_level_3']] = FR_re_df_old.iloc[idx_FR_Bioenergy][['technology']]
FR_re_df_old.loc[idx_FR_Bioenergy,'technology'] = np.nan
FR_re_df_old[['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']].drop_duplicates()
# Column names of merge key have to be named identically
FR_re_df_old.rename(columns={'municipality_code': 'INSEE_COM'}, inplace=True)
# Merge longitude and latitude columns by the Code INSEE
FR_re_df_old = FR_re_df_old.merge(FR_geo[['INSEE_COM', 'lat', 'lon']],
on=['INSEE_COM'],
how='left')
# Translate Code INSEE column back to municipality_code
FR_re_df_old.rename(columns={'INSEE_COM': 'municipality_code'}, inplace=True)
FR_postcode2nuts_path = filepaths['Eurostat']
FR_re_df_old = nuts_converter.add_nuts_information(FR_re_df_old, 'FR', FR_postcode2nuts_path,
how=['municipality_code', 'latlon'])
# how=['municipality', 'municipality_code', 'latlon']
# Report the number of facilities whose NUTS codes were successfully determined
determined = FR_re_df_old['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', FR_re_df_old.shape[0], 'facilities in FR.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = FR_re_df_old['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', FR_re_df_old.shape[0], 'facilities in FR.')
# Show the facilities without NUTS classification
FR_re_df_old[FR_re_df_old['nuts_1_region'].isnull()]
# For each column present in the new data's column space, but not the old,
# add an empty column to the old data.
for new_column in FR_re_df.columns:
    if new_column not in FR_re_df_old.columns:
        FR_re_df_old[new_column] = np.nan
# Define the mask to select the municipalities from the old data, that are not covered
# by the new.
not_included = ~(FR_re_df_old['municipality_code'].isin(FR_re_df['municipality_code']))
FR_re_df_old[not_included]
# Add a dummy column to the new data frame
# representing the number of power plants (always 1)
FR_re_df['number_of_installations'] = 1
# Mark the old data rows as aggregations on municipality level.
FR_re_df_old['site_name'] = 'Aggregated data for ' + FR_re_df_old['municipality']
# Concatenate the new data with the old rows referring to the municipalities
# not covered by the new.
FR_re_df = pd.concat([FR_re_df, FR_re_df_old[not_included]], ignore_index=True, axis='index', sort=True)
columns_to_keep = ['EIC_code', 'municipality_group_code', 'IRIS_code', 'as_of_year',
'commissioning_date', 'connection_date', 'data_source', 'departement',
'departement_code', 'disconnection_date',
'electrical_capacity', 'energy_source_level_1', 'energy_source_level_2',
'energy_source_level_3', 'lat', 'lon',
'municipality', 'municipality_code',
'municipality_group', 'number_of_installations', 'nuts_1_region',
'nuts_2_region', 'nuts_3_region', 'region', 'region_code', 'site_name',
'source_station_code', 'technology']
FR_re_df = FR_re_df[columns_to_keep]
FR_re_df.reset_index(drop=True, inplace=True)
visualize_points(FR_re_df['lat'],
FR_re_df['lon'],
'France',
categories=FR_re_df['energy_source_level_2']
)
FR_re_df.to_pickle('intermediate/FR_renewables.pickle')
del FR_re_df
# Download the data
filepaths = downloader.download_data_for_country('PL')
# Get the local paths to the data files
PL_re_filepath = filepaths['Urzad Regulacji Energetyki']
PL_postcode2nuts_filepath = filepaths['Eurostat']
PL_geo_filepath = filepaths['Geonames']
# Read the data into a pandas dataframe
PL_re_df = pd.read_excel(PL_re_filepath,
encoding='latin',
header=2,
skipfooter=14
)
# Show 5 random rows
PL_re_df.sample(n=5)
# Get the mask for selecting the WS plants
ws_mask = PL_re_df['Rodzaj_OZE'] == 'WS'
# Drop them
print('Dropping', ws_mask.sum(), 'out of', PL_re_df.shape[0], 'power plants.')
PL_re_df.drop(PL_re_df.index[ws_mask], axis=0, inplace=True)
PL_re_df.reset_index(drop=True, inplace=True)
# Choose the translation terms for Poland, create and show the dictionary
columnnames = pd.read_csv(os.path.join('input', 'column_translation_list.csv'))
idx_PL = columnnames[(columnnames['country'] == 'PL') &
(columnnames['data_source'] == 'Urzad Regulacji Energetyki')].index
column_dict_PL = columnnames.loc[idx_PL].set_index('original_name')['opsd_name'].to_dict()
column_dict_PL
# Translate column names
PL_re_df.rename(columns=column_dict_PL, inplace=True)
# Show a couple of rows
PL_re_df.head(2)
print('The number of missing values in the data:', PL_re_df.isna().sum().sum())
print('Are all capacities proper numbers?', PL_re_df['electrical_capacity'].dtype == 'float64')
print('What about the energy codes?', PL_re_df['energy_type'].unique())
# Check the voivodeships
print('Show the names of the voivodeships.')
PL_re_df['region'].unique()
PL_re_df['region'] = PL_re_df['region'].str.strip().str.capitalize()
PL_re_df['region'].unique()
districts = PL_re_df['district'].unique()
districts.sort()
districts
# Correct the typos
PL_re_df.loc[PL_re_df['district'] == 'lipowski', 'district'] = 'lipnowski'
PL_re_df.loc[PL_re_df['district'] == 'hojnowski', 'district'] = 'hajnowski'
# Choose the translation terms for Poland, create dictionary
idx_PL = valuenames[valuenames['country'] == 'PL'].index
value_dict_PL = valuenames.loc[idx_PL].set_index('original_name')['opsd_name'].to_dict()
# Set energy source level 3
PL_re_df['energy_source_level_3'] = PL_re_df['energy_type'].replace(value_dict_PL)
# Create dictionary in order to assign energy_source_level_2 to its subtype
idx_PL = valuenames[valuenames['country'] == 'PL'].index
energy_source_dict_PL = valuenames.loc[idx_PL].set_index('original_name')['energy_source_level_2'].to_dict()
# Add energy_source_level_2
PL_re_df['energy_source_level_2'] = PL_re_df['energy_type'].replace(energy_source_dict_PL)
# Standardize the values for technology
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_translation_dictionary = {
'BG': np.nan,
'BM': np.nan,
'PVA': 'Other or unspecified technology', # Photovoltaics?
'WIL': 'Other or unspecified technology', # Onshore?
'WO': 'Other or unspecified technology', # Run-of-river
}
PL_re_df['technology'] = PL_re_df['energy_type'].replace(technology_translation_dictionary)
# Add energy_source_level_1
PL_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
PL_re_df[['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']].drop_duplicates().sort_values(by='energy_source_level_2')
# Define the function to standardize district names from the original data
def standardize_districts(original_string):
if original_string[-1] == ',': # there is one district whose name ends with ','; that's a typo in the data
original_string = original_string[:-1]
if original_string.startswith('m. st. '):
return original_string[7:]
elif original_string.startswith('m. '):
return original_string[3:]
elif any([original_string.endswith(suffix) for suffix in ['ski', 'cki', 'zki']]):
return 'Powiat ' + original_string
else:
return original_string
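A few sample inputs illustrate the branches of the function (it is restated below so the cell runs on its own; the district names are only examples):

```python
def standardize_districts(original_string):
    if original_string[-1] == ',':  # one district name in the data ends with ','
        original_string = original_string[:-1]
    if original_string.startswith('m. st. '):
        return original_string[7:]
    elif original_string.startswith('m. '):
        return original_string[3:]
    elif any(original_string.endswith(suffix) for suffix in ['ski', 'cki', 'zki']):
        return 'Powiat ' + original_string
    else:
        return original_string

# 'm. st. Warszawa' -> 'Warszawa' (capital-city prefix stripped)
# 'm. Gdynia'       -> 'Gdynia'   (city prefix stripped)
# 'krakowski'       -> 'Powiat krakowski' (adjectival district name prefixed)
examples = ['m. st. Warszawa', 'm. Gdynia', 'krakowski']
standardized = [standardize_districts(name) for name in examples]
```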
# Get geo-information
zip_PL_geo = zipfile.ZipFile(PL_geo_filepath)
# Read generated postcode/location file
PL_geo = pd.read_csv(zip_PL_geo.open('PL.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
PL_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
PL_geo.drop_duplicates('postcode', keep='last', inplace=True)
PL_geo['postcode'] = PL_geo['postcode'].astype(str)
# Get the names
geonames_districts = PL_geo['admin_name2'].unique()
# Show them
geonames_districts
# Standardize the district names from the original data
PL_re_df['standardized_district'] = PL_re_df['district'].apply(standardize_districts)
standardized_districts = PL_re_df['standardized_district'].unique()
# Check which districts could not be found in the GeoNames data
not_found = set(standardized_districts).difference(set(geonames_districts))
number_of_not_found = len(not_found)
total = len(standardized_districts)
print('{}/{} names could not be found. Those are:'.format(number_of_not_found, total))
print(not_found)
# We define the similarity between two strings, string1 and string2,
# as the length of the longest prefix of string1 that appears in string2.
# Note 1: this measure of similarity is not necessarily symmetrical.
# Note 2: a prefix of a string is its substring that starts from the beginning of the string.
def calculate_similarity(string1, string2):
    # Check prefixes from longest (the whole string) to shortest
    for n in range(len(string1), 0, -1):
        prefix = string1[:n]
        if prefix in string2:
            return len(prefix)
    return 0
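A standalone check of the prefix-similarity measure (restated here, with the full string counted as a prefix of itself); note the asymmetry mentioned in the comments above:

```python
def calculate_similarity(string1, string2):
    # Length of the longest prefix of string1 that occurs anywhere in string2
    for n in range(len(string1), 0, -1):
        if string1[:n] in string2:
            return n
    return 0

print(calculate_similarity('ab', 'zab'))   # 2 -- all of 'ab' occurs inside 'zab'
print(calculate_similarity('zab', 'ab'))   # 0 -- no prefix of 'zab' occurs in 'ab'
```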
# Define a function to find, among a group of candidate strings,
# the most similar string to the one given as the reference string.
def find_the_most_similar(reference_string, candidate_strings):
the_most_similar = None
maximal_similarity = 0
for candidate_string in candidate_strings:
similarity = calculate_similarity(reference_string, candidate_string)
if similarity > maximal_similarity:
maximal_similarity = similarity
the_most_similar = candidate_string
return the_most_similar, maximal_similarity
already_mapped = PL_re_df[['district', 'standardized_district']].drop_duplicates().to_dict(orient='records')
already_mapped = {mapping['district'] : mapping['standardized_district'] for mapping in already_mapped
if mapping['standardized_district'] in geonames_districts}
# Make a dictionary to map each district from the original data to its GeoNames equivalent.
# Districts whose standardized versions appear in the GeoNames data are mapped to those standardizations;
# the mappings for the remaining districts will be found using the similarity measure defined above.
districts_map = PL_re_df[['district', 'standardized_district']].drop_duplicates().to_dict(orient='records')
districts_map = {mapping['district'] : mapping['standardized_district'] for mapping in districts_map}
# Override the mappings for the 49 districts whose standardized names have not been found in the GeoNames data.
for district, standardized_district in districts_map.items():
    if standardized_district not in geonames_districts:
        if standardized_district.startswith('Powiat'):
            standardized_district = standardized_district[7:]
        capitalized = standardized_district.capitalize()
        lowercase = standardized_district.lower()
        candidate1, similarity1 = find_the_most_similar(capitalized, geonames_districts)
        candidate2, similarity2 = find_the_most_similar(lowercase, geonames_districts)
        if similarity1 > similarity2:
            districts_map[district] = candidate1
        elif similarity2 > similarity1:
            districts_map[district] = candidate2
        else:
            # Break ties by mapping to the shorter string
            if len(candidate1) < len(candidate2):
                districts_map[district] = candidate1
            else:
                districts_map[district] = candidate2
# Apply the override to PL_re_df
PL_re_df['standardized_district'] = PL_re_df['district'].apply(lambda district: districts_map[district])
# Show the results
PL_re_df[['district', 'standardized_district']].drop_duplicates()
# Clear the mappings for wołowski, Nowy Sącz, rzeszowski, hojnowski.
for district in ['wołowski', 'm. Nowy Sącz', 'rzeszowski', 'hojnowski']:
districts_map[district] = ''
PL_re_df.loc[PL_re_df['district'] == district, 'standardized_district'] = ''
# For each mapping, select a postcode from the GeoNames data
df_dict = {'original' : [], 'geonames' : []}
for original_name in districts_map:
geonames_name = districts_map[original_name]
df_dict['original'].append(original_name)
df_dict['geonames'].append(geonames_name)
mapping_df = pd.DataFrame.from_dict(df_dict)
# To make sure that the selected postcodes do appear in the NUTS table,
# we drop, from PL_geo, all rows with the postcodes not in the postcode-to-NUTS table for Poland.
PL_table = nuts_converter.open_postcode2nuts(filepaths['Eurostat'])['CODE']
PL_geo = pd.merge(PL_geo, PL_table, how='inner', left_on='postcode', right_on='CODE')
PL_geo.drop(['CODE'], axis='columns', inplace=True)
# Pair each mapped district name with the postcodes recorded for it in the GeoNames data
merged = pd.merge(mapping_df,
PL_geo[['admin_name2', 'postcode']],
how='left',
left_on='geonames',
right_on='admin_name2')
# Rename the column postcode to make its meaning straightforward
merged.rename(columns={'postcode' : 'random_postcode'}, inplace=True)
merged = merged.drop_duplicates(['geonames'])
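The merge-then-`drop_duplicates` pattern above effectively picks one arbitrary postcode per district (the first match). A toy illustration with made-up postcodes:

```python
import pandas as pd

mapping = pd.DataFrame({'geonames': ['Powiat wołowski']})
geo = pd.DataFrame({'admin_name2': ['Powiat wołowski', 'Powiat wołowski'],
                    'postcode': ['56-100', '56-120']})         # made-up postcodes
merged = pd.merge(mapping, geo, how='left',
                  left_on='geonames', right_on='admin_name2')
merged = merged.drop_duplicates(['geonames'])                  # keeps the first match
print(merged['postcode'].tolist())  # ['56-100']
```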
print(PL_re_df.shape)
PL_re_df = pd.merge(PL_re_df,
merged[['geonames', 'random_postcode']],
how='left',
left_on='standardized_district',
right_on='geonames')
# Show results
PL_re_df.head(2)
display(PL_re_df[PL_re_df['random_postcode'].isnull()])
PL_re_df['random_postcode'].isnull().sum()
PL_postcode2nuts_path = filepaths['Eurostat']
PL_re_df = nuts_converter.add_nuts_information(PL_re_df, 'PL', PL_postcode2nuts_path,
postcode_column='random_postcode', how=['postcode'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = PL_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', PL_re_df.shape[0], 'facilities in PL.')
# Manual assignments
manual_nuts3_map = {
'wołowski' : 'PL518',
'm. Nowy Sącz' : 'PL218',
'rzeszowski' : 'PL325'
}
for district in manual_nuts3_map:
nuts3 = manual_nuts3_map[district]
nuts2 = nuts3[:-1]
nuts1 = nuts3[:-2]
mask = (PL_re_df['district'] == district)
PL_re_df.loc[mask, ['nuts_1_region', 'nuts_2_region', 'nuts_3_region']] = [nuts1, nuts2, nuts3]
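The truncation above exploits the hierarchical structure of NUTS codes: dropping the last character of a NUTS-3 code gives the enclosing NUTS-2 code, and dropping one more gives NUTS-1. A minimal sketch:

```python
def nuts_levels(nuts3):
    # e.g. 'PL518' -> NUTS 1 'PL5', NUTS 2 'PL51', NUTS 3 'PL518'
    return nuts3[:-2], nuts3[:-1], nuts3

print(nuts_levels('PL518'))  # ('PL5', 'PL51', 'PL518')
```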
# Report the number of facilities whose NUTS codes could not be determined
not_determined = PL_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', PL_re_df.shape[0], 'facilities in PL.')
PL_re_df['data_source'] = 'Urzad Regulacji Energetyki'
PL_re_df['as_of_year'] = 2019 # The year for which the dataset has been compiled by the data source
# Choose which column to keep
PL_re_df = PL_re_df.loc[:, [ 'URE_id', 'region', 'district',
'nuts_1_region', 'nuts_2_region', 'nuts_3_region',
'electrical_capacity',
'energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3',
'technology',
'data_source', 'as_of_year']]
PL_re_df.to_pickle('intermediate/PL_renewables.pickle')
del PL_re_df
# Download the data and get the local paths of the downloaded files
filepaths = downloader.download_data_for_country('CH')
CH_re_filepath = filepaths['BFE']
CH_geo_filepath = filepaths['Geonames']
CH_postcode2nuts_filepath = filepaths['Eurostat']
# Get data of renewables per municipality
CH_re_df = pd.read_excel(CH_re_filepath,
                         sheet_name='KEV Bezüger 2018',
                         encoding='UTF8',
                         thousands='.',
                         decimal=','  # 'decimal' is the pandas keyword for the decimal separator
                        )
# Choose the translation terms for Switzerland, create dictionary and show dictionary
idx_CH = columnnames[columnnames['country'] == 'CH'].index
column_dict_CH = columnnames.loc[idx_CH].set_index('original_name')['opsd_name'].to_dict()
column_dict_CH
# Translate columnnames
CH_re_df.columns = [column_name.replace("\n", "") for column_name in CH_re_df.columns]
CH_re_df.rename(columns=column_dict_CH, inplace=True)
CH_re_df['data_source'] = 'BFE'
# Choose the translation terms for Switzerland, create dictionary
idx_CH = valuenames[valuenames['country'] == 'CH'].index
value_dict_CH = valuenames.loc[idx_CH].set_index('original_name')['opsd_name'].to_dict()
# Assign energy_source_level_1 to the dataframe
CH_re_df['energy_source_level_1'] = 'Renewable energy'
CH_re_df['energy_source_level_3'] = CH_re_df['technology']
# Create a dictionary for mapping each energy subtype to its energy_source_level_2
idx_CH = valuenames[valuenames['country'] == 'CH'].index
energy_source_dict_CH = valuenames.loc[idx_CH].set_index('original_name')['energy_source_level_2'].to_dict()
# Add energy_source_level_2
CH_re_df['energy_source_level_2'] = CH_re_df['energy_source_level_2'].replace(energy_source_dict_CH)
# Translate values in order to standardize energy_source_level_3
value_dict_CH = valuenames.loc[idx_CH].set_index('original_name')['opsd_name'].to_dict()
CH_re_df['energy_source_level_3'].replace(value_dict_CH, inplace=True)
# Standardize the values for technology
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_translation_dictionary = {
'Klärgasanlage': np.nan,
'Dampfprozess': 'Steam turbine',
'übrige Biomasse - WKK-Anlage': 'Other or unspecified technology',
'übrige Biomasse - Dampfprozess': 'Steam turbine',
'Schlammverbrennungsanlage': 'Combustion engine',
'WKK-Prozess': 'Other or unspecified technology',
'Kehrrichtverbrennungsanlage': 'Combustion engine',
'Integrierte Anlage': 'Photovoltaics',
'Angebaute Anlage': 'Photovoltaics',
'Freistehende Anlage': 'Photovoltaics',
'Trinkwasserkraftwerk': 'Other or unspecified technology',
'Durchlaufkraftwerk': 'Run-of-river',
'Dotierwasserkraftwerk': 'Other or unspecified technology',
'Ausleitkraftwerk': 'Other or unspecified technology',
'Wind Offshore': 'Other or unspecified technology',
'Abwasserkraftwerk': 'Other or unspecified technology',
'Unbekannt': 'Other or unspecified technology',
np.nan: 'Onshore',
None: 'Onshore'
}
CH_re_df['technology'].replace(technology_translation_dictionary, inplace=True)
# Add energy_source_level_1
CH_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
energy_columns = ['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']
CH_re_df[energy_columns].drop_duplicates().sort_values(by='energy_source_level_2')
drop_mask = (CH_re_df['energy_source_level_3'] == 'Biomass and biogas') & \
(CH_re_df['technology'] == 'Steam turbine')
drop_indices = drop_mask[drop_mask].index
CH_re_df.drop(drop_indices, axis='index', inplace=True)
CH_re_df.reset_index(drop=True, inplace=True)
CH_re_df.replace(value_dict_CH, inplace=True)
# Get geo-information
zip_CH_geo = zipfile.ZipFile(CH_geo_filepath)
# Read generated postcode/location file
CH_geo = pd.read_csv(zip_CH_geo.open('CH.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
CH_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
CH_geo.drop_duplicates('postcode', keep='last', inplace=True)
# Convert the postcode to int so its type matches municipality_code for the merge below
CH_geo['postcode'] = CH_geo['postcode'].astype(int)
# Add longitude/latitude information assigned by municipality code
CH_re_df = pd.merge(CH_re_df,
CH_geo[['lat', 'lon', 'postcode']],
left_on='municipality_code',
right_on='postcode',
how='left'
)
zip_CH_geo.close()
CH_postcode2nuts_path = filepaths['Eurostat']
# Use string versions of the postcode and municipality code columns.
# The merge above leaves missing postcodes as NaN, which makes the column float,
# so casting to str yields e.g. '8001.0'; the [:-2] strips the trailing '.0'.
CH_re_df['postcode_str'] = CH_re_df['postcode'].astype(str).str[:-2]
CH_re_df['municipality_code_str'] = CH_re_df['municipality_code'].astype(str)
CH_re_df = nuts_converter.add_nuts_information(CH_re_df, 'CH', CH_postcode2nuts_path,
postcode_column='postcode_str',
municipality_code_column='municipality_code_str',
lau_name_type='NATIONAL', how=['postcode', 'municipality'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = CH_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', CH_re_df.shape[0], 'facilities in CH.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = CH_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', CH_re_df.shape[0], 'facilities in CH.')
CH_re_df[CH_re_df['nuts_1_region'].isnull()][['postcode', 'municipality']]
# Check the facilities without NUTS classification
no_nuts = CH_re_df['nuts_1_region'].isnull()
# Find the masks where some information for finding the proper NUTS code is present
municipality_name_present = ~(CH_re_df['municipality'].isnull())
# Show the cases where NUTS classification failed even though it shouldn't have
problematic_municipality_names = CH_re_df[no_nuts & municipality_name_present]['municipality'].unique()
print('Problematic municipalities:', ', '.join(list(problematic_municipality_names)) + '.')
print('Are those names present in the official NUTS tables for CH?')
if nuts_converter.municipality2nuts_df['municipality'].isin(problematic_municipality_names).any():
print('At least one is.')
else:
print('No, none is.')
# kW to MW
CH_re_df['electrical_capacity'] /= 1000
# kWh to MWh
CH_re_df['production'] /= 1000
columns_to_keep = ['project_name', 'energy_source_level_2','energy_source_level_3', 'technology',
'electrical_capacity', 'production', 'tariff', 'commissioning_date', 'contract_period_end',
'address', 'municipality_code', 'municipality', 'nuts_1_region', 'nuts_2_region',
'nuts_3_region', 'canton', 'company', 'title', 'surname', 'first_name', 'data_source',
'energy_source_level_1', 'lat', 'lon', 'postcode']
CH_re_df = CH_re_df.loc[:, columns_to_keep]
CH_re_df.reset_index(drop=True, inplace=True)
visualize_points(CH_re_df['lat'],
CH_re_df['lon'],
'Switzerland',
categories=CH_re_df['energy_source_level_2']
)
CH_re_df.to_pickle('intermediate/CH_renewables.pickle')
del CH_re_df
# Download the data and get the local paths to the corresponding files
filepaths = downloader.download_data_for_country('UK')
UK_re_filepath = filepaths['BEIS']
UK_geo_filepath = filepaths['Geonames']
UK_postcode2nuts_filepath = filepaths['Eurostat']
# Read the renewable powerplants data into a dataframe
UK_re_df = pd.read_csv(UK_re_filepath,
header=2,
encoding='latin1',
parse_dates=['Record Last Updated (dd/mm/yyyy)','Operational'],
infer_datetime_format=True,
thousands=','
)
# Drop empty columns and rows
UK_re_df.dropna(axis='index', how='all', inplace=True)
UK_re_df.dropna(axis='columns', how='all', inplace=True)
# Keep only operational facilities in the dataset
UK_re_df = UK_re_df.loc[UK_re_df["Development Status"] == "Operational"]
UK_re_df.reset_index(inplace=True, drop=True)
# Standardize string columns
strip_and_lower = ['CHP Enabled']
strip_only = ['Country', 'County', 'Operator (or Applicant)', 'Mounting Type for Solar']
for column in strip_and_lower:
util.helper.standardize_column(UK_re_df, column, lower=True)
for column in strip_only:
util.helper.standardize_column(UK_re_df, column, lower=False)
# Drop Flywheels, Battery and Liquid Air Energy Storage
UK_re_df = UK_re_df[~UK_re_df['Technology Type'].isin(['Flywheels', 'Battery', 'Liquid Air Energy Storage'])]
UK_re_df.reset_index(drop=True, inplace=True)
# Copy the column "Technology Type" to a new column named "technology"
UK_re_df['technology'] = UK_re_df['Technology Type']
# Choose the translation terms for the UK and create the translation dictionary
idx_UK = columnnames[columnnames['country'] == 'UK'].index
column_dict_UK = columnnames.loc[idx_UK].set_index('original_name')['opsd_name'].to_dict()
# Show the dictionary
column_dict_UK
# Translate column names
UK_re_df.rename(columns=column_dict_UK, inplace=True)
UK_re_df['data_source'] = 'BEIS'
# Create a dictionary for mapping each energy subtype to its energy_source_level_2
idx_UK = valuenames[valuenames['country'] == 'UK'].index
energy_source_dict_UK = valuenames.loc[idx_UK].set_index('original_name')['energy_source_level_2'].to_dict()
# Add energy_source_level_2
UK_re_df['energy_source_level_2'] = UK_re_df['energy_source_level_3'].replace(energy_source_dict_UK)
# Translate values in order to standardize energy_source_level_3
value_dict_UK = valuenames.loc[idx_UK].set_index('original_name')['opsd_name'].to_dict()
UK_re_df['energy_source_level_3'].replace(value_dict_UK, inplace=True)
# Standardize the values for technology
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_translation_dictionary = {
'Biomass (co-firing)': 'Other or unspecified technology',
'Biomass (dedicated)': 'Other or unspecified technology',
'Advanced Conversion Technologies': 'Other or unspecified technology',
'Anaerobic Digestion': 'Other or unspecified technology',
'EfW Incineration': np.nan,
'Large Hydro': 'Other or unspecified technology',
'Small Hydro': 'Other or unspecified technology',
'Landfill Gas': np.nan,
'Solar Photovoltaics': 'Photovoltaics',
'Sewage Sludge Digestion': np.nan,
'Tidal Barrage and Tidal Stream': np.nan,
'Shoreline Wave': np.nan,
'Wind Offshore': 'Offshore',
'Wind Onshore': 'Onshore',
'Pumped Storage Hydroelectricity': 'Pumped storage'
}
UK_re_df['technology'].replace(technology_translation_dictionary, inplace=True)
# Add energy_source_level_1
UK_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
UK_re_df[['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']].drop_duplicates()
# Define a wrapper for bng_to_latlon for handling None values
def to_lat_lon(easting, northing):
if pd.isnull(easting) or pd.isnull(northing):
return (None, None)
else:
return bng_to_latlon.OSGB36toWGS84(easting, northing)
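The same null-safe pattern can be checked without the bng_to_latlon dependency by injecting a stub conversion function (the stub below is purely illustrative, not a real OSGB36 transform):

```python
import pandas as pd

def safe_convert(easting, northing, convert):
    # Skip the conversion when either coordinate is missing
    if pd.isnull(easting) or pd.isnull(northing):
        return (None, None)
    return convert(easting, northing)

stub = lambda e, n: (n / 1000.0, e / 1000.0)   # stand-in for OSGB36toWGS84
print(safe_convert(None, 180000, stub))        # (None, None)
print(safe_convert(530000, 180000, stub))      # (180.0, 530.0)
```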
# Convert easting and northing columns to numbers
UK_re_df['X-coordinate'] = pd.to_numeric(
UK_re_df['X-coordinate'].astype(str).str.replace(',', ''),
errors='coerce'
)
UK_re_df['Y-coordinate'] = pd.to_numeric(
UK_re_df['Y-coordinate'].astype(str).str.replace(',', ''),
errors='coerce'
)
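The thousands-separator cleanup can be verified on a toy series; `errors='coerce'` turns unparseable entries into NaN instead of raising:

```python
import math
import pandas as pd

s = pd.Series(['123,456', '7,890', 'n/a'])
cleaned = pd.to_numeric(s.str.replace(',', ''), errors='coerce')
print(cleaned.tolist())  # [123456.0, 7890.0, nan]
```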
# Convert easting and northing coordinates to standard latitude and longitude
latlon = UK_re_df.apply(lambda row: to_lat_lon(row["X-coordinate"], row["Y-coordinate"]),
axis=1
)
# Split a column of (latitude, longitude) pairs into two separate coordinate columns
latitude = latlon.apply(lambda x: x[0])
longitude = latlon.apply(lambda x: x[1])
# Add them to the dataframe
UK_re_df['latitude'] = latitude
UK_re_df['longitude'] = longitude
# Get geo-information
zip_UK_geo = zipfile.ZipFile(UK_geo_filepath)
# Read generated postcode/location file
UK_geo = pd.read_csv(zip_UK_geo.open('GB_full.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
UK_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
UK_geo.drop_duplicates('postcode', keep='last', inplace=True)
UK_geo['postcode'] = UK_geo['postcode'].astype(str)
# Find the rows where latitude and longitude are unknown
missing_latlon_mask = UK_re_df['latitude'].isna() | UK_re_df['longitude'].isna()
missing_latlon = UK_re_df[missing_latlon_mask]
# Add longitude/latitude information assigned by postcode
updated_latlon = pd.merge(missing_latlon,
UK_geo[['lat', 'lon', 'postcode']],
left_on='postcode',
right_on='postcode',
how='left'
)
# Return the updated rows to the original frame
UK_re_df = pd.merge(UK_re_df,
updated_latlon[['uk_beis_id', 'lat', 'lon']],
on='uk_beis_id',
how='left'
)
# Use the bng_to_latlon coordinates (columns: 'latitude' and 'longitude') if present,
# otherwise, use those obtained with UK_geo (columns: 'lat' and 'lon').
UK_re_df['longitude'] = UK_re_df.apply(lambda row: row['longitude'] if not pd.isnull(row['longitude'])
else row['lon'],
axis=1
)
UK_re_df['latitude'] = UK_re_df.apply(lambda row: row['latitude'] if not pd.isnull(row['latitude'])
else row['lat'],
axis=1
)
# Drop the UK_geo columns (lat/lon)
# as the information was moved to the 'latitude' and 'longitude' columns.
UK_re_df.drop(['lat', 'lon'], axis='columns', inplace=True)
zip_UK_geo.close()
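The row-wise coalescing above can also be written with pandas' vectorized `combine_first`, which keeps a value where present and falls back to the second series otherwise; a sketch with made-up coordinates:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'latitude': [51.5, np.nan], 'lat': [np.nan, 52.2]})
df['latitude'] = df['latitude'].combine_first(df['lat'])
print(df['latitude'].tolist())  # [51.5, 52.2]
```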
# Find the rows where latitude and longitude are unknown
missing_latlon_mask = UK_re_df['latitude'].isna() | UK_re_df['longitude'].isna()
missing_latlon = UK_re_df[missing_latlon_mask].copy()
missing_latlon = missing_latlon.reset_index()
# Determine their post code prefixes
prefixes = missing_latlon.apply(lambda row: str(row['postcode']).split(' ')[0],
axis=1
)
missing_latlon['Prefix'] = prefixes
# Determine the centroids of the areas covered by the prefixes
grouped_UK_geo = UK_geo.groupby(UK_geo['postcode'].str.split(' ').str[0])
# Assign the centroid coordinates to the facilities with unknown coordinates
updated_latlon = pd.merge(missing_latlon,
grouped_UK_geo.mean(),
left_on="Prefix",
right_index=True,
how="left"
)
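The prefix-centroid approximation averages the coordinates of all postcodes sharing an outward code; a toy version with invented postcodes:

```python
import pandas as pd

geo = pd.DataFrame({'postcode': ['AB1 2CD', 'AB1 9XY', 'ZZ9 1AA'],   # invented postcodes
                    'lat': [50.0, 52.0, 57.0],
                    'lon': [-1.0, -3.0, -2.0]})
prefix = geo['postcode'].str.split(' ').str[0]
centroids = geo.groupby(prefix)[['lat', 'lon']].mean()
print(centroids.loc['AB1'].tolist())  # [51.0, -2.0]
```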
# Return the updated rows to the original frame
UK_re_df = pd.merge(UK_re_df,
updated_latlon[['uk_beis_id', 'lat', 'lon']],
on='uk_beis_id',
how='left'
)
# Keep the already known coordinates (columns: 'latitude' and 'longitude') if present,
# otherwise, use those obtained by approximation (columns: 'lat' and 'lon').
UK_re_df['longitude'] = UK_re_df.apply(lambda row: row['longitude'] if not pd.isnull(row['longitude'])
else row['lon'],
axis=1
)
UK_re_df['latitude'] = UK_re_df.apply(lambda row: row['latitude'] if not pd.isnull(row['latitude'])
else row['lat'],
axis=1
)
# Drop the UK_geo columns (lat/lon)
# as the information was moved to the 'latitude' and 'longitude' columns.
UK_re_df.drop(['lat', 'lon'], axis='columns', inplace=True)
UK_postcode2nuts_filepath = filepaths['Eurostat']
UK_re_df = nuts_converter.add_nuts_information(UK_re_df, 'UK', UK_postcode2nuts_filepath,
latitude_column='latitude',
longitude_column='longitude', closest_approximation=True,
lau_name_type='NATIONAL', how=['latlon', 'municipality'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = UK_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', UK_re_df.shape[0], 'facilities in UK.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = UK_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', UK_re_df.shape[0], 'facilities in UK.')
UK_re_df[UK_re_df['nuts_1_region'].isnull()]
visualize_points(UK_re_df['latitude'],
UK_re_df['longitude'],
'United Kingdom',
categories=UK_re_df['energy_source_level_2']
)
max_X = UK_re_df['X-coordinate'].max()
min_X = UK_re_df['X-coordinate'].min()
max_Y = UK_re_df['Y-coordinate'].max()
min_Y = UK_re_df['Y-coordinate'].min()
figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k')
ax = plt.axes(projection=ccrs.OSGB())
ax.coastlines('10m')
ax.scatter(UK_re_df['X-coordinate'], UK_re_df['Y-coordinate'],s=0.5)
plt.show()
# Rename 'longitude' and 'latitude' to 'lon' and 'lat' to conform to the naming convention
# used for other countries.
UK_re_df.rename(columns={'longitude': 'lon', 'latitude': 'lat'}, inplace=True)
# Define the columns to keep
columns_of_interest = ['commissioning_date', 'uk_beis_id', 'operator', 'site_name',
'energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology',
'electrical_capacity', 'chp', 'support_robranding', 'support_fit', 'support_cfd',
'capacity_individual_turbine', 'number_of_turbines', 'solar_mounting_type',
'status', 'address', 'municipality', 'nuts_1_region', 'nuts_2_region', 'nuts_3_region',
'region', 'country', 'postcode', 'lon', 'lat', 'data_source'
]
for col in columns_of_interest:
if col not in UK_re_df.columns:
print(col)
# Clean the dataframe from columns other than those specified above
UK_re_df = UK_re_df.loc[:, columns_of_interest]
UK_re_df.reset_index(drop=True, inplace=True)
UK_re_df.columns
UK_re_df.to_pickle('intermediate/UK_renewables.pickle')
# Download the data and get the local paths to the corresponding files
filepaths = downloader.download_data_for_country('SE')
print(filepaths)
SE_re_filepath = filepaths['Vindbrukskollen']
SE_geo_filepath = filepaths['Geonames']
SE_postcode2nuts_filepath = filepaths['Eurostat']
# Read the data
SE_re_df = pd.read_excel(SE_re_filepath,
sheet_name='Vindkraftverk',
na_values='-',
parse_dates=['Uppfört', 'Senast sparad'],
infer_datetime_format=True,
)
# Show 5 rows from the beginning
SE_re_df.head(5)
# Drop empty rows and columns
SE_re_df.dropna(axis='index', how='all', inplace=True)
SE_re_df.dropna(axis='columns', how='all', inplace=True)
# Make sure that the column Uppfört is of the date type and correctly formatted
SE_re_df['Uppfört'] = pd.to_datetime(SE_re_df['Uppfört'], format='%Y-%m-%d')
# Keep only approved ('Beviljat') and erected ('Uppfört') wind farms
subset_mask = SE_re_df['Status'].isin(['Beviljat', 'Uppfört'])
SE_re_df.drop(SE_re_df[~subset_mask].index, axis='index', inplace=True)
# Remove the farms whose capacity is not known.
subset_mask = SE_re_df['Maxeffekt (MW)'].isna()
SE_re_df.drop(SE_re_df[subset_mask].index, axis='index', inplace=True)
# Standardize string columns
string_columns = ['Modell', 'Fabrikat', 'Elområde', 'Kommun', 'Län', 'Handlingstyp', 'Placering']
for col in string_columns:
util.helper.standardize_column(SE_re_df, col, lower=False)
# Choose the translation terms for Sweden and create the translation dictionary
idx_SE = columnnames[columnnames['country'] == 'SE'].index
column_dict_SE = columnnames.loc[idx_SE].set_index('original_name')['opsd_name'].to_dict()
# Show the dictionary
display(column_dict_SE)
# Translate column names
SE_re_df.rename(columns=column_dict_SE, inplace=True)
# The year 1900 appears to be a placeholder for unknown commissioning dates, so treat it as missing
SE_re_df.loc[(SE_re_df['commissioning_date'].dt.year == 1900), 'commissioning_date'] = np.nan
SE_re_df['data_source'] = 'Vindbrukskollen'
# Choose the translation terms for Sweden
idx_SE = valuenames[valuenames['country'] == 'SE'].index
value_dict_SE = valuenames.loc[idx_SE].set_index('original_name')['opsd_name'].to_dict()
value_dict_SE
# Replace all original value names by the OPSD value names
SE_re_df.replace(value_dict_SE, inplace=True)
# Set nans in the technology column to 'Unknown or unspecified technology'
SE_re_df['technology'].fillna('Unknown or unspecified technology', inplace=True)
# Add energy level 2
SE_re_df['energy_source_level_2'] = 'Wind'
# Add energy_source_level_1
SE_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
SE_re_df[['energy_source_level_1', 'energy_source_level_2', 'technology']].drop_duplicates()
# Get latitude and longitude columns
lat, lon = util.helper.sweref99tm_latlon_transform(SE_re_df['sweref99tm_north'], SE_re_df['sweref99tm_east'])
# Include them in the dataframe
SE_re_df['lat'] = lat
SE_re_df['lon'] = lon
SE_postcode2nuts_filepath = filepaths['Eurostat']
SE_re_df = nuts_converter.add_nuts_information(SE_re_df, 'SE', SE_postcode2nuts_filepath,
lau_name_type='NATIONAL', how=['municipality', 'latlon'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = SE_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', SE_re_df.shape[0], 'facilities in SE.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = SE_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', SE_re_df.shape[0], 'facilities in SE.')
# Define which columns should be kept
columns_to_keep = ['municipality', 'county', 'nuts_1_region', 'nuts_2_region', 'nuts_3_region', 'lat', 'lon',
'energy_source_level_1', 'energy_source_level_2', 'technology', 'se_vindbrukskollen_id',
'site_name', 'manufacturer',
'electrical_capacity', 'commissioning_date', 'data_source']
# Keep only the selected columns
SE_re_df = SE_re_df.loc[:, columns_to_keep]
visualize_points(SE_re_df['lat'],
SE_re_df['lon'],
'Sweden',
categories=SE_re_df['technology']
)
SE_re_df.reset_index(inplace=True, drop=True)
SE_re_df.to_pickle('intermediate/SE_renewables.pickle')
del SE_re_df
# Download the data and get the local paths to the corresponding files
print('Start:', datetime.datetime.now())
downloader = Downloader(version, input_directory_path, source_list_filepath, download_from)
filepaths = downloader.download_data_for_country('CZ')
print('End:', datetime.datetime.now())
CZ_re_filepath = filepaths['ERU']
CZ_geo_filepath = filepaths['Geonames']
CZ_postcode2nuts_filepath = filepaths['Eurostat']
# Define a converter for CZ postcode strings
def to_cz_postcode_format(postcode_str):
return postcode_str[:3] + ' ' + postcode_str[3:]
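Czech postcodes are written as 'NNN NN', so the converter only needs to insert a space (restated here for a standalone check):

```python
def to_cz_postcode_format(postcode_str):
    # '11000' -> '110 00'
    return postcode_str[:3] + ' ' + postcode_str[3:]

print(to_cz_postcode_format('11000'))  # 110 00
```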
# Read the data from the csv file
CZ_re_df = pd.read_csv(CZ_re_filepath,
escapechar='\\',
dtype = {
'number_of_sources' : int,
},
parse_dates=['licence_approval_date'],
infer_datetime_format=True,
converters = {
'site_postcode' : to_cz_postcode_format,
'holder_postcode' : to_cz_postcode_format
}
)
# Show a few rows
CZ_re_df.head(5)
CZ_re_df.dtypes
mwe_columns = [col for col in CZ_re_df.columns if 'megawatts_electric' in col and col != 'megawatts_electric_total']
mwt_columns = [col for col in CZ_re_df.columns if 'megawatts_thermal' in col and col != 'megawatts_thermal_total']
def count_types(row):
    # Count how many distinct source types have nonzero electrical capacity in this row
    return sum(row[col] > 0 for col in mwe_columns)
CZ_re_df.apply(count_types, axis=1).value_counts()
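On a toy frame, counting the distinct source types per licence looks like this (column names mimic the real megawatts_electric_* columns; the values are invented):

```python
import pandas as pd

mwe_columns = ['megawatts_electric_hydro', 'megawatts_electric_solar']
df = pd.DataFrame({'megawatts_electric_hydro': [1.5, 0.0],
                   'megawatts_electric_solar': [0.3, 2.0]})
type_counts = df.apply(lambda row: sum(row[col] > 0 for col in mwe_columns), axis=1)
print(type_counts.tolist())  # [2, 1]
```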
# Drop empty columns and rows
CZ_re_df.dropna(axis='index', how='all', inplace=True)
CZ_re_df.dropna(axis='columns', how='all', inplace=True)
# Drop rows with no data on electrical capacity and the rows where total electrical capacity is 0
empty_mask = (CZ_re_df['megawatts_electric_total'] == 0) | (CZ_re_df['megawatts_electric_total'].isnull())
CZ_re_df = CZ_re_df.loc[~empty_mask]
CZ_re_df.reset_index(inplace=True, drop=True)
# Replace NANs with zeroes in mwe and mwt columns
replacement_dict = {col : 0 for col in mwe_columns + mwt_columns}
CZ_re_df.fillna(replacement_dict, inplace=True)
# Drop the rows where renewable-energy share of the total capacity is equal to zero
conventional_mask = (CZ_re_df['megawatts_electric_hydro'] +
CZ_re_df['megawatts_electric_solar'] +
CZ_re_df['megawatts_electric_biogas_and_biomass'] +
CZ_re_df['megawatts_electric_wind'] +
CZ_re_df['megawatts_electric_unspecified']) == 0
CZ_re_df = CZ_re_df.loc[~conventional_mask]
CZ_re_df.reset_index(inplace=True, drop=True)
# Define the function which will extract the data about the type of energy specified by the given column
# and return it as a dataframe in the "long format"
def select_and_reformat(df, column):
# Use the mwe and mwt columns defined above
global mwe_columns
global mwt_columns
# Declare the given column and its mwt counterpart as exceptions
mwt_exception = column.replace('electric', 'thermal')
exceptions = [column, mwt_exception]
# Exclude all the mwe and mwt columns which do not correspond to the given energy type
columns_to_skip = [col for col in mwe_columns + mwt_columns if col not in exceptions]
# Keep all the other columns
columns_to_keep = [col for col in df.columns if col not in columns_to_skip]
# Find the stations which use the given type of energy
selection_mask = (df[column] > 0)
# Keep them and select the columns we decided to keep
selection_df = df[selection_mask][columns_to_keep]
# Create a new column which will indicate the energy type
selection_df['energy_type'] = " ".join(column.split('_')[2:])
# Remove the energy type name from the columns representing electrical capacity
# and megawatts thermal
selection_df.rename(columns = {column : 'electrical_capacity',
mwt_exception : 'megawatts_thermal'},
inplace=True)
selection_df.drop(columns=['megawatts_electric_total', 'megawatts_thermal_total'],
inplace=True)
# Ensure the rows are properly indexed as 0,1,2,...
selection_df.reset_index(inplace=True, drop=True)
return selection_df
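The wide-to-long reshaping that select_and_reformat() performs per energy type can be illustrated on toy data. A minimal pure-Python sketch (the site and column names here are made up; the real code operates on a pandas DataFrame):

```python
# Toy wide-format rows: one row per site, one capacity column per energy type.
rows = [
    {'site': 'A', 'mwe_solar': 2.0, 'mwe_wind': 0.0},
    {'site': 'B', 'mwe_solar': 0.5, 'mwe_wind': 1.5},
]

# Produce one long-format row per (site, energy type) with positive capacity.
long_rows = []
for row in rows:
    for col in ('mwe_solar', 'mwe_wind'):
        if row[col] > 0:
            long_rows.append({'site': row['site'],
                              'energy_type': col.split('_')[1],
                              'electrical_capacity': row[col]})

print(long_rows)
```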
# Create a dataframe for each energy type
dataframes = []
for column in mwe_columns:
selection = select_and_reformat(CZ_re_df, column)
energy_type = selection['energy_type'].unique()[0]
dataframes.append(selection)
# Concatenate the dataframes
CZ_re_df = pd.concat(dataframes, ignore_index=False)
CZ_re_df.reset_index(inplace=True, drop=True)
CZ_re_df
# Choose the translation terms for CZ and create the translation dictionary
idx_CZ = columnnames[columnnames['country'] == 'CZ'].index
column_dict_CZ = columnnames.loc[idx_CZ].set_index('original_name')['opsd_name'].to_dict()
# Show the dictionary
column_dict_CZ
# Translate column names
CZ_re_df.rename(columns=column_dict_CZ, inplace=True)
# Choose the translation terms for Czech Republic
idx_CZ = valuenames[valuenames['country'] == 'CZ'].index
# Choose the translation terms for energy source level 3
energy3_dict_CZ = valuenames.loc[idx_CZ].set_index('original_name')['opsd_name'].to_dict()
energy3_dict_CZ
# Add energy source level 3
CZ_re_df['energy_source_level_3'] = CZ_re_df['technology'].replace(energy3_dict_CZ)
# Choose the terms for energy source level 2
energy2_dict_CZ = valuenames.loc[idx_CZ].set_index('original_name')['energy_source_level_2'].to_dict()
CZ_re_df['energy_source_level_2'] = CZ_re_df['technology'].replace(energy2_dict_CZ)
# Standardize the values for technology
# 1. np.nan means that technology should not be specified for the respective kind of sources
# according to the hierarchy (http://open-power-system-data.org/2016-10-25-opsd_tree.svg)
# 2. 'Other or unspecified technology' means that technology should be specified
# but it was unclear or missing in the original dataset.
technology_dict = {
'biogas and biomass' : np.nan,
'wind' : 'Onshore',
'solar' : 'Other or unspecified technology',
'hydro' : 'Run-of-river',
'unspecified' : np.nan
}
CZ_re_df['technology'] = CZ_re_df['technology'].replace(technology_dict)
# Add energy_source_level_1
CZ_re_df['energy_source_level_1'] = 'Renewable energy'
# Show the hierarchy of sources present in the dataset
CZ_re_df[['energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology']].drop_duplicates()
CZ_re_df['data_source'] = 'ERU'
# Get geo-information
zip_CZ_geo = zipfile.ZipFile(CZ_geo_filepath)
# Read generated postcode/location file
CZ_geo = pd.read_csv(zip_CZ_geo.open('CZ.txt'), sep='\t', header=None)
# add column names as defined in associated readme file
CZ_geo.columns = ['country_code', 'postcode', 'place_name', 'admin_name1',
'admin_code1', 'admin_name2', 'admin_code2', 'admin_name3',
'admin_code3', 'lat', 'lon', 'accuracy']
# Drop rows of possible duplicate postal_code
CZ_geo.drop_duplicates('postcode', keep='last', inplace=True)
# Add longitude/latitude information assigned by postcode
CZ_re_df = pd.merge(CZ_re_df,
CZ_geo[['lat', 'lon', 'postcode']],
left_on='postcode',
right_on='postcode',
how='left'
)
CZ_postcode2nuts_filepath = filepaths['Eurostat']
CZ_re_df = nuts_converter.add_nuts_information(CZ_re_df, 'CZ', CZ_postcode2nuts_filepath, how=['postcode'])
# Report the number of facilities whose NUTS codes were successfully determined
determined = CZ_re_df['nuts_1_region'].notnull().sum()
print('NUTS successfully determined for', determined, 'out of', CZ_re_df.shape[0], 'facilities in CZ.')
# Report the number of facilities whose NUTS codes could not be determined
not_determined = CZ_re_df['nuts_1_region'].isnull().sum()
print('NUTS could not be determined for', not_determined, 'out of', CZ_re_df.shape[0], 'facilities in CZ.')
# Define which columns should be kept
columns_to_keep = ['site_name', 'region', 'municipality', 'locality', 'postcode',
'nuts_1_region', 'nuts_2_region', 'nuts_3_region', 'lat', 'lon',
'energy_source_level_1', 'energy_source_level_2', 'energy_source_level_3', 'technology',
'owner', 'electrical_capacity', 'data_source']
# Keep only the selected columns
CZ_re_df = CZ_re_df.loc[:, columns_to_keep]
CZ_re_df.drop_duplicates(inplace=True)
CZ_re_df.reset_index(drop=True, inplace=True)
visualize_points(CZ_re_df['lat'],
CZ_re_df['lon'],
'Czechia',
categories=CZ_re_df['energy_source_level_2']
)
CZ_re_df.reset_index(inplace=True, drop=True)
CZ_re_df.to_pickle('intermediate/CZ_renewables.pickle')
del CZ_re_df
zip_archive = zipfile.ZipFile(input_directory_path + '.zip', 'w', zipfile.ZIP_DEFLATED)
print("Zipping the raw files...")
for filename in os.listdir(input_directory_path):
print("Adding", filename, "to the zip.")
filepath = os.path.join(input_directory_path, filename)
zip_archive.write(filepath)
zip_archive.close()
print("Done!")
#shutil.rmtree(input_directory_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Script setup
Step2: Settings
Step3: Update the download links
Step4: Note that, as of August 25, 2020, the following sources are available only from the OPSD server and the data will be downloaded from it even if download_from is set to 'original_sources'
Step5: Set up the NUTS converter
Step6: Setup translation dictionaries
Step7: Download and process per country
Step8: Translate column names
Step9: Add information and choose columns
Step10: Merge DataFrames
Step11: Translate values and harmonize energy source level 2
Step12: Separate and assign energy source level 1 - 3 and technology
Step13: According to the OPSD energy hierarchy, the power plants whose energy_source_level_2 is either Storage or Other fossil fuels do not belong to the class of renewable-energy facilities. Therefore, we can remove them.
Step14: Summary of DataFrame
Step15: Transform electrical capacity from kW to MW
Step16: Georeferencing
Step17: Merge geometry information by using the postcode
Step18: Transform geoinformation
Step19: Remove the utm_zone "32" from the utm_east value
Step20: Conversion UTM to latitude and longitude
Step21: Check
Step22: Remove temporary columns
Step23: Save temporary Pickle (to have a point to quickly return to if things break after this point)
Step24: Clean data
Step25: Assign NUTS codes
Step26: Visualize
Step27: Save
Step28: Denmark DK
Step29: The function for reading the data on the wind turbines.
Step30: Translate column names
Step31: Add data source and missing information
Step32: Correct the dates
Step33: Translate values and harmonize energy source level 2
Step34: Georeferencing
Step35: Postcode to lat/lon (WGS84)
Step36: Merge DataFrames, add NUTS information and choose columns
Step37: Let us check geoinformation on the facilities for which NUTS codes could not be determined.
Step38: As we see, neither municipality information nor latitude/longitude coordinates are present for those power plants, so there was no way to assign them their NUTS codes.
Step39: Remove duplicate rows
Step40: Transform electrical_capacity from kW to MW
Step41: Visualize
Step42: Save
Step43: France FR
Step44: ODRE data
Step45: Translate column names
Step46: Add data source
Step47: Translate values
Step48: Correct site names
Step49: Replace suspicious dates with N/A
Step50: Check missing values
Step51: As we see above, no column contains only null values, so we do not need to drop any.
Step52: No row contains only null values either, so there is no need for filtering on that basis.
Step53: In order to facilitate further processing, we can remove the rows that we know for sure we won't need.
Step54: Standardize source levels 1-3 and technology
Step55: First, let us standardize the values for energy source level 2 and technology.
Step56: Let us now deal with the third level of the energy hierarchy. Only Bioenergy has the third level. Information on it can be found in the column energy_source_level_3 (whose original name was combustible).
Step57: We see that only the following two corrections are needed
Step58: Finally, we declare all the plants as renewable and show the final hierarchy.
Step59: Georeferencing
Step60: Determine NUTS codes
Step61: Let us now check the facilities without NUTS classification.
Step62: We see that no row with known longitude and latitude was left unclassified.
Step63: We also see that problematic municipality names are either not present in the official translation tables or more than one municipality in the tables bears them.
Step64: Therefore, we can confirm that NUTS classification codes were determined with the highest precision possible.
Step65: Old data
Step66: This French data source contains the number of installations and the sum of installed capacity per energy source per municipality. The list is limited to the plants which are covered by article 10 of February 2000 through an agreement on a purchase commitment.
Step67: Add data source
Step68: Translate values and harmonize energy source level 2
Step69: Separate and assign energy source level 1-3 and technology
Step70: Show the hierarchy of the energy types present in the data.
Step71: Georeferencing
Step72: Determine NUTS codes
Step73: As we can see, the NUTS codes were determined successfully for all the facilities in the dataset.
Step74: Select the columns
Step75: Visualize
Step76: Save
Step77: Poland PL
Step78: Load and explore the data
Step79: There are only five columns
Step80: To ease the work, we can translate the columns' names to English using the OPSD translation tables.
Step81: Inspect the data
Step82: We can see that each name comes in two forms
Step83: Now, let us check the strings for districts (powiats).
Step84: As we see in the list, the same district can be referred to by more than one string. We identify the following ways a district is referred to in the dataset
Step85: Harmonising energy levels
Step86: Georeferencing (NUTS classification)
Step87: We can now apply a heuristic method for finding the corresponding name in the GeoNames data. It is based on similarity between strings. It turns out that it works fine, except for a couple of cases, which we deal with manually.
Step88: The following districts have not been mapped correctly
Step89: Show the rows for which we could not find postcodes.
Step90: There are only 17 such power plants and all of them are placed in the districts which we deliberately left out for manual classification.
Step91: Add data source and year
Step92: Select columns
Step93: Save
Step94: Switzerland CH
Step95: Translate column names
Step96: Add data source
Step97: Harmonize energy source hierarchy and translate values
Step98: Separate and assign energy source level 1-3 and technology
Step99: The power plants with energy_source_level_3=Biomass and biogas and technology=Steam turbine are not renewable-energy power plants, so we can remove them.
Step100: Replace the rest of the original terms with their OPSD equivalents
Step101: Georeferencing
Step102: Add NUTS information
Step103: Let us check the stations for which NUTS codes could not be determined.
Step104: We see that the municipalities of the plants for which we could not determine the NUTS codes cannot be found in the official translation tables, so there was no way to assign them their NUTS classification codes.
Step105: Select columns to keep
Step106: Visualize
Step107: Save
Step108: Check and validation of the renewable power plants list as well as the creation of CSV/XLSX/SQLite files can be found in Part 2 of this script. It also generates a daily time series of cumulated installed capacities by energy source.
Step109: Clean the data
Step110: Translate column names
Step111: Add data source
Step112: Translate values and harmonise energy source levels 1-3 and technology
Step113: Georeferencing
Step114: Cases with unknown Easting and Northing coordinates
Step115: Cases for approximation
Step116: Add NUTS information
Step117: Let us see the facilities for which the NUTS codes could not be determined.
Step118: There are only two such rows. The latitude and longitude coordinates, as well as municipality codes, are missing from the data set, so the NUTS codes could not have been determined.
Step119: We see that some facilities appear to be located in the sea. Let us plot the original OSGB coordinates to see if translation to the standard longitude and latitude coordinates failed for some locations.
Step120: As we can see, the maps are basically the same, which confirms that translation to the longitude and latitude coordinates is done correctly and that they reflect the positions specified by the original X and Y OSGB coordinates.
Step121: Save
Step122: Sweden
Step123: Load the data
Step124: Clean the data
Step125: Translate column names
Step126: Correct the dates
Step127: Add source
Step128: Translate values and harmonize energy source levels
Step129: Georeferencing
Step130: Assigning NUTS codes
Step131: Select the columns to keep
Step132: Visualize
Step133: Save
Step134: Czech Republic
Step135: Let's inspect the dataframe's columns
Step136: It contains 30 columns
Step137: As of April 2020, as we can see in the output above, there are only 4 sites which use more than one type of renewable energy, and there are 193 sites which do not use renewable energy at all.
Step138: Reformat the data
Step139: Let us see what is this restructured dataframe like.
Step140: The number of columns has been reduced as we have transformed the data to the long format. The rows representing conventional power plants have been excluded. Since only a few sites use multiple types of energy, the total number of rows has not increased.
Step141: Translate values and harmonize energy levels
Step142: Add data source
Step143: Georeferencing
Step144: Assign NUTS codes
Step145: Select the columns to keep
Step146: Drop duplicates
Step147: Visualize
Step148: Save
Step149: Zip the raw data
|
<ASSISTANT_TASK:>
Python Code:
# @title Install
!pip install --upgrade --no-cache-dir recsim
#@title Generic imports
import numpy as np
from gym import spaces
import matplotlib.pyplot as plt
from scipy import stats
#@title RecSim imports
from recsim import document
from recsim import user
from recsim.choice_model import MultinomialLogitChoiceModel
from recsim.simulator import environment
from recsim.simulator import recsim_gym
class LTSDocument(document.AbstractDocument):
def __init__(self, doc_id, kaleness):
self.kaleness = kaleness
# doc_id is an integer representing the unique ID of this document
super(LTSDocument, self).__init__(doc_id)
def create_observation(self):
return np.array([self.kaleness])
@staticmethod
def observation_space():
return spaces.Box(shape=(1,), dtype=np.float32, low=0.0, high=1.0)
def __str__(self):
return "Document {} with kaleness {}.".format(self._doc_id, self.kaleness)
class LTSDocumentSampler(document.AbstractDocumentSampler):
def __init__(self, doc_ctor=LTSDocument, **kwargs):
super(LTSDocumentSampler, self).__init__(doc_ctor, **kwargs)
self._doc_count = 0
def sample_document(self):
doc_features = {}
doc_features['doc_id'] = self._doc_count
doc_features['kaleness'] = self._rng.random_sample()
self._doc_count += 1
return self._doc_ctor(**doc_features)
sampler = LTSDocumentSampler()
for i in range(5): print(sampler.sample_document())
d = sampler.sample_document()
print("Documents have observation space:", d.observation_space(), "\n"
"An example realization is: ", d.create_observation())
class LTSUserState(user.AbstractUserState):
def __init__(self, memory_discount, sensitivity, innovation_stddev,
choc_mean, choc_stddev, kale_mean, kale_stddev,
net_kaleness_exposure, time_budget, observation_noise_stddev=0.1
):
## Transition model parameters
##############################
self.memory_discount = memory_discount
self.sensitivity = sensitivity
self.innovation_stddev = innovation_stddev
## Engagement parameters
self.choc_mean = choc_mean
self.choc_stddev = choc_stddev
self.kale_mean = kale_mean
self.kale_stddev = kale_stddev
## State variables
##############################
self.net_kaleness_exposure = net_kaleness_exposure
self.satisfaction = 1 / (1 + np.exp(-sensitivity * net_kaleness_exposure))
self.time_budget = time_budget
# Noise
self._observation_noise = observation_noise_stddev
def create_observation(self):
        """User's state is not observable."""
clip_low, clip_high = (-1.0 / (1.0 * self._observation_noise),
1.0 / (1.0 * self._observation_noise))
noise = stats.truncnorm(
clip_low, clip_high, loc=0.0, scale=self._observation_noise).rvs()
noisy_sat = self.satisfaction + noise
return np.array([noisy_sat,])
@staticmethod
def observation_space():
return spaces.Box(shape=(1,), dtype=np.float32, low=-2.0, high=2.0)
# scoring function for use in the choice model -- the user is more likely to
# click on more chocolatey content.
def score_document(self, doc_obs):
return 1 - doc_obs
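The satisfaction variable used above is a logistic (sigmoid) function of net kaleness exposure, as set in LTSUserState.__init__. A stdlib-only sketch of that mapping (the sensitivity value mirrors the sampler default used below):

```python
import math

def satisfaction(net_kaleness_exposure, sensitivity=0.01):
    # Logistic squashing of exposure into (0, 1), as in LTSUserState.__init__.
    return 1.0 / (1.0 + math.exp(-sensitivity * net_kaleness_exposure))

print(satisfaction(0.0))  # -> 0.5 (neutral exposure, neutral satisfaction)
```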
class LTSStaticUserSampler(user.AbstractUserSampler):
_state_parameters = None
def __init__(self,
user_ctor=LTSUserState,
memory_discount=0.9,
sensitivity=0.01,
innovation_stddev=0.05,
choc_mean=5.0,
choc_stddev=1.0,
kale_mean=4.0,
kale_stddev=1.0,
time_budget=60,
**kwargs):
self._state_parameters = {'memory_discount': memory_discount,
'sensitivity': sensitivity,
'innovation_stddev': innovation_stddev,
'choc_mean': choc_mean,
'choc_stddev': choc_stddev,
'kale_mean': kale_mean,
'kale_stddev': kale_stddev,
'time_budget': time_budget
}
super(LTSStaticUserSampler, self).__init__(user_ctor, **kwargs)
def sample_user(self):
starting_nke = ((self._rng.random_sample() - .5) *
(1 / (1.0 - self._state_parameters['memory_discount'])))
self._state_parameters['net_kaleness_exposure'] = starting_nke
return self._user_ctor(**self._state_parameters)
sampler = LTSStaticUserSampler()
starting_nke = []
for i in range(1000):
sampled_user = sampler.sample_user()
starting_nke.append(sampled_user.net_kaleness_exposure)
_ = plt.hist(starting_nke)
class LTSResponse(user.AbstractResponse):
# The maximum degree of engagement.
MAX_ENGAGEMENT_MAGNITUDE = 100.0
def __init__(self, clicked=False, engagement=0.0):
self.clicked = clicked
self.engagement = engagement
def create_observation(self):
return {'click': int(self.clicked), 'engagement': np.array(self.engagement)}
@classmethod
def response_space(cls):
# `engagement` feature range is [0, MAX_ENGAGEMENT_MAGNITUDE]
return spaces.Dict({
'click':
spaces.Discrete(2),
'engagement':
spaces.Box(
low=0.0,
high=cls.MAX_ENGAGEMENT_MAGNITUDE,
shape=tuple(),
dtype=np.float32)
})
def user_init(self,
slate_size,
seed=0):
super(LTSUserModel,
self).__init__(LTSResponse,
LTSStaticUserSampler(LTSUserState,
seed=seed), slate_size)
self.choice_model = MultinomialLogitChoiceModel({})
def simulate_response(self, slate_documents):
# List of empty responses
responses = [self._response_model_ctor() for _ in slate_documents]
    # Get click from the choice model.
self.choice_model.score_documents(
self._user_state, [doc.create_observation() for doc in slate_documents])
scores = self.choice_model.scores
selected_index = self.choice_model.choose_item()
# Populate clicked item.
self._generate_response(slate_documents[selected_index],
responses[selected_index])
return responses
def generate_response(self, doc, response):
response.clicked = True
# linear interpolation between choc and kale.
engagement_loc = (doc.kaleness * self._user_state.choc_mean
+ (1 - doc.kaleness) * self._user_state.kale_mean)
engagement_loc *= self._user_state.satisfaction
engagement_scale = (doc.kaleness * self._user_state.choc_stddev
+ ((1 - doc.kaleness)
* self._user_state.kale_stddev))
log_engagement = np.random.normal(loc=engagement_loc,
scale=engagement_scale)
response.engagement = np.exp(log_engagement)
def update_state(self, slate_documents, responses):
for doc, response in zip(slate_documents, responses):
if response.clicked:
innovation = np.random.normal(scale=self._user_state.innovation_stddev)
net_kaleness_exposure = (self._user_state.memory_discount
* self._user_state.net_kaleness_exposure
- 2.0 * (doc.kaleness - 0.5)
+ innovation
)
self._user_state.net_kaleness_exposure = net_kaleness_exposure
satisfaction = 1 / (1.0 + np.exp(-self._user_state.sensitivity
* net_kaleness_exposure)
)
self._user_state.satisfaction = satisfaction
self._user_state.time_budget -= 1
return
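Setting the Gaussian innovation term aside, the exposure update above is a deterministic linear recurrence. A stdlib-only sketch that follows the sign convention of update_state():

```python
def next_nke(nke, doc_kaleness, memory_discount=0.9):
    # Deterministic part of the transition in update_state() above
    # (innovation noise omitted).
    return memory_discount * nke - 2.0 * (doc_kaleness - 0.5)

print(next_nke(0.0, 1.0))  # -> -1.0
print(next_nke(0.0, 0.5))  # -> 0.0 (a neutral document leaves exposure unchanged)
```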
def is_terminal(self):
    """Returns a boolean indicating if the session is over."""
return self._user_state.time_budget <= 0
LTSUserModel = type("LTSUserModel", (user.AbstractUserModel,),
{"__init__": user_init,
"is_terminal": is_terminal,
"update_state": update_state,
"simulate_response": simulate_response,
"_generate_response": generate_response})
slate_size = 3
num_candidates = 10
ltsenv = environment.Environment(
LTSUserModel(slate_size),
LTSDocumentSampler(),
num_candidates,
slate_size,
resample_documents=True)
def clicked_engagement_reward(responses):
reward = 0.0
for response in responses:
if response.clicked:
reward += response.engagement
return reward
lts_gym_env = recsim_gym.RecSimGymEnv(ltsenv, clicked_engagement_reward)
observation_0 = lts_gym_env.reset()
print('Observation 0')
print('Available documents')
doc_strings = ['doc_id ' + key + " kaleness " + str(value) for key, value
in observation_0['doc'].items()]
print('\n'.join(doc_strings))
print('Noisy user state observation')
print(observation_0['user'])
# Agent recommends the first three documents.
recommendation_slate_0 = [0, 1, 2]
observation_1, reward, done, _ = lts_gym_env.step(recommendation_slate_0)
print('Observation 1')
print('Available documents')
doc_strings = ['doc_id ' + key + " kaleness " + str(value) for key, value
in observation_1['doc'].items()]
print('\n'.join(doc_strings))
rsp_strings = [str(response) for response in observation_1['response']]
print('User responses to documents in the slate')
print('\n'.join(rsp_strings))
print('Noisy user state observation')
print(observation_1['user'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The main imports we use from RecSim are user and document -- they provide the abstract classes needed to instantiate all components of the environment.
Step2: A Document Model
Step3: Having implemented a document template, we now need a document sampler. A document sampler is a generative distribution for documents, which will get invoked to regenerate the corpus either after every step or after every session (depending on runner_lib settings). The object of interest is the sample_document() function required by the base class. It must return a document sampled from our desired distribution. In our case, it returns a document with a kaleness score sampled from a uniform distribution.
Step4: With this we can now simulate documents.
Step6: In RecSim, we never have to manually sample documents---the simulator does this, so a document sampler is a required argument of the simulation.
Step7: Similarly to our document model, we need a starting state sampler that sets the starting user state for every session. For this tutorial, we will just sample the starting $\text{nke}_0$ and keep all the static parameters the same, meaning that we essentially deal with the same user at different levels of satisfaction. One can, of course, easily extend this to also generate users with different parameters by randomizing the values.
Step8: Let's try this out!
Step9: Response model
Step10: User model
Step11: The simulate_response() method takes in a slate (list) of recommended (i.e., produced by the agent) LTSDocuments and must output a slate of user responses. The k-th response in the slate of responses corresponds to the k-th document in the recommendation slate. In this case, we pick one document to click on based on our choice model, and produce an engagement value. We will let the responses to the unclicked documents be vacuous; however, one might use them in more subtle ways (e.g., recording whether the user inspected that document, etc.).
Step12: The update_state() method implements our state transition kernel. It consumes the recommended slate, as well as the actual choices (responses) to induce a state transition. The state is modified in-place, so the function does not return anything.
Step14: Finally, the session expires when the time budget goes to 0.
Step15: We have all the components to instantiate a user model, so let's wrap them up in a class.
Step16: Finally, we assemble all components into an Environment.
Step17: Recap
Step18: Now, we simply use the OpenAI gym wrapper, which essentially provides a familiar step-based API.
|
<ASSISTANT_TASK:>
Python Code:
#a = 2
#b = 1
a, b, c = 2, 3, "Hello World!"
#print a # works in python2 but not python3
print( a )
print ("Hello World!")
print (3*3)
print (3**2)
print (2+2)
myint = 7
myfloat1 = 7.0; myfloat2 = float(7)
print (myint/2); print (myfloat1/2); print (myfloat2*2); print (myfloat2**2)
mystring1 = 'Hello'
# in R mystring1 <- 'Hello'
mystring2 = "World"
mystring12 = mystring1+" "+mystring2+"!"+"?"+"!"
#add a comment
print (mystring12)
dir()
del mystring1, mystring2
dir()
print (mystring12)
print (len(mystring12))
print (mystring12 * 2)
print (3.0 / 2)
print (3 / 2.)
print (3 / 2)
print (mystring12)
print (mystring12[0])
print (mystring12[1:2])
# 1:n [1:n]
print (mystring12[:])
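Slicing uses half-open intervals: s[a:b] takes the characters from index a up to, but not including, index b, and negative indices count from the end of the string. A few illustrative cases:

```python
s = "Hello World!?!"
print (s[0:5])   # Hello (indices 0..4)
print (s[6:])    # World!?! (from index 6 to the end)
print (s[-3:])   # !?! (the last three characters)
```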
mylist = [1, 2, 3.]
print (mylist)
print (mylist[0])
mylist = [1, 2, 3, "2", "Hello World", [1, 2, 3] ]
print (mylist)
print (mylist[-1])
print (mylist[2]*3)
firstname= ["Mansoore","Shakib", "Mahdi","Somayeh"]
lastname=["Razmkhah", "Panah", "Alehdaghi","Noshadi"]
print (firstname)
print (lastname)
#The "zip" function pairs several lists of the same size.
firstlast = zip(firstname, lastname)
print (list(firstlast))
x = 2
print (x == 2)
print (x < 2)
print (not x < 2)
print( (x == 2)*(x > 1) )
a = 0
if a == 0:
print( "a equals zero" ) # do not use a is equal to zero (bad English)
else:
print( "a is non-zero" ) # do not use a is not equal to zero (bad English)
if x==2:
print( "x equals two" )
else:
print( "x differs" )
print( firstname )
if "John" in firstname:
    print( "John is in the list" )
print( range(10) )
print( range(2,6) )
print( range(2,14,4) )
print( range(len(firstname)))
for i in range(len(firstname)):
print( firstname[i] )
for name in firstname:
print( name )
def sq(x):
return x**2
print( sq(75) )
def sqlist1(X):
return [sq(x) for x in X]
sqlist1(range(6))
def sqlist2(X):
Y=[0]*len(X)
for i in range(len(X)):
Y[i]=sq(X[i])
return Y
sqlist2(range(6))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can find more examples at LearnPython & PythonTutorial.
Step2: You may separate commands using ";"
Step3: print myfloat1
Step4: This is a box of comments
Step5: 3. Lists
Step6: Here is a simple example that shows how a list is different from an array.
Step7: Exercise <br>
Step8: You may insert several other elements into the list using "append", "insert", "index" and several others. See below for more details
Step9: Lines and Indentation
Step10: 5. For Loop
Step11: Exercise <br>
Step12: Exercise <br>
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Chris Holdgraf <choldgraf@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Nicolas Barascud <nicolas.barascud@ens.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from os.path import join
import mne
from mne.decoding import ReceptiveField
from sklearn.model_selection import KFold
from sklearn.preprocessing import scale
path = mne.datasets.mtrf.data_path()
decim = 2
data = loadmat(join(path, 'speech_data.mat'))
raw = data['EEG'].T
speech = data['envelope'].T
sfreq = float(data['Fs'])
sfreq /= decim
speech = mne.filter.resample(speech, down=decim, npad='auto')
raw = mne.filter.resample(raw, down=decim, npad='auto')
# Read in channel positions and create our MNE objects from the raw data
montage = mne.channels.make_standard_montage('biosemi128')
info = mne.create_info(montage.ch_names, sfreq, 'eeg').set_montage(montage)
raw = mne.io.RawArray(raw, info)
n_channels = len(raw.ch_names)
# Plot a sample of brain and stimulus activity
fig, ax = plt.subplots()
lns = ax.plot(scale(raw[:, :800][0].T), color='k', alpha=.1)
ln1 = ax.plot(scale(speech[0, :800]), color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['EEG', 'Speech Envelope'], frameon=False)
ax.set(title="Sample activity", xlabel="Time (s)")
mne.viz.tight_layout()
# Define the delays that we will use in the receptive field
tmin, tmax = -.2, .4
# Initialize the model
rf = ReceptiveField(tmin, tmax, sfreq, feature_names=['envelope'],
estimator=1., scoring='corrcoef')
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
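KFold partitions the sample indices into n_splits consecutive folds, using each fold once as the held-out test set. A stdlib-only sketch of the idea (it ignores the remainder handling and shuffling options of the real scikit-learn KFold):

```python
def kfold_indices(n_samples, n_splits):
    # Yield (train, test) index lists; each fold is held out exactly once.
    fold = n_samples // n_splits
    for k in range(n_splits):
        test = list(range(k * fold, (k + 1) * fold))
        train = [i for i in range(n_samples) if i not in test]
        yield train, test

for train, test in kfold_indices(6, 3):
    print(train, test)
```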
# Prepare model data (make time the first dimension)
speech = speech.T
Y, _ = raw[:] # Outputs for the model
Y = Y.T
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
scores = np.zeros((n_splits, n_channels))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
rf.fit(speech[train], Y[train])
scores[ii] = rf.score(speech[test], Y[test])
# coef_ is shape (n_outputs, n_features, n_delays). we only have 1 feature
coefs[ii] = rf.coef_[:, 0, :]
times = rf.delays_ / float(rf.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_scores = scores.mean(axis=0)
# Plot mean prediction scores across all channels
fig, ax = plt.subplots()
ix_chs = np.arange(n_channels)
ax.plot(ix_chs, mean_scores)
ax.axhline(0, ls='--', color='r')
ax.set(title="Mean prediction score", xlabel="Channel", ylabel="Score ($r$)")
mne.viz.tight_layout()
# Print mean coefficients across all time delays / channels (see Fig 1)
time_plot = 0.180 # For highlighting a specific time.
fig, ax = plt.subplots(figsize=(4, 8))
max_coef = mean_coefs.max()
ax.pcolormesh(times, ix_chs, mean_coefs, cmap='RdBu_r',
vmin=-max_coef, vmax=max_coef, shading='gouraud')
ax.axvline(time_plot, ls='--', color='k', lw=2)
ax.set(xlabel='Delay (s)', ylabel='Channel', title="Mean Model\nCoefficients",
xlim=times[[0, -1]], ylim=[len(ix_chs) - 1, 0],
xticks=np.arange(tmin, tmax + .2, .2))
plt.setp(ax.get_xticklabels(), rotation=45)
mne.viz.tight_layout()
# Make a topographic map of coefficients for a given delay (see Fig 2C)
ix_plot = np.argmin(np.abs(time_plot - times))
fig, ax = plt.subplots()
mne.viz.plot_topomap(mean_coefs[:, ix_plot], pos=info, axes=ax, show=False,
vmin=-max_coef, vmax=max_coef)
ax.set(title="Topomap of model coefficients\nfor delay %s" % time_plot)
mne.viz.tight_layout()
# We use the same lags as in :footcite:`CrosseEtAl2016`. Negative lags now
# index the relationship
# between the neural response and the speech envelope earlier in time, whereas
# positive lags would index how a unit change in the amplitude of the EEG would
# affect later stimulus activity (obviously this should have an amplitude of
# zero).
tmin, tmax = -.2, 0.
# Initialize the model. Here the features are the EEG data. We also specify
# ``patterns=True`` to compute inverse-transformed coefficients during model
# fitting (cf. next section and :footcite:`HaufeEtAl2014`).
# We'll use a ridge regression estimator with an alpha value similar to
# Crosse et al.
sr = ReceptiveField(tmin, tmax, sfreq, feature_names=raw.ch_names,
estimator=1e4, scoring='corrcoef', patterns=True)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
patterns = coefs.copy()
scores = np.zeros((n_splits,))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
sr.fit(Y[train], speech[train])
scores[ii] = sr.score(Y[test], speech[test])[0]
# coef_ is shape (n_outputs, n_features, n_delays). We have 128 features
coefs[ii] = sr.coef_[0, :, :]
patterns[ii] = sr.patterns_[0, :, :]
times = sr.delays_ / float(sr.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_patterns = patterns.mean(axis=0)
mean_scores = scores.mean(axis=0)
max_coef = np.abs(mean_coefs).max()
max_patterns = np.abs(mean_patterns).max()
y_pred = sr.predict(Y[test])
time = np.linspace(0, 2., 5 * int(sfreq))
fig, ax = plt.subplots(figsize=(8, 4))
lns = ax.plot(time, speech[test][sr.valid_samples_][:int(5 * sfreq)],
              color='grey', lw=2, ls='--')
ln1 = ax.plot(time, y_pred[sr.valid_samples_][:int(5 * sfreq)], color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['Envelope', 'Reconstruction'], frameon=False)
ax.set(title="Stimulus reconstruction")
ax.set_xlabel('Time (s)')
mne.viz.tight_layout()
time_plot = (-.140, -.125) # To average between two timepoints.
ix_plot = np.arange(np.argmin(np.abs(time_plot[0] - times)),
np.argmin(np.abs(time_plot[1] - times)))
fig, ax = plt.subplots(1, 2)
mne.viz.plot_topomap(np.mean(mean_coefs[:, ix_plot], axis=1),
pos=info, axes=ax[0], show=False,
vmin=-max_coef, vmax=max_coef)
ax[0].set(title="Model coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.plot_topomap(np.mean(mean_patterns[:, ix_plot], axis=1),
pos=info, axes=ax[1],
show=False, vmin=-max_patterns, vmax=max_patterns)
ax[1].set(title="Inverse-transformed coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data from the publication
Step2: Create and fit a receptive field model
Step3: Investigate model coefficients
Step4: Create and fit a stimulus reconstruction model
Step5: Visualize stimulus reconstruction
Step6: Investigate model coefficients
|
11,275
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display, HTML
from nxpd import draw
import networkx as nx
def draw_graph(
graph, labels=None
):
# create networkx graph
G = nx.DiGraph()
G.graph['dpi'] = 120
G.add_nodes_from(set([
graph[k1][k2]
for k1 in range(len(graph))
for k2 in range(len(graph[k1]))
]))
G.add_edges_from(graph)
return draw(G, show='ipynb')
from lea import *
B = Lea.boolProb  # shorthand used throughout below for boolean (Bernoulli) distributions
flip1 = Lea.fromValFreqs(('head',67),('tail',33))
flip1
P(flip1=='head')
Pf(flip1=='head')
flip1.random(10)
flip2 = flip1.clone()
flips = flip1 + '-' + flip2
flips
P(flip1==flip2)
P(flip1!=flip2)
flip1.upper()
flip1.upper()[0]
def toInt(flip):
return 1 if flip=='head' else 0
headCount1 = flip1.map(toInt)
headCount1
headCount2 = flip2.map(toInt)
headCounts = headCount1 + headCount2
headCounts
headCounts.given(flip1==flip2)
headCounts.given(flip1!=flip2)
flip1.given(headCounts==0)
flip1.given(headCounts==1)
flip1.given(headCounts==2)
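As a by-hand sanity check (plain Python, independent of Lea) of the head-count distribution above, with P(head) = 0.67 for each flip:

```python
p = 0.67  # P(head) for the biased coin above
pmf = {0: (1 - p)**2, 1: 2 * p * (1 - p), 2: p**2}
assert abs(sum(pmf.values()) - 1.0) < 1e-9
assert abs(pmf[1] - 0.4422) < 1e-9  # 2 * 0.67 * 0.33
```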
rain = B(20,100)
sprinkler = Lea.if_(rain, B(1,100), B(40,100))
grassWet = Lea.buildCPT(
( ~sprinkler & ~rain, False ),
( ~sprinkler & rain, B(80,100)),
( sprinkler & ~rain, B(90,100)),
( sprinkler & rain, B(99,100))
)
grassWet
Sunny = B(70,100)
Raise = B(1,100)
print(Sunny)
print(Raise)
# independence check
assert Sunny.given(Raise).p(True) == Sunny.p(True)
Happy = Lea.buildCPT(
(Sunny & Raise, B(100,100)),
(~Sunny & Raise, B(90,100)),
(Sunny & ~Raise, B(70,100)),
(~Sunny & ~Raise, B(10,100))
)
Happy
# Evidences
# P(H|S)=0.703
# P(H|R)=0.97
# P(H)=P(H|S)P(S)+P(H|¬S)P(¬S)=…≈0.5245
assert Happy.given(Sunny).pmf(True) == 0.703
assert Happy.given(Raise).pmf(True) == 0.97
assert Happy.pmf(True) == 0.5245
Raise.given(Happy & Sunny).pmf(True)
Raise.given(Happy).pmf(True)
Raise.given(Happy & ~Sunny).pmf(True)
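These conditionals can be cross-checked by hand with the total-probability rule and Bayes' theorem (plain Python, independent of Lea), using the probabilities defined above:

```python
p_s, p_r = 0.7, 0.01
p_h = {(True, True): 1.0, (False, True): 0.9,
       (True, False): 0.7, (False, False): 0.1}

def w(s, r):
    # joint weight of (Sunny, Raise), which are independent
    return (p_s if s else 1 - p_s) * (p_r if r else 1 - p_r)

ph = sum(p_h[(s, r)] * w(s, r) for s in (True, False) for r in (True, False))
assert abs(ph - 0.5245) < 1e-12

# Bayes: P(Raise | Happy) = P(Happy | Raise) * P(Raise) / P(Happy)
ph_given_r = p_h[(True, True)] * p_s + p_h[(False, True)] * (1 - p_s)
pr_given_h = ph_given_r * p_r / ph
assert abs(pr_given_h - 0.97 * 0.01 / 0.5245) < 1e-12
```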
graph = [
('Burglary', 'Alarm'),
('Earthquake', 'Alarm'),
('Alarm', 'John'),
('Alarm', 'Mary')
]
draw_graph(graph)
Burglary = B(1,1000)
Earthquake = B(2,1000)
Alarm = Lea.buildCPT(
(Burglary & Earthquake, B(95,100)),
(Burglary & ~Earthquake, B(94,100)),
(~Burglary & Earthquake, B(29,100)),
(~Burglary & ~Earthquake, B(1,1000)),
)
John = Lea.if_(Alarm, B(90,100), B(5,100))
Mary = Lea.if_(Alarm, B(70,100), B(1,100))
# P(+b) P(e) P(a|+b,e) P(+j|a) P(+m|a)
print('P(+b):', Burglary.pmf(True))
print('P(e):', Earthquake.pmf(True))
print('P(a|+b,e):', Alarm.given(Burglary & Earthquake).pmf(True))
print('P(+j|a):', John.given(Alarm).pmf(True))
print('P(+m|a):', Mary.given(Alarm).pmf(True))
print('=', 0.001 * 0.002 * 0.95 * 0.9 * 0.7)
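The printed product is just one term of the joint distribution; a full enumeration of the network (plain Python, independent of Lea) recovers the textbook posterior P(Burglary | John, Mary) ≈ 0.284:

```python
from itertools import product

p_b, p_e = 0.001, 0.002
p_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
p_j = {True: 0.9, False: 0.05}
p_m = {True: 0.7, False: 0.01}

def joint(b, e, a, j, m):
    # product of the CPT entries along the network above
    pb = p_b if b else 1 - p_b
    pe = p_e if e else 1 - p_e
    pa = p_a[(b, e)] if a else 1 - p_a[(b, e)]
    pj = p_j[a] if j else 1 - p_j[a]
    pm = p_m[a] if m else 1 - p_m[a]
    return pb * pe * pa * pj * pm

num = sum(joint(True, e, a, True, True) for e, a in product((True, False), repeat=2))
den = sum(joint(b, e, a, True, True) for b, e, a in product((True, False), repeat=3))
posterior = num / den
assert 0.28 < posterior < 0.29  # classic result: ~0.284
```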
graph = [
('A', 'X1'),
('A', 'X2'),
('A', 'X3'),
]
draw_graph(graph)
A = Lea.boolProb(1,2)
# using variable elimination
# X1_A = Lea.if_(A, (2,10), (6,10))
# X2_A = Lea.if_(A, (2,10), (6,10))
# X3_A = Lea.if_(A, (2,10), (6,10))
X1 = B(4, 10)
X2 = B(4, 10)
X3 = B(4, 10)
A.given(X1 & X2 & ~X3).pmf(True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flip coin
Step2: "Rain-Sprinkler-Grass" bayesian network
Step3: Happiness Test
Step4: $P(of='H', given=['S', 'R'], value=1)$
Step5: Alarm test
Step6: Test #1
|
11,276
|
<ASSISTANT_TASK:>
Python Code:
layers = [0.23, 0.34, 0.45, 0.25, 0.23, 0.35]
uppers = layers[:-1]
lowers = layers[1:]
rcs = []
for pair in zip(uppers, lowers):  # pair = (upper, lower), so rc = (lower - upper) / (lower + upper)
rc = (pair[1] - pair[0]) / (pair[1] + pair[0])
rcs.append(rc)
rcs
# Exercise
def compute_rc(layers):
    """Computes reflection coefficients given
    a list of layer impedances.
    """
uppers = layers[:-1]
lowers = layers[1:]
rcs = []
    for pair in zip(uppers, lowers):  # pair = (upper, lower), matching the vectorized version below
rc = (pair[1] - pair[0]) / (pair[1] + pair[0])
rcs.append(rc)
return rcs
compute_rc(layers)
import numpy as np # Just like importing file
biglog = np.random.random(10000000)
%timeit compute_rc(biglog)
# Exercise
def compute_rc_vector(layers):
uppers = layers[:-1]
lowers = layers[1:]
return (lowers - uppers) / (uppers + lowers)
%timeit compute_rc_vector(biglog)
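A quick numerical check (not in the original notebook) that the vectorized version agrees with an explicit loop on a small array:

```python
import numpy as np

def compute_rc_vector(layers):
    uppers, lowers = layers[:-1], layers[1:]
    return (lowers - uppers) / (uppers + lowers)

def compute_rc_loop(layers):
    # element-by-element version of the same reflection-coefficient formula
    out = np.zeros(len(layers) - 1)
    for i in range(out.size):
        out[i] = (layers[i + 1] - layers[i]) / (layers[i] + layers[i + 1])
    return out

z = np.array([0.23, 0.34, 0.45, 0.25, 0.23, 0.35])
assert np.allclose(compute_rc_vector(z), compute_rc_loop(z))
```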
from numba import jit
@jit
def compute_rc_numba(layers):
uppers = layers[:-1]
lowers = layers[1:]
return (lowers - uppers) / (uppers + lowers)
%timeit compute_rc_numba(biglog)
def compute_rc_slow(layers):
uppers = layers[:-1]
lowers = layers[1:]
rcs = np.zeros_like(uppers)
for i in range(rcs.size):
rcs[i] = (lowers[i] - uppers[i]) / (uppers[i] + lowers[i])
return rcs
%timeit compute_rc_slow(biglog)
@jit
def compute_rc_faster(layers):
uppers = layers[:-1]
lowers = layers[1:]
rcs = np.zeros_like(uppers)
for i in range(rcs.size):
rcs[i] = (lowers[i] - uppers[i]) / (uppers[i] + lowers[i])
return rcs
%timeit compute_rc_faster(biglog)
@jit
def compute_rc_hopeful(layers):
    """Computes reflection coefficients given
    a list of layer impedances.
    """
uppers = layers[:-1]
lowers = layers[1:]
rcs = []
for pair in zip(lowers, uppers):
rc = (pair[1] - pair[0]) / (pair[1] + pair[0])
rcs.append(rc)
return rcs
%timeit compute_rc_hopeful(biglog)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(biglog[:500])
fig = plt.figure(figsize=(15,2))
ax = fig.add_subplot(111)
ax.plot(biglog[:500])
ax.set_title("big log")
plt.show()
class Layers(object):
def __init__(self, layers, label=None):
# Just make sure we end up with an array
self.layers = np.array(layers)
self.label = label or "My log"
self.length = self.layers.size # But storing len in an attribute is unexpected...
def __len__(self): # ...better to do this.
return len(self.layers)
def rcs(self):
uppers = self.layers[:-1]
lowers = self.layers[1:]
return (lowers-uppers) / (uppers+lowers)
def plot(self, lw=0.5, color='#6699ff'):
fig = plt.figure(figsize=(2,6))
ax = fig.add_subplot(111)
ax.barh(range(len(self.layers)), self.layers, color=color, lw=lw, align='edge', height=1.0, alpha=1.0, zorder=10)
ax.grid(zorder=2)
ax.set_ylabel('Layers')
ax.set_title(self.label)
ax.set_xlim([-0.5,1.0])
ax.set_xlabel('Measurement (units)')
ax.invert_yaxis()
#ax.set_xticks(ax.get_xticks()[::2]) # take out every second tick
ax.spines['right'].set_visible(False) # hide the spine on the right
ax.yaxis.set_ticks_position('left') # Only show ticks on the left and bottom spines
plt.show()
l = Layers(layers, label='Well # 1')
l.rcs()
len(l)
l.plot()
rel_interval = np.cumsum(l.rcs(), dtype=float)
len(rel_interval)
relative_layers = np.insert(rel_interval, 0, 0)
relative_layers
relative = Layers(relative_layers, "relative")
relative.layers
relative.plot()
url = "http://en.wikipedia.org/wiki/Jurassic"
import requests
r = requests.get(url)
r.text[:500]
import re
s = re.search(r'<i>(.+?million years ago)</i>', r.text)
text = s.group(1)
def get_age(period):
url = "http://en.wikipedia.org/wiki/" + period
r = requests.get(url)
start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+) million years ago</i>', r.text).groups()
return float(start), float(end)
period = "Jurassic"
get_age(period)
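The regular expression can be checked offline on a snippet shaped like the Wikipedia markup (the HTML string here is a made-up example, no network needed):

```python
import re

html = "<i>201.3–145 million years ago</i>"  # hypothetical snippet shaped like the page source
start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+) million years ago</i>', html).groups()
assert (float(start), float(end)) == (201.3, 145.0)
```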
def duration(period):
t0, t1 = get_age(period)
duration = t0 - t1
response = "According to Wikipedia, the {0} period was {1:.2f} Ma long.".format(period, duration)
return response
duration('Cretaceous')
l = [0.001, 1, 3, 51, 41, 601]
sorted(l)
def strlen(n):
return len(str(n))
sorted(l, key=strlen)
sorted(l, key=lambda n: len(str(n)))
def sq(n):
return n**2
# In Python 3, map produces an iterator, not a list.
# So we must cast to list to inspect its contents.
list(map(sq, l))
list(map(lambda n: n**2, l))
[n**2 for n in l]
def runsum(a, b):
return a + b
# For some reason reduce is not in the main namespace like map
from functools import reduce
reduce(runsum, l)
def runmult(a, b):
return a * b
reduce(runmult, l)
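reduce generalizes several built-ins, and also accepts an optional initial value -- a quick check (not in the original):

```python
from functools import reduce

vals = [0.001, 1, 3, 51, 41, 601]
# with an initial value of 0, a running sum reduces to the built-in sum
assert reduce(lambda a, b: a + b, vals, 0) == sum(vals)
# a running max reduces to the built-in max
assert reduce(lambda a, b: max(a, b), vals) == max(vals)
```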
def power(a, b):
return a**b
def cuber(a):
return power(a, 3)
cuber(2)
from functools import partial
cuber = partial(power, b=3)
cuber(2)
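A partial object remembers what it wraps, which can be handy for debugging -- a small sketch:

```python
from functools import partial

def power(a, b):
    return a**b

cuber = partial(power, b=3)
assert cuber(2) == 8
# the wrapped function and the frozen keyword arguments are introspectable
assert cuber.func is power
assert cuber.keywords == {'b': 3}
```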
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Functions
Step3: Put in a file and import into a new notebook
Step4: Note that the log has to be fairly big for the benchmarking to work properly, because otherwise the CPU caches the computation and this skews the results.
Step5: 60 times faster on my machine!
Step6: OK, we'll make a fake example.
Step8: However, you can't speed up our original list-based function this way.
Step9: Plotting basics
Step10: Note that we can only plot part of biglog because it contains too many points for matplotlib (and for our screen!). If we really wanted to plot it, we'd have to find a way to upscale it.
Step11: Objected oriented basics
Step12: Web scraping basics
Step13: Use View Source in your browser to figure out where the age range is on the page, and what it looks like.
Step14: Using a regular expression
Step15: Exercise
Step16: Functional programming basics
Step17: What if we want to sort based on the number of characters? Don't ask why, we just do. Then we write a function that returns a key which, when sorted, will give the ordering we want.
Step18: We could rewrite that tiny function as a lambda, which is basically a little unnamed function
Step19: When would you make a named function vs a lambda? It all depends on whether you want to use it again or not.
Step20: We can get around defining that tiny function sq() with a lambda, which you can think of as a temporary, 'throwaway' function
Step21: In practice, we'd often write this as a list comprehension. Then we can skip the creation of the funciton or lambda entirely
Step22: One of the advantages of map is that it is 'lazy', so if you map a function to a giant list, you don't get a giant list back, you get an iterator. A list-comp would give you a giant list, possibly jamming the memory on your box.
Step23: partial for making curry
Step24: But some people might find it more inuitive to do it this way
|
11,277
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy import stats
import seaborn as sns
sns.set(rc={"axes.labelsize": 15});
# Some nice default configuration for plots
plt.rcParams['figure.figsize'] = 10, 7.5;
plt.rcParams['axes.grid'] = True;
plt.gray();
#Reading the dataset in a dataframe using Pandas
df = pd.read_csv("data.csv")
#Print first observations
df.head()
df.columns
from sklearn.feature_extraction.text import CountVectorizer
# Turn the text documents into vectors
vectorizer = CountVectorizer(min_df=1, stop_words="english")
X = vectorizer.fit_transform(df['Restaurant_Name']).toarray()
y = df['Letter_Grade']
target_names = y.unique()
# Train/Test split and cross validation:
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, train_size = 0.8)
X_train.shape
from sklearn.decomposition import TruncatedSVD
svd_two = TruncatedSVD(n_components=2, random_state=42)
X_train_svd = svd_two.fit_transform(X_train)
pc_df = pd.DataFrame(X_train_svd) # cast resulting matrix as a data frame
sns.pairplot(pc_df, diag_kind='kde');
# Percentage of variance explained for each component
def pca_summary(pca):
return pd.DataFrame([np.sqrt(pca.explained_variance_),
pca.explained_variance_ratio_,
pca.explained_variance_ratio_.cumsum()],
index = ["Standard deviation", "Proportion of Variance", "Cumulative Proportion"],
columns = (map("PC{}".format, range(1, len(pca.components_)+1))))
pca_summary(svd_two)
# Only 3.5% of the variance is explained in the data
svd_two.explained_variance_ratio_.sum()
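The "proportion of variance" numbers come straight from the singular values. A NumPy-only sketch of that relationship (independent of scikit-learn; note this centres the data PCA-style, which TruncatedSVD itself does not do):

```python
import numpy as np

rng = np.random.RandomState(42)
M = rng.normal(size=(50, 8))
Mc = M - M.mean(axis=0)              # centre the columns
s = np.linalg.svd(Mc, compute_uv=False)
ratios = s**2 / np.sum(s**2)         # proportion of variance per component
assert abs(ratios.sum() - 1.0) < 1e-12
assert np.all(np.diff(ratios) <= 1e-12)  # singular values come sorted, largest first
```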
from itertools import cycle
def plot_PCA_2D(data, target, target_names):
    colors = cycle('rgbcmykw')
    plt.figure()
    for c, label in zip(colors, target_names):
        # compare against the label itself -- the targets are letter grades, not integer ids
        plt.scatter(data[target == label, 0], data[target == label, 1],
                    c=c, label=label)
plt.legend()
plot_PCA_2D(X_train_svd, y_train, target_names)
# Now, let's try with 100 components to see how much it explains
svd_hundred = TruncatedSVD(n_components=100, random_state=42)
X_train_svd_hundred = svd_hundred.fit_transform(X_train)
# 43.7% of the variance is explained in the data for 100 dimensions
# This is mostly due to the High dimension of data and sparcity of the data
svd_hundred.explained_variance_ratio_.sum()
plt.figure(figsize=(10, 7))
plt.bar(range(100), svd_hundred.explained_variance_)
svd_sparta = TruncatedSVD(n_components=300, random_state=42)
X_train_svd_sparta = svd_sparta.fit_transform(X_train)
X_test_svd_sparta = svd_sparta.transform(X_test)  # transform (not fit) the held-out data
svd_sparta.explained_variance_ratio_.sum()
plt.figure(figsize=(10, 7))
plt.bar(range(300), svd_sparta.explained_variance_)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import cross_validation
from sklearn.naive_bayes import MultinomialNB
# Fit a classifier on the training set
# MultinomialNB requires non-negative features, so use absolute values throughout
classifier = MultinomialNB().fit(np.absolute(X_train_svd_sparta), y_train)
print("Training score: {0:.1f}%".format(
    classifier.score(np.absolute(X_train_svd_sparta), y_train) * 100))
# Evaluate the classifier on the testing set
print("Testing score: {0:.1f}%".format(
    classifier.score(np.absolute(X_test_svd_sparta), y_test) * 100))
streets = df['Geocode'].tolist()
split_streets = [i.split(' ', 1)[1] for i in streets]
split_streets = [i.split(' ', 1)[1] for i in split_streets]
split_streets = [i.split(' ', 1)[0] for i in split_streets]
split_streets[0]
import re
shortword = re.compile(r'\W*\b\w{1,3}\b')
for i in range(len(split_streets)):
split_streets[i] = shortword.sub('', split_streets[i])
# Create a new column with the street:
df['Street_Words'] = split_streets
from sklearn.feature_extraction.text import CountVectorizer
# Turn the text documents into vectors
vectorizer = CountVectorizer(min_df=1, stop_words="english")
X = vectorizer.fit_transform(df['Street_Words']).toarray()
y = df['Letter_Grade']
target_names = y.unique()
# Train/Test split and cross validation:
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, train_size = 0.8)
X_train.shape
from sklearn.decomposition import TruncatedSVD
svd_two = TruncatedSVD(n_components=2, random_state=42)
X_train_svd = svd_two.fit_transform(X_train)
pc_df = pd.DataFrame(X_train_svd) # cast resulting matrix as a data frame
sns.pairplot(pc_df, diag_kind='kde');
pca_summary(svd_two)
# 25% of the variance is explained in the data when we use only TWO principal components!
svd_two.explained_variance_ratio_.sum()
# Now, let's try with 10 components to see how much it explains
svd_ten = TruncatedSVD(n_components=10, random_state=42)
X_train_svd_ten = svd_ten.fit_transform(X_train)
# 53.9% of the variance is explained in the data for 10 dimensions
# This is mostly due to the high dimensionality and sparsity of the data
svd_ten.explained_variance_ratio_.sum()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Import dataset
Step2: Our first collection of feature vectors will come from the Restaurant_Name column. We are still trying to predict whether a restaurant falls under the "pristine" category (Grade A, score greater than 90) or not. We could also try to see whether we could predict a restaurant's grade (A, B, C or F)
Step3: Even though we do not have more features (3430) than rows of data (14888), we can still attempt to reduce the feature space by using Truncated SVD
Step4: This must be the most uninformative plot in the history of plots. Obviously 2 principal components aren't enough. Let's try with 100
Step5: Is it worth it to keep adding dimensions? Recall that we started with a 3430-dimensional feature space which we have already reduced to 100 dimensions, and according to the graph above each dimension over the 100th one will be adding less than 0.5% in our explanation of the variance. Let us try once more with 300 dimensions, to see if we can get something respectably over 50% (so we can be sure we are doing better than a coin toss)
Step6: 66.2% of the variance is explained through our model. This is quite respectable.
Step7: Restaurant Streets as a Bag-of-words model
|
11,278
|
<ASSISTANT_TASK:>
Python Code:
from datetime import date
from openfisca_france import init_country
from openfisca_france.model.base import *
import functools
from openfisca_core.formulas import make_reference_formula_decorator
from openfisca_france.entities import entity_class_by_symbol
reference_formula = make_reference_formula_decorator(entity_class_by_symbol = entity_class_by_symbol, update = True)
reference_input_variable = functools.partial(reference_input_variable, update = True)
TaxBenefitSystem = init_country()
tax_benefit_system = TaxBenefitSystem()
# age = 60
# ages = [12, 37, 28]
# jean_eric = {
# 'name': u'Jean-Éric',
# 'age': 33,
# }
# aurelie = dict(
# name = u'Aurélie',
# age = 33,
# )
# personnes = [aurelie, jean_eric]
simulation = tax_benefit_system.new_scenario().init_single_entity(
period = 2014,
parent1 = dict(
birth = date(1980, 1, 1),
salaire_de_base = 12000,
),
parent2 = dict(
birth = date(1980, 1, 1),
salaire_de_base = 6000,
),
enfants = [
dict(
birth = date(2014, 1, 1),
),
],
).new_simulation(debug = True)
simulation.calculate("salsuperbrut")
simulation.calculate("revdisp")
from openfisca_core import reforms
Reform = reforms.make_reform(
key = 'cotisations_revenu_de_base',
name = u"Réforme des cotisations pour un Revenu de base",
reference = tax_benefit_system,
)
@Reform.formula
class cotisations_contributives(SimpleFormulaColumn):
column = FloatCol
entity_class = Individus
label = u"Nouvelles cotisations contributives"
def function(self, simulation, period):
ags = simulation.calculate('ags', period)
agff_tranche_a_employeur = simulation.calculate('agff_tranche_a_employeur', period)
apec_employeur = simulation.calculate('apec_employeur', period)
arrco_tranche_a_employeur = simulation.calculate('arrco_tranche_a_employeur', period)
assedic_employeur = simulation.calculate('assedic_employeur', period)
cotisation_exceptionnelle_temporaire_employeur = simulation.calculate('cotisation_exceptionnelle_temporaire_employeur', period)
fonds_emploi_hospitalier = simulation.calculate('fonds_emploi_hospitalier', period)
ircantec_employeur = simulation.calculate('ircantec_employeur', period)
pension_civile_employeur = simulation.calculate('pension_civile_employeur', period)
prevoyance_obligatoire_cadre = simulation.calculate('prevoyance_obligatoire_cadre', period)
rafp_employeur = simulation.calculate('rafp_employeur', period)
vieillesse_deplafonnee_employeur = simulation.calculate('vieillesse_deplafonnee_employeur', period)
vieillesse_plafonnee_employeur = simulation.calculate('vieillesse_plafonnee_employeur', period)
allocations_temporaires_invalidite = simulation.calculate('allocations_temporaires_invalidite', period)
accident_du_travail = simulation.calculate('accident_du_travail', period)
agff_tranche_a_employe = simulation.calculate('agff_tranche_a_employe', period)
agirc_tranche_b_employe = simulation.calculate('agirc_tranche_b_employe', period)
apec_employe = simulation.calculate('apec_employe', period)
arrco_tranche_a_employe = simulation.calculate('arrco_tranche_a_employe', period)
assedic_employe = simulation.calculate('assedic_employe', period)
cotisation_exceptionnelle_temporaire_employe = simulation.calculate('cotisation_exceptionnelle_temporaire_employe', period)
ircantec_employe = simulation.calculate('ircantec_employe', period)
pension_civile_employe = simulation.calculate('pension_civile_employe', period)
rafp_employe = simulation.calculate('rafp_employe', period)
vieillesse_deplafonnee_employe = simulation.calculate('vieillesse_deplafonnee_employe', period)
vieillesse_plafonnee_employe = simulation.calculate('vieillesse_plafonnee_employe', period)
cotisations_contributives = (
# cotisations patronales contributives dans le prive
ags +
agff_tranche_a_employeur +
apec_employeur +
arrco_tranche_a_employeur +
assedic_employeur +
cotisation_exceptionnelle_temporaire_employeur +
prevoyance_obligatoire_cadre + # TODO contributive ou pas
vieillesse_deplafonnee_employeur +
vieillesse_plafonnee_employeur +
# cotisations patronales contributives dans le public
fonds_emploi_hospitalier +
ircantec_employeur +
pension_civile_employeur +
rafp_employeur +
# anciennes cot patronales non-contributives classées ici comme contributives
allocations_temporaires_invalidite +
accident_du_travail +
# anciennes cotisations salariales contributives dans le prive
agff_tranche_a_employe +
agirc_tranche_b_employe +
apec_employe +
arrco_tranche_a_employe +
assedic_employe +
cotisation_exceptionnelle_temporaire_employe +
vieillesse_deplafonnee_employe +
vieillesse_plafonnee_employe +
# anciennes cotisations salariales contributives dans le public
ircantec_employe +
pension_civile_employe +
rafp_employe
)
return period, cotisations_contributives
@Reform.formula
class nouv_salbrut(SimpleFormulaColumn):
reference = tax_benefit_system.column_by_name['salaire_de_base']
# Le salaire brut se définit dans la réforme comme le salaire super-brut auquel
# on retranche les cotisations contributives
def function(self, simulation, period):
period = period.start.period('month').offset('first-of')
salsuperbrut = simulation.calculate('salsuperbrut', period)
cotisations_contributives = simulation.calculate('cotisations_contributives', period)
nouv_salbrut = (
salsuperbrut -
cotisations_contributives
)
return period, nouv_salbrut
@Reform.formula
class nouv_csg(SimpleFormulaColumn):
reference = tax_benefit_system.column_by_name['csg_imposable_salaire']
# On applique une CSG unique à 22,5% qui finance toutes les prestations non-contributives
def function(self, simulation, period):
period = period.start.period('month').offset('first-of')
nouv_salbrut = simulation.calculate('nouv_salbrut', period)
nouv_csg = (
-0.225 * nouv_salbrut
)
return period, nouv_csg
@Reform.formula
class salnet(SimpleFormulaColumn):
reference = tax_benefit_system.column_by_name['salaire_net']
# On retire la nouvelle CSG (pas celle qui finance le RDB) pour trouver le nouveau salaire net
def function(self, simulation, period):
period = period.start.period('month').offset('first-of')
nouv_salbrut = simulation.calculate('nouv_salbrut', period)
nouv_csg = simulation.calculate('nouv_csg', period)
salnet = (
nouv_salbrut +
nouv_csg
)
return period, salnet
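The arithmetic of the reform can be sanity-checked outside OpenFisca: with a flat 22.5% CSG on the new brut, the net is simply 77.5% of the brut (a plain-Python sketch of the two formulas above, not an OpenFisca computation):

```python
def nouv_csg(nouv_salbrut):
    # flat-rate CSG from the reform above
    return -0.225 * nouv_salbrut

def salnet(nouv_salbrut):
    return nouv_salbrut + nouv_csg(nouv_salbrut)

assert abs(salnet(1000.0) - 775.0) < 1e-9
```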
@Reform.formula
class sal(SimpleFormulaColumn):
reference = tax_benefit_system.column_by_name['salaire_de_base']
# Nous sommes partis du nouveau salaire net et par rapport au salaire imposable actuel,
# nous avons supprimé : les heures sup, la déductibilité de CSG
def function(self, simulation, period):
period = period
salnet = simulation.calculate('salnet', period)
primes_fonction_publique = simulation.calculate('primes_fonction_publique', period)
indemnite_residence = simulation.calculate('indemnite_residence', period)
supp_familial_traitement = simulation.calculate('supp_familial_traitement', period)
rev_microsocial_declarant1 = simulation.calculate('rev_microsocial_declarant1', period)
return period, (
salnet +
primes_fonction_publique +
indemnite_residence +
supp_familial_traitement +
rev_microsocial_declarant1
)
reform = Reform()
reform_simulation = reform.new_scenario().init_single_entity(
period = 2014,
parent1 = dict(
birth = date(1980, 1, 1),
salaire_de_base = 12000,
),
parent2 = dict(
birth = date(1980, 1, 1),
salaire_de_base = 6000,
),
enfants = [
dict(
birth = date(2014, 1, 1),
),
],
).new_simulation(debug = True)
reform_simulation.calculate('salsuperbrut')
reform_simulation.calculate('cotisations_contributives')
reform_simulation.calculate('nouv_salbrut')
reform_simulation.calculate('salbrut')
reform_simulation.calculate('salnet')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Adaptation pour faciliter l'usage de ce notebook
Step2: Variable avec formule
Step3: Simulation
Step4: Réforme
|
11,279
|
<ASSISTANT_TASK:>
Python Code:
import re
lines = ['4008','4008a','4009','1','9']
sorted(lines)
sorted(lines,key=int) # this raises an error
linenoRegex = re.compile('(\d+)(.*)')
def splitId(id):
    """Splits @id value like 4008a into parts, for sorting"""
results = linenoRegex.match(id).groups()
return (int(results[0]),results[1])
sorted(lines,key=splitId)
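The same tuple-key idea extends to any mixed numeric/alpha ids -- a small self-contained check (the `split_id` helper here mirrors `splitId` above):

```python
import re

rx = re.compile(r'(\d+)(.*)')

def split_id(s):
    # split '4008a' into (4008, 'a'); tuples sort numerically first, then by suffix
    m = rx.match(s)
    return (int(m.group(1)), m.group(2))

ids = ['4008', '4008a', '4009', '1', '9']
assert sorted(ids, key=split_id) == ['1', '9', '4008', '4008a', '4009']
```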
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We initialize a lines list of strings and demonstrate how the default alphabetic sort gives the wrong results
Step2: In Python 3, the key parameter specifies a function that should be applied to the list items before sorting them. If we use int to convert each of the string values to an integer so that we can perform a numerical sort, we raise an error because the 'a' can’t be converted to an integer
Step4: We create our own sort function, for which we define linenoRegex, which includes two capture groups, both of which are strings by default. The first captures all digits from the beginning of the line number value. The second captures anything after the numbers. The regex splits the input into a tuple that contains the two values as strings, and we convert the first value to an integer before we return it. For example, the input value '4008a' will return (4008,'a'), where the '4008' is an integer and the 'a' is a string.
Step5: If we now specify our splitId function as the value of the key parameter in the sorted() function, the values will be split into two parts before sorting. Tuples are sorted part by part from start to finish, so we don’t have to tell the function explicitly how to sort once we’ve defined the two parts of our tuple
|
11,280
|
<ASSISTANT_TASK:>
Python Code:
# defining lists
sport_list = [ 'cycling', 'football', 'fitness' ]
first_prime_numbers = [ 2, 3, 5, 7, 11, 13, 17, 19 ]
# getting contents
sport = sport_list[ 2 ]
third_prime = first_prime_numbers[ 2 ]
# printing
print( 'All sports:', sport_list )
print( 'Sport to be done:', sport )
print( '\nFirst primes:', first_prime_numbers )
print( 'Third prime number:', third_prime )
# adapt entries and append new entries
sport_list[ 1 ] = 'swimming'
sport_list.append( 'running' )
first_prime_numbers.append( 23 )
# printing
print( 'All sports:', sport_list )
print( 'First primes:', first_prime_numbers )
# defining tuple
sport_tuple = ( 'cycling', 'football', 'fitness' )
# getting contents
sport = sport_tuple[ 2 ]
# printing
print( 'All sports:', sport_tuple )
print( 'Sport to be done:', sport )
# append new entries
sport_tuple += ( 'running', )
# printing
print( 'All sports:', sport_tuple )
print()
# changing entries will fail
# --> ERROR is being generated on purpose
# --> NOTE: Error is handled by 'try: ... except: ...' statement
try:
sport_tuple[ 1 ] = 'swimming'
except:
print('ERROR: Entries within tuples cannot be adapted!')
# defining dictionaries
sports_days = { 'Monday': 'pause', 'Tuesday': 'fitness', 'Wednesday' : 'running',
'Thursday' : 'fitness', 'Friday' : 'swimming', 'Saturday' : 'cycling',
'Sunday' : 'cycling' }
print( 'Sport by day:', sports_days )
print( '\nOn Tuesday:', sports_days[ 'Tuesday' ])
# Changes are made by using the key as identifier
sports_days[ 'Tuesday' ] = 'running'
print( 'Sport by day:', sports_days )
# defining sets
sports_set = { 'fitness', 'running', 'swimming', 'cycling'}
print( sports_set )
print()
# indexing will fail
# --> ERROR is being generated on purpose
try:
print( sports_set[0] )
except:
print('ERROR: No indexing of sets!')
# adding elements (or not)
sports_set.add( 'pause' )
print(sports_set)
sports_set.add( 'fitness' )
print(sports_set)
# union of sets (also: intersection, complement, ...)
all_stuff_set = set( sports_set )
union_of_sets = all_stuff_set.union( first_prime_numbers)
print( union_of_sets )
# looping in lists simply parsing along the list
for s in sport_list:
print( s )
print()
# looping in dictionaries happens along keys
for s in sports_days:
print( '{}: \t{}'.format( s, sports_days[ s ] ) )
# initialize variables
sum_primes = 0
_n = 0
# sum primes up to sum-value of 20
while sum_primes < 20:
# add prime of according index
sum_primes += first_prime_numbers[ _n ]
# increase index
_n += 1
print( 'Sum of first {} primes is {}.'.format( _n, sum_primes ) )
def get_n_th_prime( n, first_prime_numbers ):
'''
DOC String
IN: index of prime number, list of prime numbers
OUT: n-th prime number
'''
    # do something smart, e.g., check that the requested index really exists
    # "assert" could do this job by checking its first arg and--if not TRUE--raising with the text given as second arg,
    # but here the error is handled by a 'try: ... except: ...' statement instead
try:
val = first_prime_numbers[ n - 1 ]
except:
return '"ERROR: Index not feasible!"'
# NOTE: since counting starts at 0, (n-1)st number is returned
# Furthermore, there is no need for a function here; a simple reference would have done the job!
return first_prime_numbers[ n - 1 ]
# show doc string
help( get_n_th_prime )
# apply functions
N = 3
print( '{}. prime number is {}.'.format( N, get_n_th_prime( N, first_prime_numbers ) ) )
print()
N = 30
print( '{}. prime number is {}.'.format( N, get_n_th_prime( N, first_prime_numbers ) ) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tuples
Step2: Dictionaries
Step3: Sets
Step4: Flow Control
Step5: While Loops
Step6: Functions
|
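The comments in `get_n_th_prime` above mention that `assert` could handle the bounds check; a minimal sketch of that variant (the prime list here is an assumption for illustration):

```python
# Assert-based variant of the bounds check discussed above.
# 'first_prime_numbers' is assumed here purely for illustration.
first_prime_numbers = [2, 3, 5, 7, 11, 13, 17, 19]

def get_n_th_prime_assert(n, primes):
    '''Return the n-th prime; fail loudly if the index is not feasible.'''
    assert 1 <= n <= len(primes), 'ERROR: Index not feasible!'
    # counting starts at 0, so the (n-1)st entry is the n-th prime
    return primes[n - 1]

print(get_n_th_prime_assert(3, first_prime_numbers))  # -> 5
```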
11,281
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import structcol as sc
import structcol.refractive_index as ri
from structcol import montecarlo as mc
from structcol import detector as det
from structcol import phase_func_sphere as pfs
import matplotlib.pyplot as plt
import seaborn as sns
import os
# Properties of source
wavelengths = sc.Quantity(np.arange(400., 801.,15),'nm') # wavelengths at which to calculate reflectance
# Geometric properties of sample
particle_radius = sc.Quantity('0.110 um') # radius of the sphere particles
volume_fraction_particles = sc.Quantity(0.5, '') # volume fraction of the particles in the sphere boundary
volume_fraction_bulk = sc.Quantity(0.55,'') # volume fraction of the spheres in the bulk film
sphere_boundary_diameter = sc.Quantity(10,'um') # radius of the sphere boundary
bulk_thickness = sc.Quantity('50 um') # thickness of the bulk film
boundary = 'sphere' # geometry of sample
boundary_bulk = 'film' # geometry of the bulk sample
# Refractive indices
n_particle = ri.n('vacuum', wavelengths) # refractive index of particle
n_matrix = ri.n('fused silica', wavelengths)+ 9e-4*ri.n('vacuum', wavelengths)*1j # refractive index of matrix
n_matrix_bulk = ri.n('vacuum', wavelengths) # refractive index of the bulk matrix
n_medium = ri.n('vacuum', wavelengths) # refractive index of medium outside the bulk sample.
# Monte Carlo parameters
ntrajectories = 2000 # number of trajectories to run with a spherical boundary
nevents = 300 # number of scattering events for each trajectory in a spherical boundary
ntrajectories_bulk = 2000 # number of trajectories to run in the bulk film
nevents_bulk = 300 # number of events to run in the bulk film
# plot settings
sns.set_style('white') # sets white plotting background
# initialize quantities we want to save as a function of wavelength
reflectance_sphere = np.zeros(wavelengths.size)
mu_scat_bulk = sc.Quantity(np.zeros(wavelengths.size),'1/um')
mu_abs_bulk = sc.Quantity(np.zeros(wavelengths.size),'1/um')
p_bulk = np.zeros((wavelengths.size, 200))
# loop through wavelengths
for i in range(wavelengths.size):
    # print wavelengths to keep track of where we are in calculation
    print('wavelength: ' + str(wavelengths[i]))
    # calculate the effective index of the sample
n_sample = ri.n_eff(n_particle[i], n_matrix[i], volume_fraction_particles)
# Calculate the phase function and scattering and absorption coefficients from the single scattering model
# (this absorption coefficient is of the scatterer, not of an absorber added to the system)
p, mu_scat, mu_abs = mc.calc_scat(particle_radius, n_particle[i], n_sample,
volume_fraction_particles, wavelengths[i])
# Initialize the trajectories
r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_matrix_bulk[i], n_sample, boundary,
sample_diameter = sphere_boundary_diameter)
r0 = sc.Quantity(r0, 'um')
k0 = sc.Quantity(k0, '')
W0 = sc.Quantity(W0, '')
# Create trajectories object
trajectories = mc.Trajectory(r0, k0, W0)
# Generate a matrix of all the randomly sampled angles first
sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)
# Create step size distribution
step = mc.sample_step(nevents, ntrajectories, mu_scat)
# Run photons
trajectories.absorb(mu_abs, step)
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.move(step)
    # Calculate reflection and transmission
(refl_indices,
trans_indices,
_, _, _,
refl_per_traj, trans_per_traj,
_,_,_,_,
reflectance_sphere[i],
_,_, norm_refl, norm_trans) = det.calc_refl_trans(trajectories, sphere_boundary_diameter,
n_matrix_bulk[i], n_sample, boundary,
p=p, mu_abs=mu_abs, mu_scat=mu_scat,
run_fresnel_traj = False,
return_extra = True)
### Calculate phase function and lscat ###
# use output of calc_refl_trans to calculate phase function, mu_scat, and mu_abs for the bulk
p_bulk[i,:], mu_scat_bulk[i], mu_abs_bulk[i] = pfs.calc_scat_bulk(refl_per_traj, trans_per_traj,
trans_indices,
norm_refl, norm_trans,
volume_fraction_bulk,
sphere_boundary_diameter,
n_matrix_bulk[i],
wavelengths[i],
plot=False, phi_dependent=False)
plt.figure()
plt.plot(wavelengths, reflectance_sphere, linewidth = 3)
plt.ylim([0,1])
plt.xlim([400,800])
plt.xlabel('Wavelength (nm)')
plt.ylabel('Reflectance')
plt.title('Single sphere reflectance')
plt.figure()
plt.plot(wavelengths, p_bulk[:,100], linewidth = 3)
plt.xlim([400,800])
plt.xlabel('Wavelength (nm)')
plt.ylabel('Probability')
plt.title('Phase function value at backscattering angle')
plt.figure()
ax = plt.subplot(111, projection='polar')
theta = np.linspace(0, np.pi, 200)
ind = 4
print(wavelengths[ind])
ax.plot(theta, p_bulk[ind,:], color = 'b')
ax.plot(-theta, p_bulk[ind,:], color = 'b')
plt.savefig('phase_1sphere_1.pdf')
# initialize some quantities we want to save as a function of wavelength
reflectance_bulk = np.zeros(wavelengths.size)
# loop through wavelengths
for i in range(wavelengths.size):
    # print the wavelength to keep track of where we are in calculation
print('wavelength: ' + str(wavelengths[i]))
# Initialize the trajectories
r0, k0, W0 = mc.initialize(nevents_bulk, ntrajectories_bulk, n_medium[i], n_matrix_bulk[i], boundary_bulk)
r0 = sc.Quantity(r0, 'um')
W0 = sc.Quantity(W0, '')
k0 = sc.Quantity(k0, '')
# Sample angles
sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents_bulk, ntrajectories_bulk,
p_bulk[i,:])
# Calculate step size
# note: in future versions, mu_abs will be removed from step size sampling, so 0 is entered here
step = mc.sample_step(nevents_bulk, ntrajectories_bulk, mu_scat_bulk[i])
# Create trajectories object
trajectories = mc.Trajectory(r0, k0, W0)
# Run photons
#trajectories.absorb(mu_abs_bulk[i], step)
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.move(step)
# calculate reflectance
reflectance_bulk[i], transmittance = det.calc_refl_trans(trajectories, bulk_thickness,
n_medium[i], n_matrix_bulk[i], boundary_bulk)
plt.figure()
plt.plot(wavelengths, reflectance_bulk, linewidth = 3)
plt.ylim([0,1])
plt.xlim([400,800])
plt.xlabel('Wavelength (nm)')
plt.ylabel('Reflectance')
plt.title('Bulk Reflectance');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start by running Monte Carlo code for a single sphere
Step2: Run Monte Carlo for sphere geometry
Step3: Plot results
Step4: The first plot shows us the reflectance of a single sphere, which should give us a qualitative guide for the features we expect to see in the bulk film reflectance.
Step5: Plot results
|
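`mc.sample_angles` above draws scattering angles from a tabulated phase function; a numpy-only sketch of the underlying inverse-CDF idea (the Rayleigh-like shape is an assumption for illustration, not structcol's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (unnormalized) phase function over scattering angle theta.
theta = np.linspace(0.0, np.pi, 200)
p = 1.0 + np.cos(theta) ** 2  # Rayleigh-like shape, for illustration only

# Build the CDF, weighting by sin(theta) to account for solid angle.
pdf = p * np.sin(theta)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

# Inverse-CDF sampling: draw uniforms, invert the CDF by interpolation.
u = rng.random(10000)
samples = np.interp(u, cdf, theta)

print(samples.min(), samples.max())  # all samples lie in [0, pi]
```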
11,282
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
client_info = pd.read_csv('data/client_info.csv')
demographic_info = pd.read_csv('data/demographic_data.csv')
transaction_info = pd.read_csv('data/transction_info.csv')
order_info = pd.read_csv('data/order_info.csv')
transaction_info['bank_account'] = transaction_info['bank'] + transaction_info['account'].map(str)
G = nx.Graph()
G.add_nodes_from(transaction_info['account_id'].unique().tolist())
G.add_nodes_from(transaction_info['bank_account'][transaction_info['bank_account'].notnull()].unique())
nonEmpty = transaction_info[transaction_info['bank_account'].notnull()]
edges = zip(nonEmpty['account_id'],nonEmpty['bank_account'], nonEmpty['amount'])
G.add_weighted_edges_from(edges)
giant = max(nx.connected_component_subgraphs(G), key=len)
nx.draw_circular(giant)
Gdi = nx.DiGraph()
Gdi.add_nodes_from(giant.nodes())
Gdi.add_edges_from(giant.edges())
nx.draw_circular(Gdi)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data.
Step2: Create a unique bank account (bank + account)
Step3: Build the graph.
Step4: Add non-empty edges.
Step5: Look at the largest connected components.
Step6: Look at it in a directed graph.
|
11,283
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
# Load the returns data for the industry sectors
ind = pd.read_csv("ind30_m_vw_rets.csv", header=0, index_col=0)/100
# Convert the index to monthly periods
ind.index = pd.to_datetime(ind.index, format="%Y%m").to_period('M')
# Strip whitespace from the column names
ind.columns = ind.columns.str.strip()
# Restrict the range to match the MOOC
ind = ind["1996":"2000"]
# Annualize the returns. "er" is the expected return
compounded_growth = (1+ind).prod()
er = compounded_growth ** (12/ind.shape[0]) -1
print('Expected returns:')
print(er)
# Covariance matrix
cov = ind.cov()
assets = ['Food', 'Beer', 'Smoke', 'Coal']
er[assets]
cov.loc[assets, assets]
def portfolio_return(weights, returns):
return weights.T @ returns
def portfolio_vol(weights, covmat):
return (weights.T @ covmat @ weights)**0.5
weights = np.repeat(1/4, 4)
weights
portfolio_return(weights, er[assets])
portfolio_vol(weights, cov.loc[assets, assets])
# Pick 2 assets
assets = ['Games', 'Fin']
# Generate allocation combinations for the two assets
N_POINTS = 20
weights = [np.array([w, 1-w]) for w in np.linspace(0, 1, N_POINTS)]
weights
rets = [portfolio_return(w, er[assets]) for w in weights]
rets
vols = [portfolio_vol(w, cov.loc[assets,assets]) for w in weights]
vols
ef = pd.DataFrame(data={'Return': rets,
'Volatility': vols})
ef
ef.plot.line(x='Volatility', y='Return',
             title='Efficient Frontier with Two Assets ({} and {})'.format(assets[0], assets[1]),
figsize=(15,6), style='.-')
from scipy.optimize import minimize
def minimize_vol(target_return, er, cov):
    '''
    Returns the optimal weights that achieve the target return
    given a set of expected returns and a covariance matrix
    '''
n = er.shape[0]
init_guess = np.repeat(1/n, n)
bounds = ((0.0, 1.0),) * n # an N-tuple of 2-tuples!
# construct the constraints
weights_sum_to_1 = {'type': 'eq',
'fun': lambda weights: np.sum(weights) - 1
}
return_is_target = {'type': 'eq',
'args': (er,),
'fun': lambda weights, er: target_return - portfolio_return(weights,er)
}
weights = minimize(portfolio_vol, init_guess,
args=(cov,), method='SLSQP',
options={'disp': False},
constraints=(weights_sum_to_1,return_is_target),
bounds=bounds)
return weights.x
def optimal_weights(n_points, er, cov):
target_rs = np.linspace(er.min(), er.max(), n_points)
weights = [minimize_vol(target_return, er, cov) for target_return in target_rs]
return weights
def plot_ef(n_points, er, cov):
    '''
    Plots the multi-asset efficient frontier
    '''
weights = optimal_weights(n_points, er, cov)
rets = [portfolio_return(w, er) for w in weights]
vols = [portfolio_vol(w, cov) for w in weights]
ef = pd.DataFrame({
"Returns": rets,
"Volatility": vols
})
ax = ef.plot.line(x="Volatility", y="Returns", style='.-',
label='Efficient Frontier', legend=True, figsize=(15,6))
ax.set_ylabel('Returns')
ax.set_xlim(left=0)
ax.set_ylim(bottom=0)
return ax
assets = ['Smoke', 'Fin', 'Games', 'Coal']
plot_ef(25, er[assets], cov.loc[assets, assets])
# Credit: Vijay Vaidyanathan
# https://www.coursera.org/learn/introduction-portfolio-construction-python
def msr(riskfree_rate, er, cov):
    '''
    Returns the weights of the portfolio that gives you the maximum sharpe ratio
    given the riskfree rate and expected returns and a covariance matrix
    '''
n = er.shape[0]
init_guess = np.repeat(1/n, n)
bounds = ((0.0, 1.0),) * n # an N-tuple of 2-tuples!
# construct the constraints
weights_sum_to_1 = {'type': 'eq',
'fun': lambda weights: np.sum(weights) - 1
}
def neg_sharpe(weights, riskfree_rate, er, cov):
        '''
        Returns the negative of the sharpe ratio
        of the given portfolio
        '''
r = portfolio_return(weights, er)
vol = portfolio_vol(weights, cov)
return -(r - riskfree_rate)/vol
weights = minimize(neg_sharpe, init_guess,
args=(riskfree_rate, er, cov), method='SLSQP',
options={'disp': False},
constraints=(weights_sum_to_1,),
bounds=bounds)
return weights.x
def plot_cml(ax, riskfree_rate, w_msr, er, cov):
r = portfolio_return(w_msr, er)
vol = portfolio_vol(w_msr, cov)
x = [0, vol]
y = [riskfree_rate, r]
ax.plot(x, y, color='green', marker='o', label='CML',
linestyle='-', linewidth=2, markersize=10)
ax.legend()
RISKFREE_RATE = 0.10
ax = plot_ef(25, er[assets], cov.loc[assets, assets])
w_msr = msr(RISKFREE_RATE, er[assets], cov.loc[assets, assets])
plot_cml(ax, RISKFREE_RATE, w_msr, er[assets], cov.loc[assets, assets])
assets = ['Food', 'Steel']
er[assets]
msr(RISKFREE_RATE, er[assets], cov.loc[assets, assets])
msr(RISKFREE_RATE, np.array([0.11, 0.12]), cov.loc[assets, assets])
msr(RISKFREE_RATE, np.array([0.10, 0.13]), cov.loc[assets, assets])
msr(RISKFREE_RATE, np.array([0.13, 0.10]), cov.loc[assets, assets])
# Credit: Vijay Vaidyanathan
# https://www.coursera.org/learn/introduction-portfolio-construction-python
def gmv(cov):
    '''
    Returns the weights of the Global Minimum Volatility portfolio
    given a covariance matrix
    '''
n = cov.shape[0]
return msr(0, np.repeat(1, n), cov)
assets = ['Smoke', 'Fin', 'Games', 'Coal']
def plot_point(ax, weights, er, cov, label, color='C1'):
r = portfolio_return(weights, er)
vol = portfolio_vol(weights, cov)
x = [vol]
y = [r]
ax.plot([vol], [r], color=color, marker='o', label=label,
linestyle='-', linewidth=2, markersize=10)
ax.legend()
ax = plot_ef(25, er[assets], cov.loc[assets, assets])
w_gmv = gmv(cov.loc[assets, assets])
plot_point(ax, w_gmv, er[assets], cov.loc[assets, assets], 'GMV')
n_assets = len(assets)
ax = plot_ef(25, er[assets], cov.loc[assets, assets])
w_ew = np.repeat(1/n_assets, n_assets)
plot_point(ax, w_ew, er[assets], cov.loc[assets, assets], 'Equal weights', color='C4')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We build the covariance matrix.
Step2: To keep things simple, we pick only 4 industries for our portfolio.
Step3: The expected returns and covariance matrix for these assets
Step4: Let's write functions to compute a portfolio's expected return and volatility using matrix operations, as explained above.
Step5: Suppose we allocate the 4 assets above equally
Step6: We can compute the portfolio's expected return with this allocation
Step7: And its volatility as well
Step8: Efficient Frontier in Practice with 2 Assets
Step9: Now let's compute the returns of the allocation combinations above and collect them in a list.
Step10: We do the same for the volatility.
Step11: Let's combine the returns and volatilities into a dataframe so they are easy to plot.
Step12: Now let's plot the volatility vs. the return.
Step16: Done! We have drawn the efficient frontier for 2 assets.
Step19: Whoa! It works!
Step20: Weaknesses of the Efficient Frontier
Step21: The expected returns of the two assets above are
Step22: The MSR for the two assets above is
Step23: So the optimal allocation is 75% Food and 25% Steel.
Step24: The allocation percentages have changed drastically, even though the change was less than 1%.
Step25: Wow! Now the MSR allocates 100% to the Steel asset. The same happens if we change it the other way
Step27: Now the MSR allocates 100% to Food, even though the change was less than 2%.
Step28: Equal-Weight Allocation
|
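The `portfolio_return` and `portfolio_vol` helpers above are plain matrix products; a tiny self-contained check with made-up two-asset numbers (not the Ken French data):

```python
import numpy as np

# Made-up two-asset inputs, for illustration only.
er = np.array([0.10, 0.20])             # expected returns
cov = np.array([[0.040, 0.006],
                [0.006, 0.090]])        # covariance matrix
w = np.array([0.5, 0.5])                # equal weights

port_ret = w.T @ er                     # w' * er
port_vol = (w.T @ cov @ w) ** 0.5       # sqrt(w' * cov * w)

print(port_ret)  # -> 0.15 (up to float rounding)
print(port_vol)  # -> sqrt(0.0355), roughly 0.1884
```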
11,284
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.preprocessing import ICA, create_ecg_epochs
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)
raw.pick_types(meg=True, eeg=False, exclude='bads', stim=True).load_data()
raw.filter(1, 30, fir_design='firwin')
# peak-to-peak amplitude rejection parameters
reject = dict(grad=4000e-13, mag=4e-12)
# longer + more epochs for more artifact exposure
events = mne.find_events(raw, stim_channel='STI 014')
epochs = mne.Epochs(raw, events, event_id=None, tmin=-0.2, tmax=0.5,
reject=reject)
ica = ICA(n_components=0.95, method='fastica').fit(epochs)
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, threshold='auto')
ica.plot_components(ecg_inds)
ica.plot_properties(epochs, picks=ecg_inds)
ica.plot_sources(raw, picks=ecg_inds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read and preprocess the data. Preprocessing consists of
Step2: Fit ICA model using the FastICA algorithm, detect and plot components
Step3: Plot properties of ECG components
Step4: Plot the estimated source of detected ECG related components
|
11,285
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
import numpy as np
%matplotlib inline
# Read in the image
image = mpimg.imread(fname='images/curved_lane.jpg')
plt.imshow(X=image)
# Convert to grayscale for filtering
gray = cv2.cvtColor(src=image,
code=cv2.COLOR_RGB2GRAY)
plt.imshow(X=gray, cmap='gray')
# Create a custom kernel
# 3x3 array for edge detection
sobel_y = np.array([[ -1, -2, -1],
[ 0, 0, 0],
[ 1, 2, 1]])
## TODO: Create and apply a Sobel x operator
sobel_x = np.array([[ -1, 0, 1],
[ -2, 0, 2],
[ -1, 0, 1]])
# Filter the image using filter2D, which has inputs: (grayscale image, bit-depth, kernel)
filtered_image = cv2.filter2D(src=gray,
ddepth=-1,
kernel=sobel_y)
filtered_image2 = cv2.filter2D(src=gray,
ddepth=-1,
kernel=sobel_x)
plt.imshow(X=filtered_image,
cmap='gray')
plt.imshow(X=filtered_image2,
cmap='gray')
image2 = mpimg.imread(fname='images/white_lines.jpg')
gray2 = cv2.cvtColor(src=image2,
code=cv2.COLOR_RGB2GRAY)
sample_filter = np.array([[ 0, 0, 2, 0, 0],
[ 0, 0, 1, 0, 0],
[ 2, -1, 1, -1, 2],
[ 0, 0, 1, 0, 0],
[ 0, 0, 2, 0, 0]])
filtered_image3 = cv2.filter2D(src=gray2,
ddepth=-1,
kernel=sample_filter)
plt.imshow(X=gray2,
cmap='gray')
plt.imshow(X=filtered_image3,
cmap='gray')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convert the image to grayscale
Step2: TODO
Step3: Test out other filters!
|
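`cv2.filter2D` above performs a 2-D cross-correlation; a numpy-only sketch of the same operation on a tiny synthetic edge image (no OpenCV needed; zero padding is used here, unlike filter2D's default reflected border):

```python
import numpy as np

def filter2d(img, kernel):
    '''Naive same-size cross-correlation with zero padding,
    approximating what cv2.filter2D does for one channel.'''
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='constant')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

# Synthetic image: a vertical step edge; sobel_x responds strongly on it.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
edges = filter2d(img, sobel_x)
print(edges)
```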
11,286
|
<ASSISTANT_TASK:>
Python Code:
# IMPORT STATEMENTS.
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns # data visualisation
import matplotlib.pyplot as plt # data visualisation
import random # Used to sample survival.
# DEFINE GLOBALS.
NUM_OF_ROLLS = 3
df = pd.read_csv("../input/train.csv", index_col=0)
df.head()
# First visualise the general case (i.e. no considerations)
total, survivors = df.shape[0], df[df.Survived==1].shape[0]
survival_rate = float(survivors)/float(total)*100
f, ax = plt.subplots(figsize=(7, 7))
ax.set_title("Proportion of People Who Died On The Titanic")
ax.pie(
[survival_rate, 100-survival_rate],
autopct='%1.1f%%',
labels=['Survived', 'Died']
)
None # Removes console output
sns.set_style('white')
f, ax = plt.subplots(figsize=(8, 8))
sns.barplot(
ax=ax,
x='Pclass',
y='Survived',
hue='Sex',
data=df,
capsize=0.05
)
ax.set_title("Survival By Gender and Ticket Class")
ax.set_ylabel("Survival (%)")
ax.set_xlabel("")
ax.set_xticklabels(["First Class", "Second Class", "Third Class"])
None # Suppress console output
sns.set_style("whitegrid")
f, ax = plt.subplots(figsize=(12, 5))
ax = sns.distplot(
df.Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax
)
total, classes_count = float(df['Pclass'].shape[0]), df['Pclass'].value_counts()
proportions = list(map(lambda x: classes_count.loc[x]/total*100, [1, 2, 3]))
f, ax = plt.subplots(figsize=(8, 8))
ax.set_title('Proportion of Passengers By Class')
ax.pie(proportions, autopct='%1.1f%%', labels=['First Class', 'Second Class', 'Third Class'])
None # Removes console output
def probability(df, key_list):
    '''
    Finds the probability of surviving based on the parameters passed in key_list.
    The key_list input is structured like so:
        [Ticket Class, Sex]
    So for example, an input could be [1, 'male'].
    '''
pclass, sex = key_list
filtered_df = df[(df.Sex == sex) & (df.Pclass == pclass)]
return filtered_df['Survived'].mean()
##############################################################################################
sexes = df.Sex.unique()
ticket_classes = df.Pclass.unique()
probability_dict = dict()
for x in ticket_classes:
for y in sexes:
key = [x, y]
probability_dict[str(key)] = probability(df, key)
##############################################################################################
def make_guesses(df):
    '''
    Makes guesses on if the passengers survived or died.
    '''
guesses = list()
for passenger_index, row in df.iterrows():
# Find if the passenger survived.
survival_key = [row.Pclass, row.Sex]
survival_odds = probability_dict[str(survival_key)]
survived_rolls = list(map(lambda x: random.random() <= survival_odds, range(NUM_OF_ROLLS)))
survived = sum(survived_rolls) > NUM_OF_ROLLS/2
# Add the result to the guesses
guesses.append(survived)
return guesses
##############################################################################################
df['Guess'] = make_guesses(df)
df['CorrectGuess'] = df.Guess == df.Survived
df.head()
df.CorrectGuess.mean()
results = list()
for ii in range(10**2):
guesses = make_guesses(df)
correct_guesses = (df.Survived == guesses)
results.append(correct_guesses.mean())
sns.distplot(results, kde=False)
None
df.drop('Guess', axis=1, inplace=True)
df.drop('CorrectGuess', axis=1, inplace=True)
df.head()
f, ax = plt.subplots(figsize=(12, 8))
sns.distplot(
df.Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax
)
sns.distplot(
df[(df.Survived == 1)].Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax
)
None # Suppress console output.
f, ax = plt.subplots(2, figsize=(12, 8))
# Plot both sexes on different axes
for ii, sex in enumerate(['male', 'female']):
sns.distplot(
df[df.Sex == sex].Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax[ii]
)
sns.distplot(
df[(df.Survived == 1)&(df.Sex == sex)].Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax[ii]
)
None # Suppress console output.
f, ax = plt.subplots(2, figsize=(12, 8))
# Plot both sexes on different axes
for ii, sex in enumerate(['male', 'female']):
sns.distplot(
df[df.Sex == sex].Age.dropna().values, bins=range(0, 81, 5), kde=False,
axlabel='Age (Years)', ax=ax[ii]
)
sns.distplot(
df[(df.Survived == 1)&(df.Sex == sex)].Age.dropna().values, bins=range(0, 81, 5), kde=False,
axlabel='Age (Years)', ax=ax[ii]
)
None # Suppress console output.
survival_rates, survival_labels = list(), list()
for x in range(0, 90+5, 5):
aged_df = df[(x <= df.Age)&(df.Age <= x+5)]
survival_rate = aged_df['Survived'].mean()
    survival_rate = 0.5 if (survival_rate == 0.0 or survival_rate == 1.0) else survival_rate
    survival_rates.append(survival_rate)
survival_labels.append('(%i, %i]' % (x, x+5))
f, ax = plt.subplots(figsize=(12, 8))
ax = sns.barplot(x=survival_labels, y=survival_rates, ax=ax)
ax.set_xticklabels(ax.get_xticklabels(), rotation=50)
None # Suppress console output
def getProbability(passengerId, df):
    '''
    Finds the weighted probability of surviving based on the passenger's parameters.
    This function finds the passenger's information by looking for their id in the dataframe
    and extracting the information that it needs. Currently the probability is found using a
    weighted mean on the following parameters:
        - Pclass: Higher the ticket class the more likely they will survive.
        - Sex: Women on average had a higher chance of living.
        - Age: Infants and older people had a greater chance of living.
    '''
passenger = df.loc[passengerId]
# Survival rate based on sex and ticket class.
bySexAndClass = df[
(df.Sex == passenger.Sex) &
(df.Pclass == passenger.Pclass)
].Survived.mean()
# Survival rate based on sex and age.
byAge = df[
(df.Sex == passenger.Sex) &
((df.Age//5-1)*5 <= passenger.Age) & (passenger.Age <= (df.Age//5)*5)
].Survived.mean()
# Find the weighting for each of the rates.
parameters = [bySexAndClass, byAge]
rolls = [5, 4] # Roll numbers are hardcoded until I figure out the weighting system
probabilities = []
for Nrolls, prob in zip(rolls, parameters):
for _ in range(Nrolls):
probabilities += [prob]
return probabilities
##############################################################################################
def make_guesses(df):
    '''
    Makes guesses on if the passengers survived or died.
    '''
guesses = list()
for passenger_index, _row in df.iterrows():
# Find if the passenger survived.
survival_odds = getProbability(passenger_index, df)
roll_outcomes = []
for prob in survival_odds:
roll_outcomes += [random.random() <= prob]
survived = sum(roll_outcomes) > len(roll_outcomes)/2
# Add the result to the guesses
guesses.append(survived)
return guesses
##############################################################################################
df['Guess'] = make_guesses(df)
df['CorrectGuess'] = df.Guess == df.Survived
df.head()
df.CorrectGuess.mean()
results = list()
for ii in range(10**2):
guesses = make_guesses(df)
correct_guesses = (df.Survived == guesses)
results.append(correct_guesses.mean())
if ii % 10 == 0: print("%i/%i" % (ii, 10**2))
sns.distplot(results, kde=False)
None
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you should be able to see, there are a few different parameters we can consider in our models
Step2: From this visualisation you can see that only 40% of people survived on the Titanic. Thus if we assume that the testing data is similar to the training set (which is a very valid assumption) then we could simply predict that everyone died and we would still be right about 60% of the time. So already we can beat random chance! But we should be able to determine a better model than that anyway.
Step3: Wow! I expected some kind of a trend but nothing like this one. As you can see nearly all women survived in first class and the same trend is observed with second class. The women in third class survived more often than men but significantly less than the women in first and second class. The men in first class have a higher chance of surviving than the men in the second and third class, who have nearly identical chances.
Step4: Interestingly, there seems to be a non-normal distribution for the ages. There is a small spike for young children before having a right-skewed distribution centring around 25 years. A likely explanation for this behaviour would be that families bring their children on the voyage but teenagers are expected to be old enough to care for themselves if the parents went away.
Step7: 3. Building a Simple Model
Step8: 4. Evaluating the Model
Step9: Our guess is alright! On average I get around 75% guess accuracy but you might be seeing another number. My model works on probability so the number of correct guesses changes when rerun.
Step10: As you can see, the model is normally distributed about 75.5% and has a spread from 70% to 80% accuracy. That means we are getting the right answer 3/4 of the time! That isn't too bad but I know we can do better.
Step11: 5. Improving The Model By Including Age
Step12: Interestingly, it seems that children below 16 years have a really high chance of surviving as well as passengers above 50 years old. The worst survival rate is for passengers between 18 and 45 years.
Step13: This result supports what we found before, that females mostly survived over males, but it also provides some new insight. Notice that for male children their survival rate is still really high (<15 years) but is consistently low otherwise. As such you could tweak the model to say that children are much more likely to survive regardless of gender.
Step14: Our conclusion is supported! Now we have to figure out if we can include this in the model.
Step17: Now with these results visualised it should be easier to see. The survival rate for infants (<5 years) is quite high, while for men between 20 and 25 years it is only 35%. Anytime there is a 50% reading this is because there isn't enough information and you can only conclude that the probability matches random chance.
Step18: Currently the execution time for the below cell is really long because I haven't bothered to optimise it.
|
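`make_guesses` above decides survival with a best-of-NUM_OF_ROLLS vote over random draws; a minimal standalone sketch of that majority-vote trick (probabilities here are made up for illustration):

```python
import random

random.seed(42)  # reproducible illustration

def majority_vote(prob, n_rolls=3):
    '''Return True if the majority of n_rolls Bernoulli(prob) draws succeed.'''
    wins = sum(random.random() <= prob for _ in range(n_rolls))
    return wins > n_rolls / 2

# With p = 0.9, P(majority of 3) = 3 * 0.9**2 * 0.1 + 0.9**3 = 0.972,
# so the empirical rate should land near that value.
outcomes = [majority_vote(0.9) for _ in range(1000)]
print(sum(outcomes) / len(outcomes))
```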
11,287
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
xgrid = np.linspace(-3,3,50)
f1 = np.exp(-xgrid**2)
f2 = np.tanh(xgrid)
plt.figure(figsize=(8,6))
plt.plot(xgrid, f1, 'bo-')
plt.plot(xgrid, f2, 'ro-')
plt.title('Just a demo plot')
plt.grid()
plt.show()
from IPython.display import YouTubeVideo
YouTubeVideo('HrxX9TBj2zY')
# this will list all magic commands
%lsmagic
# also work in ls, cd, mkdir, etc
%pwd
%history
# this will execute and show the output of the program
%run ./hola_mundo.py
def naive_loop():
for i in range(100):
for j in range(100):
for k in range(100):
a = 1+1
return None
%timeit -n 10 naive_loop()
%time naive_loop()
%%bash
cd ..
ls
%%capture output
!ls
%%writefile myfile.txt
Holanda que talca!
!cat myfile.txt
!rm myfile.txt
from IPython.core.magic import register_cell_magic
@register_cell_magic
def cpp(line, cell):
    '''
    Compile, execute C++ code, and return the
    standard output.
    '''
# We first retrieve the current IPython interpreter instance.
ip = get_ipython()
# We define the source and executable filenames.
source_filename = '_temp.cpp'
program_filename = '_temp'
# We write the code to the C++ file.
with open(source_filename, 'w') as f:
f.write(cell)
# We compile the C++ code into an executable.
compile = ip.getoutput("g++ {0:s} -o {1:s}".format(
source_filename, program_filename))
# We execute the executable and return the output.
output = ip.getoutput('./{0:s}'.format(program_filename))
print('\n'.join(output))
%%cpp
#include<iostream>
int main()
{
std::cout << "Hello world!";
}
%load_ext cpp_ext
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table of Contents
Step2: IPython also comes with a sophisticated display system that lets us insert rich web elements in the notebook. Here you can see an example of how to add Youtube videos in a notebook
Step3: <div id='inside' />
Step4: lets you capture the standard output and error output of some code into a Python variable.
Step5: Writting our own magics!
Step7: To create a new cell magic, we create a function that takes a line (containing possible options) and a cell's contents as its arguments, and we decorate it with @register_cell_magic.
Step8: This cell magic is currently only available in your interactive session. To distribute it, you need to create an IPython extension. This is a regular Python module or package that extends IPython.
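As a hedged sketch of that extension mechanism (the module name `cpp_ext` and the placeholder body are illustrative, not the notebook's actual file), an extension module only needs a `load_ipython_extension` entry point that registers the magic:

```python
# cpp_ext.py -- a minimal IPython extension sketch.
# IPython imports this module on %load_ext and calls load_ipython_extension(ip).

def cpp(line, cell):
    """Placeholder cell-magic body; the real one compiles and runs C++."""
    return cell

def load_ipython_extension(ipython):
    # Register `cpp` as a cell magic, invoked as %%cpp.
    ipython.register_magic_function(cpp, magic_kind='cell', magic_name='cpp')
```

With the file on the path, `%load_ext cpp_ext` then makes `%%cpp` available in any session.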
|
11,288
|
<ASSISTANT_TASK:>
Python Code:
MODEL_NAME = 'auto-encoder-01'
TRAIN_DATA_FILES_PATTERN = 'data/data-*.csv'
RESUME_TRAINING = False
MULTI_THREADING = True
FEATURE_COUNT = 64
HEADER = ['key']
HEADER_DEFAULTS = [[0]]
UNUSED_FEATURE_NAMES = ['key']
CLASS_FEATURE_NAME = 'CLASS'
FEATURE_NAMES = []
for i in range(FEATURE_COUNT):
HEADER += ['x_{}'.format(str(i+1))]
FEATURE_NAMES += ['x_{}'.format(str(i+1))]
HEADER_DEFAULTS += [[0.0]]
HEADER += [CLASS_FEATURE_NAME]
HEADER_DEFAULTS += [['NA']]
print("Header: {}".format(HEADER))
print("Features: {}".format(FEATURE_NAMES))
print("Class Feature: {}".format(CLASS_FEATURE_NAME))
print("Unused Features: {}".format(UNUSED_FEATURE_NAMES))
def parse_csv_row(csv_row):
columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)
features = dict(zip(HEADER, columns))
for column in UNUSED_FEATURE_NAMES:
features.pop(column)
target = features.pop(CLASS_FEATURE_NAME)
return features, target
def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL,
skip_header_lines=0,
num_epochs=None,
batch_size=200):
shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False
print("")
print("* data input_fn:")
print("================")
print("Input file(s): {}".format(files_name_pattern))
print("Batch size: {}".format(batch_size))
print("Epoch Count: {}".format(num_epochs))
print("Mode: {}".format(mode))
print("Shuffle: {}".format(shuffle))
print("================")
print("")
file_names = tf.matching_files(files_name_pattern)
dataset = data.TextLineDataset(filenames=file_names)
dataset = dataset.skip(skip_header_lines)
if shuffle:
dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)
num_threads = multiprocessing.cpu_count() if MULTI_THREADING else 1
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row), num_parallel_calls=num_threads)
dataset = dataset.repeat(num_epochs)
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, target
features, target = csv_input_fn(files_name_pattern="")
print("Feature read from CSV: {}".format(list(features.keys())))
print("Target read from CSV: {}".format(target))
df_params = pd.read_csv("data/params.csv", header=0, index_col=0)
len(df_params)
df_params['feature_name'] = FEATURE_NAMES
df_params.head()
def standard_scaler(x, mean, stdv):
return (x-mean)/stdv
def maxmin_scaler(x, max_value, min_value):
return (x-min_value)/(max_value-min_value)
def get_feature_columns():
feature_columns = {}
# feature_columns = {feature_name: tf.feature_column.numeric_column(feature_name)
# for feature_name in FEATURE_NAMES}
for feature_name in FEATURE_NAMES:
feature_max = df_params[df_params.feature_name == feature_name]['max'].values[0]
feature_min = df_params[df_params.feature_name == feature_name]['min'].values[0]
normalizer_fn = lambda x: maxmin_scaler(x, feature_max, feature_min)
feature_columns[feature_name] = tf.feature_column.numeric_column(feature_name,
normalizer_fn=normalizer_fn
)
return feature_columns
print(get_feature_columns())
def autoencoder_model_fn(features, labels, mode, params):
feature_columns = list(get_feature_columns().values())
input_layer_size = len(feature_columns)
encoder_hidden_units = params.encoder_hidden_units
# decoder units are the reverse of the encoder units, without the middle layer (redundant)
decoder_hidden_units = encoder_hidden_units.copy()
decoder_hidden_units.reverse()
decoder_hidden_units.pop(0)
output_layer_size = len(FEATURE_NAMES)
he_initialiser = tf.contrib.layers.variance_scaling_initializer()
l2_regulariser = tf.contrib.layers.l2_regularizer(scale=params.l2_reg)
print("[{}]->{}-{}->[{}]".format(len(feature_columns)
,encoder_hidden_units
,decoder_hidden_units,
output_layer_size))
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
# input layer
input_layer = tf.feature_column.input_layer(features=features,
feature_columns=feature_columns)
# Adding Gaussian Noise to input layer
noisy_input_layer = input_layer + (params.noise_level * tf.random_normal(tf.shape(input_layer)))
# Dropout layer
dropout_layer = tf.layers.dropout(inputs=noisy_input_layer,
rate=params.dropout_rate,
training=is_training)
# # Dropout layer without Gaussian noising
# dropout_layer = tf.layers.dropout(inputs=input_layer,
# rate=params.dropout_rate,
# training=is_training)
# Encoder layers stack
encoding_hidden_layers = tf.contrib.layers.stack(inputs= dropout_layer,
layer= tf.contrib.layers.fully_connected,
stack_args=encoder_hidden_units,
#weights_initializer = he_init,
weights_regularizer =l2_regulariser,
activation_fn = tf.nn.relu
)
# Decoder layers stack
decoding_hidden_layers = tf.contrib.layers.stack(inputs=encoding_hidden_layers,
layer=tf.contrib.layers.fully_connected,
stack_args=decoder_hidden_units,
#weights_initializer = he_init,
weights_regularizer =l2_regulariser,
activation_fn = tf.nn.relu
)
# Output (reconstructed) layer
output_layer = tf.layers.dense(inputs=decoding_hidden_layers,
units=output_layer_size, activation=None)
# Encoding output (i.e., extracted features) reshaped
encoding_output = tf.squeeze(encoding_hidden_layers)
# Reconstruction output reshaped (for serving function)
reconstruction_output = tf.squeeze(tf.nn.sigmoid(output_layer))
# Provide an estimator spec for `ModeKeys.PREDICT`.
if mode == tf.estimator.ModeKeys.PREDICT:
# Return both the encoding and the reconstruction outputs
predictions = {
'encoding': encoding_output,
'reconstruction': reconstruction_output
}
export_outputs = {
'predict': tf.estimator.export.PredictOutput(predictions)
}
# Provide an estimator spec for `ModeKeys.PREDICT` modes.
return tf.estimator.EstimatorSpec(mode,
predictions=predictions,
export_outputs=export_outputs)
# Define loss based on reconstruction and regularization
# reconstruction_loss = tf.losses.mean_squared_error(tf.squeeze(input_layer), reconstruction_output)
# loss = reconstruction_loss + tf.losses.get_regularization_loss()
reconstruction_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.squeeze(input_layer), logits=tf.squeeze(output_layer))
loss = reconstruction_loss + tf.losses.get_regularization_loss()
# Create Optimiser
optimizer = tf.train.AdamOptimizer(params.learning_rate)
# Create training operation
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
# Calculate root mean squared error as additional eval metric
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(
tf.squeeze(input_layer), reconstruction_output)
}
# Provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes.
estimator_spec = tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
return estimator_spec
def create_estimator(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=autoencoder_model_fn,
params=hparams,
config=run_config)
print("")
print("Estimator Type: {}".format(type(estimator)))
print("")
return estimator
TRAIN_SIZE = 2000
NUM_EPOCHS = 1000
BATCH_SIZE = 100
NUM_EVAL = 10
TOTAL_STEPS = (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS
CHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))
hparams = tf.contrib.training.HParams(
num_epochs = NUM_EPOCHS,
batch_size = BATCH_SIZE,
encoder_hidden_units=[30,3],
learning_rate = 0.01,
l2_reg = 0.0001,
noise_level = 0.0,
max_steps = TOTAL_STEPS,
dropout_rate = 0.05
)
model_dir = 'trained_models/{}'.format(MODEL_NAME)
run_config = tf.contrib.learn.RunConfig(
save_checkpoints_steps=CHECKPOINT_STEPS,
tf_random_seed=19830610,
model_dir=model_dir
)
print(hparams)
print("Model Directory:", run_config.model_dir)
print("")
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", TOTAL_STEPS)
print("Required Evaluation Steps:", NUM_EVAL)
print("That is 1 evaluation step after each",NUM_EPOCHS/NUM_EVAL," epochs")
print("Save Checkpoint After",CHECKPOINT_STEPS,"steps")
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode = tf.contrib.learn.ModeKeys.TRAIN,
num_epochs=hparams.num_epochs,
batch_size=hparams.batch_size
),
max_steps=hparams.max_steps,
hooks=None
)
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode=tf.contrib.learn.ModeKeys.EVAL,
num_epochs=1,
batch_size=hparams.batch_size
),
# exporters=[tf.estimator.LatestExporter(
# name="encode", # the name of the folder in which the model will be exported to under export
# serving_input_receiver_fn=csv_serving_input_fn,
# exports_to_keep=1,
# as_text=True)],
steps=None,
hooks=None
)
if not RESUME_TRAINING:
print("Removing previous artifacts...")
shutil.rmtree(model_dir, ignore_errors=True)
else:
print("Resuming training...")
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
estimator = create_estimator(run_config, hparams)
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
import itertools
DATA_SIZE = 2000
input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode=tf.contrib.learn.ModeKeys.INFER,
num_epochs=1,
batch_size=500
)
estimator = create_estimator(run_config, hparams)
predictions = estimator.predict(input_fn=input_fn)
predictions = itertools.islice(predictions, DATA_SIZE)
predictions = list(map(lambda item: list(item["encoding"]), predictions))
print(predictions[:5])
y = pd.read_csv("data/data-01.csv", header=None, index_col=0)[65]
data_reduced = pd.DataFrame(predictions, columns=['c1','c2','c3'])
data_reduced['class'] = y
data_reduced.head()
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=data_reduced.c2/1000000, ys=data_reduced.c3/1000000, zs=data_reduced.c1/1000000, c=data_reduced['class'], marker='o')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Define Dataset Metadata
Step2: 2. Define CSV Data Input Function
Step3: 3. Define Feature Columns
Step4: b. Create normalized feature columns
Step5: 4. Define Autoencoder Model Function
Step6: 5. Run Experiment using Estimator Train_And_Evaluate
Step7: b. Define TrainSpec and EvalSpec
Step8: d. Run Experiment via train_and_evaluate
Step9: 6. Use the trained model to encode data (prediction)
Step10: Visualise Encoded Data
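The normalized feature columns above apply a min-max transform via `normalizer_fn`; a plain-Python sketch of that same scaling (independent of TensorFlow) is:

```python
def maxmin_scaler(x, max_value, min_value):
    # Rescale x linearly so that min_value -> 0.0 and max_value -> 1.0.
    return (x - min_value) / (max_value - min_value)

# Example: a feature observed in the range [10, 30].
scaled = [maxmin_scaler(v, 30.0, 10.0) for v in (10.0, 20.0, 30.0)]
```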
|
11,289
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'png'
import matplotlib
matplotlib.rcParams['figure.figsize'] = (8, 8)
import astropy.io.fits as fits
import numpy
spot50=fits.open('fits_data/electron_flux_fits/spot50.fits')
matplotlib.pyplot.imshow(numpy.log10(spot50[0].data),clim=(2,8))
matplotlib.pyplot.gca().invert_yaxis()
from httm.fits_utilities.electron_flux_fits import electron_flux_converter_from_fits
ideal_data = electron_flux_converter_from_fits('fits_data/electron_flux_fits/spot50.fits')
matplotlib.pyplot.imshow(numpy.log10(ideal_data.slices[0].pixels+1), clim=(0,8),interpolation='none')
matplotlib.pyplot.gca().invert_yaxis()
from httm.transformations.electron_flux_converters_to_raw import introduce_smear_rows
smeared=introduce_smear_rows(ideal_data)
matplotlib.pyplot.imshow(numpy.log10(list(smeared.slices)[0].pixels+1), clim=(0,8),interpolation='none')
matplotlib.pyplot.gca().invert_yaxis()
from httm.transformations.electron_flux_converters_to_raw import add_shot_noise
shot = add_shot_noise(smeared)
matplotlib.pyplot.imshow(numpy.log10(shot.slices[0].pixels+1), clim=(0,8),interpolation='none')
matplotlib.pyplot.gca().invert_yaxis()
from httm.transformations.electron_flux_converters_to_raw import simulate_blooming
bloomed = simulate_blooming(shot)
matplotlib.pyplot.imshow(numpy.log10(bloomed.slices[0].pixels+1), clim=(0,8),interpolation='none')
matplotlib.pyplot.gca().invert_yaxis()
from httm.transformations.electron_flux_converters_to_raw import add_readout_noise
noisy = add_readout_noise(bloomed)
matplotlib.pyplot.imshow(numpy.log10(noisy.slices[0].pixels+40), clim=(0,8),interpolation='none')
matplotlib.pyplot.gca().invert_yaxis()
from httm.transformations.electron_flux_converters_to_raw import simulate_undershoot
undershot=simulate_undershoot(noisy)
matplotlib.pyplot.imshow(numpy.log10(undershot.slices[0].pixels+150), clim=(0,8),interpolation='none')
matplotlib.pyplot.gca().invert_yaxis()
from httm.transformations.electron_flux_converters_to_raw import add_baseline
baselined = add_baseline(undershot)
matplotlib.pyplot.imshow(numpy.log10(baselined.slices[0].pixels-32000), clim=(0,8),interpolation='none')
matplotlib.pyplot.gca().invert_yaxis()
from httm.transformations.electron_flux_converters_to_raw import convert_electrons_to_adu
raw = convert_electrons_to_adu(baselined)
matplotlib.pyplot.imshow(numpy.log10(raw.slices[0].pixels-5099), clim=(0,4),interpolation='none')
matplotlib.pyplot.gca().invert_yaxis()
from httm.fits_utilities.raw_fits import write_raw_converter_to_calibrated_fits
write_raw_converter_to_calibrated_fits(raw,'spot50_raw.fits')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bringing in a FITS File
Step2: Here, we've got a 50 x 200 image. HTTM will assume this is four slices of 50 x 50 by default. We have 1000000 electrons in one pixel in the center of slice 0.
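A tiny helper (the function name is hypothetical, not part of HTTM) makes the assumed layout concrete: a 50 x 200 image splits into four slices of 50 columns each, so a pixel's slice index is simply its column divided by 50.

```python
def slice_of_column(col, cols_per_slice=50):
    # Map an image column to its slice index under the assumed layout.
    return col // cols_per_slice

# The bright pixel near the center of slice 0 sits in columns 0..49.
indices = [slice_of_column(c) for c in (0, 25, 49, 50, 199)]
```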
|
11,290
|
<ASSISTANT_TASK:>
Python Code:
import os
os.chdir("/home/archimedeas/wrkspc/anaconda/major_1/datasets/1_the_senate_datasets")
import pandas as pd
import numpy as np
from bokeh._legacy_charts import output_notebook, show
import bokeh
df = pd.read_csv("1_age_group_5yr_span.csv", index_col = 0)
df
df.shape
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (10.0, 8.0)
import numpy as np
import matplotlib.pyplot as plt
N = 5
menMeans = (20, 35, 30, 35, 27)
menStd = (2, 3, 4, 1, 2)
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, menMeans, width, color = 'red', alpha = 0.6)
womenMeans = (25, 32, 34, 20, 25)
womenStd = (3, 5, 2, 3, 3)
rects2 = ax.bar(ind + width, womenMeans, width )
# add some text for labels, title and axes ticks
ax.set_ylabel('Scores')
ax.set_title('Scores by group and gender')
ax.set_xticks(ind + width)
ax.set_xticklabels(('Frogs', 'Hogs', 'Bogs', 'Slogs', 'Kerebos'))
ax.legend((rects1[0], rects2[0]), ('Men', 'Women'))
plt.show()
import matplotlib.pyplot as plt
import pandas as pd
# The slices will be ordered and plotted counter-clockwise.
labels = 'Frogs', 'Hogs', 'Dogs', 'Logs'
sizes = [15, 30, 45, 10]
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral']
explode = (0.1, 0.1, 0.1, 0.1)
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=True, startangle=90)
# Set aspect ratio to be equal so that pie is drawn as a circle.
plt.axis('equal')
plt.show()
df_men = pd.read_csv("1_age_group_5yr_span.csv")
ls_labels_men = []
ls_values_men = []
for i in range(1,df_men.shape[0]):
ls_labels_men.append(str(df_men.iat[i,0]))
ls_values_men.append(float(df_men.iat[i,1]))
import numpy as np
import matplotlib.pyplot as plt
N = len(ls_labels_men)
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, ls_values_men, width, color = 'red', alpha = 0.6)
#rects2 = ax.bar(ind + width, ls_values_men, width )
# add some text for labels, title and axes ticks
ax.set_ylabel('Scores')
ax.set_title('Scores by group and gender')
ax.set_xticks(ind + width)
ax.set_xticklabels(ls_labels_men)
#ax.legend((rects1[0], rects2[0]), ('Men', 'Women'))
plt.show()
df_women = pd.read_csv("8_women_age_group.csv")
ls_labels_women = []
ls_values_women = []
for i in range(1,df_women.shape[0]):
ls_labels_women.append(str(df_women.iat[i,0]))
ls_values_women.append(float(df_women.iat[i,1]))
import numpy as np
import matplotlib.pyplot as plt
N = len(ls_labels_women)
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
#rects1 = ax.bar(ind, ls_values_women, width, color = 'red', alpha = 0.6)
rects2 = ax.bar(ind + width, ls_values_women, width )
# add some text for labels, title and axes ticks
ax.set_ylabel('Scores')
ax.set_title('Scores by group and gender')
ax.set_xticks(ind + width)
ax.set_xticklabels(ls_labels_women)
#ax.legend((rects1[0], rects2[0]), ('Men', 'Women'))
plt.show()
import pandas as pd
df_men = pd.read_csv("4_educational_background.csv")
df_men
ls_labels_men = []
ls_values_men = []
for i in range(1,df_men.shape[0]):
ls_labels_men.append(str(df_men.iat[i,0]))
ls_values_men.append(float(df_men.iat[i,1]))
len(ls_labels_men)
import matplotlib.pyplot as plt
import pandas as pd
plt.rcParams['figure.figsize'] = (10.0, 10.0)
# The slices will be ordered and plotted counter-clockwise.
labels = ls_labels_men
sizes = ls_values_men
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', 'red', 'lightgreen']
#explode = (0.1, 0.1, 0.1, 0.1, 0, 0)
p, text = plt.pie(sizes, labels=labels, colors = colors, shadow=True, startangle=90 )
# Set aspect ratio to be equal so that pie is drawn as a circle.
plt.axis('equal')
#plt.title("Educational Background", fontsize = 50, loc = 'right')
plt.legend(p, labels, loc= 'lower right')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Style sheet for Matplotlib
Step2: Sample Bar Plot
Step3: Sample Pie Plot
Step4: Average age of Men and Women
Step5: Now we have read the data from the csv files
Step6: Producing the same graph for women members
Step7: Dealing with Occupational Backgrounds
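The pie charts above let matplotlib compute slice percentages via `autopct='%1.1f%%'`; as a plain-Python sketch of the same arithmetic:

```python
def percent_shares(sizes):
    # Percentage share of each slice, rounded to one decimal place,
    # matching what autopct='%1.1f%%' would display.
    total = sum(sizes)
    return [round(100.0 * s / total, 1) for s in sizes]

shares = percent_shares([15, 30, 45, 10])
```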
|
11,291
|
<ASSISTANT_TASK:>
Python Code:
import qspectra as qs
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
electronic_fmo = np.array(np.mat(
12400 -87.7 5.5 -5.9 6.7 -13.7 -9.9;
-87.7 12520 30.8 8.2 0.7 11.8 4.3;
5.5 30.8 12200 -53.5 -2.2 -9.6 6.;
-5.9 8.2 -53.5 12310 -70.7 -17. -63.3;
6.7 0.7 -2.2 -70.7 12470 81.1 -1.3;
-13.7 11.8 -9.6 -17. 81.1 12620 39.7;
-9.9 4.3 6. -63.3 -1.3 39.7 12430
))
dipoles_fmo = np.array([d / np.linalg.norm(d) for d in
np.array([[3.019, 3.442, 0.797, 3.213, 2.969, 0.547, 1.983],
[2.284, -2.023, -3.871, 2.145, -2.642, 3.562, 2.837],
[1.506, 0.431, 0.853, 1.112, -0.661, -1.851, 2.015]]).T])
# Bath parameters for pseudomode bath -- fit to the Drude spectral density for FMO for 77K of Ishizaki and Fleming
# (each PM is represented by a Lorentzian at frequency Omega, with width gamma, and of strength huang
# in the bath correlation SPECTRUM, NOT spectral density)
Omega = [-500., -200., -90., 1., 21., 60., 80., 130., 200., 300., 400., 500., 600., 800., 1100., 1500.] # frequencies of PMs
huang = [-2.5133e-03, -7.5398e-03, -2.5133e-02, 5.0265e+01, 2.2619e+00, 4.5239e-02, 2.7646e-01,
9.2991e-03, 2.2619e-02, 1.5080e-02, 3.0159e-03,
3.5186e-03, 2.8274e-04, 1.7593e-03, 4.3982e-04, 4.3982e-04] # Huang-Rhys factors of PMs (couplings to PMs)
gamma = [500., 100., 50., 50., 50., 50., 80., 40., 80., 150., 200., 200., 80., 250., 200., 300.] # dampings of the PMs
n_sites = len(electronic_fmo)
numb_pm = len(Omega)
on = np.ones(n_sites, complex)
Omega = np.array([Omega[pm]*on for pm in range(numb_pm)])
huang = np.array([huang[pm]*on for pm in range(numb_pm)])
gamma = np.array([gamma[pm]*on for pm in range(numb_pm)])
fmo_hamiltonian = qs.ElectronicHamiltonian(electronic_fmo,
bath=qs.PseudomodeBath(numb_pm, Omega, gamma, huang),
dipoles=dipoles_fmo)
%%time
dynamical_model = qs.ZOFEModel(fmo_hamiltonian, hilbert_subspace='e', unit_convert=qs.CM_FS)
t, rho = qs.simulate_dynamics(dynamical_model, [1,0,0,0,0,0,0], 1000)
plt.title('Excitation from site 1')
plt.plot(t, np.einsum('tii->ti', rho.reshape(-1, 7, 7)).real)
plt.xlim(0, 1000)
plt.xlabel('Time (fs)')
plt.ylabel('Population')
plt.legend(['Site {}'.format(n) for n in range(1, 8)]);
%%time
pump = qs.GaussianPulse(12500, 50, scale=1e-3, freq_convert=qs.CM_FS)
dynamical_model = qs.ZOFEModel(fmo_hamiltonian, hilbert_subspace='ge', unit_convert=qs.CM_FS)
t, rho_oop_vec = qs.simulate_pump(dynamical_model, pump, 'x', 1000)
rho = np.array([dynamical_model.state_vec_to_operators(k)[0] for k in rho_oop_vec])
plt.title('Laser excitation')
plt.plot(t, 2 * pump(t, 12500).real, 'k--')
plt.plot(t, np.einsum('tii->ti', rho.reshape(-1, 8, 8)[:, 1:, 1:]).real)
plt.xlabel('Time (fs)')
plt.ylabel('Population')
plt.xlim(pump.t_init, 1000)
plt.legend(np.concatenate([['Laser (arb. units)'], ['Site {}'.format(n) for n in range(1, 8)]]));
%%time
f, X = qs.absorption_spectra(dynamical_model, 2000)
plt.plot(f, X)
plt.xlim(12000, 12800);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: FMO dynamics simulated with ZOFE master equation
Step2: Excited state dynamics
Step3: Gaussian pump pulse
Step4: Absorption spectra
|
11,292
|
<ASSISTANT_TASK:>
Python Code:
form_uid = 'HQnDRM'
#form_uid = 'iSEGWq'
typeform_api_key = '_API_KEY_'
url = 'https://api.typeform.com/v1/form/' + form_uid + '?key=' + typeform_api_key
import requests
response = requests.get(url)
results = response.json()
results
questions = results.get('questions')
questions
responses = results.get('responses')
responses
print('Total Responses: ' + str(len(responses)))
answers = [each['answers'] for each in responses]
for answer in answers:
for each in answer:
if 'rating_' in each:
answer[each] = int(answer[each])
print(answers)
print('Completion rate: ' + str(len(list(filter(lambda x: x, answers))) / len(answers)))
import pandas as pd
df = pd.DataFrame(answers)
df
df[df['list_53368385_choice'] == 'จตุจักร']
df[df['list_53368385_choice'] == 'คลองสาน']
df_questions = pd.DataFrame(questions)
df_questions
question_groups = list(filter(lambda x: type(x) == str, df_questions.group.unique()))
question_groups
for each_question_group in question_groups:
print('--- ' + df_questions[df_questions['id'] == each_question_group]['question'].iloc[0] + ' ---')
questions_in_group = df_questions[df_questions['group'] == each_question_group][['id', 'question']]
for idx, question in zip(questions_in_group.id, questions_in_group.question):
print(question + ': ' + str(df[idx].mean()))
print()
branch_column_name = 'list_53368385_choice'
branches = list(filter(lambda x: type(x) == str, df[branch_column_name].unique()))
branches
df[df[branch_column_name] == 'จตุจักร']
df_questions
questions_in_no_group = df_questions[
df_questions['group'].isnull() &
df_questions['id'].str.contains('rating_')
][['id', 'question']]
questions_in_no_group
df_questions[df_questions['group'] == 'group_53368461']
df_questions[df_questions['group'] == 'group_53368461'].shape[0]
charts = {}
for each_branch in branches:
branch = []
df_branch = df[df[branch_column_name] == each_branch]
for each_question_group in question_groups:
chart = {'data': []}
chart['id'] = each_question_group
chart['label'] = df_questions[df_questions['id'] == each_question_group]['question'].iloc[0] + '...'
questions_in_group = df_questions[df_questions['group'] == each_question_group][['id', 'question']]
for idx, question in zip(questions_in_group.id, questions_in_group.question):
chart['data'].append(
[
question,
df_branch[idx].mean()
]
)
branch.append(chart)
chart = {'data': []}
chart['id'] = 'group_others'
chart['label'] = 'Others'
questions_in_no_group = df_questions[df_questions['group'].isnull() & df_questions['id'].str.contains('rating_')][['id', 'question']]
for idx, question in zip(questions_in_no_group.id, questions_in_no_group.question):
chart['data'].append(
[
question,
df_branch[idx].mean()
]
)
branch.append(chart)
charts[each_branch] = branch
charts
#for each_question_group in question_groups:
# count = df_questions[df_questions['group'] == each_question_group].shape[0]
for each_branch in branches:
#for idx in range(count):
print(charts[each_branch][2]['data'])
print('------')
for idx, _ in enumerate(charts):
for each_branch in branches:
print(charts[each_branch][idx]['id'])
print('------')
scores = []
for each_branch in branches:
score_each_branch = []
for each in charts[each_branch][0]['data']:
score_each_branch.append(each[1])
scores.append(score_each_branch)
scores
list(zip(*scores))
list(map(list, zip(*scores)))
import numpy as np
np_scores = np.array(scores)
np_scores.T
a = [['xx', 8.0, 9.0],
['yy', 5.0, 7.0]]
a
b = list(zip(*a))
b
list(zip(*b))
x = [
('คำตอบคำถาม1-สาขา1', 'คำตอบคำถาม2-สาขา1', 'คำตอบคำถาม3-สาขา1'),
('คำตอบคำถาม1-สาขา2', 'คำตอบคำถาม2-สาขา2', 'คำตอบคำถาม3-สาขา2'),
('คำตอบคำถาม1-สาขา3', 'คำตอบคำถาม2-สาขา3', 'คำตอบคำถาม3-สาขา3'),
('คำตอบคำถาม1-สาขา4', 'คำตอบคำถาม2-สาขา4', 'คำตอบคำถาม3-สาขา4'),
]
x = [('คำถาม 1', 'คำถาม 2', 'คำถาม 3'), ] + x
list(map(list, zip(*x)))
df_answers = df
df_questions
df_answers
question_groups
branches
df_questions[df_questions['group'] == 'group_53368461'][['id', 'question']]
df_answers[df_answers[branch_column_name] == 'บางนา']
charts = []
# for each question group
# for questions in each question group
filtered_questions = df_questions[df_questions['group'] == 'group_53368461'][['id', 'question']]
for question_id, question in zip(filtered_questions.id, filtered_questions.question):
for idx, each in enumerate(branches):
if idx == 0:
chart = [question]
df_branch = df_answers[df_answers[branch_column_name] == each]
chart.append(df_branch[question_id].mean())
charts.append(chart)
charts
filtered_questions
df_branch
prefixes = ['list_', 'rating_', 'textfield_']
for each in questions:
if any([each['id'].startswith(prefix) for prefix in prefixes]):
try:
print(df[each['id']])
except KeyError:
print('No one answers..')
form_uid = 'iSEGWq'
typeform_api_key = '__API_KEY__'
url = 'https://api.typeform.com/v1/forms?key=' + typeform_api_key
response = requests.get(url)
results = response.json()
results
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convert the ratings to integers, since the rating values come back from the API as strings
Step2: Display each respondent's answers to check that the rating values are now integers
Step3: Use Pandas to help manage the data
Step4: All question groups
Step5: Scores "across all branches" from the answers to each question group
Step6: All branches
Step7: Scores "per branch" from the answers to each question group
Step8: Exporting the data as groups of questions makes it easier to render
Step9: If we can arrange the data like variable b, we can easily transpose it and export it for rendering
Step10: After looping over each question group, loop over each branch to build that branch's list of questions first
Step11: [IGNORE] Experimenting with some code
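The transpose trick applied to variable `b` above generalizes: `zip(*rows)` pairs up the i-th elements of every row, and mapping `list` over the result restores plain lists. A minimal sketch:

```python
def transpose(rows):
    # Swap rows and columns; zip(*rows) groups the i-th element of each row.
    return [list(col) for col in zip(*rows)]

a = [['xx', 8.0, 9.0],
     ['yy', 5.0, 7.0]]
b = transpose(a)           # columns of a become rows of b
round_trip = transpose(b)  # transposing twice returns the original shape
```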
|
11,293
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import platform
import numpy as np
import matplotlib.pyplot as plt
import flopy.modflow as mf
import flopy.utils as fu
import flopy.plot as fp
modelname = 'swiex1'
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
ml = mf.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
nlay = 1
nrow = 1
ncol = 50
delr = 5.
delc = 1.
nper, perlen, nstp = 1, 400., 200
discret = mf.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol,
delr=delr, delc=delc,
top=0, botm=-40.0,
steady=True, nper=nper, perlen=perlen, nstp=nstp)
ibound = np.ones((nrow, ncol))
ibound[0, -1] = -1
bas = mf.ModflowBas(ml, ibound=ibound, strt=0.0)
lpf = mf.ModflowLpf(ml, hk=2., laytyp=0, layavg=0)
wel = mf.ModflowWel(ml, stress_period_data = {0:[[0, 0, 0, 1]]} )
spd = {}
for istp in range(49, nstp+1, 50):
spd[(0, istp)] = ['save head', 'print budget']
spd[(0, istp+1)] = []
oc = mf.ModflowOc(ml, stress_period_data=spd)
pcg = mf.ModflowPcg(ml)
z = np.zeros((nrow, ncol), np.float)
z[0, 16:24] = np.arange(-2.5, -40, -5)
z[0, 24:] = -40
z = [z] # zeta needs to be a list (one array per surface)
isource = np.ones((nrow, ncol), np.int)
isource[0, 0] = 2
#
swi = mf.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=0.2, tipslope=0.2, nu=[0, 0.025],
zeta=z, ssz=0.2, isource=isource, nsolver=1)
ml.write_input()
ml.run_model(silent=True)
#--read model heads
hfile = fu.HeadFile(os.path.join(ml.model_ws, modelname+'.hds'))
head = hfile.get_alldata()
#--read model zeta
zfile = fu.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
plt.figure(figsize=(16,6))
# define x-values of xcells and plot interface
x = np.arange(0, ncol*delr, delr) + delr/2.
label = ['SWI2','_','_','_'] # labels with an underscore are not added to legend
for i in range(4):
zt = np.ma.masked_outside(zeta[i,0,0,:], -39.99999, -0.00001)
plt.plot(x, zt, 'r-', lw=1,
zorder=10, label=label[i])
# Data for the Wilson - Sa da Costa solution
k = 2.0
n = 0.2
nu = 0.025
H = 40.0
tzero = H * n / (k * nu) / 4.0
Ltoe = np.zeros(4)
v = 0.125
t = np.arange(100, 500, 100)
label = ['W & SD','_','_','_'] # labels with an underscore are not added to legend
for i in range(4):
Ltoe[i] = H * np.sqrt(k * nu * (t[i] + tzero) / n / H)
plt.plot([100 - Ltoe[i] + v * t[i], 100 + Ltoe[i] + v * t[i]], [0, -40], '0.75',
lw=8, zorder=0, label=label[i])
# Scale figure and add legend
plt.axis('scaled')
plt.xlim(0, 250)
plt.ylim(-40, 0)
plt.legend(loc='best');
X, Y = np.meshgrid(x, [0, -40])
zc = fp.SwiConcentration(model=ml)
conc = zc.calc_conc(zeta={0:zeta[3,:,:,:]}) / 0.025
print(conc[0, 0, :])
v = np.vstack((conc[0, 0, :], conc[0, 0, :]))
plt.imshow(v, extent=[0, 250, -40, 0], cmap='Reds')
cb = plt.colorbar(orientation='horizontal')
cb.set_label('percent seawater');
plt.contour(X, Y, v, [0.75, 0.5, 0.25], linewidths=[2, 1.5, 1], colors='black');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named ml and specify that this is a MODFLOW-2005 model.
Step2: Define the number of layers, rows and columns, and the cell size along the rows (delr) and along the columns (delc). Then create a discretization file. Specify the top and bottom of the aquifer. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There is one stress period with a length of 400 days and 200 steps (so one step is 2 days).
Step3: All cells are active (ibound=1), while the last cell is fixed head (ibound=-1). The starting values of the head are not important, as the heads are computed every time with a steady run.
Step4: Define the hydraulic conductivity. The aquifer is confined (laytype=0) and the intercell hydraulic conductivity is the harmonic meand (layavg=0).
Step5: Inflow on the right side of the model is 1 m$^3$/d (layer 0, row 0, column 0, discharge 1)
Step6: Define the output control to save heads and interface every 50 steps, and define the pcg solver with default arguments.
Step7: The initial interface is straight. The isource is one everywhere (inflow and outflow are fresh (zone 1)) except for the first cell (index=0), which has saltwater inflow (zone 2).
Step8: Write the MODFLOW input files and run the model
Step9: Load the head and zeta data from the file
Step10: Make a graph and add the solution of Wilson and Sa da Costa
Step11: Convert zeta surfaces to relative seawater concentrations
|
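The conversion in the record above relies on flopy's SwiConcentration helper. The underlying idea — map an interface elevation within a cell to a relative seawater fraction — can be sketched in plain Python; this is a hand-rolled stand-in, not flopy's API, and the linear-mapping and clamping behavior here are assumptions:

```python
def seawater_fraction(zeta, top, bot):
    """Fraction of a cell's thickness lying below the interface elevation zeta.

    Illustrative stand-in for flopy's SwiConcentration: zeta between bot and
    top maps linearly to [0, 1]; values outside the cell are clamped.
    """
    z = min(max(zeta, bot), top)   # clamp the interface into the cell
    return (z - bot) / (top - bot)

# A cell spanning -40 (bottom) to 0 (top); interface halfway up -> 0.5 seawater.
print(seawater_fraction(-20.0, top=0.0, bot=-40.0))  # 0.5
```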
11,294
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
trajectory = np.load('trajectory.npz')
x = trajectory['x']
y = trajectory['y']
t = trajectory['t']
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
newt = np.linspace(min(t),max(t),200)
sin_approx = interp1d(t, x, kind='cubic')
sin_approx2 = interp1d(t, y, kind='cubic')
newx = sin_approx(newt)
newy = sin_approx2(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
plt.plot(x, y, marker='o', linestyle='', label='original data')
plt.plot(newx, newy, marker='.', label='interpolated');
plt.legend(loc=4);
plt.xlabel('$x(t)$')
plt.ylabel('$y(t)$');
plt.xlim(-0.7, 0.9)
plt.ylim(-0.7,0.9)
assert True # leave this to grade the trajectory plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2D trajectory interpolation
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points
|
11,295
|
<ASSISTANT_TASK:>
Python Code:
from pattern.en import parsetree
s = parsetree('The cat sat on the mat.')
for sentence in s:
for chunk in sentence.chunks:
print chunk.type, [(w.string, w.type) for w in chunk.words]
import cPickle as pickle
author1 = pickle.load( open( "author1.p", "rb" ) )
author1[0]
for story in author1:
story["title_length"] = len(story["title"])
story["title_chunks"] = [chunk.type for chunk in parsetree(story["title"])[0].chunks]
story["title_chunks_length"] = len(story["title_chunks"])
author1[0]
import pandas as pd
df1 = pd.DataFrame.from_dict(author1)
df1.describe()
import difflib
print "Similarity scores for...\n"
print "Two identical sequences: ", difflib.SequenceMatcher(None,["A","B","C"],["A","B","C"]).ratio()
print "Two similar sequences: ", difflib.SequenceMatcher(None,["A","B","C"],["A","B","D"]).ratio()
print "Two completely different sequences: ", difflib.SequenceMatcher(None,["A","B","C"],["X","Y","Z"]).ratio()
v1 = author1[3]["title_chunks"]
v2 = author1[1]["title_chunks"]
print v1, v2, difflib.SequenceMatcher(None,v1,v2).ratio()
import numpy as np
chunks = [author["title_chunks"] for author in author1]
m = np.zeros((700,700))
for i, chunkx in enumerate(chunks):
for j, chunky in enumerate(chunks):
m[i][j] = difflib.SequenceMatcher(None,chunkx,chunky).ratio()
from sklearn.manifold import TSNE
tsne_model = TSNE(n_components=2, verbose=1, random_state=0)
tsne = tsne_model.fit_transform(m)
from sklearn.cluster import MiniBatchKMeans
kmeans_model = MiniBatchKMeans(n_clusters=5, init='k-means++', n_init=1,
init_size=1000, batch_size=1000, verbose=False, max_iter=1000)
kmeans = kmeans_model.fit(m)
kmeans_clusters = kmeans.predict(m)
kmeans_distances = kmeans.transform(m)
import bokeh.plotting as bp
from bokeh.models import HoverTool, BoxSelectTool
from bokeh.plotting import figure, show, output_notebook
colormap = np.array([
"#1f77b4", "#aec7e8", "#ff7f0e", "#ffbb78", "#2ca02c",
"#98df8a", "#d62728", "#ff9896", "#9467bd", "#c5b0d5",
"#8c564b", "#c49c94", "#e377c2", "#f7b6d2", "#7f7f7f",
"#c7c7c7", "#bcbd22", "#dbdb8d", "#17becf", "#9edae5"
])
output_notebook()
plot_author1 = bp.figure(plot_width=900, plot_height=700, title="Author1",
tools="pan,wheel_zoom,box_zoom,reset,hover,previewsave",
x_axis_type=None, y_axis_type=None, min_border=1)
plot_author1.scatter(x=tsne[:,0], y=tsne[:,1],
color=colormap[kmeans_clusters],
source=bp.ColumnDataSource({
"chunks": [x["title_chunks"] for x in author1],
"title": [x["title"] for x in author1],
"cluster": kmeans_clusters
}))
hover = plot_author1.select(dict(type=HoverTool))
hover.tooltips={"chunks": "@chunks (title: \"@title\")", "cluster": "@cluster"}
show(plot_author1)
author2 = pickle.load( open( "author2.p", "rb" ) )
for story in author2:
story["title_length"] = len(story["title"])
story["title_chunks"] = [chunk.type for chunk in parsetree(story["title"])[0].chunks]
story["title_chunks_length"] = len(story["title_chunks"])
pd.DataFrame.from_dict(author2).describe()
chunks_joint = [author["title_chunks"] for author in (author1+author2)]
m_joint = np.zeros((1400,1400))
for i, chunkx in enumerate(chunks_joint):
for j, chunky in enumerate(chunks_joint):
sm=difflib.SequenceMatcher(None,chunkx,chunky)
m_joint[i][j] = sm.ratio()
set1= [author["title_chunks"] for author in author1]
set2= [author["title_chunks"] for author in author2]
list_new = [itm for itm in set1 if itm in set2]
len(list_new)
tsne_joint = tsne_model.fit_transform(m_joint)
plot_joint = bp.figure(plot_width=900, plot_height=700, title="Author1 vs. Author2",
tools="pan,wheel_zoom,box_zoom,reset,hover,previewsave",
x_axis_type=None, y_axis_type=None, min_border=1)
plot_joint.scatter(x=tsne_joint[:,0], y=tsne_joint[:,1],
color=colormap[([0] * 700 + [3] * 700)],
source=bp.ColumnDataSource({
"chunks": [x["title_chunks"] for x in author1] + [x["title_chunks"] for x in author2],
"title": [x["title"] for x in author1] + [x["title"] for x in author2]
}))
hover = plot_joint.select(dict(type=HoverTool))
hover.tooltips={"chunks": "@chunks (title: \"@title\")"}
show(plot_joint)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's see an example
Step2: Loading the data
Step3: Parsing the data
Step4: Let's see what the numeric attributes for headlines written by this author look like. We're going to use Pandas for this.
Step5: From this information, we're going to extract the chunk type sequence of each headline (i.e. the first level of the parse tree) and use it as an indicator of the overall structure of the headline. So in the above example, we would extract and use the following sequence of chunk types in our analysis
Step6: Now let's see how that works with our chunk type sequences, for two randomly selected headlines from the first author
Step7: Pair-wise similarity matrix for the headlines
Step8: Visualization
Step9: And to add a bit of color to our visualization, let's use K-Means to identify 5 clusters of similar headlines, which we will use in our visualization
Step10: Finally let's plot the actual chart using Bokeh
Step11: The above interactive chart shows a number of dense groups of headlines, as well as some sparse ones. Some of the dense groups that stand out more are
Step12: The basic stats don't show a significant difference between the headlines written by the two authors.
Step13: Now that we have analyzed the headlines for the second author, let's see how many common patterns exist between the two authors
Step14: We observe that about 50% (347/700) of the headlines have a similar structure.
|
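The similarity measure used throughout the record above is difflib.SequenceMatcher.ratio(), which works on any sequence of hashable elements, not just characters — which is why it can compare chunk-type sequences directly. A minimal sketch (the chunk-type labels are illustrative):

```python
from difflib import SequenceMatcher

def chunk_similarity(a, b):
    """Ratio in [0, 1] of matching elements between two chunk-type
    sequences, as the record above uses to compare headline structures."""
    return SequenceMatcher(None, a, b).ratio()

identical = chunk_similarity(["NP", "VP", "PP"], ["NP", "VP", "PP"])
similar   = chunk_similarity(["NP", "VP", "PP"], ["NP", "VP", "NP"])
disjoint  = chunk_similarity(["NP", "VP", "PP"], ["ADJP", "ADVP", "INTJ"])
print(identical, similar, disjoint)  # 1.0, ~0.667, 0.0
```

ratio() is 2*M/T, where M is the number of matched elements and T the total length of both sequences, so two length-3 sequences sharing a 2-chunk prefix score 4/6.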
11,296
|
<ASSISTANT_TASK:>
Python Code::
import pandas as pd
import boto3
from io import StringIO
# Set your AWS credentials and region
aws_access_key_id = 'access_key_id'
aws_secret_access_key = 'secret_access_key'
aws_region = 'aws_region'
# Set the S3 bucket and file path
s3_bucket = 's3_bucket_name'
s3_file_path = 'data.csv'
# Create a session with boto3
session = boto3.Session(
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
region_name=aws_region
)
# Create an S3 client
s3 = session.client('s3')
# Read the CSV file from S3 into a pandas dataframe
s3_object = s3.get_object(Bucket=s3_bucket, Key=s3_file_path)
s3_data = s3_object['Body'].read().decode('utf-8')
df = pd.read_csv(StringIO(s3_data))
# Print the dataframe
print(df.head())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
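The pandas/boto3 record above boils down to: fetch bytes from S3, decode them to text, and parse that text as CSV from memory. The decode-then-parse step can be shown with only the standard library — the sample data is made up, and csv.DictReader plays the role that pd.read_csv(StringIO(...)) plays in the record:

```python
import csv
from io import StringIO

# Stand-in for the S3 body: in the record above, s3_data comes from
# s3.get_object(...)['Body'].read().decode('utf-8').
s3_data = "name,score\nalice,0.9\nbob,0.7\n"

# Parse the decoded CSV text directly from memory, no temp file needed.
rows = list(csv.DictReader(StringIO(s3_data)))
print(rows[0]["name"], rows[1]["score"])  # alice 0.7
```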
11,297
|
<ASSISTANT_TASK:>
Python Code:
%%writefile hello.py
#!/usr/bin/env python
def printHello():
print "Hello World"
print "File Loaded"
cat mystuff.ipynb
2+4
%timeit range(1000)
%%timeit x = range(10000)
max(x)
%lsmagic
ls
%matplotlib inline
%%capture capt
from __future__ import print_function
import sys
print('Hello stdout')
print('and stderr', file=sys.stderr)
capt.stdout, capt.stderr
capt.show()
%%script python
import sys
print 'hello from Python %s' % sys.version
%%bash
echo "hello from $BASH"
%%script ./lnum.py
my first line
my second
more
a = 3
b = 4
a + b
a*b
a - b
_
___
_49
Out[62]
_i
In[50]
!python --version
!ping www.google.com
import time
time.sleep(10)
import sys
from ctypes import CDLL
# This will crash a Linux or Mac system; equivalent calls can be made on Windows
dll = 'dylib' if sys.platform == 'darwin' else 'so.6'
libc = CDLL("libc.%s" % dll)
libc.time(-1) # BOOM!!
!python --version
!ipython --version
%load http://matplotlib.sourceforge.net/mpl_examples/pylab_examples/integral_demo.py
%pdb
!ipython nbconvert mystuff.ipynb --to pdf
#Note that I had to install cython to get this to work.
# try doing 'conda update cython' if you get an error
%load_ext Cython
%%cython
cimport numpy
cpdef cysum(numpy.ndarray[double] A):
"""Compute the sum of an array"""
cdef double a=0
for i in range(A.shape[0]):
a += A[i]
return a
def pysum(A):
"""Compute the sum of an array"""
a = 0
for i in range(A.shape[0]):
a += A[i]
return a
import numpy as np
for sz in (100, 1000, 10000):
A = np.random.random(sz)
print("==>Python %i" % sz, end=' ')
%timeit pysum(A)
print("==>np.sum %i" % sz, end=' ')
%timeit A.sum()
print("==>Cython %i" % sz, end=' ')
%timeit cysum(A)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ipython shell
Step2: Ok, backup, how do I interact with the notebook and what is it?
Step3: So, you exectued this by hitting the 'play' button in the tool bar or you used 'shift-enter'. Some other ways
Step4: My favs...
Step5: MUCHS INFOS
Step6: Magics for running code under other interpreters
Step7: Exercise
Step8: Hints
Step9: Running Shell Commands
Step10: Managing the IPython Kernel
Step11: If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via
Step12: Side note on versions
Step13: I promised you latex!
Step14: Debugging
Step15: Lots of useful features in the Python debugger (and could probably do with a separate lecture). We'll save that for another day...
Step18: The %cython magic
|
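The %%cython record above compares a pure-Python loop against compiled code. The same contrast can be sketched without Cython by timing the Python loop against the built-in sum, which also runs its loop in C much as the cysum version does; sizes and repeat counts here are arbitrary:

```python
import timeit

def pysum(A):
    """Pure-Python accumulation loop, the baseline %%cython speeds up."""
    a = 0.0
    for x in A:
        a += x
    return a

A = [float(i) for i in range(10000)]

# Same left-to-right float additions, so both give identical results;
# only where the loop runs (bytecode vs C) differs.
t_py = timeit.timeit(lambda: pysum(A), number=50)
t_c = timeit.timeit(lambda: sum(A), number=50)
print(f"pysum: {t_py:.4f}s   builtin sum: {t_c:.4f}s")
```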
11,298
|
<ASSISTANT_TASK:>
Python Code:
import json
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
g = nx.karate_club_graph()
plt.figure(figsize=(6,4));
nx.draw(g)
from networkx.readwrite import json_graph
data = json_graph.node_link_data(g)
with open('graph.json', 'w') as f:
json.dump(data, f, indent=4)
%%html
<div id="d3-example"></div>
<style>
.node {stroke: #fff; stroke-width: 1.5px;}
.link {stroke: #999; stroke-opacity: .6;}
</style>
%%javascript
// We load the d3.js library from the Web.
require.config({paths: {d3: "http://d3js.org/d3.v3.min"}});
require(["d3"], function(d3) {
// The code in this block is executed when the
// d3.js library has been loaded.
// First, we specify the size of the canvas containing
// the visualization (size of the <div> element).
var width = 300,
height = 300;
// We create a color scale.
var color = d3.scale.category10();
// We create a force-directed dynamic graph layout.
var force = d3.layout.force()
.charge(-120)
.linkDistance(30)
.size([width, height]);
// In the <div> element, we create a <svg> graphic
// that will contain our interactive visualization.
var svg = d3.select("#d3-example").select("svg")
if (svg.empty()) {
svg = d3.select("#d3-example").append("svg")
.attr("width", width)
.attr("height", height);
}
// We load the JSON file.
d3.json("graph.json", function(error, graph) {
// In this block, the file has been loaded
// and the 'graph' object contains our graph.
// We load the nodes and links in the force-directed
// graph.
force.nodes(graph.nodes)
.links(graph.links)
.start();
// We create a <line> SVG element for each link
// in the graph.
var link = svg.selectAll(".link")
.data(graph.links)
.enter().append("line")
.attr("class", "link");
// We create a <circle> SVG element for each node
// in the graph, and we specify a few attributes.
var node = svg.selectAll(".node")
.data(graph.nodes)
.enter().append("circle")
.attr("class", "node")
.attr("r", 5) // radius
.style("fill", function(d) {
// The node color depends on the club.
return color(d.club);
})
.call(force.drag);
// The name of each node is the node number.
node.append("title")
.text(function(d) { return d.name; });
// We bind the positions of the SVG elements
// to the positions of the dynamic force-directed graph,
// at each time step.
force.on("tick", function() {
link.attr("x1", function(d) { return d.source.x; })
.attr("y1", function(d) { return d.source.y; })
.attr("x2", function(d) { return d.target.x; })
.attr("y2", function(d) { return d.target.y; });
node.attr("cx", function(d) { return d.x; })
.attr("cy", function(d) { return d.y; });
});
});
});
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We load a famous social graph published in 1977, called Zachary's Karate club graph. This graph represents the friendships between members of a Karate Club. The club's president and the instructor were involved in a dispute, resulting in a split-up of this group. Here, we simply display the graph with matplotlib (using networkx.draw()).
Step2: Now, we're going to display this graph in the notebook with d3.js. The first step is to bring this graph to Javascript. We choose here to export the graph in JSON. Note that d3.js generally expects each edge to be an object with a source and a target. Also, we specify which side each member has taken (club attribute).
Step3: The next step is to create an HTML object that will contain the visualization. Here, we create a <div> element in the notebook. We also specify a few CSS styles for nodes and links (also called edges).
Step4: The last step is trickier. We write the JavaScript code to load the graph from the JSON file, and display it with d3.js. Knowing the basics of d3.js is required here (see the documentation of d3.js). We also give detailed explanations in the code comments below. (http
|
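The JSON written via json_graph.node_link_data in the record above follows d3's node-link convention: a "nodes" list of objects and a "links" list of source/target pairs. A hand-built miniature of that shape, using only the standard library — the three-node graph and club labels are illustrative:

```python
import json

# The node-link shape d3.js expects, built by hand for a tiny graph.
edges = [(0, 1), (1, 2)]
clubs = {0: "Mr. Hi", 1: "Mr. Hi", 2: "Officer"}

data = {
    "nodes": [{"name": n, "club": clubs[n]} for n in sorted(clubs)],
    "links": [{"source": s, "target": t} for s, t in edges],
}

text = json.dumps(data, indent=4)   # what would be written to graph.json
roundtrip = json.loads(text)
print(len(roundtrip["nodes"]), len(roundtrip["links"]))  # 3 2
```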
11,299
|
<ASSISTANT_TASK:>
Python Code:
from IPython.parallel import Client
rc = Client()
print "Using cluster with %d engines." % len(rc.ids)
%%px --local
import os
import os.path
from PIL import Image
src_dir = "/kaggle/retina/sample" # source directory of images to resize
trg_dir = "/kaggle/retina/resized" # target directory of the resized images
prefix = "resized_" # string to prepend to the resized file name
hsize = 256 # horizontal size of the resized image
vsize = 256 # vertical size of the resized image
all_files = filter(lambda x: x.endswith(".jpeg"), os.listdir(src_dir))
filename = all_files[0]
filepath = os.path.join(src_dir, filename)
%timeit -n1 -r1 Image.open(filepath)
im = Image.open(filepath)
%timeit im.resize((hsize, vsize))
resized_im = im.resize((hsize, vsize), Image.NEAREST)
%timeit im.resize((hsize, vsize), Image.LANCZOS)
if not os.path.exists(trg_dir):
os.makedirs(trg_dir)
resized_filepath = os.path.join(trg_dir, prefix + filename)
%timeit -n1 -r1 resized_im.save(resized_filepath, "JPEG", quality = 100)
%%px --local
def resize_method(filename, method):
filepath = os.path.join(src_dir, filename)
im = Image.open(filepath)
resized_im = im.resize((hsize, vsize), method)
resized_filepath = os.path.join(trg_dir, prefix + filename)
resized_im.save(resized_filepath, "JPEG", quality = 100)
def resize_NEAREST(filename):
resize_method(filename, Image.NEAREST)
def resize_LANCZOS(filename):
resize_method(filename, Image.LANCZOS)
%timeit -n1 -r1 map(resize_NEAREST, all_files)
%timeit -n1 -r1 map(resize_LANCZOS, all_files)
v = rc[:]
%timeit -n1 -r1 v.map_sync(resize_NEAREST, all_files)
%timeit -n1 -r1 v.map_sync(resize_LANCZOS, all_files)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us set up imports and parameters in one place for the cluster and the local engine
Step2: Load an image
Step3: Resize the image with default downsampling
Step4: The LANCZOS anti-aliasing method is recommended for downsampling by the PIL tutorial, but is much slower
Step5: Save the resized image.
Step6: Let us define functions that do the above in one go for all files
Step7: For quick and dirty experiments we can use the default downsampling.
Step8: Since the processing is dominated by the CPU-bound resizing we can benefit from parallelization
|