2,900
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%matplotlib notebook
from astropy.time import Time
from threeML import *
get_available_plugins()
gbm_catalog = FermiGBMBurstCatalog()
gbm_catalog.cone_search(ra=0,dec=0,radius=10)
gbm_catalog.search_around_source('Crab',radius=5)
gbm_catalog.query("2<t90<10")
mjd = Time(['2010-01-01T00:00:00','2011-01-01T00:00:00']).mjd
gbm_catalog.query(' %f <trigger_time < %f' % tuple(mjd))
gbm_catalog.query("pflx_band_alpha < -2./3. & pflx_band_ergflux > 1E-5")
gbm_catalog.query_sources("GRB080916009",'GRB100204024')
gbm_catalog.result[['t90','t50']]
det_info = gbm_catalog.get_detector_information()
det_info['GRB080916009']
gbm_data = download_GBM_trigger_data(trigger_name=det_info['GRB080916009']['trigger'],
detectors=det_info['GRB080916009']['detectors'],
destination_directory='gbm/bn080916009',
compress_tte=True)
interval = 'fluence'
models = gbm_catalog.get_model(model=det_info['GRB080916009']['best fit model'][interval],interval=interval)
models['GRB080916009']
det = 'n3'
nai3 = FermiGBMTTELike(name=det,
source_intervals=det_info['GRB080916009']['source']['fluence'],
background_selections=det_info['GRB080916009']['background']['full'],
tte_file=gbm_data[det]['tte'],
rsp_file=gbm_data[det]['rsp'])
det = 'n4'
nai4 = FermiGBMTTELike(name=det,
source_intervals=det_info['GRB080916009']['source']['fluence'],
background_selections=det_info['GRB080916009']['background']['full'],
tte_file=gbm_data[det]['tte'],
rsp_file=gbm_data[det]['rsp'])
det = 'b0'
bgo0 = FermiGBMTTELike(name=det,
source_intervals=det_info['GRB080916009']['source']['fluence'],
background_selections=det_info['GRB080916009']['background']['full'],
tte_file=gbm_data[det]['tte'],
rsp_file=gbm_data[det]['rsp'])
nai3.set_active_measurements('8-900')
nai4.set_active_measurements('8-900')
bgo0.set_active_measurements('250-42000')
threeML_config['gbm']['selection color'] = 'r'
nai3.view_lightcurve(stop=100)
data_list = DataList(nai3,nai4,bgo0)
jl = JointLikelihood(models['GRB080916009'], data_list=data_list)
# Now we can fit
res = jl.fit()
_ = display_ogip_model_counts(jl,min_rate=5,step=False)
lle_catalog = FermiLLEBurstCatalog()
lle_catalog.cone_search(0,0,30)
lle_catalog.query_sources('GRB080916009')
lle_catalog.result.loc['GRB080916009']['trigger_name']
lle_data_info = download_LLE_trigger_data(lle_catalog.result.loc['GRB080916009']['trigger_name'],destination_directory='lat')
lle_data_info
lle = FermiLATLLELike('LLE',
lle_file=lle_data_info['lle'],
rsp_file=lle_data_info['rsp'],
ft2_file=lle_data_info['ft2'],
background_selections=det_info['GRB080916009']['background']['full'],
source_intervals=det_info['GRB080916009']['source']['fluence']
)
lle.view_lightcurve(stop=100)
lle.set_active_measurements('50000-100000')
data_list = DataList(nai3,nai4,bgo0,lle)
jl = JointLikelihood(models['GRB080916009'], data_list=data_list)
# Now we can fit
res = jl.fit()
_ = display_ogip_model_counts(jl,min_rate=5,step=False)
swift_catalog = SwiftGRBCatalog()
swift_catalog.cone_search(0,0,10)
swift_catalog.query('"2005-09-22T15:02:00.257060" < trigger_time < "2005-10-22T15:02:00.257060"')
swift_catalog.query('redshift > 7')
swift_catalog.get_redshift()
swift_catalog.get_redshift()['reference']
swift_catalog.other_observing_instruments
swift_catalog.query_other_observing_instruments('Integral','Chandra')
other_instruments = swift_catalog.get_other_instrument_information()
other_instruments
other_instruments['Fermi-GBM']
swift_catalog.get_other_observation_information()
cleanup_downloaded_GBM_data(gbm_data)
cleanup_downloaded_LLE_data(lle_data_info)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: GBM catalog
Step2: Searching cones on the sky
Step3: Or perhaps we want to know the GRBs around the Crab nebula
Step4: Searching on T90 and time
Step5: We can also search on MJD or UTC. The GBM catalog uses MJD to index time. We can use astropy's time module to convert from UTC to MJD
Step6: Searching on parameter space
Step7: Searching for triggers
Step8: Let's look at what a search can give us
Step9: To automate or repeat an analysis, it is good to know the values of source/background selections that were used in the catalog. These are extracted into a dictionary.
Step10: Let's examine what we have for GRB080916C.
Step11: Downloading data
Step12: We now have our data locally downloaded from the HEASARC servers. This gets the most current version of the files and prefers RSP2 to RSP. You can of course use other methods to obtain earlier versions or different DRM types.
Step13: 3ML Analysis
Step14: Model setup
Step15: LLE Catalog
Step16: We can perform cone, time, and trigger searches just as with the GBM catalog. Currently, no T90 searches are available.
Step17: Downloading LLE data
Step18: Analysis with GBM and LLE data
Step19: Swift Catalog
Step20: Cone, time, T90, and trigger searches
Step21: Searching redshift (z) and other observations
Step22: Great! Now let's get those redshifts and learn more
Step23: We can even simply click on the links and pull up the reference when we call that column
Step24: Other observations
Step25: Let's look at all those that are also seen by Integral and/or Chandra
Step26: We can then obtain the information from all the observing instruments from this (or any other) search
Step27: In particular, for Fermi-GBM, we can view the trigger number. In the future, other instruments will have this feature
Step28: Other observations
Step29: Clean up
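The UTC-to-MJD conversion that astropy performs in Step5 can be sketched with the standard library alone. This is only a sketch: the epoch constant below is the defining property of MJD, and everything else is plain date arithmetic; astropy's `Time` additionally handles leap seconds and time scales that this ignores.

```python
from datetime import datetime

# MJD 0 corresponds to 1858-11-17 00:00 UTC by definition.
MJD_EPOCH = datetime(1858, 11, 17)

def utc_to_mjd(iso_string):
    """Convert an ISO-format UTC timestamp to Modified Julian Date."""
    delta = datetime.fromisoformat(iso_string) - MJD_EPOCH
    return delta.days + delta.seconds / 86400.0

print(utc_to_mjd('2010-01-01T00:00:00'))  # 55197.0
```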
|
2,901
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Demonstrating vector properties
# Idea: add more dimensions to the vectors and see what happens!
x = np.array([4, 3])
y = np.array([3, 4])
print x
print y
print x + y # Vector addition
print 10 * x # Scalar multiplication
print np.linalg.norm(y), np.linalg.norm(x) # Vector magnitudes
print np.linalg.norm(y-x), np.linalg.norm(x-y) # Euclidean distance
# Demonstrating vector properties
# Idea: add more dimensions to the vectors and see what happens!
x = np.array([-40, -30, 30, 40])
y = np.array([3, 4, -3, -4])
z = np.array([0, 1, -6, -7])
print np.mean(x), np.mean(y), np.mean(z) # Means
print np.var(x), np.var(y), np.var(z) # Variances
x = np.array([4, 3, 2, 4, 3.2, 1, 2, 90, 1, 2, 3, 4])
x_norm = (x - np.mean(x))/np.sqrt(np.var(x))
print np.mean(x), np.mean(x_norm)
print np.var(x), np.var(x_norm)
import mir3.modules.tool.wav2spectrogram as spectrogram
import mir3.modules.features.flatness as flatness
fnames = ['audio/tabla.wav', 'audio/bbking.wav', 'audio/chorus.wav']
flat_samples = []
for fname in fnames:
wav2spec = spectrogram.Wav2Spectrogram() # Object that converts wav files to spectrograms
s = wav2spec.convert(open(fname, 'rb'), window_length=1024, window_step=512, spectrum_type='magnitude')
fness = flatness.Flatness()
f = fness.calc_track(s)
flat_samples.append(f.data)
plt.figure();
plt.hist(flat_samples, 15, normed=1, histtype='bar',
color=['red', 'blue', 'green'],
label=['Tabla', 'Guitarra', 'Coral']);
plt.xlabel('Flatness');
plt.ylabel('Quantidade de quadros');
plt.legend(loc=1);
m = []
s =[]
for a in xrange(3):
m.append(np.mean(flat_samples[a]))
s.append(np.var(flat_samples[a]))
print fnames[a], np.mean(flat_samples[a]), np.var(flat_samples[a])
color=['red', 'blue', 'green']
label=['Tabla', 'Guitarra', 'Coral']
plt.figure();
for a in xrange(len(m)):
plt.scatter(m[a], s[a], color=color[a], label=label[a])
plt.xlabel('Media de flatness')
plt.ylabel('Variancia de flatness')
plt.legend(label, loc=4);
plt.show()
wav2spec = spectrogram.Wav2Spectrogram() # Object that converts wav files to spectrograms
spec = wav2spec.convert(open('audio/testing.wav', 'rb'), window_length=1024, window_step=512, spectrum_type='magnitude')
fness = flatness.Flatness()
f = fness.calc_track(spec)
m.append(np.mean(f.data))
s.append(np.var(f.data))
color=['red', 'blue', 'green', 'black']
label=['Tabla', 'Guitarra', 'Coral', 'Teste']
plt.figure();
for a in xrange(len(m)):
plt.scatter(m[a], s[a], color=color[a], label=label[a])
plt.xlabel('Media de flatness')
plt.ylabel('Variancia de flatness')
plt.legend(label, loc=4);
plt.show()
for i in xrange(3):
print label[i], np.linalg.norm( np.array([s[-1]-s[i], m[-1]-m[i]]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Statistics and vectors
Step2: Normalizing vectors
Step3: Statistical analysis of audio descriptors
Step4: At this point, we can spot some interesting behaviors
Step5: It is important to note that this procedure lets us relate audio files to a vector with as many dimensions as desired (we could extend the procedure to include other descriptors, for example).
Step6: Generalization
Step7: We can see that the test point appears closer to the Tabla vector than to the others. We can compute that distance
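The nearest-class decision described in Step7 can be sketched as follows. The (mean, variance) feature values here are hypothetical stand-ins for the ones computed from the audio files in the code above:

```python
import numpy as np

# Hypothetical (mean, variance) flatness features for the three known classes.
classes = {'Tabla': (0.30, 0.010), 'Guitarra': (0.10, 0.002), 'Coral': (0.05, 0.001)}
test_point = np.array([0.28, 0.009])  # hypothetical feature vector of the test file

# Classify by the smallest Euclidean distance in the (mean, variance) plane.
distances = {name: np.linalg.norm(test_point - np.array(p)) for name, p in classes.items()}
nearest = min(distances, key=distances.get)
print(nearest)  # Tabla
```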
|
2,902
|
<ASSISTANT_TASK:>
Python Code:
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import random
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random
# Helper functions to read the generated .v and .vhd files into Python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog module from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL module from {loc}.vhd***\n\n', VerilogText)
return VerilogText
#use type casting on a list generator to store 0-9 in 8-bit binary
TupleROM=tuple([bin(i, 8) for i in range(10)])
TupleROM
f'access location 6: {TupleROM[6]}, read contents of location 6 as dec: {int(TupleROM[6], 2)}'
#TupleROM[6]=bin(16,2)
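A minimal sketch of why the commented-out write above fails (using the built-in one-argument `bin`, not myHDL's two-argument `bin`): tuples are immutable, so a tuple-backed ROM is genuinely read-only.

```python
rom = tuple(bin(i) for i in range(10))  # built-in bin, a stand-in for myHDL's bin(i, 8)
try:
    rom[6] = bin(16)  # attempting a write, as the commented-out line above would
except TypeError as err:
    print('write rejected:', err)
```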
@block
def ROMLoaded(addr, dout):
"""A ROM loaded with data already encoded in the structure
instead of using myHDL's enhanced parameter loading
I/O:
addr (Signal>4): address; range is from 0-3
dout (Signal>4): data at each address
"""
@always_comb
def readAction():
if addr==0:
dout.next=3
elif addr==1:
dout.next=2
elif addr==2:
dout.next=1
elif addr==3:
dout.next=0
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
DUT=ROMLoaded(addr, dout)
def ROMLoaded_TB():
"""Python-only testbench for `ROMLoaded`"""
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMLoaded_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
DUT.convert()
VerilogTextReader('ROMLoaded');
@block
def ROMLoaded_TBV():
"""Verilog-only testbench for `ROMLoaded`"""
clk = Signal(bool(0))
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
DUT=ROMLoaded(addr, dout)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(10)
@instance
def stimules():
for i in range(3+1):
addr.next=i
#yield delay(1)
yield clk.posedge
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout)
return instances()
#create an instance of the TB
TB=ROMLoaded_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#read back the testbench results
VerilogTextReader('ROMLoaded_TBV');
@block
def ROMParmLoad(addr, dout, CONTENT):
"""A ROM loaded with data from the CONTENT input tuple
I/O:
addr (Signal>4): address; range is from 0-3
dout (Signal>4): data at each address
Parm:
CONTENT: tuple of size 4 whose contents must be no larger than 4 bits
"""
@always_comb
def readAction():
dout.next=CONTENT[int(addr)]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoad(addr, dout, CONTENT)
def ROMParmLoad_TB():
"""Python-only testbench for `ROMParmLoad`"""
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMParmLoad_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Peeker.to_dataframe()
DUT.convert()
VerilogTextReader('ROMParmLoad');
@block
def ROMParmLoad_TBV():
"""Verilog-only testbench for `ROMParmLoad`"""
clk=Signal(bool(0))
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoad(addr, dout, CONTENT)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(3+1):
addr.next=i
yield clk.posedge
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout)
return instances()
#create an instance of the TB
TB=ROMParmLoad_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#read back the testbench results
VerilogTextReader('ROMParmLoad_TBV');
@block
def ROMParmLoadSync(addr, dout, clk, rst, CONTENT):
"""A synchronous ROM loaded with data from the CONTENT input tuple
I/O:
addr (Signal>4): address; range is from 0-3
dout (Signal>4): data at each address
clk (bool): clock feed
rst (bool): reset
Parm:
CONTENT: tuple of size 4 whose contents must be no larger than 4 bits
"""
@always(clk.posedge)
def readAction():
if rst:
dout.next=0
else:
dout.next=CONTENT[int(addr)]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)
def ROMParmLoadSync_TB():
"""Python-only testbench for `ROMParmLoadSync`"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(3+1):
yield clk.posedge
addr.next=i
for i in range(4):
yield clk.posedge
rst.next=1
addr.next=i
raise StopSimulation()
return instances()
sim = Simulation(DUT, ROMParmLoadSync_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
ROMData=Peeker.to_dataframe()
#keep only clock high
ROMData=ROMData[ROMData['clk']==1]
ROMData.drop(columns='clk', inplace=True)
ROMData.reset_index(drop=True, inplace=True)
ROMData
DUT.convert()
VerilogTextReader('ROMParmLoadSync');
@block
def ROMParmLoadSync_TBV():
"""Verilog-only testbench for `ROMParmLoadSync`"""
addr=Signal(intbv(0)[4:])
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
rst=Signal(bool(0))
CONTENT=tuple([i for i in range(4)][::-1])
DUT=ROMParmLoadSync(addr, dout, clk, rst, CONTENT)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(3+1):
yield clk.posedge
addr.next=i
for i in range(4):
yield clk.posedge
rst.next=1
addr.next=i
raise StopSimulation
@always(clk.posedge)
def print_data():
print(addr, dout, rst)
return instances()
#create an instance of the TB
TB=ROMParmLoadSync_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#read back the testbench results
VerilogTextReader('ROMParmLoadSync_TBV');
@block
def SeqROMEx(clk, rst, dout):
"""Sequential read-only memory example
I/O:
clk (bool): clock
rst (bool): reset for the counter
dout (Signal>4): data out
"""
Count=Signal(intbv(0)[3:])
@always(clk.posedge)
def counter():
if rst:
Count.next=0
elif Count==3:
Count.next=0
else:
Count.next=Count+1
@always(clk.posedge)
def Memory():
if Count==0:
dout.next=3
elif Count==1:
dout.next=2
elif Count==2:
dout.next=1
elif Count==3:
dout.next=0
return instances()
Peeker.clear()
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=SeqROMEx(clk, rst, dout)
def SeqROMEx_TB():
"""Python-only testbench for `SeqROMEx`"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(5+1):
yield clk.posedge
for i in range(4):
yield clk.posedge
rst.next=1
raise StopSimulation()
return instances()
sim = Simulation(DUT, SeqROMEx_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
SROMData=Peeker.to_dataframe()
#keep only clock high
SROMData=SROMData[SROMData['clk']==1]
SROMData.drop(columns='clk', inplace=True)
SROMData.reset_index(drop=True, inplace=True)
SROMData
DUT.convert()
VerilogTextReader('SeqROMEx');
@block
def SeqROMEx_TBV():
"""Verilog-only testbench for `SeqROMEx`"""
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
rst=Signal(bool(0))
DUT=SeqROMEx(clk, rst, dout)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(5+1):
yield clk.posedge
for i in range(4):
yield clk.posedge
rst.next=1
raise StopSimulation()
@always(clk.posedge)
def print_data():
print(clk, rst, dout)
return instances()
#create an instance of the TB
TB=SeqROMEx_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#read back the testbench results
VerilogTextReader('SeqROMEx_TBV');
@block
def RAMConcur(addr, din, writeE, dout, clk):
"""Random-access read/write memory
I/O:
addr (Signal>4): the memory cell address
din (Signal>4): data to write into memory
writeE (bool): write-enable control; False is read-only
dout (Signal>4): the data out
clk (bool): clock
Note:
this is only a 4-byte memory
"""
#create the memory list (1D array)
memory=[Signal(intbv(0)[4:]) for i in range(4)]
@always(clk.posedge)
def writeAction():
if writeE:
memory[addr].next=din
@always_comb
def readAction():
dout.next=memory[addr]
return instances()
Peeker.clear()
addr=Signal(intbv(0)[4:]); Peeker(addr, 'addr')
din=Signal(intbv(0)[4:]); Peeker(din, 'din')
writeE=Signal(bool(0)); Peeker(writeE, 'writeE')
dout=Signal(intbv(0)[4:]); Peeker(dout, 'dout')
clk=Signal(bool(0)); Peeker(clk, 'clk')
CONTENT=tuple([i for i in range(4)][::-1])
DUT=RAMConcur(addr, din, writeE, dout, clk)
def RAMConcur_TB():
"""Python-only testbench for `RAMConcur`"""
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
# do nothing
for i in range(1):
yield clk.posedge
#write memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
# rewrite memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[-i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
raise StopSimulation()
return instances()
sim = Simulation(DUT, RAMConcur_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
RAMData=Peeker.to_dataframe()
RAMData=RAMData[RAMData['clk']==1]
RAMData.drop(columns='clk', inplace=True)
RAMData.reset_index(drop=True, inplace=True)
RAMData
RAMData[RAMData['writeE']==1]
RAMData[RAMData['writeE']==0]
DUT.convert()
VerilogTextReader('RAMConcur');
@block
def RAMConcur_TBV():
"""Verilog-only testbench for `RAMConcur`"""
addr=Signal(intbv(0)[4:])
din=Signal(intbv(0)[4:])
writeE=Signal(bool(0))
dout=Signal(intbv(0)[4:])
clk=Signal(bool(0))
CONTENT=tuple([i for i in range(4)][::-1])
DUT=RAMConcur(addr, din, writeE, dout, clk)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
# do nothing
for i in range(1):
yield clk.posedge
#write memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
# rewrite memory
for i in range(4):
yield clk.posedge
writeE.next=True
addr.next=i
din.next=CONTENT[-i]
#do nothing
for i in range(1):
yield clk.posedge
writeE.next=False
#read memory
for i in range(4):
yield clk.posedge
addr.next=i
raise StopSimulation()
@always(clk.posedge)
def print_data():
print(addr, din, writeE, dout, clk)
return instances()
#create an instance of the TB
TB=RAMConcur_TBV()
#convert to Verilog with initial values
TB.convert(hdl="Verilog", initial_values=True)
#read back the testbench results
VerilogTextReader('RAMConcur_TBV');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: RTL and implementation schematics are from Xilinx Vivado 2016.1
Step2: And if we try writing to the tuple we will get an error
Step4: Random and Sequential Access Memory
Step6: myHDL Testing
Step7: Verilog Code
Step9: ROMLoaded RTL
Step11: ROM Preloaded via Parameter
Step13: myHDL Testing
Step14: Verilog Code
Step16: ROMParmLoad RTL
Step19: we can also create rom that insted of being asynchronous is synchronous
Step23: ROMParmLoadSync RTL
Step25: SeqROMEx RTL
Step28: read and write memory
Step30: RAMConcur RTL
|
2,903
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
model.fit(X_train, y_train)
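A hedged follow-up, not part of the original snippet: once fitted, the classifier can be scored on the held-out split. `make_classification` below is a synthetic stand-in, since the original X and y are not defined in this snippet.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; the original X and y come from elsewhere.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)  # mean accuracy on the held-out split
print(f'test accuracy: {accuracy:.3f}')
```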
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
2,904
|
<ASSISTANT_TASK:>
Python Code:
from google.colab import auth
auth.authenticate_user()
!pip install -U earthengine-api --no-deps
import ee
ee.Authenticate()
ee.Initialize()
import tensorflow as tf
print(tf.__version__)
import folium
print(folium.__version__)
# REPLACE WITH YOUR CLOUD PROJECT!
PROJECT = 'your-project'
# Cloud Storage bucket with training and testing datasets.
DATA_BUCKET = 'ee-docs-demos'
# Output bucket for trained models. You must be able to write into this bucket.
OUTPUT_BUCKET = 'your-bucket'
# This is a good region for hosting AI models.
REGION = 'us-central1'
# Training and testing dataset file names in the Cloud Storage bucket.
TRAIN_FILE_PREFIX = 'Training_demo'
TEST_FILE_PREFIX = 'Testing_demo'
file_extension = '.tfrecord.gz'
TRAIN_FILE_PATH = 'gs://' + DATA_BUCKET + '/' + TRAIN_FILE_PREFIX + file_extension
TEST_FILE_PATH = 'gs://' + DATA_BUCKET + '/' + TEST_FILE_PREFIX + file_extension
# The labels, consecutive integer indices starting from zero, are stored in
# this property, set on each point.
LABEL = 'landcover'
# Number of label values, i.e. number of classes in the classification.
N_CLASSES = 3
# Use Landsat 8 surface reflectance data for predictors.
L8SR = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
# Use these bands for prediction.
BANDS = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']
# These names are used to specify properties in the export of
# training/testing data and to define the mapping between names and data
# when reading into TensorFlow datasets.
FEATURE_NAMES = list(BANDS)
FEATURE_NAMES.append(LABEL)
# List of fixed-length features, all of which are float32.
columns = [
tf.io.FixedLenFeature(shape=[1], dtype=tf.float32) for k in FEATURE_NAMES
]
# Dictionary with feature names as keys, fixed-length features as values.
FEATURES_DICT = dict(zip(FEATURE_NAMES, columns))
print('Found training file.' if tf.io.gfile.exists(TRAIN_FILE_PATH)
else 'No training file found.')
print('Found testing file.' if tf.io.gfile.exists(TEST_FILE_PATH)
else 'No testing file found.')
# Create a dataset from the TFRecord file in Cloud Storage.
train_dataset = tf.data.TFRecordDataset([TRAIN_FILE_PATH, TEST_FILE_PATH],
compression_type='GZIP')
# Print the first record to check.
print(iter(train_dataset).next())
def parse_tfrecord(example_proto):
"""The parsing function.
Read a serialized example into the structure defined by FEATURES_DICT.
Args:
example_proto: a serialized Example.
Returns:
A tuple of the predictors dictionary and the LABEL, cast to an `int32`.
"""
parsed_features = tf.io.parse_single_example(example_proto, FEATURES_DICT)
labels = parsed_features.pop(LABEL)
return parsed_features, tf.cast(labels, tf.int32)
# Map the function over the dataset.
parsed_dataset = train_dataset.map(parse_tfrecord, num_parallel_calls=4)
from pprint import pprint
# Print the first parsed record to check.
pprint(iter(parsed_dataset).next())
# Inputs as a tuple. Make predictors 1x1xP and labels 1x1xN_CLASSES.
def to_tuple(inputs, label):
return (tf.expand_dims(tf.transpose(list(inputs.values())), 1),
tf.expand_dims(tf.one_hot(indices=label, depth=N_CLASSES), 1))
input_dataset = parsed_dataset.map(to_tuple)
# Check the first one.
pprint(iter(input_dataset).next())
input_dataset = input_dataset.shuffle(128).batch(8)
from tensorflow import keras
# Define the layers in the model. Note the 1x1 kernels.
model = tf.keras.models.Sequential([
tf.keras.layers.Input((None, None, len(BANDS),)),
tf.keras.layers.Conv2D(64, (1,1), activation=tf.nn.relu),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Conv2D(N_CLASSES, (1,1), activation=tf.nn.softmax)
])
# Compile the model with the specified loss and optimizer functions.
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Fit the model to the training data. Lucky number 7.
model.fit(x=input_dataset, epochs=7)
MODEL_DIR = 'gs://' + OUTPUT_BUCKET + '/demo_pixel_model'
model.save(MODEL_DIR, save_format='tf')
from tensorflow.python.tools import saved_model_utils
meta_graph_def = saved_model_utils.get_meta_graph_def(MODEL_DIR, 'serve')
inputs = meta_graph_def.signature_def['serving_default'].inputs
outputs = meta_graph_def.signature_def['serving_default'].outputs
# Just get the first thing(s) from the serving signature def. i.e. this
# model only has a single input and a single output.
input_name = None
for k,v in inputs.items():
input_name = v.name
break
output_name = None
for k,v in outputs.items():
output_name = v.name
break
# Make a dictionary that maps Earth Engine outputs and inputs to
# AI Platform inputs and outputs, respectively.
import json
input_dict = "'" + json.dumps({input_name: "array"}) + "'"
output_dict = "'" + json.dumps({output_name: "output"}) + "'"
print(input_dict)
print(output_dict)
# Put the EEified model next to the trained model directory.
EEIFIED_DIR = 'gs://' + OUTPUT_BUCKET + '/eeified_pixel_model'
# You need to set the project before using the model prepare command.
!earthengine set_project {PROJECT}
!earthengine model prepare --source_dir {MODEL_DIR} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict}
MODEL_NAME = 'pixel_demo_model'
VERSION_NAME = 'v0'
!gcloud ai-platform models create {MODEL_NAME} \
--project {PROJECT} \
--region {REGION}
!gcloud ai-platform versions create {VERSION_NAME} \
--project {PROJECT} \
--region {REGION} \
--model {MODEL_NAME} \
--origin {EEIFIED_DIR} \
--framework "TENSORFLOW" \
--runtime-version=2.3 \
--python-version=3.7
# Cloud masking function.
def maskL8sr(image):
cloudShadowBitMask = ee.Number(2).pow(3).int()
cloudsBitMask = ee.Number(2).pow(5).int()
qa = image.select('pixel_qa')
mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(
qa.bitwiseAnd(cloudsBitMask).eq(0))
return image.updateMask(mask).select(BANDS).divide(10000)
# The image input data is a 2018 cloud-masked median composite.
image = L8SR.filterDate('2018-01-01', '2018-12-31').map(maskL8sr).median()
# Get a map ID for display in folium.
rgb_vis = {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3, 'format': 'png'}
mapid = image.getMapId(rgb_vis)
# Turn into an array image for input to the model.
array_image = image.float().toArray()
# Point to the model hosted on AI Platform. If you specified a region other
# than the default (us-central1) at model creation, specify it here.
model = ee.Model.fromAiPlatformPredictor(
projectName=PROJECT,
modelName=MODEL_NAME,
version=VERSION_NAME,
# Can be anything, but don't make it too big.
inputTileSize=[8, 8],
# Keep this the same as your training data.
proj=ee.Projection('EPSG:4326').atScale(30),
fixInputProj=True,
# Note the names here need to match what you specified in the
# output dictionary you passed to the EEifier.
outputBands={'output': {
'type': ee.PixelType.float(),
'dimensions': 1
}
},
)
# model.predictImage outputs a one dimensional array image that
# packs the output nodes of your model into an array. These
# are class probabilities that you need to unpack into a
# multiband image with arrayFlatten(). If you want class
# labels, use arrayArgmax() as follows.
predictions = model.predictImage(array_image)
probabilities = predictions.arrayFlatten([['bare', 'veg', 'water']])
label = predictions.arrayArgmax().arrayGet([0]).rename('label')
# Get map IDs for display in folium.
probability_vis = {
'bands': ['bare', 'veg', 'water'], 'max': 0.5, 'format': 'png'
}
label_vis = {
'palette': ['red', 'green', 'blue'], 'min': 0, 'max': 2, 'format': 'png'
}
probability_mapid = probabilities.getMapId(probability_vis)
label_mapid = label.getMapId(label_vis)
# Visualize the input imagery and the predictions.
map = folium.Map(location=[37.6413, -122.2582], zoom_start=11)
folium.TileLayer(
tiles=mapid['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='median composite',
).add_to(map)
folium.TileLayer(
tiles=label_mapid['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='predicted label',
).add_to(map)
folium.TileLayer(
tiles=probability_mapid['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='probability',
).add_to(map)
map.add_child(folium.LayerControl())
map
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Upgrade Earth Engine and Authenticate
Step2: Test the TensorFlow installation
Step3: Test the Folium installation
Step4: Define variables
Step5: Read data
Step6: Read into a tf.data.Dataset
Step8: Parse the dataset
Step9: Note that each record of the parsed dataset contains a tuple. The first element of the tuple is a dictionary with bands names for keys and tensors storing the pixel data for values. The second element of the tuple is tensor storing the class label.
Step10: Model setup
Step11: Save the trained model
Step12: EEification
Step13: Run the EEifier
Step14: Deploy and host the EEified model on AI Platform
Step15: Connect to the hosted model from Earth Engine
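The reshaping done by `to_tuple` in the code above can be sketched in plain NumPy, standing in for the TensorFlow ops: stack the band values into a 1x1xP "pixel" and one-hot encode the label. The record below is a hypothetical stand-in using three of the six bands.

```python
import numpy as np

N_CLASSES = 3
record = {'B2': 0.1, 'B3': 0.2, 'B4': 0.3}  # hypothetical parsed record
label = 1

# 1x1xP pixel of predictors and a 1x1xN_CLASSES one-hot label, as to_tuple produces.
pixel = np.array(list(record.values()), dtype=np.float32).reshape(1, 1, -1)
one_hot = np.eye(N_CLASSES, dtype=np.float32)[label].reshape(1, 1, N_CLASSES)
print(pixel.shape, one_hot.shape)  # (1, 1, 3) (1, 1, 3)
```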
|
2,905
|
<ASSISTANT_TASK:>
Python Code:
import os, tempfile
import logging
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from ionchannelABC import theoretical_population_size
from ionchannelABC import IonChannelDistance, EfficientMultivariateNormalTransition, IonChannelAcceptor
from ionchannelABC.experiment import setup
from ionchannelABC.visualization import plot_sim_results, plot_kde_matrix_custom
import myokit
from pyabc import Distribution, RV, History, ABCSMC
from pyabc.epsilon import MedianEpsilon
from pyabc.sampler import MulticoreEvalParallelSampler, SingleCoreSampler
from pyabc.populationstrategy import ConstantPopulationSize
from experiments.isus_wang import wang_act_and_kin
from experiments.isus_courtemanche import courtemanche_deact
from experiments.isus_firek import firek_inact
from experiments.isus_nygren import (nygren_inact_kin,
nygren_rec)
modelfile = 'models/courtemanche_isus.mmt'
from ionchannelABC.visualization import plot_variables
sns.set_context('talk')
V = np.arange(-100, 50, 0.01)
cou_par_map = {'ri': 'isus.a_inf',
'si': 'isus.i_inf',
'rt': 'isus.tau_a',
'st': 'isus.tau_i'}
f, ax = plot_variables(V, cou_par_map, modelfile, figshape=(2,2))
observations, model, summary_statistics = setup(modelfile,
firek_inact,
nygren_inact_kin,
nygren_rec)
assert len(observations)==len(summary_statistics(model({})))
g = plot_sim_results(modelfile,
firek_inact,
nygren_inact_kin,
nygren_rec)
limits = {'isus.q1': (-200, 200),
'isus.q2': (1e-7, 50),
'log_isus.q3': (-1, 4),
'isus.q4': (-200, 200),
'isus.q5': (1e-7, 50),
'isus.q6': (-200, 0),
'isus.q7': (1e-7, 50)}
prior = Distribution(**{key: RV("uniform", a, b - a)
for key, (a,b) in limits.items()})
# Test this works correctly with set-up functions
assert len(observations) == len(summary_statistics(model(prior.rvs())))
db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "courtemanche_isus_igate_unified.db"))
pop_size = theoretical_population_size(2, len(limits))
print("Theoretical minimum population size is {} particles".format(pop_size))
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=IonChannelDistance(
exp_id=list(observations.exp_id),
variance=list(observations.variance),
delta=0.05),
population_size=ConstantPopulationSize(1000),
summary_statistics=summary_statistics,
transitions=EfficientMultivariateNormalTransition(),
eps=MedianEpsilon(initial_epsilon=100),
sampler=MulticoreEvalParallelSampler(n_procs=16),
acceptor=IonChannelAcceptor())
obs = observations.to_dict()['y']
obs = {str(k): v for k, v in obs.items()}
abc_id = abc.new(db_path, obs)
history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)
history = History('sqlite:///results/courtemanche/isus/unified/courtemanche_isus_igate_unified.db')
history.all_runs() # most recent is the relevant run
df, w = history.get_distribution()
df.describe()
sns.set_context('poster')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14
g = plot_sim_results(modelfile,
firek_inact,
nygren_inact_kin,
nygren_rec,
df=df, w=w)
plt.tight_layout()
import pandas as pd
N = 100
cou_par_samples = df.sample(n=N, weights=w, replace=True)
cou_par_samples = cou_par_samples.set_index([pd.Index(range(N))])
cou_par_samples = cou_par_samples.to_dict(orient='records')
sns.set_context('talk')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14
f, ax = plot_variables(V, cou_par_map,
'models/courtemanche_isus.mmt',
[cou_par_samples],
figshape=(2,2))
plt.tight_layout()
m,_,_ = myokit.load(modelfile)
originals = {}
for name in limits.keys():
if name.startswith("log"):
name_ = name[4:]
else:
name_ = name
val = m.value(name_)
if name.startswith("log"):
val_ = np.log10(val)
else:
val_ = val
originals[name] = val_
sns.set_context('paper')
g = plot_kde_matrix_custom(df, w, limits=limits, refval=originals)
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initial set-up
Step2: Plot steady-state and tau functions of original model
Step3: Activation gate ($a$) calibration
Step4: Set up prior ranges for each parameter in the model.
Step5: Run ABC calibration
Step6: Analysis of results
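The prior construction in Step 4 amounts to one independent uniform distribution per parameter. A minimal sketch without pyabc (the two ranges below are copied from the limits dictionary; the rest is illustration):

```python
import random

# Each parameter draws uniformly from its (lower, upper) range; pyabc's
# RV("uniform", a, b - a) uses the same loc/scale convention as scipy.
limits = {'isus.q1': (-200, 200), 'isus.q2': (1e-7, 50)}  # subset of the real ranges
rng = random.Random(0)
theta = {k: a + rng.random() * (b - a) for k, (a, b) in limits.items()}
```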
|
2,906
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import sys
# In ../testing/ dir, we prepared a small script for generating the above matrix A
sys.path.append('../testing/')
import make_LosMatrix
# Import GPinv
import GPinv
n = 30
N = 40
# radial coordinate
r = np.linspace(0, 1., n)
# synthetic latent function
f = np.exp(-(r-0.3)*(r-0.3)/0.1) + np.exp(-(r+0.3)*(r+0.3)/0.1)
# plotting the latent function
plt.figure(figsize=(5,3))
plt.plot(r, f)
plt.xlabel('$r$: Radial coordinate')
plt.ylabel('$f$: Function value')
# los height
z = np.linspace(-0.9,0.9, N)
# Los-matrix
A = make_LosMatrix.make_LosMatrix(r, z)
# noise amplitude
e_amp = 0.1
# synthetic observation
y = np.dot(A, f) + e_amp * np.random.randn(N)
plt.figure(figsize=(5,3))
plt.plot(z, y, 'o', [-1,1],[0,0], '--k', ms=5)
plt.xlabel('$z$: Los-height')
plt.ylabel('$y$: Observation')
class AbelLikelihood(GPinv.likelihoods.Likelihood):
def __init__(self, Amat):
GPinv.likelihoods.Likelihood.__init__(self)
self.Amat = GPinv.param.DataHolder(Amat)
self.variance = GPinv.param.Param(np.ones(1), GPinv.transforms.positive)
def logp(self, F, Y):
Af = self.sample_F(F)
Y = tf.tile(tf.expand_dims(Y, 0), [tf.shape(F)[0],1,1])
return GPinv.densities.gaussian(Af, Y, self.variance)
def sample_F(self, F):
N = tf.shape(F)[0]
Amat = tf.tile(tf.expand_dims(self.Amat,0), [N, 1,1])
Af = tf.batch_matmul(Amat, tf.exp(F))
return Af
def sample_Y(self, F):
f_sample = self.sample_F(F)
return f_sample + tf.random_normal(tf.shape(f_sample)) * tf.sqrt(self.variance)
model_stvgp = GPinv.stvgp.StVGP(r.reshape(-1,1), y.reshape(-1,1),
kern = GPinv.kernels.RBF_csym(1,1),
mean_function = GPinv.mean_functions.Constant(1),
likelihood=AbelLikelihood(A),
num_samples=10)
# Data Y should scatter around the transform F of the GP function f.
sample_F = model_stvgp.sample_F(100)
plt.figure(figsize=(5,3))
plt.plot(z, y, 'o', [-1,1],[0,0], '--k', ms=5)
for s in sample_F:
plt.plot(z, s, '-k', alpha=0.1, lw=1)
plt.xlabel('$z$: Los-height')
plt.ylabel('$y$: Observation')
# This function is just for the visualization of the iteration
from IPython import display
logf = []
def logger(x):
if (logger.i % 10) == 0:
obj = -model_stvgp._objective(x)[0]
logf.append(obj)
# display
if (logger.i % 100) ==0:
plt.clf()
plt.plot(logf, '--ko', markersize=3, linewidth=1)
plt.ylabel('ELBO')
plt.xlabel('iteration')
display.display(plt.gcf())
display.clear_output(wait=True)
logger.i+=1
logger.i = 1
plt.figure(figsize=(5,3))
# Rough optimization by scipy.minimize
model_stvgp.optimize()
# Final optimization by tf.train
trainer = tf.train.AdamOptimizer(learning_rate=0.002)
_= model_stvgp.optimize(trainer, maxiter=5000, callback=logger)
display.clear_output(wait=True)
# Predict the latent function f, which follows Gaussian Process
r_new = np.linspace(0.,1.2, 40)
f_pred, f_var = model_stvgp.predict_f(r_new.reshape(-1,1))
# Data Y should scatter around the transform F of the GP function f.
sample_F = model_stvgp.sample_F(100)
plt.figure(figsize=(8,3))
plt.subplot(1,2,1)
f_plus = np.exp(f_pred.flatten() + 2.*np.sqrt(f_var.flatten()))
f_minus = np.exp(f_pred.flatten() - 2.*np.sqrt(f_var.flatten()))
plt.fill_between(r_new, f_plus, f_minus, alpha=0.2)
plt.plot(r_new, np.exp(f_pred.flatten()), label='StVGP',lw=1.5)
plt.plot(r, f, '-r', label='true',lw=1.5)# ground truth
plt.xlabel('$r$: Radial coordinate')
plt.ylabel('$g$: Latent function')
plt.legend(loc='best')
plt.subplot(1,2,2)
for s in sample_F:
plt.plot(z, s, '-k', alpha=0.05, lw=1)
plt.plot(z, y, 'o', ms=5)
plt.plot(z, np.dot(A, f), 'r', label='true',lw=1.5)
plt.xlabel('$z$: Los-height')
plt.ylabel('$y$: Observation')
plt.legend(loc='best')
plt.tight_layout()
model_gpmc = GPinv.gpmc.GPMC(r.reshape(-1,1), y.reshape(-1,1),
kern = GPinv.kernels.RBF_csym(1,1),
mean_function = GPinv.mean_functions.Constant(1),
likelihood=AbelLikelihood(A))
samples = model_gpmc.sample(300, thin=3, burn=500, verbose=True, epsilon=0.01, Lmax=15)
r_new = np.linspace(0.,1.2, 40)
plt.figure(figsize=(8,3))
# Latent function
plt.subplot(1,2,1)
for i in range(0,len(samples),3):
s = samples[i]
model_gpmc.set_state(s)
f_pred, f_var = model_gpmc.predict_f(r_new.reshape(-1,1))
plt.plot(r_new, np.exp(f_pred.flatten()), 'k',lw=1, alpha=0.1)
plt.plot(r, f, '-r', label='true',lw=1.5)# ground truth
plt.xlabel('$r$: Radial coordinate')
plt.ylabel('$g$: Latent function')
plt.legend(loc='best')
#
plt.subplot(1,2,2)
for i in range(0,len(samples),3):
s = samples[i]
model_gpmc.set_state(s)
f_sample = model_gpmc.sample_F()
plt.plot(z, f_sample[0], 'k',lw=1, alpha=0.1)
plt.plot(z, y, 'o', ms=5)
plt.plot(z, np.dot(A, f), 'r', label='true',lw=1.5)
plt.xlabel('$z$: Los-height')
plt.ylabel('$y$: Observation')
plt.legend(loc='best')
plt.tight_layout()
# make a histogram (posterior) for these hyperparameter estimated by GPMC
gpmc_hyp_samples = {
'k_variance' : [], # variance
'k_lengthscale': [], # kernel lengthscale
'mean' : [], # mean function values
'lik_variance' : [], # variance for the likelihood
}
for s in samples:
model_gpmc.set_state(s)
gpmc_hyp_samples['k_variance' ].append(model_gpmc.kern.variance.value[0])
gpmc_hyp_samples['k_lengthscale'].append(model_gpmc.kern.lengthscales.value[0])
gpmc_hyp_samples['mean'].append(model_gpmc.mean_function.c.value[0])
gpmc_hyp_samples['lik_variance'].append(model_gpmc.likelihood.variance.value[0])
plt.figure(figsize=(10,2))
# kernel variance
plt.subplot(1,4,1)
plt.title('k_variance')
_= plt.hist(gpmc_hyp_samples['k_variance'])
plt.plot([model_stvgp.kern.variance.value]*2, [0,100], '-r')
plt.subplot(1,4,2)
plt.title('k_lengthscale')
_= plt.hist(gpmc_hyp_samples['k_lengthscale'])
plt.plot([model_stvgp.kern.lengthscales.value]*2, [0,100], '-r')
plt.subplot(1,4,3)
plt.title('mean')
_= plt.hist(gpmc_hyp_samples['mean'])
plt.plot([model_stvgp.mean_function.c.value]*2, [0,100], '-r')
plt.subplot(1,4,4)
plt.title('lik_variance')
_= plt.hist(gpmc_hyp_samples['lik_variance'])
plt.plot([model_stvgp.likelihood.variance.value]*2, [0,100], '-r')
plt.tight_layout()
print('Here the red line shows the MAP estimate by StVGP')
r_new = np.linspace(0.,1.2, 40)
plt.figure(figsize=(4,3))
# StVGP
f_pred, f_var = model_stvgp.predict_f(r_new.reshape(-1,1))
f_plus = np.exp(f_pred.flatten() + 2.*np.sqrt(f_var.flatten()))
f_minus = np.exp(f_pred.flatten() - 2.*np.sqrt(f_var.flatten()))
plt.plot(r_new, np.exp(f_pred.flatten()), 'b', label='StVGP',lw=1.5)
plt.plot(r_new, f_plus, '--b', r_new, f_minus, '--b', lw=1.5)
# GPMC
for i in range(0,len(samples),3):
s = samples[i]
model_gpmc.set_state(s)
f_pred, f_var = model_gpmc.predict_f(r_new.reshape(-1,1))
plt.plot(r_new, np.exp(f_pred.flatten()), 'k',lw=1, alpha=0.1)
plt.plot(r_new, np.exp(f_pred.flatten()), 'k',lw=1, alpha=0.1, label='GPMC')
plt.xlabel('$r$: Radial coordinate')
plt.ylabel('$g$: Latent function')
plt.legend(loc='best')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Synthetic signals
Step2: Prepare the synthetic signal.
Step3: Inference
Step4: Variational inference by StVGP
Step5: Check the initial estimate
Step6: Iteration
Step7: Plot results
Step8: MCMC
Step9: Sample from posterior
Step10: Plot result
Step11: Comparison between StVGP and GPMC
Step12: Difference in the prediction.
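The matrix A built by make_LosMatrix turns a radial profile into line-of-sight integrals. A deliberately crude stand-in shows the structure; the equal per-shell weight is an assumption, since the real routine presumably computes proper chord lengths:

```python
# Crude stand-in for make_LosMatrix: row i integrates the radial profile
# along the chord at height z[i]; every shell the chord crosses gets the
# same weight dr (the real chord-length weights would differ).
def los_matrix(r, z):
    dr = r[1] - r[0]
    return [[dr if ri >= abs(zi) else 0.0 for ri in r] for zi in z]

r = [0.0, 0.25, 0.5, 0.75, 1.0]
z = [0.0, 0.5]
A = los_matrix(r, z)
# y = A @ f then yields one observation per line-of-sight height.
```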
|
2,907
|
<ASSISTANT_TASK:>
Python Code:
from molpher.core import MolpherMol
cymene_smiles = MolpherMol("CC1=CC=C(C(C)C)C=C1")
print(cymene_smiles.smiles)
cymene_sdf = MolpherMol("cymene.sdf") # if the string ends with '.sdf', the library interprets it as a path to a file
print(cymene_sdf.smiles)
# imports that will enable direct display of RDKit molecules
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem.Draw.MolDrawing import MolDrawing, DrawingOptions
IPythonConsole.ipython_useSVG = False
DrawingOptions.includeAtomNumbers = True
cymene_rdkit = cymene_sdf.asRDMol()
print(cymene_rdkit.__class__)
cymene_rdkit
MolpherMol(other=cymene_rdkit).smiles
MolpherMol(other=cymene_smiles).smiles
with open("cymene.sdf", "r") as cymene_file:
print(cymene_file.read())
def get_locked_atoms(mol):
return [(idx, atm) for idx, atm in enumerate(mol.atoms) if atm.is_locked]
locked_atoms = get_locked_atoms(cymene_sdf)
locked_atoms
cymene_rdkit.GetPropsAsDict()
from rdkit.Chem.Draw import rdMolDraw2D
from IPython.display import SVG
def show_locked_atoms(mol):
drawer = rdMolDraw2D.MolDraw2DSVG(200, 200)
drawer.DrawMolecule(
mol.asRDMol()
, highlightAtoms=[x[0] for x in get_locked_atoms(mol)]
)
drawer.FinishDrawing()
return drawer.GetDrawingText().replace('svg:','')
SVG(show_locked_atoms(cymene_sdf))
for atm in locked_atoms:
print(atm[1].lock_info)
from molpher.core import MolpherAtom
locked_atoms[0][1].locking_mask = locked_atoms[0][1].locking_mask | MolpherAtom.KEEP_NEIGHBORS_AND_BONDS
MolpherAtom.lockingMaskToString(locked_atoms[0][1].locking_mask)
print(cymene_sdf.getAtom(1).lock_info)
def show_locked_atoms(mol, mask = MolpherAtom.FULL_LOCK):
drawer = rdMolDraw2D.MolDraw2DSVG(300, 300)
drawer.DrawMolecule(
mol.asRDMol()
, highlightAtoms=[x[0] for x in get_locked_atoms(mol) if (x[1].locking_mask & mask) != 0]
)
drawer.FinishDrawing()
return drawer.GetDrawingText().replace('svg:','')
SVG(show_locked_atoms(cymene_sdf))
SVG(show_locked_atoms(cymene_sdf, MolpherAtom.NO_ADDITION))
SVG(show_locked_atoms(cymene_sdf, MolpherAtom.KEEP_NEIGHBORS_AND_BONDS))
from molpher.core.morphing.operators import AddAtom
add_atom = AddAtom()
add_atom.setOriginal(cymene_sdf)
morph = add_atom.morph()
print(morph.smiles)
morph.asRDMol()
morphs = [add_atom.morph() for i in range(10)]
[x.smiles for x in morphs]
from rdkit.Chem.Draw import MolsToGridImage
def show_mol_grid(mols):
    return MolsToGridImage(
        [x.asRDMol() for x in mols]
        ,subImgSize=(250,200)
    )
show_mol_grid(morphs)
add_atom.setOriginal(cymene_sdf)
morphs = []
for iter in range(10):
morph = add_atom.morph()
if morph:
morphs.append(morph)
add_atom.setOriginal(morph)
show_mol_grid(morphs)
from molpher.core.morphing import AtomLibrary
my_lib = AtomLibrary(["C"])
add_atom = AddAtom(my_lib)
add_atom.setOriginal(cymene_sdf)
morphs = []
for iter in range(10):
morph = add_atom.morph()
if morph:
morphs.append(morph)
add_atom.setOriginal(morph)
show_mol_grid(morphs)
def show_mol_grid(mols):
    locked_atoms = [[y[0] for y in get_locked_atoms(x)] for x in mols]
    return MolsToGridImage(
        [x.asRDMol() for x in mols]
        , subImgSize=(250,200)
        , highlightAtomLists=locked_atoms
    )
show_mol_grid(morphs)
from molpher.core.morphing.operators import *
from molpher.core.morphing import Molpher
operators = [
AddAtom()
, AddBond()
, ContractBond()
, InterlayAtom()
, MutateAtom()
, RemoveBond()
, RerouteBond()
, RemoveAtom()
]
molpher = Molpher(cymene_sdf, operators, attempts=20)
molpher()
morphs = molpher.getMorphs()
show_mol_grid(morphs)
morphs = []
for idx, morph in enumerate(molpher):
    if idx == 12: # this loop is infinite so we stop after 12 morphs
break
morphs.append(morph)
show_mol_grid(morphs)
from rdkit import Chem
from molpher.core.morphing.operators import MorphingOperator
import random
class IntroduceNitrogen(MorphingOperator):
def setOriginal(self, mol):
super(IntroduceNitrogen, self).setOriginal(mol)
# generate candidates for the replacement
candidates = Chem.ReplaceSubstructs(
self.original.asRDMol(include_locks=False),
Chem.MolFromSmiles('C'),
Chem.MolFromSmiles('N'),
replaceAll=False
)
        # filter out potential nonsense molecules
viable = []
for x in candidates:
try:
Chem.rdmolops.SanitizeMol(x)
viable.append(x)
except ValueError:
continue
self.candidates = viable
def morph(self):
return MolpherMol(other=self.candidates[random.randint(0, len(self.candidates)-1)])
operators = [
AddAtom()
, RemoveAtom()
, IntroduceNitrogen()
]
molpher = Molpher(cymene_sdf, operators, attempts=20)
molpher()
morphs = molpher.getMorphs()
show_mol_grid(morphs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For convenience, it is possible to readily convert the MolpherMol molecule to an RDKit Mol
Step2: We can also convert the RDKit molecule back to a MolpherMol instance using the other parameter of the MolpherMol class
Step3: The constructor will automatically recognize an rdkit Mol instance and do all the necessary steps to convert it. We can also just supply another MolpherMol instead
Step4: The code will again do the introspection and perform the necessary steps.
Step5: The locked atoms are indicated in the SDF as molecule properties. Each property name starts with the prefix MOLPHER_ followed by a 'lock name' which indicates what kind of locking mechanism should be employed. The value of the property is a list of atom numbers that indicates what atoms should be locked in the specified manner. In our example, we decided to forbid addition of new atoms to atoms 2,5,6,7,8 and 9 and we also disabled their removal. Atom number 10 should also not be removed from the molecule during morphing.
Step6: The list contains tuples where the first member is the index of the atom in the molecule and the second is an instance of MolpherAtom, a class used to represent atoms in the Molpher-lib library. MolpherAtom instances can be used to query lock information and other atom-related data. If we convert the molecule to an RDKit instance, the atom locks are saved as properties of the molecule
Step7: Atoms in RDKit can have properties as well, so in the future the locking information will probably be written directly to the atoms as well as to the molecule as a whole.
Step8: This works because the order of atoms is the same in the generated RDKit molecule and the original MolpherMol instance.
Step9: The different locks are available as constant members of the MolpherAtom class and can be easily combined with bit operators. The following code example adds a new lock to atom #2 (index 1)
Step10: All locks are saved in the locking_mask property. You can always transform this property to a list of lock names using the lockingMaskToString static method
Step11: or you can look at the lock_info again
Step12: You can see that the KEEP_NEIGHBORS and KEEP_BONDS locks were also automatically added to the atom since the lock we set is basically a combination of the two. KEEP_NEIGHBORS will ensure that the neighboring atoms will not be changed or removed while KEEP_BONDS will prevent any changes to the bonds that this atom contributes to.
Step13: This function will by default again simply highlight atoms that have any locks in place since the FULL_LOCK lock combines all locks into one
Step14: If we want to highlight specific locks, we can easily do so by supplying the associated value
Step15: Let's now use a morphing operator to generate a random morph of this molecule
Step16: In short, the AddAtom operator basically selects a random atom in the molecule (as allowed by the placed locks) and then adds a random atom to it.
Step17: We can easily display the created morphs like so
Step18: You can see that the code respects the constraints we specified and that no atoms are added to the aromatic ring system. Therefore, we essentially explored a part of chemical space around a certain scaffold.
Step19: Now these molecules are obviously rather crazy since the probability of adding an atom of a certain type is the same for all of them at the moment, but there will be a mechanism to affect the ratio in which the added types are selected in the future.
Step20: The AddAtom class will have more features in the future to customize the atoms that can be added and in what ratio. This is just a simple example to show how morphing could work using a single operator.
Step21: Let's now showcase the Molpher class which brings all operators together
Step22: Using all operators obviously can break the aromatic core sometimes since we did not fully protect it against bond breaking and other modifications, but we could easily achieve that by introducing suitable locks, such as KEEP_NEIGHBORS and/or KEEP_BONDS.
Step23: Custom Morphing Operators
Step24: The IntroduceNitrogen class in this example is rather simple and basically replaces a random carbon atom in the original structure with a nitrogen. But we could definitely imagine more complex logic here that also takes atom locking into account, among other things.
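The lock arithmetic in Steps 9–11 is plain bit masking. The flag values below are made up for illustration (the real constants live on MolpherAtom), but the combination via the | operator works the same way:

```python
# Hypothetical flag values for illustration only -- the real constants are
# members of molpher.core.MolpherAtom; the point is that locks are single
# bits combined with |, exactly as in the lock-setting cell above.
NO_MUTATION    = 1 << 0
NO_ADDITION    = 1 << 1
NO_REMOVAL     = 1 << 2
KEEP_NEIGHBORS = 1 << 3
KEEP_BONDS     = 1 << 4
KEEP_NEIGHBORS_AND_BONDS = KEEP_NEIGHBORS | KEEP_BONDS

def mask_to_names(mask):
    # mimics MolpherAtom.lockingMaskToString: list every elementary lock set
    names = {NO_MUTATION: 'NO_MUTATION', NO_ADDITION: 'NO_ADDITION',
             NO_REMOVAL: 'NO_REMOVAL', KEEP_NEIGHBORS: 'KEEP_NEIGHBORS',
             KEEP_BONDS: 'KEEP_BONDS'}
    return [n for flag, n in names.items() if mask & flag]

mask = NO_REMOVAL | KEEP_NEIGHBORS_AND_BONDS
```

Setting the combined lock then automatically implies both elementary locks, which matches the behaviour observed in Step 11.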
|
2,908
|
<ASSISTANT_TASK:>
Python Code:
from zdf import zdf_read_grid, zdf_read_particles
(data, info) = zdf_read_grid( "J3-000500.zdf" )
print(type(data))
print(info)
import numpy as np
from bokeh.io import push_notebook, show, output_notebook
output_notebook()
from bokeh.plotting import figure
from bokeh.models import LinearColorMapper, BasicTicker, ColorBar
from bokeh.core.enums import TextBaseline
p = figure(x_range=(info['grid']['axis'][0]['min'], info['grid']['axis'][0]['max']),
y_range=(info['grid']['axis'][1]['min'], info['grid']['axis'][1]['max']),
toolbar_sticky=False)
p.title.text = info['grid']['label']
p.xaxis.axis_label = info['grid']['axis'][0]['label']
p.yaxis.axis_label = info['grid']['axis'][1]['label']
color_map = LinearColorMapper(palette="Viridis256", low = np.amin(data), high = np.amax(data))
p.image(image=[data], x = 0, y = 0,
dw = info['grid']['axis'][0]['max'], dh = info['grid']['axis'][1]['max'],
color_mapper = color_map )
color_bar = ColorBar(color_mapper = color_map, ticker = BasicTicker(),
location = (0,0))
p.add_layout( color_bar, 'right')
t = show(p, notebook_handle = True)
import matplotlib.pyplot as plt
%matplotlib notebook
fig = plt.figure( figsize = (8,6), dpi = 80)
fig.subplots_adjust( top = 0.85 )
fig.set_facecolor("#FFFFFF")
timeLabel = r'$\sf{t = ' + str( info['iteration']['t'] ) + \
' ['+info['iteration']['tunits']+r']}$'
plotTitle = r'$\sf{' + info['grid']['label'] + r'}$' + '\n' + timeLabel
plotArea = fig.add_subplot(1,1,1)
plotArea.set_title(plotTitle, fontsize = 16)
colorMap = plotArea.imshow(data, cmap = plt.cm.jet, interpolation = 'nearest',
origin = 'lower')
colorBar = fig.colorbar(colorMap)
colorBar.set_label(r'$\sf{'+info['grid']['label'] +
' [' + info['grid']['units'] + r']}$', fontsize = 14)
xlabel = info['grid']['axis'][0]['label'] + '[' + info['grid']['axis'][0]['units'] + ']'
ylabel = info['grid']['axis'][1]['label'] + '[' + info['grid']['axis'][1]['units'] + ']'
plt.xlabel(r'$\sf{'+xlabel+r'}$', fontsize = 14)
plt.ylabel(r'$\sf{'+ylabel+r'}$', fontsize = 14)
(particles, info) = zdf_read_particles("particles-electrons-001200.zdf")
print(type(particles))
print(type(particles['x1']))
import matplotlib.pyplot as plt
%matplotlib notebook
x = particles['x1']
y = particles['u1']
plt.plot(x, y, 'r.', ms=1,alpha=0.5)
t = str(info["iteration"]["t"])
tunits = str(info["iteration"]["tunits"])
title = info['particles']['name'] + ' u_1 x_1'
timeLabel = r'$\sf{t = ' + t + ' [' + tunits + r']}$'
plt.title(r'$\sf{' + title + r'}$' + '\n' + timeLabel)
xlabel = 'x_1' + '[' + info['particles']['units']['x1'] + ']'
ylabel = 'u_1' + '[' + info['particles']['units']['u1'] + ']'
plt.xlabel(r'$\sf{' + xlabel + r'}$', fontsize=14)
plt.ylabel(r'$\sf{' + ylabel + r'}$', fontsize=14)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next you need to read the data. You should also read the metadata while you are at it.
Step2: data is a NumPy ndarray, info is a dictionary with all the metadata.
Step3: You can plot the data with any of your favorite tools.
Step4: Plotting with MatplotLib
Step5: Working with particle data
Step6: Particles is a dictionary of NumPy arrays containing all the particles quantities
Step7: Again, you can plot the data with any of your favorite tools. Here's an example using Matplotlib
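The axis metadata read in Step 2 maps directly onto plot extents. A sketch with a hand-made info dictionary (field names as used in the Bokeh cell above; the numbers are invented):

```python
# Field names mirror the ZDF `info` dictionary; the values are made up.
info = {'grid': {'axis': [{'min': 0.0, 'max': 12.8, 'label': 'x_1'},
                          {'min': 0.0, 'max': 6.4,  'label': 'x_2'}]}}
extent = [ax[k] for ax in info['grid']['axis'] for k in ('min', 'max')]
# -> [xmin, xmax, ymin, ymax], ready for imshow(extent=...) or a Bokeh range
```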
|
2,909
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-mh', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
2,910
|
<ASSISTANT_TASK:>
Python Code:
ds['lon']
# multiple tiles are stacked into a single 2D array
# just passing this will crash ESMPy
plt.scatter(ds['lon'], ds['lat'], s=0.2)
# Get a more well-defined 2D mesh (subset of the full grid)
plt.scatter(ds['lon'][:80, :], ds['lat'][:80, :], s=0.2)
ds_subset = ds.isel(i=slice(0,80), j=slice(0,80))
regridder = xe.Regridder(ds_subset, ds_out, 'bilinear')
dr_out = regridder(ds['thetao'][:, :, 0:80, 0:80])
dr_out
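The subsetting step above takes a single 80×80 tile; the same idea generalizes to covering the whole curvilinear grid with well-defined 2D tiles, one regridder per tile. A minimal sketch of the tiling helper (the xESMF loop in the comment is an assumption based on the calls used above, as are the `j`/`i` dimension names):

```python
def tile_slices(nj, ni, tile=80):
    """Yield (j_slice, i_slice) pairs covering an nj x ni grid
    in blocks of at most tile x tile points."""
    for j0 in range(0, nj, tile):
        for i0 in range(0, ni, tile):
            yield (slice(j0, min(j0 + tile, nj)),
                   slice(i0, min(i0 + tile, ni)))

# Hypothetical use with xESMF (ds, ds_out and xe as above):
# for jsl, isl in tile_slices(ds.dims['j'], ds.dims['i']):
#     sub = ds.isel(j=jsl, i=isl)
#     regridder = xe.Regridder(sub, ds_out, 'bilinear')
#     ...
```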
plt.figure(figsize=(12,2));
ax = plt.axes(projection=ccrs.PlateCarree());
dr_out[0, 0, ::].plot.pcolormesh(ax=ax, x='lon', y='lat');
ax.coastlines();
cdo_infile = '/g/data/r87/dbi599/temp/thetao_Omon_IPSL-CM5A-LR_historical_r1i1p1_1850-01-01_susan-grid-cdo.nc'
ds_cdo = xr.open_dataset(cdo_infile, decode_times=False)
ds_cdo
plt.figure(figsize=(12,2));
ax = plt.axes(projection=ccrs.PlateCarree());
ds_cdo['thetao'][0, 0, ::].plot.pcolormesh(ax=ax);
ax.coastlines();
ds_cdo['lev'].data[0]
test_file = '/g/data/r87/dbi599/DRSv2/CMIP5/IPSL-CM5A-LR/historical/mon/ocean/r1i1p1/thetao/latest/thetao_Omon_IPSL-CM5A-LR_historical_r1i1p1_185001-189912_susan-horiz-grid.nc'
ds_test = xr.open_dataset(test_file, decode_times=False)
ds_test
plt.figure(figsize=(12,2));
ax = plt.axes(projection=ccrs.PlateCarree());
ds_test['thetao'][0, 0, ::].plot.pcolormesh(ax=ax);
ax.coastlines();
type(ds_test['thetao'].data)
ds_test['thetao']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Correctly build the regridder
Step2: So the error could be fixed by breaking your full grid into several well-defined 2D tiles... or maybe it's easier to just use CDO...
|
2,911
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-1', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
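The notebook above repeats one pattern throughout: select a CMIP6 property by its identifier with `DOC.set_id(...)`, then record a value with `DOC.set_value(...)`, optionally constrained to the listed valid choices. The real `DOC` object comes from the ES-DOC notebook tooling; the class below is only a hypothetical stand-in to illustrate the pattern (the `valid_choices` parameter is an illustrative addition, not part of the real API):

```python
class Doc:
    """Hypothetical stand-in mimicking the DOC.set_id / DOC.set_value pattern."""

    def __init__(self):
        self.properties = {}   # property id -> list of recorded values
        self._current_id = None

    def set_id(self, prop_id):
        # Select which CMIP6 property subsequent values belong to.
        self._current_id = prop_id

    def set_value(self, value, valid_choices=None):
        # Optionally enforce the controlled vocabulary listed in the comments.
        if valid_choices is not None and value not in valid_choices:
            raise ValueError("%r is not among the valid choices" % (value,))
        self.properties.setdefault(self._current_id, []).append(value)


DOC = Doc()
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
DOC.set_value("NPZD",
              valid_choices=["Geochemical", "NPZD", "PFT",
                             "Other: [Please specify]"])
```

A value outside the controlled vocabulary would raise a `ValueError`, mirroring why the notebook lists the valid choices next to each property.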
2,912
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from an earlier notebook cell; added here so the code runs standalone.
import numpy as np
import matplotlib.pyplot as plt

r = np.random.ranf(4)
def seed(n, size=4):
global r
np.random.seed(n)
r = np.random.ranf(size)
seed(0, 8)
y = [f1(x) for x in x]
plt.plot(x, y, 'bo')
v0 = 2
v1 = 5
plt.plot([0,1], [v0,v1])
def f1(x): return 3 * x + 2
x = np.linspace(0, 1, 10)
y = [f1(x) for x in x]
plt.plot(x, y)
def f2(v0, v1, t):
return v0 + (v1 - v0) * t
y = [f2(v0, v1, t) for t in x]
plt.plot(x, y)
v0, v1 = -3, 2
y = [f2(v0, v1, t) for t in x]
plt.plot(x, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Actually, let's define a seed function that initializes this table using a seed number and a sample size
Step2: And now we can define ${f_1}$ using this table
Step3: Which already looks a lot more random. Now we repeat every 8 units along the x-axis. We will get back to the number of samples later; for now we'll keep it small to make things more manageable.
Step4: Now let's say we want to find out what the ${y}$ value is when ${x} = 1/3$. Let's consider this line for a moment: we notice it starts at ${y} = 2$ where ${x} = 0$. So let's start with a function ${f_1}$ that describes this
Step5: It would be nice to have a more generic version of this function. This is pretty easy though
Step6: In theory we now could use any ${v_0}$ and ${v_1}$ and interpolate between them. Let's try ${v_0} = -3$ and ${v_1} = 2$
|
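The generic interpolation function described in Steps 5–6 is the standard linear-interpolation (lerp) formula; a minimal standalone sketch, independent of the plotting code above:

```python
def lerp(v0, v1, t):
    # Linearly interpolate between v0 (at t=0) and v1 (at t=1).
    return v0 + (v1 - v0) * t

print(lerp(2, 5, 0.0))    # 2.0 -- the left endpoint is recovered exactly
print(lerp(2, 5, 1.0))    # 5.0 -- the right endpoint is recovered exactly
print(lerp(2, 5, 1 / 3))  # 3.0 -- one third of the way from 2 to 5
```

Any pair of values can be substituted for `v0` and `v1`, including the ${v_0} = -3$, ${v_1} = 2$ example from Step 6.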
2,913
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=phoebe.linspace(0,1,101), dataset='lc01')
b.set_value('irrad_method', 'none')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'linear')
b.set_value_all('ld_coeffs', [0.])
b.set_value_all('ld_mode_bol', 'manual')
b.set_value_all('ld_func_bol', 'linear')
b.set_value_all('ld_coeffs_bol', [0.])
b.set_value_all('atm', 'blackbody')
print(b.get_parameter(qualifier='pblum_mode', dataset='lc01'))
print(b.compute_pblums())
print(b.filter(qualifier='pblum'))
print(b.get_parameter(qualifier='pblum_component'))
b.set_value('pblum_component', 'secondary')
print(b.filter(qualifier='pblum'))
b.set_value('pblum_component', 'primary')
print(b.get_parameter(qualifier='pblum', component='primary'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
b.set_value('pblum', component='primary', value=2*np.pi)
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
b.set_value('teff', component='secondary', value=0.5 * b.get_value('teff', component='primary'))
print(b.filter(qualifier='teff'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
b.set_value('pblum_mode', 'decoupled')
print(b.filter(qualifier='pblum'))
b.set_value_all('pblum', 4*np.pi)
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
print(b.filter(qualifier='teff'))
b.set_value('teff', component='secondary', value=3000)
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
b.set_value('pblum_mode', 'absolute')
print(b.filter(qualifier='pblum'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
fluxes = b.get_value('fluxes', context='model') * 0.8 + (np.random.random(101) * 0.1)
b.set_value('fluxes', context='dataset', value=fluxes)
afig, mplfig = b.plot(context='dataset', show=True)
b.set_value('pblum_mode', 'dataset-scaled')
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True)
b.set_value('pblum_mode', 'component-coupled')
b.set_value('fluxes', context='dataset', value=[])
b.add_dataset('lc', times=phoebe.linspace(0,1,101),
ld_mode='manual', ld_func='linear', ld_coeffs=[0],
passband='Johnson:B', dataset='lc02')
b.set_value('pblum_mode', dataset='lc02', value='dataset-coupled')
print(b.filter('pblum*'))
print(b.compute_pblums())
b.run_compute()
afig, mplfig = b.plot(show=True, legend=True)
print(b.compute_pblums())
print(b.compute_pblums(dataset='lc01', component='primary'))
b.add_dataset('mesh', times=np.linspace(0,1,5), dataset='mesh01', columns=['areas', 'pblum_ext@lc01', 'ldint@lc01', 'ptfarea@lc01', 'abs_normal_intensities@lc01', 'normal_intensities@lc01'])
b.run_compute()
print(b.filter(qualifier='pblum_ext', context='model').twigs)
t0 = b.get_value('t0@system')
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value('pblum@primary@dataset'))
print(b.compute_pblums(component='primary', dataset='lc01'))
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value(qualifier='pblum_ext', time=t0, component='secondary', kind='mesh', context='model'))
b['teff@secondary@component'] = 3000
print(b.compute_pblums(dataset='lc01'))
b.run_compute()
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value(qualifier='pblum_ext', time=t0, component='secondary', kind='mesh', context='model'))
print(b['ld_mode'])
print(b['atm'])
b.run_compute(irrad_method='horvat')
print(b.get_value(qualifier='pblum_ext', time=t0, component='primary', kind='mesh', context='model'))
print(b.get_value('pblum@primary@dataset'))
print(b.compute_pblums(dataset='lc01', irrad_method='horvat'))
b.set_value_all('teff@component', 6000)
b.run_compute()
areas = b.get_value(qualifier='areas', dataset='mesh01', time=t0, component='primary', unit='m^2')
ldint = b.get_value(qualifier='ldint', component='primary', time=t0)
ptfarea = b.get_value(qualifier='ptfarea', component='primary', time=t0)
abs_normal_intensities = b.get_value(qualifier='abs_normal_intensities', dataset='lc01', time=t0, component='primary')
normal_intensities = b.get_value(qualifier='normal_intensities', dataset='lc01', time=t0, component='primary')
print(np.median(abs_normal_intensities))
print(np.median(normal_intensities))
pblum = b.get_value(qualifier='pblum', component='primary', context='dataset')
print(np.sum(normal_intensities * ldint * np.pi * areas) * ptfarea, pblum)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: And we'll add a single light curve dataset so that we can see how passband luminosities affect the resulting synthetic light curve model.
Step3: Lastly, just to make things a bit easier and faster, we'll turn off irradiation (reflection), use blackbody atmospheres, and disable limb-darkening (so that we can play with weird temperatures without having to worry about falling off the grids).
Step4: Relevant Parameters & Methods
Step5: For any of these modes, you can expose the intrinsic (excluding extrinsic effects such as spots and irradiation) and extrinsic computed luminosities of each star (in each dataset) by calling b.compute_pblums.
Step6: For more details, see the section below on "Accessing Model Luminosities" as well as the b.compute_pblums API docs
Step7: Note that in general (for the case of a spherical star), a pblum of 4pi will result in an out-of-eclipse flux of ~1.
Step8: NOTE
Step9: Let's see how changing the value of pblum affects the computed light curve. By default, pblum is set to be 4 pi, giving a total flux for the primary star of ~1.
Step10: If we now set pblum to be only 2 pi, we should expect the luminosities as well as entire light curve to be scaled in half.
Step11: And if we halve the temperature of the secondary star - the resulting light curve changes to the new sum of fluxes, where the primary star dominates since the secondary star flux is reduced by a factor of 16, so we expect a total out-of-eclipse flux of ~0.5 + ~0.5/16 = ~0.53.
Step12: Let us undo our changes before we look at decoupled luminosities.
Step13: pblum_mode = 'decoupled'
Step14: Now we see that both pblum parameters are available and can have different values.
Step15: If we set these to 4pi, then we'd expect each star to contribute 1.0 in flux units, meaning the baseline of the light curve should be at approximately 2.0
Step16: Now let's make a significant temperature-ratio by making a very cool secondary star. Since the luminosities are decoupled - this temperature change won't affect the resulting light curve very much (compare this to the case above with coupled luminosities). What is happening here is that even though the secondary star is cooler, its luminosity is being rescaled to the same value as the primary star, so the eclipse depth doesn't change (you would see a similar lack-of-effect if you changed the radii - although in that case the eclipse widths would still change due to the change in geometry).
Step17: In most cases you will not want decoupled luminosities as they can easily break the self-consistency of your model.
Step18: pblum_mode = 'absolute'
Step19: As we no longer provide pblum values to scale, those parameters are not visible when filtering.
Step20: (note the exponent on the y-axis of the above figure)
Step21: Now if we set pblum_mode to 'dataset-scaled', the resulting model will be scaled to best fit the data. Note that in this mode we cannot access computed luminosities via b.compute_pblums (which would raise an error if we attempted to do so), nor can we access scaled intensities from the mesh.
Step22: Before moving on, let's remove our fake data (and reset pblum_mode or else PHOEBE will complain about the lack of data).
Step23: pblum_mode = 'dataset-coupled'
Step24: Here we see the pblum_mode@lc01 is set to 'component-coupled' meaning it will follow the rules described earlier where pblum is provided for the primary component and the secondary is coupled to that. pblum_mode@lc02 is set to 'dataset-coupled' with pblum_dataset@lc01 pointing to 'lc01'.
Step25: Accessing Model Luminosities
Step26: By default this exposes 'pblum' and 'pblum_ext' for all component-dataset pairs in the form of a dictionary. Alternatively, you can pass a label or list of labels to component and/or dataset.
Step27: For more options, see the b.compute_pblums API docs.
Step28: Since the luminosities are passband-dependent, they are stored with the same dataset as the light curve (or RV), but with the mesh method, and are available at each of the times at which a mesh was stored.
Step29: Now let's compare the value of the synthetic luminosities to those of the input pblum
Step30: In this case, since our two stars are identical, the synthetic luminosity of the secondary star should be the same as the primary (and the same as pblum@primary).
Step31: However, if we change the temperature of the secondary star again, since the pblums are coupled, we'd expect the synthetic luminosity of the primary to remain fixed but the secondary to decrease.
Step32: And lastly, if we re-enable irradiation, we'll see that the extrinsic luminosities do not match the prescribed value of pblum (an intrinsic luminosity).
Step33: Now, we'll just undo our changes before continuing
Step34: Role of Pblum
Step35: 'abs_normal_intensities' are the intensities per triangle in absolute units, i.e. W/m^3.
Step36: The values of 'normal_intensities', however, are significantly smaller (in this case). These are the intensities in relative units which will eventually be integrated to give us flux for a light curve.
Step37: 'normal_intensities' are scaled from 'abs_normal_intensities' so that the computed luminosity matches the prescribed luminosity (pblum).
|
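Step 37's scaling relation is checked numerically in the final cell of the code above: the prescribed passband luminosity is the surface integral of the scaled normal intensities, pblum = Σ Iₙ · ldint · π · A, multiplied by ptfarea. A minimal sketch of that same sum with made-up toy-mesh numbers (not PHOEBE output):

```python
import math

def integrate_pblum(normal_intensities, areas, ldint, ptfarea):
    # Sum of per-triangle contributions, mirroring the final cell above.
    return sum(i * ldint * math.pi * a
               for i, a in zip(normal_intensities, areas)) * ptfarea

# Toy mesh: four triangles of equal area with uniform unit intensity.
intensities = [1.0, 1.0, 1.0, 1.0]
areas = [0.25, 0.25, 0.25, 0.25]
print(integrate_pblum(intensities, areas, ldint=1.0, ptfarea=1.0))  # ~3.14159 (pi)
```

With unit intensity, unit ldint/ptfarea, and total area 1, the integral reduces to π, which is why setting pblum to 4π in the tutorial yields an out-of-eclipse flux near 1 for a spherical star.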
2,914
|
<ASSISTANT_TASK:>
Python Code:
%run ../../shared_setup.ipynb
# load variation data
sample_exclusions = dup_samples.copy()
for cross in excessive_recomb_samples:
sample_exclusions[cross] += excessive_recomb_samples[cross]
callsets = load_callsets(COMBINED_CALLSET_FN_TEMPLATE,
sample_exclusions=sample_exclusions,
variant_filter='FILTER_PASS',
call_filter=combined_conf_calls)
samples = {cross: callsets[cross]['calldata'].dtype.names
for cross in CROSSES}
progeny = {cross: samples[cross][2:] for cross in CROSSES}
n_progeny = {cross: len(progeny[cross]) for cross in CROSSES}
print(n_progeny)
print(np.sum(list(n_progeny.values())))
min_co_block_size = 10000
callsets_co = {cross: filter_calls(callsets[cross], min_haplen_calls(min_co_block_size))
for cross in CROSSES}
def tabulate_crossovers(cross):
variants, calldata, _ = unpack_callset(callsets_co[cross])
tbl = (
tabulate_switches(variants, calldata)
.addfield('cross', cross)
.rename({'pos': 'co_pos_mid', 'lpos': 'co_pos_min', 'rpos': 'co_pos_max', 'range': 'co_pos_range'})
.addfield('co_from_parent', lambda r: r.cross.split('_')[r['from'] - 7])
.addfield('co_to_parent', lambda r: r.cross.split('_')[r['to'] - 7])
.cutout('from', 'to')
)
return etl.wrap(tbl)
tbl_co = (etl
.cat(*[tabulate_crossovers(cross) for cross in CROSSES])
.sort(key=('chrom', 'co_pos_mid'))
)
tbl_co.totsv(os.path.join(PUBLIC_DIR, 'tbl_co.txt'))
tbl_co.topickle(os.path.join(PUBLIC_DIR, 'tbl_co.pickle'))
display_with_nrows(tbl_co, caption='CO events')
def tabulate_short_inheritance_blocks(cross):
variants, calldata, _ = unpack_callset(callsets[cross])
_, _, tbl_blocks = haplotypes(variants, calldata)
tbl = (
tbl_blocks
.select(lambda r: r.length_min < min_co_block_size and r.nxt_inheritance != -1 and r.prv_inheritance != -1)
.addfield('cross', cross)
.addfield('is_complex', False)
.addfield('blocks', 1)
)
return tbl
tbl_short_blocks = (etl
.cat(*[tabulate_short_inheritance_blocks(cross) for cross in CROSSES])
.sort(key=('sample', 'chrom', 'start_min'))
)
display_with_nrows(tbl_short_blocks, caption='short inheritance blocks')
tbl_short_blocks.valuecounts('sample').head(10)
df_sb = tbl_short_blocks.valuecounts('sample').todataframe()
plt.hist(df_sb['count']);
class MergeAdjacentBlocks(object):
def __init__(self, source):
self.source = source
def __iter__(self):
tbl = etl.wrap(self.source)
fields = tbl.fieldnames()
it = iter(tbl.records())
yield ['sample', 'cross', 'chrom', 'start_min', 'start_mid', 'start_max', 'stop_min', 'stop_mid', 'stop_max', 'length_min', 'length_mid', 'length_max', 'support', 'is_complex', 'blocks']
cur = next(it)
sample = cur.sample
cross = cur.cross
chrom = cur.chrom
start_min = cur.start_min
start_mid = cur.start_mid
start_max = cur.start_max
stop_min = cur.stop_min
stop_mid = cur.stop_mid
stop_max = cur.stop_max
length_min = cur.length_min
length_mid = cur.length_mid
length_max = cur.length_max
support = cur.support
is_complex = cur.is_complex
blocks = cur.blocks
for cur in it:
# are they adjacent?
if sample == cur.sample and chrom == cur.chrom and stop_mid == cur.start_mid:
# yes, merge
stop_min = cur.stop_min
stop_mid = cur.stop_mid
stop_max = cur.stop_max
support += cur.support
length_min = stop_min - start_max
length_max = stop_max - start_min
length_mid = stop_mid - start_mid
is_complex = True
blocks += 1
else:
# yield previous
yield (sample, cross, chrom, start_min, start_mid, start_max, stop_min, stop_mid, stop_max, length_min, length_mid, length_max, support, is_complex, blocks)
# reset
sample = cur.sample
cross = cur.cross
chrom = cur.chrom
start_min = cur.start_min
start_mid = cur.start_mid
start_max = cur.start_max
stop_min = cur.stop_min
stop_mid = cur.stop_mid
stop_max = cur.stop_max
length_min = cur.length_min
length_mid = cur.length_mid
length_max = cur.length_max
support = cur.support
is_complex = cur.is_complex
blocks = cur.blocks
# handle last one left over
yield (sample, cross, chrom, start_min, start_mid, start_max, stop_min, stop_mid, stop_max, length_min, length_mid, length_max, support, is_complex, blocks)
tbl_conversion_tracts = etl.wrap(
MergeAdjacentBlocks(tbl_short_blocks.cutout('prv_inheritance', 'inheritance', 'nxt_inheritance'))
)
display_with_nrows(tbl_conversion_tracts, caption='conversion tracts')
tbl_conversion_tracts.valuecounts('sample').head()
tbl_conversion_tracts.valuecounts('is_complex')
tbl_conversion_tracts.valuecounts('blocks')
X = tbl_conversion_tracts.valuecounts('sample').values('count').list()
plt.hist(X);
tbl_tracts_robust = tbl_conversion_tracts.select(lambda r: r.support > 1 and r.length_min > 100)
display_with_nrows(tbl_tracts_robust, caption='conversion tracts with robust support')
tbl_tracts_robust.valuecounts('is_complex')
tbl_tracts_robust.valuecounts('blocks')
tbl_tracts_robust.valuecounts('sample').head(5).display()
tbl_tracts_robust.valuecounts('sample').tail(5).display()
X = tbl_tracts_robust.valuecounts('sample').values('count').list()
plt.hist(X, bins=10);
X = tbl_tracts_robust.values('length_min').list()
plt.hist(X, bins=60);
tbl_tracts_differentiated = (
tbl_tracts_robust
.addfield('facet', lambda r: '%s_%s' % (r.sample, r.chrom))
.intervalleftjoin(tbl_co.addfield('facet', lambda r: '%s_%s' % (r.sample, r.chrom)),
lkey='facet',
rkey='facet',
lstart='start_min',
lstop='stop_max',
rstart='co_pos_min',
rstop='co_pos_max')
.cutout(15, 16, 17, 22, 23, 24, 25)
.rename({
'start_min': 'tract_start_min',
'start_mid': 'tract_start_mid',
'start_max': 'tract_start_max',
'stop_min': 'tract_stop_min',
'stop_mid': 'tract_stop_mid',
'stop_max': 'tract_stop_max',
'length_min': 'tract_length_min',
'length_mid': 'tract_length_mid',
'length_max': 'tract_length_max',
'support': 'tract_support',
'is_complex': 'tract_is_complex',
'blocks': 'tract_blocks',
})
.addfield('tract_type', lambda row: 'NCO' if row.co_pos_min is None else 'CO')
)
tbl_tracts_differentiated.topickle(os.path.join(PUBLIC_DIR, 'tbl_conversion_tracts.pickle'))
tbl_tracts_differentiated.totsv(os.path.join(PUBLIC_DIR, 'tbl_conversion_tracts.txt'))
display_with_nrows(tbl_tracts_differentiated, caption='differentiated conversion tracts')
tbl_tracts_differentiated.valuecounts('tract_type')
tbl_tracts_co = tbl_tracts_differentiated.eq('tract_type', 'CO')
display_with_nrows(tbl_tracts_co, caption='CO conversion tracts')
tbl_tracts_nco = tbl_tracts_differentiated.eq('tract_type', 'NCO').cutout(15, 16, 17, 18, 19)
display_with_nrows(tbl_tracts_nco, caption='NCO conversion tracts')
tbl_tracts_nco.valuecounts('tract_is_complex')
tbl_tracts_differentiated.gt('tract_length_min', 15000).displayall()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CO recombination events
Step2: Conversion tracts
Step3: Combine adjacent blocks into conversion tracts
Step4: Identify tracts with robust support
Step5: Differentiate CO and NCO conversion tracts
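The adjacency test in Step 3 (merge a block into the running tract whenever its start equals the previous stop) reduces to a short interval merge. A minimal sketch of just that logic, using plain tuples and none of the petl min/mid/max or support bookkeeping from the notebook:

```python
def merge_adjacent(intervals):
    # Merge (start, stop) intervals that touch end-to-start, mirroring the
    # MergeAdjacentBlocks generator above in simplified form.
    merged = []
    for start, stop in intervals:
        if merged and merged[-1][1] == start:
            # adjacent: extend the previous interval
            merged[-1] = (merged[-1][0], stop)
        else:
            merged.append((start, stop))
    return merged

print(merge_adjacent([(0, 5), (5, 9), (12, 20), (20, 21)]))
# → [(0, 9), (12, 21)]
```

The real generator additionally keys on sample and chromosome and sums per-block support; this only shows the merge condition.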
|
2,915
|
<ASSISTANT_TASK:>
Python Code:
##########################################
plt.rcParams["figure.figsize"] = (12,3.5)
fig, ax = plt.subplots(1, 3)
##########################################
# Observed points of the Lorenz curve and population parameters
p = [0.0, 0.4, 0.8, 1.0]
L = [0.0, 0.16, 0.64, 1.0]
mean = 10.0
population = 100
# Grids for plotting
x = np.linspace(0, 1, 1000)
y = np.linspace(0, 2*mean, 1000)
ax[0].plot(p, L, '.')
# Quadratic fit - transformations calculated analytically beforehand
# L(p) = p**2
quadratic_lorenz = lambda p: np.power(p, 2)
quadratic_quantile = lambda p: mean * 2 * p # Q(p) = mean * 2p
quadratic_cdf = lambda y: (1/2)*(y/mean) # F(y) = 1/2 (y / mean)
quadratic_pdf = lambda y: np.full(y.shape, (1/2)*(1/mean)) # f(y) = 1 / (2*mean)
quadratic_Gini = 2*(0.5 - ((1/3)*1**3 - (1/3*0**3)))
ax[0].plot(x, quadratic_lorenz(x),'c-')
ax[0].text(0.2, 0.8, "$G_c$={}".format(np.round(quadratic_Gini,3)))
ax[1].plot(y, quadratic_cdf(y), 'c-')
ax[2].plot(y, quadratic_pdf(y), 'c-');
# Quartic fit - transformations calculated numerically
# L(p) = (-25/38) p**4 + (55/38) p**3 + (4/19) p
quartic_lorenz = np.vectorize(lambda p: np.power(p, 4)*(-25/38.0) + np.power(p,3)*(55/38.0) + p*(4/19.0))
quartic_quantile = lambda p: mean*derivative(quartic_lorenz)(p)
quartic_cdf = inverse(quartic_quantile)
quartic_pdf = derivative(quartic_cdf)
quartic_Gini = 1-2*scipy.integrate.quad(quartic_lorenz, 0.0, 1.0)[0]
ax[0].plot(x, quartic_lorenz(x), 'm-')
ax[0].text(0.2, 0.7, "$G_m$={}".format(np.round(quartic_Gini,3)))
ax[1].plot(y, quartic_cdf(y), 'm-')
ax[2].plot(y, quartic_pdf(y), 'm-')
##########################################
ymin = mean*derivative(quartic_lorenz)(0)
ymax = mean*derivative(quartic_lorenz)(1)
ax[1].axvline(ymin, color='m', linestyle='--')
ax[1].axvline(ymax, color='m', linestyle='--')
ax[2].axvline(ymin, color='m', linestyle='--')
ax[2].axvline(ymax, color='m', linestyle='--')
ax[0].plot((0,1),(0,1),"k--");
ax[0].set_title("L(p)")
ax[1].set_title("F(y)")
ax[2].set_title("f(y)");
##########################################
##########################################
plt.rcParams["figure.figsize"] = (12,3.5)
fig, (ax1,ax2,ax3) = plt.subplots(1, 3)
##########################################
from scipy.integrate import quad
# The 'straight' curve
interior_slope = (L[2]-L[1])/(p[2]-p[1])
bottom_intercept = p[1]-L[1]/interior_slope
right_intercept = L[2]+interior_slope*(1.0-(1.0/population)-p[2])
p_straight = [0.0,bottom_intercept,p[1],p[2],1.0-(1.0/population),1.0]
L_straight = [0.0,0.0,L[1],L[2],right_intercept,1.0]
# The 'triangle' curve
left_slope = L[1]/p[1] # curve 1: L = left_slope * p
right_slope = (1.0-L[2])/(1.0-p[2]) # curve 2: L = 1.0 - right_slope * (1.0 - p)
middle_p = (1.0 - right_slope) / (left_slope - right_slope) # solve for p
middle_L = left_slope * middle_p
p_tri = [0.0,p[1],middle_p,p[2],1.0]
L_tri = [0.0,L[1],middle_L,L[2],1.0]
lorenz_natural = lambda z: np.interp(z, p, L)
lorenz_straight = lambda z: np.interp(z, p_straight, L_straight)
lorenz_triangle = lambda z: np.interp(z, p_tri, L_tri)
ax1.plot(p, L, '.')
ax1.plot((0,1),(0,1),"k--");
ax1.plot(x, lorenz_natural(x), 'r-')
Gini = 2*(0.5 - quad(lorenz_natural, 0, 1)[0])
ax1.text(0.2, 0.8, "$G=G_L={}$".format(np.round(Gini,3)))
ax2.plot(p, L, '.')
ax2.plot((0,1),(0,1),"k--");
ax2.plot(x, lorenz_straight(x), 'g-')
Gini = 2*(0.5 - quad(lorenz_straight, 0, 1)[0])
ax2.text(0.2, 0.8, "$G={}$".format(np.round(Gini,3)))
ax3.plot(p, L, '.')
ax3.plot((0,1),(0,1),"k--");
ax3.plot(x, lorenz_triangle(x), 'b-')
Gini = 2*(0.5 - quad(lorenz_triangle, 0, 1)[0])
ax3.text(0.2, 0.8, "$G={}$".format(np.round(Gini,3)));
##########################################
from matplotlib import gridspec
fig = plt.figure(figsize=(12, 3.5))
gs = gridspec.GridSpec(1, 4, width_ratios=[1,0.43,0.43,1])
ax1 = plt.subplot(gs[0])
ax2a, ax2b = plt.subplot(gs[1]),plt.subplot(gs[2])
ax3 = plt.subplot(gs[3])
##########################################
y = np.linspace(0, mean*15, 1000)
quantile_natural = lambda z: mean * derivative(lorenz_natural)(z)
quantile_straight = lambda z: mean * derivative(lorenz_straight)(z)
quantile_triangle = lambda z: mean * derivative(lorenz_triangle)(z)
cdf_natural = inverse(quantile_natural, extrapolate=(0.0,1.0))
cdf_straight = inverse(quantile_straight, extrapolate=(0.0,1.0))
cdf_triangle = inverse(quantile_triangle, extrapolate=(0.0,1.0))
ax1.set_xlim((0, 50))
ax1.set_ylim((0, 1.05))
ax1.plot(y, cdf_natural(y), "r-")
ax3.set_xlim((0, 50))
ax3.set_ylim((0, 1.05))
ax3.plot(y, cdf_triangle(y), "b-")
ax2a.plot(y, cdf_straight(y), "g-")
ax2b.plot(y, cdf_straight(y), "g-")
ax2a.set_xlim((0, 22))
ax2b.set_xlim((118,140))
ax2a.set_ylim((0, 1.05))
ax2b.set_ylim((0, 1.05))
# hide the spines between ax and ax2
ax2a.spines['right'].set_visible(False)
ax2b.spines['left'].set_visible(False)
ax2a.yaxis.tick_left()
ax2b.yaxis.tick_right()
ax2b.tick_params(labelright=False)
d = .015
kwargs = dict(transform=ax2a.transAxes, color='k', clip_on=False)
ax2a.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
ax2a.plot((1 - d, 1 + d), (-d, +d), **kwargs)
kwargs.update(transform=ax2b.transAxes)
ax2b.plot((-d, + d), (1 - d, 1 + d), **kwargs)
ax2b.plot((-d, + d), (-d, +d), **kwargs)
##########################################
from matplotlib import gridspec
fig = plt.figure(figsize=(12, 3.5))
gs = gridspec.GridSpec(1, 4, width_ratios=[1,0.43,0.43,1])
ax1 = plt.subplot(gs[0])
ax2a, ax2b = plt.subplot(gs[1]),plt.subplot(gs[2])
ax3 = plt.subplot(gs[3])
##########################################
dp = np.diff(cdf_natural(y))
dp[dp < 0.005] = np.nan # to hide numerical errors
ax1.plot(y[1:], dp, "ro")
ax1.set_ylim((0, 1.0))
ax1.set_xlim((0, 50))
dp = np.diff(cdf_triangle(y))
dp[dp < 0.005] = np.nan # to hide numerical errors
ax3.plot(y[1:], dp, "bo")
ax3.set_ylim((0, 1.0))
ax3.set_xlim((0, 50))
dp = np.diff(cdf_straight(y))
dp[dp < 0.005] = np.nan # to hide numerical errors
ax2a.plot(y[1:], dp, "go")
ax2b.plot(y[1:], dp, "go")
ax2a.set_xlim((0, 22))
ax2b.set_xlim((118,140))
ax2a.set_ylim((0, 1.0))
ax2b.set_ylim((0, 1.0))
# hide the spines between ax and ax2
ax2a.spines['right'].set_visible(False)
ax2b.spines['left'].set_visible(False)
ax2a.yaxis.tick_left()
ax2b.yaxis.tick_right()
ax2b.tick_params(labelright=False)
d = .015
kwargs = dict(transform=ax2a.transAxes, color='k', clip_on=False)
ax2a.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
ax2a.plot((1 - d, 1 + d), (-d, +d), **kwargs)
kwargs.update(transform=ax2b.transAxes)
ax2b.plot((-d, + d), (1 - d, 1 + d), **kwargs)
ax2b.plot((-d, + d), (-d, +d), **kwargs);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The cyan curve is a valid Lorenz curve (increasing, convex) that interpolates the points, but so is the magenta - with a different resulting income distribution. Looking at the plots, we might suppose that the tails of the distribution are particularly sensitive to the interpolation. And although the Gini coefficients for the two distributions are quite close (0.333 vs 0.329, the magenta curve being slightly more equal), other measures of inequality, that depend more heavily on the tails, may be vastly different. For example, the ratio of the bottom 1% to the median, $Q_{0.01}/Q_{0.5}$.
Step2: Two of these curves have special significance. The red curve, the usual linear interpolation, results in the smallest Gini coefficient of all possible interpolations ($G_L$ in Fuller, 1979). It seems likely that the green curve results, in the limit, in the largest Gini coefficient of all possible interpolations [haven't proved this].
Step3: Finally, since piecewise linear Lorenz curves have piecewise constant $L'(p)$ and so piecewise constant $Q(p)$ and $F(y)$, they have discrete probability functions, with atoms of positive probability. The three examples above have the following probability mass functions (the equivalent of densities for discrete variables).
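Because $G = 1 - 2\int_0^1 L(p)\,dp$ and the red curve is piecewise linear, its Gini follows exactly from the trapezoid rule on the knot points, with no numerical quadrature needed. A minimal plain-Python sketch (the function name is mine; the knots are the observed points used throughout the example):

```python
def gini_piecewise(p, L):
    # Gini of a piecewise-linear Lorenz curve: G = 1 - 2 * (area under L),
    # with the area computed exactly by the trapezoid rule on the knots.
    area = sum((p2 - p1) * (L1 + L2) / 2.0
               for (p1, L1), (p2, L2) in zip(zip(p, L), zip(p[1:], L[1:])))
    return 1.0 - 2.0 * area

print(gini_piecewise([0.0, 0.4, 0.8, 1.0], [0.0, 0.16, 0.64, 1.0]))  # ~0.288
```

This should agree with the quad-based value computed for lorenz_natural, which is the smallest-Gini interpolation ($G_L$).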
|
2,916
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 12, 10
plt.rcParams.update({'font.size': 15})
data = pd.read_csv('../data/train.csv')
data.describe()
# Create a quick function to allow reusing
def scatter_plot(independent, dependent, title="", xlabel="", ylabel=""):
plt.scatter(independent,dependent)
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.show()
scatter_plot(data['likes'],data['mean'],
title="Relationship between number of likes and mean mark",
xlabel="Number of students who liked the announcements",
ylabel="Mean mark")
data['likes_normalised'] = (100 * data['likes'])/data['number_of_students']
scatter_plot(data['likes_normalised'],data['mean'],
title="Relationship between percentage of likes and mean mark",
xlabel="Percentage of students who liked the announcements",
ylabel="Mean mark")
scatter_plot(data['likes_normalised'],data['median'],
title="Relationship between percentage of likes and median mark",
xlabel="Percentage of students who liked the announcements",
ylabel="Median mark")
from sklearn import linear_model
# Convert series to dataframes to prepare them for fitting
X = pd.DataFrame(data['likes_normalised'])
y = pd.DataFrame(data['median'])
# Add a squared feature
X = pd.concat([X,pow(X,2)],axis=1)
# Fit data with linear regression
model_linear = linear_model.LinearRegression()
model_linear.fit(X,y)
# Create some new points to graph the curve
new_x = np.linspace(X.iloc[:,0].min(),X.iloc[:,0].max())
test_x = pd.DataFrame([new_x, pow(new_x,2)]).T
# Plot curve
plt.plot(test_x.iloc[:,0],model_linear.predict(test_x))
# Plot points
scatter_plot(X.iloc[:,0],y,
title="Relationship between percentage of likes and median mark",
xlabel="Percentage of students who liked the announcements",
ylabel="Median mark")
# Add a 3rd power and fit the new data
X_ = pd.concat([X,pow(X.iloc[:,0],3)],axis=1)
model_linear.fit(X_,y)
test_x_ = pd.concat([test_x, pow(test_x.iloc[:,0],3)],axis=1)
plt.plot(test_x_.iloc[:,0],model_linear.predict(test_x_))
# Plot points
scatter_plot(X_[[0]],y,
title="Relationship between percentage of likes and median mark",
xlabel="Percentage of students who liked the announcements",
ylabel="Median mark")
from sklearn.linear_model import RidgeCV
list_reg_params = np.linspace(0.00001, X_.max().max() * 100, 2000)
model_ridge = linear_model.RidgeCV(alphas=list_reg_params, cv=10)
model_ridge.fit(X_,y)
reg_param = model_ridge.alpha_
print("The regularization parameter that provided the best results was {}.".format(reg_param))
plt.plot(test_x_.iloc[:,0],model_ridge.predict(test_x_))
scatter_plot(X_[[0]],y,
title="Relationship between percentage of likes and median mark",
xlabel="Percentage of students who liked the announcements",
ylabel="Median mark")
model_ridge_reg = linear_model.Ridge(alpha=reg_param)
from sklearn import cross_validation
scores_quadratic = abs(cross_validation.cross_val_score(
model_linear, X, y, cv=10, scoring="mean_absolute_error"))
scores_cubic = abs(cross_validation.cross_val_score(
model_ridge_reg, X_, y, cv=10, scoring="mean_absolute_error"))
print("Average error for quadratic polynomial: {:.2f} (+/- {:.2f})" .format(
scores_quadratic.mean(), scores_quadratic.std() * 2))
print("Average error for cubic polynomial: {:.2f} (+/- {:.2f})" .format(
scores_cubic.mean(), scores_cubic.std() * 2))
from sklearn.metrics import mean_absolute_error
from sklearn.cross_validation import KFold
kf = KFold(X.shape[0], n_folds=10)
scores_naive = []
for train, test in kf:
train_mean = data['median'][train].mean()
scores_naive.append(mean_absolute_error(data['median'][test], [train_mean] * len(test)))
print("Average error for naive predictor: {:.2f} (+/- {:.2f})" .format(
np.mean(scores_naive), np.std(scores_naive) * 2))
from scipy.stats import ttest_rel
results = ttest_rel(scores_naive, scores_cubic)
print("P-value: {:.4f}" .format(results.pvalue/2)) # Dividing by 2 because it's one-tailed
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As shown above, I managed to collect data for 56 exams. It seems that the minimum mean mark was 37, and the maximum was 83.66. On average, it looks like exams tend to end up with a mark slightly above 60 (std of 9.39), about 66% have been closed exams, and I collected slightly more data from 3rd years (as seen by the mean of the cohort column). The minimum number of students that an exam had was 13 (I was there!), and the maximum was 131 (I was there too!).
Step2: Alright, there is very little structure here, but let's see how it looks when we use the percentage of students.
Step3: Quite a bit better, but let's do it for the median as it seems like a more intuitive measurement here
Step4: Alright, without starting to remove outliers, this seems to be the best I have. I will stay away from outlier detection, and from the graph there doesn't seem to be any really obvious one (maybe besides that one at 80+ median, but I won't go there).
Step5: Looks about right to me. I don't think stopping here would be a bad decision, but for the sake of it, I will try to fit it with a cubic polynomial.
Step6: Looks pretty, but it likely overfits. So let's try to add L2 regularization and find the regularization parameter through 10-fold cross validation.
Step7: The regularization parameter introduced only a very subtle difference. It's also a very big number as I haven't normalised the data. Now let's see which of the 2 polynomial curves fit the data better.
Step8: Not too bad. An error of less than 7 marks is better than I expected. Now for a comparison test, let's just predict that the median of a module is the mean of the medians of all the other modules (but using K-fold CV).
Step9: D'aww, the regression predictor is not much better than the naive one. At least it's not worse! As a last test, let's see what the chances are that the regression results are better just by chance. For this, a paired one-tailed t-test will be performed, with the null hypothesis that the 2 algorithms perform the same, and the alternative hypothesis that the regression one performs better. The p-value I will choose is 0.05.
|
2,917
|
<ASSISTANT_TASK:>
Python Code:
import espressomd
import espressomd.magnetostatics
import espressomd.magnetostatic_extensions
import espressomd.cluster_analysis
import espressomd.pair_criteria
espressomd.assert_features('DIPOLES', 'LENNARD_JONES')
import numpy as np
# Lennard-Jones parameters
LJ_SIGMA = 1
LJ_EPSILON = 1
LJ_CUT = 2**(1. / 6.) * LJ_SIGMA
# Particles
N_PART = 1200
# Area fraction of the mono-layer
PHI = 0.1
# Dipolar interaction parameter lambda = mu_0 m^2 /(4 pi sigma^3 KT)
DIP_LAMBDA = 4
# Temperature
KT = 1.0
# Friction coefficient
GAMMA = 1.0
# Time step
TIME_STEP = 0.01
# System setup
# BOX_SIZE = ...
print("Box size", BOX_SIZE)
# Note that the dipolar P3M and dipolar layer correction need a cubic
# simulation box for technical reasons.
system = espressomd.System(box_l=(BOX_SIZE, BOX_SIZE, BOX_SIZE))
system.time_step = TIME_STEP
# Lennard-Jones interaction
system.non_bonded_inter[0, 0].lennard_jones.set_params(epsilon=LJ_EPSILON, sigma=LJ_SIGMA, cutoff=LJ_CUT, shift="auto")
# Random dipole moments
# ...
# dip = ...
# Random positions in the monolayer
pos = BOX_SIZE * np.hstack((np.random.random((N_PART, 2)), np.zeros((N_PART, 1))))
# Add particles
system.part.add(pos=pos, rotation=N_PART * [(1, 1, 1)], dip=dip, fix=N_PART * [(0, 0, 1)])
# Set integrator to steepest descent method
system.integrator.set_steepest_descent(
f_max=0, gamma=0.1, max_displacement=0.05)
# Switch to velocity Verlet integrator
system.integrator.set_vv()
system.thermostat.set_langevin(kT=KT, gamma=GAMMA, seed=1)
CI_DP3M_PARAMS = {} # debug variable for continuous integration, can be left empty
# Setup dipolar P3M and dipolar layer correction
dp3m = espressomd.magnetostatics.DipolarP3M(accuracy=5E-4, prefactor=DIP_LAMBDA * LJ_SIGMA**3 * KT, **CI_DP3M_PARAMS)
dlc = espressomd.magnetostatic_extensions.DLC(maxPWerror=1E-4, gap_size=BOX_SIZE - LJ_SIGMA)
system.actors.add(dp3m)
system.actors.add(dlc)
# tune verlet list skin
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
# print skin value
print('tuned skin = {}'.format(system.cell_system.skin))
# Equilibrate
print("Equilibration...")
EQUIL_ROUNDS = 20
EQUIL_STEPS = 1000
for i in range(EQUIL_ROUNDS):
system.integrator.run(EQUIL_STEPS)
print(
f"progress: {(i + 1) * 100. / EQUIL_ROUNDS}%, dipolar energy: {system.analysis.energy()['dipolar']}",
end="\r")
print("\nEquilibration done")
LOOPS = 100
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import tempfile
import base64
VIDEO_TAG = """<video controls>
 <source src="data:video/x-m4v;base64,{0}" type="video/mp4">
 Your browser does not support the video tag.
</video>"""
def anim_to_html(anim):
if not hasattr(anim, '_encoded_video'):
with tempfile.NamedTemporaryFile(suffix='.mp4') as f:
anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264'])
with open(f.name, "rb") as g:
video = g.read()
anim._encoded_video = base64.b64encode(video).decode('ascii')
plt.close(anim._fig)
return VIDEO_TAG.format(anim._encoded_video)
animation.Animation._repr_html_ = anim_to_html
def init():
# Set x and y range
ax.set_ylim(0, BOX_SIZE)
ax.set_xlim(0, BOX_SIZE)
x_data, y_data = [], []
part.set_data(x_data, y_data)
return part,
fig, ax = plt.subplots(figsize=(10, 10))
part, = ax.plot([], [], 'o')
animation.FuncAnimation(fig, run, frames=LOOPS, blit=True, interval=0, repeat=False, init_func=init)
n_clusters = []
cluster_sizes = []
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
plt.xlim(0, BOX_SIZE)
plt.ylim(0, BOX_SIZE)
plt.xlabel('x-position', fontsize=20)
plt.ylabel('y-position', fontsize=20)
plt.plot(system.part.all().pos_folded[:, 0], system.part.all().pos_folded[:, 1], 'o')
plt.show()
plt.figure(figsize=(10, 10))
plt.grid()
plt.xticks(range(0, 20))
plt.plot(size_dist[1][:-2], size_dist[0][:-1] / float(LOOPS))
plt.xlabel('size of clusters', fontsize=20)
plt.ylabel('distribution', fontsize=20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we set up all simulation parameters.
Step2: Note that we declared a <tt>lj_cut</tt>. This will be used as the cut-off radius of the Lennard-Jones potential to obtain a purely repulsive WCA potential.
Step3: Now we set up the interaction between the particles as a non-bonded interaction and use the Lennard-Jones potential as the interaction potential. Here we use the above mentioned cut-off radius to get a purely repulsive interaction.
Step4: Now we generate random positions and orientations of the particles and their dipole moments.
Step5: Now we add the particles with their positions and orientations to our system. Thereby we activate all degrees of freedom for the orientation of the dipole moments. As we want a two dimensional system we only allow the particles to translate in $x$- and $y$-direction and not in $z$-direction by using the <tt>fix</tt> argument.
Step6: Be aware that we do not set the magnitude of the magnetic dipole moments to the particles. As in our case all particles have the same dipole moment it is possible to rewrite the dipole-dipole interaction potential to
Step7: Exercise
Step8: To calculate the dipole-dipole interaction we use the Dipolar P3M method (see Ref. <a href='#[1]'>[1]</a>) which is based on the Ewald summation. By default the boundary conditions of the system are set to conducting which means the dielectric constant is set to infinity for the surrounding medium. As we want to simulate a two dimensional system we additionally use the dipolar layer correction (DLC) (see Ref. <a href='#[2]'>[2]</a>). As we add <tt>DipolarP3M</tt> to our system as an actor, a tuning function is started automatically which tries to find the optimal parameters for Dipolar P3M and prints them to the screen. The last line of the output is the value of the tuned skin.
Step9: Now we equilibrate the dipole-dipole interaction for some time
Step10: Sampling
Step12: As the system is two dimensional, we can simply do a scatter plot to get a visual representation of a system state. To get a better insight of how a ferrofluid system develops during time we will create a video of the development of our system during the sampling. If you only want to sample the system simply go to Sampling without animation
Step13: Exercise
Step14: Cluster analysis
Step15: Sampling without animation
Step16: In the plot chain-like and ring-like clusters should be visible. Some of them are connected via Y- or X-links to each other. Also some monomers should be present.
|
2,918
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import sys
# Modify the path
sys.path.append("/Users/rebeccabilbro/Desktop/waves/stuff/yellowbrick")
import requests
import numpy as np
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"credit": os.path.join(FIXTURES, "credit", "credit.csv"),
"concrete": os.path.join(FIXTURES, "concrete", "concrete.csv"),
"occupancy": os.path.join(FIXTURES, "occupancy", "occupancy.csv"),
"mushroom": os.path.join(FIXTURES, "mushroom", "mushroom.csv"),
}
def load_data(name, download=False):
"""Loads and wrangles the passed in dataset by name.
If download is specified, this method will download any missing files.
"""
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Return the data frame
return pd.read_csv(path)
# Load the classification data set
data = load_data('credit')
# Specify the features of interest
features = [
'limit', 'sex', 'edu', 'married', 'age', 'apr_delay', 'may_delay',
'jun_delay', 'jul_delay', 'aug_delay', 'sep_delay', 'apr_bill', 'may_bill',
'jun_bill', 'jul_bill', 'aug_bill', 'sep_bill', 'apr_pay', 'may_pay', 'jun_pay',
'jul_pay', 'aug_pay', 'sep_pay',
]
classes = ['default', 'paid']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.default.as_matrix()
from yellowbrick.features.rankd import Rank2D
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y)
visualizer.transform(X)
visualizer.show()
from yellowbrick.features.radviz import RadViz
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y)
visualizer.transform(X)
visualizer.show()
from yellowbrick.pipeline import VisualPipeline
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
multivisualizer = VisualPipeline([
('rank2d', Rank2D(features=features, algorithm='covariance')),
('radviz', RadViz(classes=classes, features=features)),
])
multivisualizer.fit(X, y)
multivisualizer.transform(X)
multivisualizer.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Fetching the data
Step3: Ok now try with VisualPipeline
|
2,919
|
<ASSISTANT_TASK:>
Python Code:
# Load image and libraries
%matplotlib inline
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
from scipy import misc
input_image = misc.imread('figure_processed.png')
# Convert input image from RGBA to binary
input_image = input_image - 255
input_image = np.mean(input_image,2)
binary_image = input_image[::-1,:]
binary_image[binary_image>0] = 1
Npixels_rate,Npixels_time = np.shape(binary_image)
# Visualize binary image
plt.figure(figsize=(8,5))
plt.pcolor(np.arange(Npixels_time),np.arange(Npixels_rate),binary_image, cmap=cm.bone)
plt.xlim((0,Npixels_time))
plt.ylim((0,Npixels_rate))
plt.xlabel('Time (pixels)',size=20)
plt.ylabel('Firing rate (pixels)',size=20)
# Extract the time series (not the scale bars) by starting in the first column
col_in_time_series = True
s1rate_pixels = []
col = 0
while col_in_time_series == True:
if len(np.where(binary_image[:,col]==1)[0]):
s1rate_pixels.append(np.mean(np.where(binary_image[:,col]==1)[0]))
else:
col_in_time_series = False
col += 1
s1rate_pixels = np.array(s1rate_pixels)
# Subtract baseline
s1rate_pixels = s1rate_pixels - np.min(s1rate_pixels)
# Visualize time series
plt.figure(figsize=(5,5))
plt.plot(s1rate_pixels,'k',linewidth=3)
plt.xlabel('Time (pixels)',size=20)
plt.ylabel('Firing rate (pixels)',size=20)
# Convert rate from pixels to Hz
ratescale_col = 395 # Column in image containing containing rate scale
rate_scale = 50 # Hz, scale in image
ratescale_Npixels = np.sum(binary_image[:,ratescale_col])
pixels_to_rate = rate_scale/ratescale_Npixels
s1rate = s1rate_pixels*pixels_to_rate
# Convert time from pixels to ms
timescale_row = np.argmax(np.mean(binary_image[:,400:],1)) # Row in image containing time scale
time_scale = 100 # ms, scale in image
timescale_Npixels = np.sum(binary_image[timescale_row,400:])
pixels_to_time = time_scale/timescale_Npixels
pixels = np.arange(len(s1rate_pixels))
t = pixels*pixels_to_time
# Visualize re-scaled time series
plt.figure(figsize=(5,5))
plt.plot(t, s1rate,'k',linewidth=3)
plt.xlabel('Time (ms)',size=20)
plt.ylabel('Firing rate (Hz)',size=20)
# Interpolate time series to sample every 1ms
from scipy import interpolate
f = interpolate.interp1d(t, s1rate) # Set up interpolation
tmax = np.floor(t[-1])
t_ms = np.arange(tmax) # Desired time series, in ms
s1rate_ms = f(t_ms) # Perform interpolation
# Visualize re-scaled time series
plt.figure(figsize=(5,5))
plt.plot(t_ms, s1rate_ms,'k',linewidth=3)
plt.xlabel('Time (ms)',size=20)
plt.ylabel('Firing rate (Hz)',size=20)
# Save final time series
np.save('extracted_timeseries',s1rate_ms)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 3. Project 2-D binary image to 1-D time series
Step2: Step 4. Rescale in x- and y- variables
Step3: Step 5. Resample at desired sampling rate
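The column-wise projection in Step 3 vectorizes cleanly: for each pixel column, take the mean row index of the set pixels (NaN where a column is empty, instead of the while-loop's early stop). A minimal numpy sketch; the helper name is mine and it assumes a 0/1 array like binary_image:

```python
import numpy as np

def column_centroids(binary):
    # Mean row index of the "on" pixels in each column; this is the same
    # projection the while-loop above performs one column at a time.
    rows = np.arange(binary.shape[0])
    counts = binary.sum(axis=0)
    sums = (binary * rows[:, None]).sum(axis=0)
    # guard the division so empty columns yield NaN rather than an error
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```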
|
2,920
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-3', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
2,921
|
<ASSISTANT_TASK:>
Python Code:
from biovida.images import OpeniInterface
opi = OpeniInterface()
opi.search(query='lung cancer')
pull_df1 = opi.pull()
from biovida.images import CancerImageInterface
cii = CancerImageInterface(api_key=YOUR_API_KEY_HERE)
cii.search(cancer_type='lung')
pull_df2 = cii.pull(collections_limit=1) # only download the first collection/study
from biovida.unification import unify_against_images
unified_df = unify_against_images(instances=[opi, cii])
import numpy as np
def simplify_df(df):
    """This function simplifies dataframes
    for the purposes of this tutorial."""
data_frame = df.copy()
for c in ('source_images_path', 'cached_images_path'):
data_frame[c] = 'path_to_image'
return data_frame.replace({np.NaN: ''})
simplify_df(unified_df)[85:90]
from biovida.images import image_delete
def my_delete_rule(row):
if isinstance(row['abstract'], str) and 'proteins' in row['abstract'].lower():
return True
deleted_rows = image_delete(opi, delete_rule=my_delete_rule, only_recent=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's also get some data from the Cancer Imaging Archive.
Step3: Next, we can import the tool we will be using to unify the data
Step4: To close out this section, we can take a quick look at the resultant DataFrame.
Step5: Note
Step6: Next, we simply define a rule which will inform image_delete of which rows to delete.
Step7: In this example, we'll use the instance of OpeniInterface created above.
|
2,922
|
<ASSISTANT_TASK:>
Python Code:
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
train_df = pd.read_csv('../input/train.csv')
test_df = pd.read_csv('../input/test.csv')
combine = [train_df, test_df]
print(train_df.columns.values)
# preview the data
train_df.head()
train_df.tail()
train_df.info()
print('_'*40)
test_df.info()
train_df.describe()
# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.
# Review Parch distribution using `percentiles=[.75, .8]`
# SibSp distribution `[.68, .69]`
# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`
train_df.describe(include=['O'])
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False)
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
# grid = sns.FacetGrid(train_df, col='Embarked')
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
# grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
"After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
for dataset in combine:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\
'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
train_df[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]
train_df.shape, test_df.shape
for dataset in combine:
dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train_df.head()
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')
grid = sns.FacetGrid(train_df, row='Pclass', col='Sex', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
guess_ages = np.zeros((2,3))
guess_ages
for dataset in combine:
for i in range(0, 2):
for j in range(0, 3):
guess_df = dataset[(dataset['Sex'] == i) & \
(dataset['Pclass'] == j+1)]['Age'].dropna()
# age_mean = guess_df.mean()
# age_std = guess_df.std()
# age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)
age_guess = guess_df.median()
# Convert random age float to nearest .5 age
guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5
for i in range(0, 2):
for j in range(0, 3):
dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\
'Age'] = guess_ages[i,j]
dataset['Age'] = dataset['Age'].astype(int)
train_df.head()
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)
for dataset in combine:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train_df.head()
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
for dataset in combine:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False)
for dataset in combine:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()
for dataset in combine:
dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df.head()
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)
for dataset in combine:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(10)
test_df.head(10)
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
submission = pd.DataFrame({
"PassengerId": test_df["PassengerId"],
"Survived": Y_pred
})
# submission.to_csv('../output/submission.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Acquire data
Step2: Analyze by describing data
Step3: Which features are categorical?
Step4: Which features are mixed data types?
Step5: Which features contain blank, null or empty values?
Step6: What is the distribution of numerical feature values across the samples?
Step7: What is the distribution of categorical features?
Step8: Assumptions based on data analysis
Step9: Analyze by visualizing data
Step10: Correlating numerical and ordinal features
Step11: Correlating categorical features
Step12: Correlating categorical and numerical features
Step13: Wrangle data
Step14: Creating new feature extracting from existing
Step15: We can replace many titles with a more common name or classify them as Rare.
Step16: We can convert the categorical titles to ordinal.
Step17: Now we can safely drop the Name feature from training and testing datasets. We also do not need the PassengerId feature in the training dataset.
Step18: Converting a categorical feature
Step19: Completing a numerical continuous feature
Step20: Let us start by preparing an empty array to contain guessed Age values based on Pclass x Gender combinations.
Step21: Now we iterate over Sex (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.
Step22: Let us create Age bands and determine correlations with Survived.
Step23: Let us replace Age with ordinals based on these bands.
Step24: We can now remove the AgeBand feature.
Step25: Create new feature combining existing features
Step26: We can create another feature called IsAlone.
Step27: Let us drop Parch, SibSp, and FamilySize features in favor of IsAlone.
Step28: We can also create an artificial feature combining Pclass and Age.
Step29: Completing a categorical feature
Step30: Converting categorical feature to numeric
Step31: Quick completing and converting a numeric feature
Step32: We can now create FareBand.
Step33: Convert the Fare feature to ordinal values based on the FareBand.
Step34: And the test dataset.
Step35: Model, predict and solve
Step36: Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference Wikipedia.
Step37: We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the coefficient of the features in the decision function.
Step38: Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference Wikipedia.
Step39: In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference Wikipedia.
Step40: In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference Wikipedia.
Step41: The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference Wikipedia.
Step42: This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference Wikipedia.
Step43: The next model Random Forests is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference Wikipedia.
Step44: Model evaluation
|
2,923
|
<ASSISTANT_TASK:>
Python Code:
print('Hello World')
x = 5
print(x)
print(type(x))
x = 5.0
print(x)
print(type(x))
x = 1
y = 7.3
is_red = False
title = 'Just a string'
print(type(x), type(y), type(is_red), type(title))
names = ['Hugo', 'Charles','Janine']
ages = [72, 33, 16]
print(type(names), type(ages))
print(names)
print(ages)
first_name = names[0]
last_name = names[-1]
print('First name: %-10s' % first_name)
print('Last name: %-10s' % last_name)
print(type(names[0]))
print(names[0:2])
print(names[0:3:2])
print(names[1:2])
print(names[1:3])
print(names[::-1])
names_ln = names
names_cp = names[:]
names[0] = 'Ben'
print(names_ln)
print(names_cp)
names.append('Paul')
print(names)
names += ['Liz']
print(names)
names.insert(1,'Merle')
print(names)
names.extend(['Sophie','Sebastian','James'])
print(names)
names.remove('Janine')
print(names)
names.pop()
print(names)
names.pop(2)
print(names)
names.reverse()
print(names)
tup = (0, 1, 1, 5, 3, 8, 5)
print(type(tup))
td = 15
tm = 12
ty = 2018
print(td,tm,ty)
td,tm,ty = 15,12,2018
print(td,tm,ty)
print(type(td))
date = 31,12,2018
print(date)
print(type(date))
(day, month, year) = date
print(year, month, day)
x,y = 47,11
x,y = y,x
print(x,y)
tup = (123,34,79,133)
X,*Y = tup
print(X)
print(Y)
X,*Y,Z = tup
print(X)
print(Y)
print(Z)
X,Y,*Z = tup
print(X)
print(Y)
print(Z)
Name = 'Elizabeth'
A,*B,C = Name
print(A)
print(B)
print(C)
A,B,*C = Name
print(A)
print(B)
print(C)
m = 12
d = 8.1
s = m + d
print(s)
print(type(s))
data = [12.2, 16.7, 22.0, 9.3, 13.1, 18.1, 15.0, 6.8]
data_min = min(data)
data_max = max(data)
data_sum = sum(data)
print('Minimum %6.1f' % data_min)
print('Maximum %6.1f' % data_max)
print('Sum %6.1f' % data_sum)
values = [1,2,3,4,5,6,7,8,9,10]
values10 = values*10
print(values10)
values10 = values[:]
for i in range(0,len(values)):
values10[i] = values[i]*10
print(values10)
values10 = [i * 10 for i in values]
print(values10)
# just to be sure that the original values list is not overwritten.
print(values)
ix = 1
print(ix)
ix += 3 # same as x = x + 3
print(ix)
ix *= 2 # same as x = x * 2
print(ix)
x = 0
if(x>0):
print('x is greater than 0')
elif(x<0):
print('x is less than 0')
elif(x==0):
print('x is equal 0')
user = 'George'
if(user):
print('user is set')
if(user=='Dennis'):
print('--> it is Dennis')
else:
print('--> but it is not Dennis')
a = 0
b = 10
while(a < b):
print('a =',a)
a = a + 1
s = 0
for x in [1,2,3,4]:
s = s + x
print('sum = ', s)
# Now, let us find the shortest name of the list names.
# Oh, by the way this is a comment line :), which will not be executed.
index = -99
length = 50
i = 0
for name in names:
if(len(name)<length):
length = len(name)
index = i
i+=1
print('--> shortest name in list names is', names[index])
import os
print(help(os))
pwd = os.getcwd()
print(pwd)
os.chdir('..')
print("Directory changed: ", os.getcwd())
os.chdir(pwd)
print("Directory changed: ", os.getcwd())
HOME = os.environ.get('HOME')
print('My HOME environment variable is set to ', HOME)
datadir = 'data'
newpath = os.path.join(HOME,datadir)
print(newpath)
if os.path.isdir(newpath):
print('--> directory %s exists' % newpath)
else:
print('--> directory %s does not exist' % newpath)
input_file = os.path.join('data','precip.nc')
if os.path.isfile(input_file):
print('--> file %s exists' % input_file)
else:
print('--> file %s does not exist' % input_file)
import glob
fnames = sorted(glob.glob('./data/*.nc'))
print(fnames)
print(fnames[1])
print(os.path.basename(fnames[1]))
import sys
sys.path.append('./lib/')
import dkrz_utils
tempK = 286.5 #-- units Kelvin
print('Convert: %6.2f degK == %6.2f degC' % (tempK, (dkrz_utils.conv_K2C(tempK))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is the easiest way to use print. In order to produce a prettier output of the variable contents format specifications can be used. But we will come to this later.
Step2: Let's see what type the variable really has. You can use the function type as argument to the function print.
Step3: Change the value of variable x to a floating point value of 5.0.
Step4: Get the type of the changed variable x.
Step5: Define variables of different types.
Step6: 1.3 Lists
Step7: To select single or multiple elements of a list you can use indexing. A negative value takes the element from the end of the list.
Step8: To select a subset of a list you can use indices, slicing, [start_index<font color='red'><b>
Step9: What will be returned when doing the following?
Step10: The slicing with [
Step11: Well, how do we do an insertion of an element right after the first element?
Step12: If you want to add more than one element to a list use extend.
Step13: If you decide to remove an element use remove.
Step14: With pop you can remove an element, too. Remove the last element of the list.
Step15: Remove an element by its index.
Step16: Use reverse to - yupp - reverse the list.
Step17: 1.4 Tuples
Step18: Sometimes it is increasingly necessary to make multiple variable assignments which is very tedious. But it is very easy with the tuple value packaging method. Here are some examples how to use tuples.
Step19: Tuple packaging
Step20: You can use tuple packaging to assign the values to a single variable, too.
Step21: Tuple packaging makes an exchange of the content of variables much easier.
Step22: Ok, now we've learned a lot about tuples, but not all. There is a very helpful way to unpack a tuple.
Step23: Unpacking example with a tuple of strings.
Step24: 1.5 Computations
Step25: The built-in functions max(), min(), and sum() for instance can be used to do computations for us.
Step26: To do computations with lists is not that simple.
Step27: Yeah, that is not what you have expected, isn't it?
Step28: The more efficient way is to use Python's list comprehension
Step29: To notice would be the inplace operators += and *=.
Step30: 1.6 Statements
Step31: 1.6.2 while statement
Step32: 1.6.3 for statement
Step33: 1.7 Import Python modules
Step34: Take a look at the module.
Step35: Ok, let's see in which directory we are.
Step36: Go to the parent directory and let us see where we are then.
Step37: Go back to the directory where we started (that's why we wrote the name of the directory to the variable pwd ;)).
Step38: To retrieve the content of an environment variable the module os provides os.environment.get function.
Step39: Concatenate path names with os.path.join.
Step40: Now, we want to see if the directory really exist.
Step41: Modify the datadir variable, run the cells and see what happens.
Step42: Add a cell and play with the os functions in your environment.
Step43: Now, we can select a file, for instance the second one, of fnames.
Step44: But how can we get rid of the leading path? And yes, the os module can help us again with its path.basename function.
Step45: 1.7.2 Module sys
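The format specifications deferred in the first step above control width, precision, and alignment; a short self-contained sketch of the %-style formatting used throughout this tutorial, plus the newer equivalents:

```python
x = 5
y = 7.3456
title = 'Just a string'

# %-style format specifications: width, precision, alignment
print('integer: %5d' % x)           # right-aligned in 5 columns
print('float:   %8.2f' % y)         # 8 columns wide, 2 decimal places
print('string:  %-15s|' % title)    # left-aligned, padded to 15 columns

# The newer str.format and f-string equivalents
print('float: {:8.2f}'.format(y))
print(f'float: {y:8.2f}')
```

All three styles accept the same width.precision specifiers; the %-style is what the print statements in this tutorial rely on.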
|
2,924
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
from time import gmtime, strftime
import os
import re
import pickle
import random
import sys
# load ascii text from file
filename = "data/obama.txt"
raw_text = open(filename).read()
# get rid of any characters other than letters, numbers,
# and a few special characters
raw_text = re.sub('[^\nA-Za-z0-9 ,.:;?!-]+', '', raw_text)
# convert all text to lowercase
raw_text = raw_text.lower()
n_chars = len(raw_text)
print "length of text:", n_chars
print "text preview:", raw_text[:500]
# extract all unique characters in the text
chars = sorted(list(set(raw_text)))
n_vocab = len(chars)
print "number of unique characters found:", n_vocab
# create mapping of characters to integers and back
char_to_int = dict((c, i) for i, c in enumerate(chars))
int_to_char = dict((i, c) for i, c in enumerate(chars))
# test our mapping
print 'a', "- maps to ->", char_to_int["a"]
print 25, "- maps to ->", int_to_char[25]
# prepare the dataset of input to output pairs encoded as integers
seq_length = 100
inputs = []
outputs = []
for i in range(0, n_chars - seq_length, 1):
inputs.append(raw_text[i:i + seq_length])
outputs.append(raw_text[i + seq_length])
n_sequences = len(inputs)
print "Total sequences: ", n_sequences
indeces = range(len(inputs))
random.shuffle(indeces)
inputs = [inputs[x] for x in indeces]
outputs = [outputs[x] for x in indeces]
print inputs[0], "-->", outputs[0]
# create two empty numpy array with the proper dimensions
X = np.zeros((n_sequences, seq_length, n_vocab), dtype=bool)
y = np.zeros((n_sequences, n_vocab), dtype=bool)
# iterate over the data and build up the X and y data sets
# by setting the appropriate indices to 1 in each one-hot vector
for i, example in enumerate(inputs):
for t, char in enumerate(example):
X[i, t, char_to_int[char]] = 1
y[i, char_to_int[outputs[i]]] = 1
print 'X dims -->', X.shape
print 'y dims -->', y.shape
# define the LSTM model
model = Sequential()
model.add(LSTM(128, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.50))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
def generate(sentence, prediction_length=50, diversity=0.35):
print '----- diversity:', diversity
generated = sentence
sys.stdout.write(generated)
# iterate over number of characters requested
for i in range(prediction_length):
# build up sequence data from current sentence
x = np.zeros((1, X.shape[1], X.shape[2]))
for t, char in enumerate(sentence):
x[0, t, char_to_int[char]] = 1.
# use trained model to return probability distribution
# for next character based on input sequence
preds = model.predict(x, verbose=0)[0]
# use sample() function to sample next character
# based on probability distribution and desired diversity
next_index = sample(preds, diversity)
# convert integer to character
next_char = int_to_char[next_index]
# add new character to generated text
generated += next_char
# delete the first character from beginning of sentance,
# and add new caracter to the end. This will form the
# input sequence for the next predicted character.
sentence = sentence[1:] + next_char
# print results to screen
sys.stdout.write(next_char)
sys.stdout.flush()
print
filepath="-basic_LSTM.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=0, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
epochs = 50
prediction_length = 100
for iteration in range(epochs):
print 'epoch:', iteration + 1, '/', epochs
model.fit(X, y, validation_split=0.2, batch_size=256, nb_epoch=1, callbacks=callbacks_list)
# get random starting point for seed
start_index = random.randint(0, len(raw_text) - seq_length - 1)
# extract seed sequence from raw text
seed = raw_text[start_index: start_index + seq_length]
print '----- generating with seed:', seed
for diversity in [0.5, 1.2]:
generate(seed, prediction_length, diversity)
pickle_file = '-basic_data.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'X': X,
'y': y,
'int_to_char': int_to_char,
'char_to_int': char_to_int,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print 'Unable to save data to', pickle_file, ':', e
raise
statinfo = os.stat(pickle_file)
print 'Saved data to', pickle_file
print 'Compressed pickle size:', statinfo.st_size
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first thing we need to do is generate our training data set. In this case we will use a recent article written by Barack Obama for The Economist newspaper. Make sure you have the obama.txt file in the /data folder within the /week-6 folder in your repository.
Step2: Next, we use python's set() function to generate a list of all unique characters in the text. This will form our 'vocabulary' of characters, which is similar to the categories found in typical ML classification problems.
Step3: Now we need to define the training data for our network. With RNN's, the training data usually takes the shape of a three-dimensional matrix, with the size of each dimension representing
Step4: Now let's shuffle both the input and output data so that we can later have Keras split it automatically into a training and test set. To make sure the two lists are shuffled the same way (maintaining correspondence between inputs and outputs), we create a separate shuffled list of indices, and use these indices to reorder both lists.
Step5: Let's visualize one of these sequences to make sure we are getting what we expect
Step6: Next we will prepare the actual numpy datasets which will be used to train our network. We first initialize two empty numpy arrays in the proper formatting
Step7: Next, we define our RNN model in Keras. This is very similar to how we defined the CNN model, except now we use the LSTM() function to create an LSTM layer with an internal memory of 128 neurons. LSTM is a special type of RNN layer which solves the unstable gradients issue seen in basic RNN. Along with LSTM layers, Keras also supports basic RNN layers and GRU layers, which are similar to LSTM. You can find full documentation for recurrent layers in Keras' documentation
Step8: Next, we define two helper functions
Step9: The generate() function will take in
Step10: Next, we define a system for Keras to save our model's parameters to a local file after each epoch where it achieves an improvement in the overall loss. This will allow us to reuse the trained model at a later time without having to retrain it from scratch. This is useful for recovering models in case your computer crashes, or you want to stop the training early.
Step11: Now we are finally ready to train the model. We want to train the model over 50 epochs, but we also want to output some generated text after each epoch to see how our model is doing.
Step12: That looks pretty good! You can see that the RNN has learned a lot of the linguistic structure of the original writing, including typical length for words, where to put spaces, and basic punctuation with commas and periods. Many words are still misspelled but seem almost reasonable, and it is pretty amazing that it is able to learn this much in only 50 epochs of training.
|
2,925
|
<ASSISTANT_TASK:>
Python Code:
numbers = [1.5, 2.3, 0.7, -0.001, 4.4]
total = 0.0
for n in numbers:
    assert n > 0.0, 'Data should only contain positive values'
total += n
print('total is: ', total)
def normalize_rectangle(rect):
'''Normalizes a rectangle so that it is at the origin and 1.0 units long on its longest axis.'''
assert len(rect) == 4, 'Rectangles must contain 4 coordinates'
x0, y0, x1, y1 = rect
assert x0 < x1, 'Invalid X coordinates'
assert y0 < y1, 'Invalid Y coordinates'
dx = x1 - x0
dy = y1 - y0
if dx > dy:
scaled = float(dx) / dy
upper_x, upper_y = 1.0, scaled
else:
scaled = float(dx) / dy
upper_x, upper_y = scaled, 1.0
assert 0 < upper_x <= 1.0, 'Calculated upper X coordinate invalid'
assert 0 < upper_y <= 1.0, 'Calculated upper Y coordinate invalid'
return (0, 0, upper_x, upper_y)
print(normalize_rectangle((0.0, 1.0, 2.0))) #missing the fourth coordinate
print(normalize_rectangle((0.0,0.0, 1.0, 5.0)))
print(normalize_rectangle((0.0,0.0,5.0,1.0)))
assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)
assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == ???
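The assertions above are written before `range_overlap` exists (test-driven development). One possible implementation that satisfies the first three tests, with `None` chosen as the return value for non-overlapping ranges, might be:

```python
def range_overlap(ranges):
    '''Return common overlap among a set of (low, high) ranges, or None.'''
    max_low = max(low for low, high in ranges)
    min_high = min(high for low, high in ranges)
    if max_low >= min_high:
        # the ranges do not all overlap; here we choose to return None
        return None
    return (max_low, min_high)
```

Note that `None` for the non-overlap case is one design decision among several; the point of the `???` test is to force that decision before writing the code.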
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Programs like the Firefox browser are full of assertions
Step2: now look at the post-conditions to help us catch bugs by telling us the calculation isn't right
Step3: re-reading our function, we realize that line 10 should divide dy by dx rather than dx by dy
Step4: error is reassuring b/c we haven't written that fuction!
|
2,926
|
<ASSISTANT_TASK:>
Python Code:
from selenium import webdriver # powers the browser interaction
from selenium.webdriver.support.ui import Select # selects menu options
from pyvirtualdisplay import Display # for JHub environment
from bs4 import BeautifulSoup # to parse HTML
import csv # to write CSV
import pandas # to see CSV
# download firefox binaries
!wget http://ftp.mozilla.org/pub/firefox/releases/54.0/linux-x86_64/en-US/firefox-54.0.tar.bz2
# untar binaries
!tar xvjf firefox-54.0.tar.bz2
# download geckodriver
!wget https://github.com/mozilla/geckodriver/releases/download/v0.17.0/geckodriver-v0.17.0-linux64.tar.gz
# untar geckdriver
!tar xzvf geckodriver-v0.17.0-linux64.tar.gz
display = Display(visible=0, size=(1024, 768))
display.start()
# setup driver
driver = webdriver.Firefox(firefox_binary='./firefox/firefox', executable_path="./geckodriver")
driver.get("http://www.google.com")
print(driver.page_source)
# go results page
driver.get("http://wbsec.gov.in/(S(eoxjutirydhdvx550untivvu))/DetailedResult/Detailed_gp_2013.aspx")
# find "district" drop down menu
district = driver.find_element_by_name("ddldistrict")
district
# find options in "district" drop down
district_options = district.find_elements_by_tag_name("option")
print(district_options[1].get_attribute("value"))
print(district_options[1].text)
d_options = {option.text.strip(): option.get_attribute("value") for option in district_options if option.get_attribute("value").isdigit()}
print(d_options)
district_select = Select(district)
district_select.select_by_value(d_options["Bankura"])
# your code here
print(Select(driver.find_element_by_name("ddlblock")).first_selected_option.text)
# your code here
print(Select(driver.find_element_by_name("ddlgp")).first_selected_option.text)
soup = BeautifulSoup(driver.page_source, 'html5lib')
# get the html for the table
table = soup.select('#DataGrid1')[0]
# get list of rows
rows = [row for row in table.select("tr")]
rows = rows[1:]
rows[0].select('td')
data = []
for row in rows:
d = {}
seat_names = row.select('td')[0].find_all("span")
d['seat'] = ' '.join([x.text for x in seat_names])
d['electors'] = row.select('td')[1].text.strip()
d['polled'] = row.select('td')[2].text.strip()
d['rejected'] = row.select('td')[3].text.strip()
d['osn'] = row.select('td')[4].text.strip()
d['candidate'] = row.select('td')[5].text.strip()
d['party'] = row.select('td')[6].text.strip()
d['secured'] = row.select('td')[7].text.strip()
data.append(d)
print(data[1])
i = 0
while i < len(data):
if data[i]['seat']:
seat = data[i]['seat']
electors = data[i]['electors']
polled = data[i]['polled']
rejected = data[i]['rejected']
i = i+1
else:
data[i]['seat'] = seat
data[i]['electors'] = electors
data[i]['polled'] = polled
data[i]['rejected'] = rejected
i = i+1
data
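The forward-fill loop above can also be written as a small reusable function — a sketch, with the field names mirroring the dictionaries built earlier:

```python
def forward_fill(rows, keys=('seat', 'electors', 'polled', 'rejected')):
    # carry the last non-empty value of each key down to rows that
    # left it blank (merged cells in the source HTML table)
    last = {}
    for row in rows:
        for key in keys:
            if row[key]:
                last[key] = row[key]
            else:
                row[key] = last.get(key, '')
    return rows
```

This mutates the row dictionaries in place, exactly like the `while` loop, but makes the set of carried-forward columns explicit.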
header = data[0].keys()
with open('WBS-table.csv', 'w') as output_file:
dict_writer = csv.DictWriter(output_file, header)
dict_writer.writeheader()
dict_writer.writerows(data)
pandas.read_csv('WBS-table.csv')
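If pandas is not available, the CSV written above can be read back with the standard library alone — a sketch using `csv.DictReader`, the counterpart of the `DictWriter` used for export:

```python
import csv

def read_rows(path):
    # read a CSV with a header row back into a list of dictionaries,
    # one per data row, keyed by the column names
    with open(path, newline='') as f:
        return list(csv.DictReader(f))
```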
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selenium actually uses our web browser, and since the JupyterHub doesn't come with Firefox, we'll download the binaries
Step2: We also need the webdriver for Firefox that allows Selenium to interact directly with the browser through the code we write. We can download the geckodriver for Firefox from the github page
Step3: 1. Launching the webdriver
Step4: Now we can initialize the Selenium web driver, giving it the path to the Firefox binary code and the driver
Step5: You can navigate Selenium to a URL by using the get method, exactly the same way we used the requests.get before
Step6: Cool, right? You can see Google in your browser now. Let's go look at some West Bengal State election results
Step7: Zilla Parishad
Step8: Now if we want to get the different options in this drop down, we can do the same. You'll notice that each name is associated with a unique value. Since we're getting multiple elements here, we'll use find_elements_by_tag_name
Step9: Now we'll make a dictionary associating each name with its value.
Step10: We can then select a district by using its name and our dictionary. First we'll make our own function using Selenium's Select, and then we'll call it on "Bankura".
Step11: You should have seen the dropdown menu select 'Bankura' by running the previous cell.
Step12: Gram Panchayat
Step13: Once we select the last dropdown menu parameter, the website automatically generates a table below. This table could not have been reached through a URL alone, as you can see that the URL in the browser did not change. This is why Selenium is so helpful.
Step14: First we'll get all the rows of the table using the tr selector.
Step15: But the first row is the header so we don't want that.
Step16: Each cell in the row corresponds to the data we want.
Step17: Now it's just a matter of looping through the rows and getting the information we want from each one.
Step18: You'll notice that some of the information, such as total electors, is not supplied for each candidate. This code will add that information for the candidates who don't have it.
Step19: 4. Exporting data to CSV
|
2,927
|
<ASSISTANT_TASK:>
Python Code:
# Convention for import to get shortened namespace
import numpy as np
import xarray as xr
# Create some sample "temperature" data
data = 283 + 5 * np.random.randn(5, 3, 4)
data
temp = xr.DataArray(data)
temp
temp = xr.DataArray(data, dims=['time', 'lat', 'lon'])
temp
# Use pandas to create an array of datetimes
import pandas as pd
times = pd.date_range('2018-01-01', periods=5)
times
# Sample lon/lats
lons = np.linspace(-120, -60, 4)
lats = np.linspace(25, 55, 3)
temp = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon'])
temp
temp.attrs['units'] = 'kelvin'
temp.attrs['standard_name'] = 'air_temperature'
temp
# For example, convert Kelvin to Celsius
temp - 273.15
temp.sel(time='2018-01-02')
from datetime import timedelta
temp.sel(time='2018-01-07', method='nearest', tolerance=timedelta(days=2))
# Your code goes here
# %load solutions/interp_solution.py
temp.sel(time=slice('2018-01-01', '2018-01-03'), lon=slice(-110, -70), lat=slice(25, 45))
# As done above
temp.loc['2018-01-02']
temp.loc['2018-01-01':'2018-01-03', 25:45, -110:-70]
# This *doesn't* work however:
#temp.loc[-110:-70, 25:45,'2018-01-01':'2018-01-03']
# Open sample North American Reanalysis data in netCDF format
ds = xr.open_dataset('../../data/NARR_19930313_0000.nc')
ds
ds.isobaric1
ds['isobaric1']
ds_1000 = ds.sel(isobaric1=1000.0)
ds_1000
ds_1000.Temperature_isobaric
u_winds = ds['u-component_of_wind_isobaric']
u_winds.std(dim=['x', 'y'])
# %load solutions/mean_profile.py
# Import some useful Python tools
from datetime import datetime
# Twelve hours of hourly output starting at 22Z today
start = datetime.utcnow().replace(hour=22, minute=0, second=0, microsecond=0)
times = np.array([start + timedelta(hours=h) for h in range(13)])
# 3km spacing in x and y
x = np.arange(-150, 153, 3)
y = np.arange(-100, 100, 3)
# Standard pressure levels in hPa
press = np.array([1000, 925, 850, 700, 500, 300, 250])
temps = np.random.randn(times.size, press.size, y.size, x.size)
from cftime import date2num
time_units = 'hours since {:%Y-%m-%d 00:00}'.format(times[0])
time_vals = date2num(times, time_units)
time_vals
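The "hours since" encoding that `date2num` performs can be reproduced with plain `datetime` arithmetic — a sketch, assuming the offsets are expressed in whole or fractional hours:

```python
from datetime import datetime, timedelta

def hours_since(times, reference):
    # express each datetime as a float count of hours elapsed
    # since the reference time (the CF "units" epoch)
    return [(t - reference).total_seconds() / 3600.0 for t in times]
```

This makes the numeric values that end up in the netCDF file easy to reason about: the coordinate stores plain floats, and the units string tells readers how to turn them back into datetimes.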
ds = xr.Dataset({'temperature': (['time', 'z', 'y', 'x'], temps, {'units':'Kelvin'})},
coords={'x_dist': (['x'], x, {'units':'km'}),
'y_dist': (['y'], y, {'units':'km'}),
'pressure': (['z'], press, {'units':'hPa'}),
'forecast_time': (['time'], times)
})
ds
ds.forecast_time.encoding['units'] = time_units
ds.temperature
ds.attrs['Conventions'] = 'CF-1.7'
ds.attrs['title'] = 'Forecast model run'
ds.attrs['nc.institution'] = 'Unidata'
ds.attrs['source'] = 'WRF-1.5'
ds.attrs['history'] = str(datetime.utcnow()) + ' Python'
ds.attrs['references'] = ''
ds.attrs['comment'] = ''
ds
ds.temperature.attrs['standard_name'] = 'air_temperature'
ds.temperature.attrs['long_name'] = 'Forecast air temperature'
ds.temperature.attrs['missing_value'] = -9999
ds.temperature
ds.x.attrs['axis'] = 'X' # Optional
ds.x.attrs['standard_name'] = 'projection_x_coordinate'
ds.x.attrs['long_name'] = 'x-coordinate in projected coordinate system'
ds.y.attrs['axis'] = 'Y' # Optional
ds.y.attrs['standard_name'] = 'projection_y_coordinate'
ds.y.attrs['long_name'] = 'y-coordinate in projected coordinate system'
ds.pressure.attrs['axis'] = 'Z' # Optional
ds.pressure.attrs['standard_name'] = 'air_pressure'
ds.pressure.attrs['positive'] = 'down' # Optional
ds.forecast_time.attrs['axis'] = 'T' # Optional
ds.forecast_time.attrs['standard_name'] = 'time' # Optional
ds.forecast_time.attrs['long_name'] = 'time'
from pyproj import Proj
X, Y = np.meshgrid(x, y)
lcc = Proj({'proj':'lcc', 'lon_0':-105, 'lat_0':40, 'a':6371000.,
'lat_1':25})
lon, lat = lcc(X * 1000, Y * 1000, inverse=True)
ds = ds.assign_coords(lon = (['y', 'x'], lon))
ds = ds.assign_coords(lat = (['y', 'x'], lat))
ds
ds.lon.attrs['units'] = 'degrees_east'
ds.lon.attrs['standard_name'] = 'longitude' # Optional
ds.lon.attrs['long_name'] = 'longitude'
ds.lat.attrs['units'] = 'degrees_north'
ds.lat.attrs['standard_name'] = 'latitude' # Optional
ds.lat.attrs['long_name'] = 'latitude'
ds
ds['lambert_projection'] = int()
ds.lambert_projection.attrs['grid_mapping_name'] = 'lambert_conformal_conic'
ds.lambert_projection.attrs['standard_parallel'] = 25.
ds.lambert_projection.attrs['latitude_of_projection_origin'] = 40.
ds.lambert_projection.attrs['longitude_of_central_meridian'] = -105.
ds.lambert_projection.attrs['semi_major_axis'] = 6371000.0
ds.lambert_projection
ds.temperature.attrs['grid_mapping'] = 'lambert_projection' # or proj_var.name
ds
ds.to_netcdf('test_netcdf.nc', format='NETCDF4')
!ncdump test_netcdf.nc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we create a basic DataArray by passing it just a numpy array of random data. Note that XArray generates some basic dimension names for us.
Step2: We can also pass in our own dimension names
Step3: This is already an improvement over a plain numpy array, because we have names for each of the dimensions (or axes in NumPy parlance). Even better, we can take arrays representing the values for the coordinates for each of these dimensions and associate them with the data when we create the DataArray.
Step4: When we create the DataArray instance, we pass in the arrays we just created
Step5: ...and we can also set some attribute metadata
Step6: Notice what happens if we perform a mathematical operation with the DataArray
Step7: Selection
Step8: .sel has the flexibility to also perform nearest neighbor sampling, taking an optional tolerance
Step9: Exercise
Step10: Solution
Step11: Slicing with Selection
Step12: .loc
Step13: Opening netCDF data
Step14: This returns a Dataset object, which is a container that contains one or more DataArrays, which can also optionally share coordinates. We can then pull out individual fields
Step15: or
Step16: Datasets also support many of the same subsetting operations as DataArray, but will perform the operation on all data
Step17: Aggregation operations
Step18: Exercise
Step19: Resources
Step20: Time coordinates must contain a units attribute with a string value with a form similar to 'seconds since 2019-01-06 12
Step21: Now we can create the forecast_time variable just as we did before for the other coordinate variables
Step22: Due to how xarray handles time units, we need to encode the units in the forecast_time coordinate.
Step23: If we look at our data variable, we can see the units printed out, so they were attached properly!
Step24: We're going to start by adding some global attribute metadata. These are recommendations from the standard (not required), but they're easy to add and help users keep the data straight, so let's go ahead and do it.
Step25: We can also add attributes to this variable to define metadata. The CF conventions require a units attribute to be set for all variables that represent a dimensional quantity. The value of this attribute needs to be parsable by the UDUNITS library. Here we have already set it to a value of 'Kelvin'. We also set the standard (optional) attributes of long_name and standard_name. The former contains a longer description of the variable, while the latter comes from a controlled vocabulary in the CF conventions. This allows users of data to understand, in a standard fashion, what a variable represents. If we had missing values, we could also set the missing_value attribute to an appropriate value.
Step26: Coordinate variables
Step27: We also define a coordinate variable pressure to reference our data in the vertical dimension. The standard_name of 'air_pressure' is sufficient to identify this coordinate variable as the vertical axis, but let's go ahead and specify the axis as well. We also specify the attribute positive to indicate whether the variable increases when going up or down. In the case of pressure, this is technically optional.
Step28: Auxilliary Coordinates
Step29: Now we can create the needed variables. Both are dimensioned on y and x and are two-dimensional. The longitude variable is identified as actually containing such information by its required units of 'degrees_east', as well as the optional 'longitude' standard_name attribute. The case is the same for latitude, except the units are 'degrees_north' and the standard_name is 'latitude'.
Step30: With the variables created, we identify these variables as containing coordinates for the Temperature variable by setting the coordinates value to a space-separated list of the names of the auxilliary coordinate variables
Step31: Coordinate System Information
Step32: Now that we created the variable, all that's left is to set the grid_mapping attribute on our Temperature variable to the name of our dummy variable
Step33: Write to NetCDF
|
2,928
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'sandbox-1', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "open shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
2,929
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
from astropy import coordinates as coord, units as u, time, constants as const
import logging
import sys
import os
from scipy import interpolate, stats
import ephem
import multiprocessing as mp
logger = logging.getLogger('window-function')
logger.setLevel(logging.DEBUG)
np.random.seed(42)
colours = sns.color_palette(n_colors=5)
sns.set(style='white')
def datetime_to_jd(dt):
datetime_object = datetime.datetime(
dt.year, dt.month, dt.day, 0, 0, 1)
return time.Time(datetime_object.isoformat(), format='isot', scale='utc').jd
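The astropy conversion above can be cross-checked with stdlib arithmetic alone. This sketch uses `datetime.date.toordinal()` (days since 0001-01-01 in the proleptic Gregorian calendar) plus a fixed offset; it gives the Julian date at midnight, ignoring the one-second nudge the helper above applies.

```python
import datetime

def date_to_jd_midnight(d):
    # toordinal() == 1 for 0001-01-01, whose midnight is JD 1721425.5,
    # hence the 1721424.5 offset
    return d.toordinal() + 1721424.5

jd = date_to_jd_midnight(datetime.date(2015, 1, 1))  # -> 2457023.5
```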
os.environ['NPLANETS'] = '100'
try:
n_planets_to_draw = int(os.environ['NPLANETS'])
except KeyError:
raise RuntimeError("Set NPLANETS environment variable to choose the number of planets")
print('Computing for %d planets' % n_planets_to_draw)
weather_fraction = 0.7
ndraws = int(os.environ.get('NDRAWS', 10))
n_requested_transits = 3.
start_date, end_date = datetime.date(2015, 1, 1), datetime.date(2018, 1, 1)
ndays = (end_date - start_date).days
dates = np.asarray([start_date + datetime.timedelta(days=i) for i in range(ndays)])
site_coordinates = coord.EarthLocation.from_geodetic(-70.4042 * u.degree, -24.6272 * u.degree, 2400 * u.m)  # (lon, lat, height); west longitudes are negative
ra_choices = np.linspace(175.5, 184.5, 24) * u.degree
dec_choices = np.linspace(-56, -44, 24) * u.degree
start_jd, end_jd = datetime_to_jd(start_date), datetime_to_jd(end_date)
half_an_hour = (0.5 * u.hour).to(u.d)
def altitude(position):
    # approximate altitude: 90 degrees minus the angle between the site's
    # geocentric (ITRS) direction -- roughly the local zenith -- and the source
    return 90. * u.degree - site_coordinates.itrs.separation(position)
ind = np.ones(ndays, dtype=bool)
days = np.arange(ndays)
bad_days = np.random.choice(days, replace=False, size=int((1 - weather_fraction) * ndays))  # size must be an integer
logger.info('Removing %d bad days', bad_days.size)
ind[bad_days] = False
valid_dates = set(dates[ind])
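The weather cut above — discarding a random 30% of nights — can be sketched with the stdlib alone. Note the number of bad nights must be an integer before sampling.

```python
import random

rng = random.Random(42)
ndays_toy = 365
weather_fraction_toy = 0.7          # same clear-sky fraction as above
n_bad = int((1 - weather_fraction_toy) * ndays_toy)   # must be an int
bad_nights = set(rng.sample(range(ndays_toy), n_bad))  # sampling without replacement
good_nights = [d for d in range(ndays_toy) if d not in bad_nights]
```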
class DrawPeriods(object):
def __init__(self, values):
self.periods = np.asarray([2, 3.4, 5.9, 10, 17])
self.values = values
self.interp = interpolate.interp1d(self.values, self.periods)
def draw(self):
chosen_value = np.random.uniform(self.values.min(), self.values.max())
return float(self.interp(chosen_value))
@property
def min_period(self):
return self.periods.min()
@property
def max_period(self):
return self.periods.max()
@classmethod
def large_neptunes(cls):
values = np.asarray([0.004, 0.01, 0.12, 0.21, 0.49]) / 100.
return cls(values)
@classmethod
def small_neptunes(cls):
values = np.asarray([0.035, 0.22, 0.95, 2.88, 6.55]) / 100.
return cls(values)
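`DrawPeriods` is inverse-transform sampling: draw a uniform value in the cumulative-rate range, then interpolate back onto the period axis. A stdlib-only sketch of the same idea, using the large-Neptune table from the class above:

```python
import random
import bisect

periods_tbl = [2.0, 3.4, 5.9, 10.0, 17.0]
cum_rates = [0.004, 0.01, 0.12, 0.21, 0.49]   # large-Neptune values (as fractions of 100)

def draw_period(rng=random):
    # uniform draw over the cumulative-rate range, then linear
    # interpolation between the bracketing table points
    u = rng.uniform(cum_rates[0], cum_rates[-1])
    i = max(1, min(bisect.bisect_left(cum_rates, u), len(periods_tbl) - 1))
    frac = (u - cum_rates[i - 1]) / (cum_rates[i] - cum_rates[i - 1])
    return periods_tbl[i - 1] + frac * (periods_tbl[i] - periods_tbl[i - 1])

samples = [draw_period() for _ in range(1000)]
```

Every draw lands inside the 2–17 day table range, matching the cumulative histogram plotted below.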
d = DrawPeriods.large_neptunes()
periods = [d.draw() for _ in range(1000)]
_, _, lines = plt.hist(periods, bins=100, density=True, cumulative=True, histtype='step', lw=2., color=colours[0])  # 'normed' was removed in matplotlib 3.x; 'density' replaces it
plt.ylim(0, 1)
plt.twinx()
prob_dist = plt.plot(d.periods, d.values / d.values.max(), color=colours[1])
plt.ylim(0, 1)
plt.legend([lines[0], prob_dist[0]], ['Planet distribution', 'Probability distribution'], loc='best')
None
def separation_from_period(period):
period = (period * u.d).to(u.s)
mass = (1. * u.astrophys.M_sun).to(u.kg)
norm = ((const.G * mass / (4. * np.pi ** 2))) ** (1. / 3.)
return (norm * period ** (2. / 3.)).to(u.m)
def transit_probability(period):
radius = 1. * u.astrophys.R_sun.to(u.m)
return (radius / separation_from_period(period)).value
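The two functions above encode the geometric transit probability P ≈ R★/a, with the semi-major axis a from Kepler's third law. A stdlib-only worked example for a solar analogue (constants in SI; illustrative, not the astropy values):

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m

def toy_transit_probability(period_days):
    # a = (G M P^2 / 4 pi^2)^(1/3), then P_transit = R_star / a
    period_s = period_days * 86400.
    a = (G * M_SUN * period_s ** 2 / (4. * math.pi ** 2)) ** (1. / 3.)
    return R_SUN / a

p = toy_transit_probability(3.4)   # roughly a 10% chance at P = 3.4 d
```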
def compute_twilight_limits(dt):
    observer = ephem.Observer()
    observer.date = datetime.datetime(dt.datetime.year,
                                      dt.datetime.month,
                                      dt.datetime.day,
                                      0, 0, 1)
    # ephem interprets bare floats as radians, so pass the degree
    # values as strings, which it parses as degrees
    observer.lat = str(site_coordinates.lat.value)
    observer.long = str(site_coordinates.lon.value)
    observer.horizon = '-18'
    # (morning twilight start, evening twilight end) for dt's date,
    # i.e. the interval during which the Sun is above -18 degrees
    return (observer.previous_rising(ephem.Sun(), use_center=True).datetime(),
            observer.next_setting(ephem.Sun(), use_center=True).datetime())
def transit_is_visible(dt, sky_position, minimum_elevation=30. * u.degree):
    night = dt.datetime.date()
    if night not in valid_dates:
        return False
    twilight_limits = compute_twilight_limits(dt)
    # compare as datetime objects (dt is an astropy Time, the limits are
    # datetimes) and reject transits inside the daytime/twilight interval
    if twilight_limits[0] < dt.datetime < twilight_limits[1]:
        return False
    if altitude(sky_position) < minimum_elevation:
        return False
    return True
def run_simulation(index):
periods, fraction = [], []
i = 0
while True:
period = d.draw()
# if np.random.uniform() > transit_probability(period):
# continue
sky_position = coord.SkyCoord(np.random.choice(ra_choices), np.random.choice(dec_choices), unit=(u.degree, u.degree))
epoch = time.Time(np.random.uniform(start_jd, end_jd), format='jd')
n_visible_transits = 0
for transit in np.arange(365.25 / period):
midpoint = epoch + time.TimeDelta(transit * period, format='jd')
if transit_is_visible(midpoint, sky_position):
n_visible_transits += 1
if n_visible_transits > 0:
i += 1
periods.append(period)
fraction.append(float(n_visible_transits) / n_requested_transits)
if i >= n_planets_to_draw:
break
periods, fraction = [np.asarray(val) for val in [periods, fraction]]
ind = periods.argsort()
periods, fraction = [data[ind] for data in [periods, fraction]]
return periods, fraction
pool = mp.Pool()
draws = np.arange(ndraws)
results = pool.map(run_simulation, draws)
periods = np.hstack([row[0] for row in results])
fraction = np.hstack([row[1] for row in results])
out = np.array(list(zip(periods, fraction)), dtype=[('periods', float), ('fraction', float)])
np.save('results', out)
by, bx, _ = stats.binned_statistic(np.log10(periods), fraction, statistic='sum', bins=25)
norm, _, _ = stats.binned_statistic(np.log10(periods), periods, statistic='count', bins=bx)
by = by / norm  # sum / count = mean detected fraction per log-period bin
by[by != by] = 0.  # empty bins give 0/0 = nan; set them to zero
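The per-bin normalisation — sum the detected fractions in each log-period bin, divide by the number of planets in that bin, and zero out empty bins — can be sketched without scipy. The (period, fraction) pairs here are illustrative values, not simulation output.

```python
import math
from collections import defaultdict

pairs = [(2.1, 1.0), (2.3, 0.5), (8.0, 2.0), (9.5, 1.0), (16.0, 3.0)]
nbins = 5
lo, hi = math.log10(2.0), math.log10(17.0)
width = (hi - lo) / nbins

sums = defaultdict(float)
counts = defaultdict(int)
for period, frac in pairs:
    b = min(int((math.log10(period) - lo) / width), nbins - 1)
    sums[b] += frac
    counts[b] += 1

# mean fraction per log-period bin; empty bins are left at 0
means = [sums[b] / counts[b] if counts[b] else 0.0 for b in range(nbins)]
```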
l, = plt.semilogx(10 ** bx[:-1], by, drawstyle='steps-post')
ticks = [1., 2., 5., 10., 20]
plt.xticks(ticks, ticks)
plt.xlim(2, 20)
target = plt.axhline(n_requested_transits, color='k', ls=':')
plt.legend([l, target], ['Window function', 'Target'], loc='best')
plt.xlabel(r'Orbital period / days')
plt.ylabel(r'Detected transits')
plt.tight_layout()
for extension in 'png', 'pdf':
plt.savefig('window-function.{}'.format(extension))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll start with the simple observing as often as we can when a field is up. We only need to consider one year in this case. Start with some constants
Step2: Now we can remove the days in which the weather is bad.
Step3: Now a class to draw periods, as the setup is relatively expensive.
Step4: As a test, we shall see what the cumulative distribution of periods looks like.
Step5: Drawing planets
Step6: Run multiple copies of the simulation function in parallel, for speed, and then combine the results
Step7: Save the results for later analysis
Step8: These are a series of delta functions, so we need to bin up in the x axis. We also need to normalise by the number of planets in each period window.
Step9: Now plot the window function
|
2,930
|
<ASSISTANT_TASK:>
Python Code:
import json
import os
import pickle
import requests
import time
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cross_validation import KFold, StratifiedKFold, train_test_split
from sklearn.datasets.base import Bunch
from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import ElasticNetCV, LogisticRegressionCV, LogisticRegression, SGDClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, auc, roc_curve, roc_auc_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.svm import LinearSVC, NuSVC, SVC
%matplotlib inline
pd.set_option('display.max_columns', 500)
names = [
'class',
'cap-shape',
'cap-surface',
'cap-color'
]
mushrooms = os.path.join('data','mushrooms','agaricus-lepiota.data')
# the data file has no header row, so read it headerless, keep the first
# four columns, and name them explicitly (passing usecols=['p','x','s','n']
# would silently consume the first data row as a header)
data = pd.read_csv(mushrooms, header=None)
data = data.iloc[:, :4]
data.columns = names
data.head()
# seaborn factorplot to show edible/poisonous breakdown by different factors
for name in names[1:]:
g = sns.factorplot("class", col=name, data=data,
kind="count", size=2.5, aspect=.8, col_wrap=7)
# Run the command appropriate for your OS.
# OSX/ Linux
! cat data/mushrooms/agaricus-lepiota.attributes
# Windows
#! type data\mushrooms\agaricus-lepiota.attributes
def load_attribute_names_and_values(filename):
'''Returns a list of attribute names and values in a file.
This list contains dictionaries wherein the keys are names
and the values are value description dictionaries.
Each value description sub-dictionary will use
the attribute value abbreviations as its keys
and the attribute descriptions as the values.
filename is expected to have one attribute name and set of values per line,
with the following format:
name: value_description=value_abbreviation[,value_description=value_abbreviation]*
for example
cap-shape: bell=b, conical=c, convex=x, flat=f, knobbed=k, sunken=s
The attribute name and values dictionary created from this line would be the following:
{'name': 'cap-shape',
'values': {'c': 'conical',
'b': 'bell',
'f': 'flat',
'k': 'knobbed',
's': 'sunken',
'x': 'convex'}}
'''
attribute_names_and_values = [] # this will be a list of dicts
with open(filename) as f:
for line in f:
attribute_name_and_value_dict = {}
attribute_name_and_value_string_list = line.strip().split(':')
attribute_name = attribute_name_and_value_string_list[0]
attribute_name_and_value_dict['name'] = attribute_name
if len(attribute_name_and_value_string_list) < 2:
attribute_name_and_value_dict['values'] = None # no values for this attribute
else:
value_abbreviation_description_dict = {}
description_and_abbreviation_string_list = attribute_name_and_value_string_list[1].strip().split(',')
for description_and_abbreviation_string in description_and_abbreviation_string_list:
description_and_abbreviation = description_and_abbreviation_string.strip().split('=')
description = description_and_abbreviation[0]
if len(description_and_abbreviation) < 2: # assumption: no more than 1 value is missing an abbreviation
value_abbreviation_description_dict[None] = description
else:
abbreviation = description_and_abbreviation[1]
value_abbreviation_description_dict[abbreviation] = description
attribute_name_and_value_dict['values'] = value_abbreviation_description_dict
attribute_names_and_values.append(attribute_name_and_value_dict)
return attribute_names_and_values
attribute_filename = os.path.join('data','mushrooms','agaricus-lepiota.attributes')
attribute_names_and_values = load_attribute_names_and_values(attribute_filename)
attribute_names_and_values[0]
# What does the while loop below do? Explain the code to your partner in plain english. Add extra cells to the notebook
# to test pieces of the code if that helps.
i = 0
while i < len(names):
data.replace({names[i] : attribute_names_and_values[i]['values']}, inplace=True)
i += 1
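As a hedged illustration (with hypothetical toy data, not the real mushroom table) of what the loop above does: `DataFrame.replace` with a `{column: {abbreviation: word}}` dict rewrites one column's codes at a time.

```python
import pandas as pd

# Toy stand-ins for `data` and `attribute_names_and_values`
# (values are assumptions, chosen only for illustration).
df = pd.DataFrame({"class": ["p", "e"], "cap-shape": ["x", "b"]})
mappings = [
    {"name": "class", "values": {"p": "poisonous", "e": "edible"}},
    {"name": "cap-shape", "values": {"x": "convex", "b": "bell"}},
]

# Same pattern as the while loop: one replace() call per column.
for m in mappings:
    df.replace({m["name"]: m["values"]}, inplace=True)

print(df["class"].tolist())      # ['poisonous', 'edible']
print(df["cap-shape"].tolist())  # ['convex', 'bell']
```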
data.to_csv(os.path.join('data', 'mushrooms', 'agaricus-lepiota.txt'), header=None, index=False)
data.head()
json_file = os.path.join('data','mushrooms','meta.json')
meta = {
'target_names': list(data['class'].unique()),
'feature_names': list(data.columns),
'categorical_features': {
column: list(data[column].unique())
for column in data.columns
if data[column].dtype == 'object'
},
}
with open(json_file, 'w') as f:
json.dump(meta, f, indent=2)
def load_data():
root = os.path.join('data','mushrooms')
# Load the meta data from the file
with open(os.path.join(root, 'meta.json'), 'r') as f:
meta = json.load(f)
names = meta['feature_names']
# Load the readme information
with open(os.path.join(root, 'README.md'), 'r') as f:
readme = f.read()
# Load the data from the file where we updated the feature values
mushrooms = pd.read_csv(os.path.join(root, 'agaricus-lepiota.txt'), names=names)
# Remove the target from the categorical features
meta['categorical_features'].pop('class')
# Return the bunch with the appropriate data chunked apart
return Bunch(
data = mushrooms[names[1:]],
target = mushrooms[names[0]],
target_names = meta['target_names'],
feature_names = meta['feature_names'],
categorical_features = meta['categorical_features'],
DESCR = readme,
)
dataset = load_data()
class EncodeCategorical(BaseEstimator, TransformerMixin):
'''Encodes a specified list of columns or all columns if None.'''
def __init__(self, columns=None):
self.columns = list(columns) if columns is not None else None
self.encoders = None
def fit(self, data, target=None):
'''Expects a data frame with named columns to encode.'''
# Encode all columns if columns is None
if self.columns is None:
self.columns = data.columns
# Fit a label encoder for each column in the data frame
self.encoders = {
column: LabelEncoder().fit(data[column])
for column in self.columns
}
return self
def transform(self, data):
'''Uses the encoders to transform a data frame.'''
output = data.copy()
for column, encoder in self.encoders.items():
output[column] = encoder.transform(data[column])
return output
# set some things up for our function
if not os.path.exists(os.path.join('data', 'mushrooms', 'output')):
os.mkdir(os.path.join('data', 'mushrooms', 'output'))
OUTPATH = os.path.abspath(os.path.join('.', 'data', 'mushrooms', 'output'))
print(OUTPATH)
def model_selection(dataset, feature_model, model_estimator, fse_label, model_label):
'''Test various combinations of estimators for feature selection and modeling.
The pipeline generates the dataset, encodes columns, then uses the encoded results for feature selection.
Finally, the selected features are sent to the estimator model for scoring.
'''
start = time.time()
# assign X
X = dataset.data
# assign y, encoding the target value
y = LabelEncoder().fit_transform(dataset.target)
if feature_model == 'none':
# use the pipeline that does not use SelectFromModel
model = Pipeline([
('label_encoding', EncodeCategorical(dataset.categorical_features.keys())),
('one_hot_encoder', OneHotEncoder()),
('estimator', model_estimator)
])
else:
# use the pipeline that has SelectFromModel; feature selection must come
# before the estimator, and Pipeline step names must be unique
model = Pipeline([
('label_encoding', EncodeCategorical(dataset.categorical_features.keys())),
('one_hot_encoder', OneHotEncoder()),
('feature_selection', SelectFromModel(feature_model)),
('estimator', model_estimator)
])
# Train and test the model using StratifiedKFold cross validation. Compile the scores for each iteration of the model.
scores = {'accuracy':[], 'auc':[], 'f1':[], 'precision':[], 'recall':[]}
for train, test in StratifiedKFold(y, n_folds=4, shuffle=True): # Ben says always use 12 folds! We cheat a bit here...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
model.fit(X_train, y_train)
expected = y_test
predicted = model.predict(X_test)
## Visualize scores
fpr, tpr, thresholds = roc_curve(expected, predicted)
roc_auc = auc(fpr, tpr)
plt.figure()
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC-AUC for {}'.format(model_label))
plt.legend(loc="lower right")
plt.show()
## Record scores
scores['accuracy'].append(accuracy_score(expected, predicted))
scores['f1'].append(f1_score(expected, predicted, average='binary'))
scores['precision'].append(precision_score(expected, predicted, average='binary'))
scores['recall'].append(recall_score(expected, predicted, average='binary'))
# AUC cannot be computed if only 1 class is represented in the data. When that happens, record an AUC score of 0.
try:
scores['auc'].append(roc_auc_score(expected, predicted))
except:
scores['auc'].append(0)
# Print the modeling details and the mean score.
print("\nBuild and validation took {:0.3f} seconds\n".format(time.time()-start))
print("Feature Selection Estimator: {}\n".format(fse_label))
print("Estimator Model: {}\n".format(model_label))
print("Validation scores are as follows:\n")
print(pd.DataFrame(scores).mean())
# Create a data frame with the mean score and estimator details.
df = pd.DataFrame(scores).mean()
df['SelectFromModel'] = fse_label
df['Estimator'] = model_label
# Write official estimator to disk
estimator = model
estimator.fit(X,y)
pickle_path = os.path.join(OUTPATH, fse_label.lower().replace(" ", "-") + "_" + model_label.lower().replace(" ", "-") + ".pickle")
with open(pickle_path, 'wb') as f:
pickle.dump(estimator, f)
print("\nFitted model written to:\n{}".format(os.path.abspath(pickle_path)))
return df
evaluation = pd.DataFrame()
evaluation = evaluation.append(pd.DataFrame(model_selection(dataset, "none", LogisticRegression(), "none", "LogisticRegression")).T)
evaluation = evaluation.append(pd.DataFrame(model_selection(dataset, "none", KNeighborsClassifier(), "none", "KNeighborsClassifier")).T)
evaluation = evaluation.append(pd.DataFrame(model_selection(dataset, "none", BaggingClassifier(KNeighborsClassifier()), "none", "BaggedKNeighborsClassifier")).T)
evaluation = evaluation.append(pd.DataFrame(model_selection(dataset, "none", RandomForestClassifier(), "none", "RandomForestClassifier")).T)
evaluation = evaluation.append(pd.DataFrame(model_selection(dataset, "none", ExtraTreesClassifier(), "none", "ExtraTreesClassifier")).T)
evaluation = evaluation.append(pd.DataFrame(model_selection(dataset, "none", SGDClassifier(), "none", "SGDClassifier")).T)
evaluation = evaluation.append(pd.DataFrame(model_selection(dataset, "none", LinearSVC(), "none", "LinearSVC")).T)
evaluation.to_csv(os.path.join('data', 'mushrooms', 'model_comparison.csv'), index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fetch the data
Step2: Our data, including the target, is categorical. We will need to change these values to numeric ones for machine learning.
Step3: The factorplots show us how many mushrooms with each feature value are poisonous or edible. Only "gill color = b" seems to always indicate poisonous mushrooms.
Step4: We can now use this file to create a list of dictionaries. This list will allow us to easily replace the single letters with the full words they represent.
Step5: We now have a list that contains a dictionary for each column in our data. Take a look at the dictionaries.
Step6: We can use these dictionaries to replace our single letters with the full word it stands for and write our new dataframe to a txt file.
Step7: Now we can create our meta.json file and bunches.
Step8: Imputation or Drop
Step12: Compare the code above to the load_data() function in the Census notebook. What was changed? Why?
Step19: If we use the class as written, we will have the ability to use this to transform all of our features using the LabelEncoder. We will use it as part of our pipeline.
Step20: Read through the function and make sure you understand what it is doing.
|
2,931
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.figsize'] = (12, 9)
# Import Jupyter's sound player widget
# See: https://ipython.org/ipython-doc/dev/api/generated/IPython.display.html#IPython.display.Audio
from IPython.display import Audio
from scipy.io import wavfile
sample_rate = 44100 # in samples/sec
t = np.linspace(0, 5, sample_rate*5)
data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t)
data /= 2. # to keep data between -1 and 1
data.min(), data.max()
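A small sanity check (a sketch, not part of the original notebook) of why the division by 2 matters: two unit-amplitude sines sum to values in [-2, 2], and halving keeps the float signal inside the [-1, 1] range that `wavfile.write` expects for floating-point data.

```python
import numpy as np

# Coarser time grid than the notebook's (assumption: 1000 samples is
# enough to illustrate the range check).
t = np.linspace(0, 5, 1000)
data = (np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 224 * t)) / 2.0

print(bool(data.min() >= -1.0 and data.max() <= 1.0))  # True
```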
wavfile.write("foo.wav", sample_rate, data)
Audio(data, rate=sample_rate)
Audio("foo.wav")
sample_rate, data = wavfile.read("foo.wav")
sample_rate, data.shape, data.min(), data.max()
Audio(data, rate=sample_rate)
plt.plot(data[0:20000]);
#!wget -O sound.wav https://github.com/karoldvl/ESC-50/raw/master/audio/1-100038-A-14.wav
!wget -O sound.wav https://github.com/jdhp-misc/sample-sounds/raw/master/piano_notes/386725__luckylux__sol.wav
sample_rate, data = wavfile.read("sound.wav")
data.shape
Audio(data[:,0], rate=sample_rate)
plt.plot(data[:,0]);
!rm sound.wav
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Official documentation
Step2: Read wav file
Step3: Another example
|
2,932
|
<ASSISTANT_TASK:>
Python Code:
# versions
import IPython
print("pyspark version:" + str(sc.version))
print("Ipython version:" + str(IPython.__version__))
# agg
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.agg({"amt":"avg"})
x.show()
y.show()
# alias
from pyspark.sql.functions import col
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.alias('transactions')
x.show()
y.show()
y.select(col("transactions.to")).show()
# cache
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x.cache()
print(x.count()) # first action materializes x in memory
print(x.count()) # later actions avoid IO overhead
# coalesce
x_rdd = sc.parallelize([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)],2)
x = sqlContext.createDataFrame(x_rdd, ['from','to','amt'])
y = x.coalesce(numPartitions=1)
print(x.rdd.getNumPartitions())
print(y.rdd.getNumPartitions())
# collect
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.collect() # creates list of rows on driver
x.show()
print(y)
# columns
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.columns #creates list of column names on driver
x.show()
print(y)
# corr
x = sqlContext.createDataFrame([("Alice","Bob",0.1,0.001),("Bob","Carol",0.2,0.02),("Carol","Dave",0.3,0.02)], ['from','to','amt','fee'])
y = x.corr(col1="amt",col2="fee")
x.show()
print(y)
# count
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x.show()
print(x.count())
# cov
x = sqlContext.createDataFrame([("Alice","Bob",0.1,0.001),("Bob","Carol",0.2,0.02),("Carol","Dave",0.3,0.02)], ['from','to','amt','fee'])
y = x.cov(col1="amt",col2="fee")
x.show()
print(y)
# crosstab
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.crosstab(col1='from',col2='to')
x.show()
y.show()
# cube
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Alice","Carol",0.2)], ['from','to','amt'])
y = x.cube('from','to')
x.show()
print(y) # y is a grouped data object, aggregations will be applied to all numerical columns
y.sum().show()
y.max().show()
# describe
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x.show()
x.describe().show()
# distinct
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3),("Bob","Carol",0.2)], ['from','to','amt'])
y = x.distinct()
x.show()
y.show()
# drop
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.drop('amt')
x.show()
y.show()
# dropDuplicates / drop_duplicates
x = sqlContext.createDataFrame([("Alice","Bob",0.1),("Bob","Carol",0.2),("Bob","Carol",0.3),("Bob","Carol",0.2)], ['from','to','amt'])
y = x.dropDuplicates(subset=['from','to'])
x.show()
y.show()
# dropna
x = sqlContext.createDataFrame([(None,"Bob",0.1),("Bob","Carol",None),("Carol",None,0.3),("Bob","Carol",0.2)], ['from','to','amt'])
y = x.dropna(how='any',subset=['from','to'])
x.show()
y.show()
# dtypes
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.dtypes
x.show()
print(y)
# explain
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x.show()
x.agg({"amt":"avg"}).explain(extended = True)
# fillna
x = sqlContext.createDataFrame([(None,"Bob",0.1),("Bob","Carol",None),("Carol",None,0.3)], ['from','to','amt'])
y = x.fillna(value='unknown',subset=['from','to'])
x.show()
y.show()
# filter
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.filter("amt > 0.1")
x.show()
y.show()
# first
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.first()
x.show()
print(y)
# flatMap
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.flatMap(lambda x: (x[0],x[2]))
print(y) # implicit conversion to RDD
y.collect()
# foreach
from __future__ import print_function
# setup
fn = './foreachExampleDataFrames.txt'
open(fn, 'w').close() # clear the file
def fappend(el,f):
'''appends el to file f'''
print(el,file=open(f, 'a+') )
# example
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.foreach(lambda x: fappend(x,fn)) # writes into foreachExampleDataFrames.txt
x.show() # original dataframe
print(y) # foreach returns 'None'
# print the contents of the file
with open(fn, "r") as foreachExample:
print (foreachExample.read())
# foreachPartition
from __future__ import print_function
#setup
fn = './foreachPartitionExampleDataFrames.txt'
open(fn, 'w').close() # clear the file
def fappend(partition,f):
'''append all elements in partition to file f'''
print([el for el in partition],file=open(f, 'a+'))
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x = x.repartition(2) # force 2 partitions
y = x.foreachPartition(lambda x: fappend(x,fn)) # writes into foreachPartitionExampleDataFrames.txt
x.show() # original dataframe
print(y) # foreach returns 'None'
# print the contents of the file
with open(fn, "r") as foreachExample:
print (foreachExample.read())
# freqItems
x = sqlContext.createDataFrame([("Bob","Carol",0.1), \
("Alice","Dave",0.1), \
("Alice","Bob",0.1), \
("Alice","Bob",0.5), \
("Carol","Bob",0.1)], \
['from','to','amt'])
y = x.freqItems(cols=['from','amt'],support=0.8)
x.show()
y.show()
# groupBy
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Alice","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.groupBy('from')
x.show()
print(y)
# groupBy(col1).avg(col2)
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Alice","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.groupBy('from').avg('amt')
x.show()
y.show()
# head
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.head(2)
x.show()
print(y)
# intersect
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Alice",0.2),("Carol","Dave",0.1)], ['from','to','amt'])
z = x.intersect(y)
x.show()
y.show()
z.show()
# isLocal
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.isLocal()
x.show()
print(y)
# join
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = sqlContext.createDataFrame([('Alice',20),("Bob",40),("Dave",80)], ['name','age'])
z = x.join(y,x.to == y.name,'inner').select('from','to','amt','age')
x.show()
y.show()
z.show()
# limit
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.limit(2)
x.show()
y.show()
# map
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.map(lambda x: x.amt+1)
x.show()
print(y.collect()) # output is RDD
# mapPartitions
def amt_sum(partition):
'''sum the value in field amt'''
yield sum([el.amt for el in partition])
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x = x.repartition(2) # force 2 partitions
y = x.mapPartitions(lambda p: amt_sum(p))
x.show()
print(x.rdd.glom().collect()) # flatten elements on the same partition
print(y.collect())
print(y.glom().collect())
# na
x = sqlContext.createDataFrame([(None,"Bob",0.1),("Bob","Carol",None),("Carol",None,0.3),("Bob","Carol",0.2)], ['from','to','amt'])
y = x.na # returns an object for handling missing values, supports drop, fill, and replace methods
x.show()
print(y)
y.drop().show()
y.fill({'from':'unknown','to':'unknown','amt':0}).show()
y.fill(0).show()
# orderBy
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.orderBy(['from'],ascending=[False])
x.show()
y.show()
# persist
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x.persist(storageLevel=StorageLevel(True,True,False,True,1)) # StorageLevel(useDisk,useMemory,useOffHeap,deserialized,replication=1)
x.show()
x.is_cached
# printSchema
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x.show()
x.printSchema()
# randomSplit
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.randomSplit([0.5,0.5])
x.show()
y[0].show()
y[1].show()
# rdd
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.rdd
x.show()
print(y.collect())
# registerTempTable
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x.registerTempTable(name="TRANSACTIONS")
y = sqlContext.sql('SELECT * FROM TRANSACTIONS WHERE amt > 0.1')
x.show()
y.show()
# repartition
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.repartition(3)
print(x.rdd.getNumPartitions())
print(y.rdd.getNumPartitions())
# replace
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.replace('Dave','David',['from','to'])
x.show()
y.show()
# rollup
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.rollup(['from','to'])
x.show()
print(y) # y is a grouped data object, aggregations will be applied to all numerical columns
y.sum().show()
y.max().show()
# sample
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.sample(False,0.5)
x.show()
y.show()
# sampleBy
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Alice","Carol",0.2),("Alice","Alice",0.3), \
('Alice',"Dave",0.4),("Bob","Bob",0.5),("Bob","Carol",0.6)], \
['from','to','amt'])
y = x.sampleBy(col='from',fractions={'Alice':0.1,'Bob':0.9})
x.show()
y.show()
# schema
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.schema
x.show()
print(y)
# select
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.select(['from','amt'])
x.show()
y.show()
# selectExpr
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.selectExpr(['substr(from,1,1)','amt+10'])
x.show()
y.show()
# show
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x.show()
# sort
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Alice",0.3)], ['from','to','amt'])
y = x.sort(['to'])
x.show()
y.show()
# sortWithinPartitions
x = sqlContext.createDataFrame([('Alice',"Bob",0.1,1),("Bob","Carol",0.2,2),("Carol","Alice",0.3,2)], \
['from','to','amt','p_id']).repartition(2,'p_id')
y = x.sortWithinPartitions(['to'])
x.show()
y.show()
print(x.rdd.glom().collect()) # glom() flattens elements on the same partition
print(y.rdd.glom().collect())
# stat
x = sqlContext.createDataFrame([("Alice","Bob",0.1,0.001),("Bob","Carol",0.2,0.02),("Carol","Dave",0.3,0.02)], ['from','to','amt','fee'])
y = x.stat
x.show()
print(y)
print(y.corr(col1="amt",col2="fee"))
# subtract
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.1)], ['from','to','amt'])
z = x.subtract(y)
x.show()
y.show()
z.show()
# take
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.take(num=2)
x.show()
print(y)
# toDF
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.toDF("seller","buyer","amt")
x.show()
y.show()
# toJSON
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Alice",0.3)], ['from','to','amt'])
y = x.toJSON()
x.show()
print(y.collect())
# toPandas
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.toPandas()
x.show()
print(type(y))
y
# unionAll
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2)], ['from','to','amt'])
y = sqlContext.createDataFrame([("Bob","Carol",0.2),("Carol","Dave",0.1)], ['from','to','amt'])
z = x.unionAll(y)
x.show()
y.show()
z.show()
# unpersist
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
x.cache()
x.count()
x.show()
print(x.is_cached)
x.unpersist()
print(x.is_cached)
# where (filter)
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.where("amt > 0.1")
x.show()
y.show()
# withColumn
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",None),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.withColumn('conf',x.amt.isNotNull())
x.show()
y.show()
# withColumnRenamed
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.withColumnRenamed('amt','amount')
x.show()
y.show()
# write
import json
x = sqlContext.createDataFrame([('Alice',"Bob",0.1),("Bob","Carol",0.2),("Carol","Dave",0.3)], ['from','to','amt'])
y = x.write.mode('overwrite').json('./dataframeWriteExample.json')
x.show()
# read the dataframe back in from file
sqlContext.read.json('./dataframeWriteExample.json').show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a href="http
Step2: <a href="http
Step3: <a href="http
Step4: <a href="http
Step5: <a href="http
Step6: <a href="http
Step7: <a href="http
Step8: <a href="http
Step9: <a href="http
Step10: <a href="http
Step11: <a href="http
Step12: <a href="http
Step13: <a href="http
Step14: <a href="http
Step15: <a href="http
Step16: <a href="http
Step17: <a href="http
Step18: <a href="http
Step19: <a href="http
Step20: <a href="http
Step21: <a href="http
Step22: <a href="http
Step23: <a href="http
Step24: <a href="http
Step25: <a href="http
Step26: <a href="http
Step27: <a href="http
Step28: <a href="http
Step29: <a href="http
Step30: <a href="http
Step31: <a href="http
Step32: <a href="http
Step33: <a href="http
Step34: <a href="http
Step35: <a href="http
Step36: <a href="http
Step37: <a href="http
Step38: <a href="http
Step39: <a href="http
Step40: <a href="http
Step41: <a href="http
Step42: <a href="http
Step43: <a href="http
Step44: <a href="http
Step45: <a href="http
Step46: <a href="http
Step47: <a href="http
Step48: <a href="http
Step49: <a href="http
Step50: <a href="http
Step51: <a href="http
Step52: <a href="http
Step53: <a href="http
Step54: <a href="http
Step55: <a href="http
Step56: <a href="http
Step57: <a href="http
Step58: <a href="http
Step59: <a href="http
Step60: <a href="http
Step61: <a href="http
Step62: <a href="http
Step63: <a href="http
Step64: <a href="http
|
2,933
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import parallel
import itertools
import networkx as nx
import numpy as np
bhs = parallel.Parallel()
fig, ax = plt.subplots()
ax.bar([x+0.9 for x in range(5)], -1./bhs.evals[1:], width=0.2)
ax.set_xlabel(r'Eigenvalue', fontsize=16)
ax.set_ylabel(r'$\tau_i$', fontsize=18)
ax.set_xlim([0.5,5.5])
plt.show()
bhs.run_commit()
print " j J_j(<-) J_j(->)"
print " - -------- --------"
for i in range(1,5):
print "%2i %10.4e %10.4e"%(i, np.sum([bhs.J[i,x] for x in range(6) if bhs.pfold[x] < bhs.pfold[i]]),\
np.sum([bhs.J[x,i] for x in range(6) if bhs.pfold[x] > bhs.pfold[i]]))
import tpt_functions
Jnode, Jpath = tpt_functions.gen_path_lengths(range(6), bhs.J, bhs.pfold, \
bhs.sum_flux, [5], [0])
JpathG = nx.DiGraph(Jpath.transpose())
tot_flux = 0
for path in nx.all_simple_paths(JpathG, 0, 5):
print path
f = bhs.J[path[1],path[0]]
print "%2i -> %2i: %10.4e "%(path[0], path[1], \
bhs.J[path[1],path[0]])
for i in range(2, len(path)):
print "%2i -> %2i: %10.4e %10.4e"%(path[i-1], path[i], \
bhs.J[path[i],path[i-1]], Jnode[path[i-1]])
f *= bhs.J[path[i],path[i-1]]/Jnode[path[i-1]]
tot_flux +=f
print " J(path) = %10.4e"%f
print
print "   Cumulative flux: %10.4e"%tot_flux
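The per-path flux computed above is the first edge's flux multiplied by a branching probability J[j,i] / Jnode[i] at each intermediate node. A minimal pure-Python sketch with a hypothetical three-node path and made-up flux values:

```python
# Edge fluxes J[(i, j)] for i -> j and the total flux Jnode leaving each
# intermediate node (all numbers invented for illustration).
J = {(0, 1): 0.6, (1, 2): 0.3}
Jnode = {1: 0.6}
path = [0, 1, 2]

f = J[(path[0], path[1])]
for i in range(2, len(path)):
    f *= J[(path[i - 1], path[i])] / Jnode[path[i - 1]]

print(round(f, 12))  # 0.6 * (0.3 / 0.6) = 0.3
```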
while True:
Jnode, Jpath = tpt_functions.gen_path_lengths(range(6), bhs.J, bhs.pfold, \
bhs.sum_flux, [5], [0])
# generate nx graph from matrix
JpathG = nx.DiGraph(Jpath.transpose())
# find shortest path
try:
path = nx.dijkstra_path(JpathG, 0, 5)
pathlength = nx.dijkstra_path_length(JpathG, 0, 5)
print " shortest path:", path, pathlength
except nx.NetworkXNoPath:
print " No path for %g -> %g\n Stopping here"%(0, 5)
break
# calculate contribution to flux
f = bhs.J[path[1],path[0]]
print "%2i -> %2i: %10.4e "%(path[0], path[1], bhs.J[path[1],path[0]])
path_fluxes = [f]
for j in range(2, len(path)):
i = j - 1
print "%2i -> %2i: %10.4e %10.4e"%(path[i], path[j], \
bhs.J[path[j],path[i]], \
bhs.J[path[j],path[i]]/Jnode[path[i]])
f *= bhs.J[path[j],path[i]]/Jnode[path[i]]
path_fluxes.append(bhs.J[path[j],path[i]])
# find bottleneck
ib = np.argmin(path_fluxes)
print "bottleneck: %2i -> %2i"%(path[ib],path[ib+1])
# remove flux from edges
for j in range(1,len(path)):
i = j - 1
bhs.J[path[j],path[i]] -= f
#bhs.J[path[ib],path[ib+1]] -= f
# numerically there may be some leftover flux in bottleneck
#bhs.J[path[ib],path[ib+1]] = 0.
bhs.sum_flux -= f
print ' flux from path ', path, ': %10.4e'%f
print ' fluxes', path_fluxes
print ' leftover flux: %10.4e; %10.2f \n'%(bhs.sum_flux, bhs.sum_flux/tot_flux)
if bhs.sum_flux/tot_flux < 0.01:
break
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Again, the system exhibits a time scale separation between the first and second modes as we would expect.
Step2: Committors and reactive flux
Step3: We check for the property of flux conservation, i.e. $J_j=\sum_{p_{fold}(i)<p_{fold}(j)}J_{i\rightarrow j}=\sum_{p_{fold}(i)>p_{fold}(j)}J_{j\rightarrow i}$.
Step4: Once we have checked this, we can generate paths.
Step5: According to the enumeration of paths, the highest flux paths would be
|
2,934
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
%matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
from IPython.display import Image
def write_png(tensor, name):
casted_to_uint8 = tf.cast(tensor, tf.uint8)
converted_to_png = tf.image.encode_png(casted_to_uint8)
f = open(name, "wb+")
f.write(converted_to_png.eval())
f.close()
class CostTrace:
A simple example class
def __init__(self):
self.cost_array = []
def log(self, cost):
self.cost_array.append(cost)
def draw(self):
plt.figure(figsize=(12,5))
plt.plot(range(len(self.cost_array)), self.cost_array, label='cost')
plt.legend()
plt.yscale('log')
plt.show()
# Init size
width = 100
height = 100
RGB = 3
shape = [height,width, RGB]
# Create the generated tensor as a variable
rand_uniform = tf.random_uniform(shape, minval=0, maxval=255, dtype=tf.float32)
generated = tf.Variable(rand_uniform)
#define the cost function
c_mean = tf.reduce_mean(tf.pow(generated,2)) # we want a low mean
c_max = tf.reduce_max(generated) # we want a low max
c_min = -tf.reduce_min(generated) # we want a high mix
c_diff = 0
for i in range(0,height-1, 1):
line1 = tf.gather(generated, i)
line2 = tf.gather(generated, i+1)
c_diff += tf.reduce_mean(tf.pow(line1-line2-30, 2)) # to force a gradient
cost = c_mean + c_max + c_min + c_diff
#cost = c_mean + c_diff
print ('cost defined')
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(cost, var_list=[generated])
print ('train_op defined')
# Initializing the variables
init = tf.initialize_all_variables()
print ('variables initialiazed defined')
# Launch the graph
with tf.Session() as sess:
sess.run(init)
print ('init done')
cost_trace = CostTrace()
for epoch in range(0,10000):
sess.run(train_op)
if (epoch % 100 == 0):
c = cost.eval()
print ('epoch', epoch,'cost' ,c, c_mean.eval(), c_min.eval(), c_max.eval(), c_diff.eval())
cost_trace.log(c)
write_png(generated, "generated{:06}.png".format(epoch))
print ('all done')
cost_trace.draw()
Image("generated000000.png")
from PIL import Image, ImageSequence
import glob, sys, os
os.chdir(".")
frames = []
for file in glob.glob("gene*.png"):
print(file)
im = Image.open(file)
frames.append(im)
from images2gif import writeGif
writeGif("generated.gif", frames, duration=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports needed for Jupyter
Step2: A function to save a picture
Step4: A function to draw the cost function in Jupyter
Step5: Create some random pictures
Step6: To create a GIF
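The c_diff term is the interesting part of the cost: it rewards consecutive rows that differ by exactly 30, which forces a vertical gradient into the generated image. The same arithmetic can be checked in plain numpy on a toy 4x3 "image" of my own making:

```python
import numpy as np

# Hypothetical 4x3 grayscale image: each row is 30 below the one above,
# which is exactly the offset the c_diff term rewards.
img = np.array([[90.0] * 3, [60.0] * 3, [30.0] * 3, [0.0] * 3])

# Same cost as the TensorFlow loop: mean squared deviation of
# (row[i] - row[i+1] - 30) summed over consecutive row pairs.
c_diff = sum(np.mean((img[i] - img[i + 1] - 30.0) ** 2)
             for i in range(img.shape[0] - 1))
```

A perfect gradient image gives c_diff of zero, so gradient descent pushes the random noise toward such a ramp.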
|
2,935
|
<ASSISTANT_TASK:>
Python Code:
import sncosmo
import analyzeSN as ans
import numpy as np
from analyzeSN import LightCurve
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
ex_data = sncosmo.load_example_data().to_pandas()
ex_data.head()
lc = LightCurve(ex_data)
lc.lightCurve.head()
lc_tmp = ex_data.copy()
del lc_tmp['band']
lc_tmp.head()
lc_bad = LightCurve(lc_tmp)
SNCosmoLC = lc.snCosmoLC()
SNCosmoLC
sncosmo.plot_lc(SNCosmoLC)
from analyzeSN import SNANASims
from astropy.table import Table
megacamBandNames = 'ugriz'
megacamRegisteredNames = tuple('mega_' + band for band in megacamBandNames)
snana_eg = SNANASims.fromSNANAfileroot(snanafileroot='snana_fits',
location='../analyzeSN/example_data/',
coerce_inds2int=False,
SNANABandNames=megacamBandNames,
registeredBandNames=megacamRegisteredNames)
Table.read(snana_eg.photFile)[45:50]
snana_eg.bandNames
snana_eg.newbandNames
lcInstance = snana_eg.get_SNANA_photometry(snid='03D1aw')
lcInstance.lightCurve[['mjd', 'band', 'flux', 'fluxerr', 'zp', 'zpsys']]
lcInstance.bandNameDict
snana_eg.headData['REDSHIFT_FINAL']
import sncosmo
dust = sncosmo.CCM89Dust()
model = sncosmo.Model(source='salt2', effects=[dust],
effect_names=['host', 'mw'], effect_frames=['rest', 'obs'])
model.set(z=0.5817)
resfit = sncosmo.fit_lc(lcInstance.snCosmoLC(), model, vparam_names=['t0', 'x0', 'x1', 'c'], modelcov=False)
res = sncosmo.mcmc_lc(lcInstance.snCosmoLC(), model, vparam_names=['t0', 'x0', 'x1', 'c'], modelcov=False)
from analyzeSN import ResChar
reschar = ResChar.fromSNCosmoRes(res)
reschar.salt_samples()['t0'].mean()
reschar.salt_samples()['c'].mean()
resfit[0].parameters[2]
resfit[0].parameters[2]
reschar.salt_samples()['c'].std()
fig, ax = plt.subplots(1, 4)
reschar.salt_samples()['c'].hist(bins=np.arange(-0.6, 0.6, 0.03), lw=2., histtype='step', normed=1, ax=ax[0])
reschar.salt_samples()['x1'].hist(bins=np.arange(-4., 4., 0.2), lw=2., histtype='step', normed=1, ax=ax[1])
reschar.salt_samples()['t0'].hist(bins=10, lw=2., histtype='step', normed=1, ax=ax[2])
reschar.salt_samples()['mu'].hist(bins=10, lw=2., histtype='step', normed=1, ax=ax[3])
sns.distplot(reschar.salt_samples()['t0'], rug=False, color='r', hist_kws=dict(histtype='step', lw=2., alpha=1., color='k'))
resfit[0].errors['c']
10**(-0.4 * 0.27)
-2.5 * np.log10(f)
10.0**(0.27 / 2.5 )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Instantiation
Step2: This was a SNCosmo example dataset, loaded into pandas.DataFrame. Note that the column representing temporal information is time. Now, we will use this to instantiate our class.
Step3: We get the representation in our convention by accessing the attribute lightCurve, which is a dataFrame with columns of the standard name. It could have more columns, but the minimal columns have to be there. If we supplied a dataFrame without having the minimal number of columns, it will raise a valueError, as described later. From the lightCurve attribute, we note that time has changed to our standard name mjd
Step4: What would have happened if an essential column was missing? to check let us do the following
Step5: This raises a ValueError pointing out that the column band was missing. However, it correctly recognizes that while
Step6: We can now plot the data using the usual SNCosmo plotting method, showing the data in g, r, and z bands.
Step7: SNANA simulations
Step8: Let us look at what the array in the fits file looks like
Step9: Fitting
Step10: To infer the light curve model parameters, we need a model. We will use the SALT model from SNCosmo
Step11: We will not try to infer the redshift first
Step12: We have two main methods of trying to infer the model parameters
Step13: Scratch
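The scratch cells at the end play with the flux-magnitude relation m = -2.5 * log10(f); a quick numerical check of the round trip (my own illustration, not part of the notebook):

```python
import numpy as np

# A magnitude offset of dm corresponds to a flux ratio of 10**(-0.4 * dm).
dm = 0.27
flux_ratio = 10.0 ** (-0.4 * dm)

# Converting the flux ratio back to magnitudes recovers dm exactly.
recovered = -2.5 * np.log10(flux_ratio)
```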
|
2,936
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
# Reshape into "channels last" setup.
tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),
tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format="channels_last"),
# Groupnorm Layer
tfa.layers.GroupNormalization(groups=5, axis=3),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_test, y_test)
model = tf.keras.models.Sequential([
# Reshape into "channels last" setup.
tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),
tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format="channels_last"),
# InstanceNorm Layer
tfa.layers.InstanceNormalization(axis=3,
center=True,
scale=True,
beta_initializer="random_uniform",
gamma_initializer="random_uniform"),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_test, y_test)
model = tf.keras.models.Sequential([
# Reshape into "channels last" setup.
tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),
tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3),data_format="channels_last"),
# LayerNorm Layer
tf.keras.layers.LayerNormalization(axis=3 , center=True , scale=True),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_test, y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Normalizations
Step2: Preparing the dataset
Step3: Group normalization tutorial
Step4: Instance normalization tutorial
Step5: Layer normalization tutorial
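At its core, group normalization standardizes each group of channels with that group's own statistics. A plain-numpy sketch for a single sample, assuming the 10-channel / 5-group split used in the model above and ignoring the layer's learnable scale and offset:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(4, 4, 10))  # one H x W x C feature map
groups, eps = 5, 1e-5

# Split the 10 channels into 5 groups of 2 and normalize each group over
# (H, W, channels-in-group) with that group's own mean and variance.
g = x.reshape(4, 4, groups, 10 // groups)
mean = g.mean(axis=(0, 1, 3), keepdims=True)
var = g.var(axis=(0, 1, 3), keepdims=True)
normed = ((g - mean) / np.sqrt(var + eps)).reshape(x.shape)
```

After this step each group has (approximately) zero mean and unit variance; instance norm is the groups=C special case and layer norm the groups=1 case.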
|
2,937
|
<ASSISTANT_TASK:>
Python Code:
# -*- coding: utf-8 -*-
from pylab import *
from pyspecdata import *
from numpy.random import random
import time
init_logging('debug')
a_nd = nddata(random(10*2048),[10,2048],['x','y']).setaxis('x','#').setaxis('y','#')
a = a_nd.data
a2_nd = nddata(random(10*2048),[2048,10],['y','z']).setaxis('y','#').setaxis('z','#')
a2 = a2_nd.data
# multiply two different matrices
time1 = time.time()
b = a @ a2
time2 = time.time()
b_nd = a_nd @ a2_nd
time3 = time.time()
assert b_nd.dimlabels == ['x','z'], b_nd.dimlabels
assert all(isclose(b,b_nd.data))
print("total time",(time3-time2),"time/(time for raw)",((time3-time2)/(time2-time1)))
assert ((time3-time2)/(time2-time1))<1
time1 = time.time()
b = a @ a.T
time2 = time.time()
b_nd = a_nd.along('y',('x','x_new')) @ a_nd
time3 = time.time()
assert b_nd.dimlabels == ['x_new','x'], b_nd.dimlabels
assert all(b_nd.getaxis('x_new') == b_nd.getaxis('x'))
assert (id(b_nd.getaxis('x_new')) != id(b_nd.getaxis('x')))
assert all(isclose(b,b_nd.data))
if time2-time1>0:
print("total time",(time3-time2),"time/(time for raw)",((time3-time2)/(time2-time1)))
assert ((time3-time2)/(time2-time1))<1.1
a_nd = nddata(random(10),[10],['myaxis']).setaxis('myaxis','#')
b_nd = nddata(random(10),[10],['myaxis']).setaxis('myaxis','#')
a = a_nd.data
b = b_nd.data
assert all(isclose(a.dot(b),(a_nd @ b_nd).data))
a_nd = nddata(random(10*2048),[10,2048],['x','y']).setaxis('x','#').setaxis('y','#')
a = a_nd.data
b_nd = a_nd.along('y') @ a_nd
b = matmul(a_nd.data.reshape(10,1,2048),
a_nd.data.reshape(10,2048,1)).reshape(-1)
assert all(isclose(b,b_nd.data))
assert len(b.data) == 10
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this example, the assertions essentially tell the story of what's going on
Step2: in the next line, note how only the dimension that goes away is named the
Step3: the previous is unambiguous b/c only 'y' is shared between the two,
Step4: calculate a projection matrix
Step5: note that here, I have to rename the column space
Step6: now, a standard dot product note how I don't need along here, since it's
Step7: Finally, let's show what happens when we multiply a matrix by itself and
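The reshape-and-matmul trick in the final cell — one dot product per row — is pure numpy broadcasting and is easy to verify on a smaller array (toy sizes of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.random((10, 6))

# Stacked (1, n) @ (n, 1) products: matmul broadcasts over the leading
# axis, giving one dot product per row, i.e. each row's squared norm.
b = np.matmul(a.reshape(10, 1, 6), a.reshape(10, 6, 1)).reshape(-1)
```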
|
2,938
|
<ASSISTANT_TASK:>
Python Code:
data = [
('california', 1),
('california', 3),
('colorado', 0),
('colorado', 10),
('washington', 2),
('washington', 4)
]
# This won't work because I haven't initialized keys
summed = dict()
for row in data:
key, value = row # destructure the tuple
summed[key] = summed[key] + value
summed = dict()
for row in data:
key, value = row
if key not in summed:
summed[key] = int()
summed[key] = summed[key] + value
summed
merged = dict()
for row in data:
key, value = row
if key not in merged:
merged[key] = list()
merged[key].append(value)
merged
from collections import defaultdict
summed = defaultdict(int)
for row in data:
key, value = row
summed[key] = summed[key] + value
summed
merged = defaultdict(list)
for row in data:
key, value = row
merged[key].append(value)
merged
def myinit():
return -100
summed = defaultdict(myinit)
for row in data:
key, value = row
summed[key] += value
summed
d = defaultdict(str)
# initially this is empty so all of these should be false
print('pedro in dictionary:', 'pedro' in d)
print('jordan in dictionary:', 'jordan' in d)
# Lets set something in the dictionary now and check that again
d['jordan'] = 'professor'
print('jordan is in dictionary:', 'jordan' in d)
print('pedro is in dictionary:', 'pedro' in d)
# Lets accidentally access 'pedro' before setting it then see what happens
pedro_job = d['pedro']
print('pedro is in dictionary:', 'pedro' in d)
print(d)
print('-->', d['pedro'], '<--', type(d['pedro']))
d['pedro'] = 'PhD Student'
print('pedro is in dictionary:', 'pedro' in d)
print(d)
print('-->', d['pedro'], '<--', type(d['pedro']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With an ordinary dictionary, I would need to check if the key exists. If it doesn't, I need to initialize it with a value. For instructional purposes I will call the int() function, which returns the default value for an integer, which is 0.
Step2: As expected, the first time we try to set the value for california, it doesn't exist in the dictionary, so the right-hand side of the equals sign errors. That's easy to fix like this
Step3: Let's see one more example where, instead of summing the numbers, we want to collect everything into a list. So let's replace int() with list(), since we want to make an empty list. We also need to change the summing term to use append instead
Step4: It's inconvenient to do this check every time, so Python has a nice way to make this pattern simpler. This is what collections.defaultdict was designed for. It does the following
Step5: As expected, the results are exactly the same, and they are based on the initial method you pass it. This function is called a factory method, since each time a key needs to be initialized you can imagine that the function acts as a factory which creates new values. Let's cover one of the common mistakes with default dictionaries before concluding. The source of this mistake is that any time a non-existent key is accessed, it is initialized.
Step6: So this is odd! You never set a key (only accessed it), but nonetheless pedro is in the dictionary. This is because when the 'pedro' key was accessed and not there, Python set it to the return value of str, which returns an empty string. Let's set this to the real value and be done
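One detail worth knowing beyond the notebook: dict.get() lets you peek at a key without triggering the factory, which avoids the accidental-insert behavior shown in the last step.

```python
from collections import defaultdict

d = defaultdict(str)
_ = d["pedro"]        # merely *reading* a missing key runs the factory...
assert "pedro" in d   # ...so the key now exists, holding an empty string
assert d["pedro"] == ""

# dict.get() does NOT trigger the factory, so it is the safe way to peek.
d2 = defaultdict(str)
assert d2.get("pedro") is None
assert "pedro" not in d2
```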
|
2,939
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def well2d(x, y, nx, ny, L=1.0):
Compute the 2d quantum well wave function.
# YOUR CODE HERE
psi=(2.0/L)*np.sin(nx*np.pi*x/L)*np.sin(ny*np.pi*y/L)
return psi
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
# YOUR CODE HERE
x, y = np.meshgrid(np.linspace(0, 1.0, 100), np.linspace(0, 1.0, 100))
z=well2d(x,y,3,2,L=1)
plt.figure()
fig=plt.contour(x,y,z)
plt.colorbar()
plt.title("Contour Graph of 2d Wave Function")
plt.xlabel("X-Axis of 2d Scalar Field (Distance)")
plt.ylabel("Y-Axis of 2d Scalar Field (Distance)")
plt.tick_params(direction='out')
assert True # use this cell for grading the contour plot
# YOUR CODE HERE
# ? plt.pcolor
plt.pcolor(z)
plt.colorbar()
plt.title("PColor Graph of 2d Wave Function")
plt.xlabel("X-Axis of 2d Scalar Field (Distance)")
plt.ylabel("Y-Axis of 2d Scalar Field (Distance)")
plt.tick_params(direction='out')
assert True # use this cell for grading the pcolor plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Contour plots of 2d wavefunctions
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions
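Two quick sanity checks of the formula — the peak value at the center and a node line — restating well2d so the cell is self-contained:

```python
import numpy as np

def well2d(x, y, nx, ny, L=1.0):
    """2-D infinite square well wave function (same formula as above)."""
    return (2.0 / L) * np.sin(nx * np.pi * x / L) * np.sin(ny * np.pi * y / L)

# Ground state on a unit box peaks at the center: (2/L) * sin(pi/2)**2 = 2.
peak = well2d(0.5, 0.5, 1, 1, L=1.0)

# For nx = 3 the wave function has a node line at x = L/3.
node = well2d(1.0 / 3.0, 0.25, 3, 2, L=1.0)
```

These node lines are exactly the white bands visible in the contour and pcolor plots.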
|
2,940
|
<ASSISTANT_TASK:>
Python Code:
# SBML model we want to import
sbml_file = 'model_presimulation.xml'
# Name of the model that will also be the name of the python module
model_name = 'model_presimulation'
# Directory to which the generated model code is written
model_output_dir = model_name
import libsbml
import amici
import amici.plotting
import os
import sys
import importlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pprint import pprint
sbml_reader = libsbml.SBMLReader()
sbml_doc = sbml_reader.readSBML(sbml_file)
sbml_model = sbml_doc.getModel()
print('Species:')
pprint([(s.getId(),s.getName()) for s in sbml_model.getListOfSpecies()])
print('\nReactions:')
for reaction in sbml_model.getListOfReactions():
reactants = ' + '.join(['%s %s'%(int(r.getStoichiometry()) if r.getStoichiometry() > 1 else '', r.getSpecies()) for r in reaction.getListOfReactants()])
products = ' + '.join(['%s %s'%(int(r.getStoichiometry()) if r.getStoichiometry() > 1 else '', r.getSpecies()) for r in reaction.getListOfProducts()])
reversible = '<' if reaction.getReversible() else ''
print('%3s: %10s %1s->%10s\t\t[%s]' % (reaction.getName(),
reactants,
reversible,
products,
libsbml.formulaToL3String(reaction.getKineticLaw().getMath())))
print('Parameters:')
pprint([(p.getId(),p.getName()) for p in sbml_model.getListOfParameters()])
# Create an SbmlImporter instance for our SBML model
sbml_importer = amici.SbmlImporter(sbml_file)
from IPython.display import IFrame
IFrame('https://amici.readthedocs.io/en/latest/glossary.html#term-fixed-parameters', width=600, height=175)
fixedParameters = ['DRUG_0','KIN_0']
# Retrieve model output names and formulae from AssignmentRules and remove the respective rules
observables = amici.assignmentRules2observables(
sbml_importer.sbml, # the libsbml model object
filter_function=lambda variable: variable.getName() == 'pPROT'
)
print('Observables:')
pprint(observables)
sbml_importer.sbml2amici(model_name,
model_output_dir,
verbose=False,
observables=observables,
constant_parameters=fixedParameters)
# load the generated module
model_module = amici.import_model_module(model_name, model_output_dir)
# Create Model instance
model = model_module.getModel()
# Create solver instance
solver = model.getSolver()
# Run simulation using default model parameters and solver options
model.setTimepoints(np.linspace(0, 60, 60))
rdata = amici.runAmiciSimulation(model, solver)
amici.plotting.plotObservableTrajectories(rdata)
edata = amici.ExpData(rdata, 0.1, 0.0)
edata.fixedParameters = [0,2]
rdata = amici.runAmiciSimulation(model, solver, edata)
amici.plotting.plotObservableTrajectories(rdata)
edata.fixedParametersPreequilibration = [3,0]
rdata = amici.runAmiciSimulation(model, solver, edata)
amici.plotting.plotObservableTrajectories(rdata)
edata.reinitializeFixedParameterInitialStates = True
rdata = amici.runAmiciSimulation(model, solver, edata)
amici.plotting.plotObservableTrajectories(rdata)
edata.t_presim = 10
edata.fixedParametersPresimulation = [10.0, 0.0]
edata.fixedParameters = [10.0, 2.0]
print(edata.fixedParametersPreequilibration)
print(edata.fixedParametersPresimulation)
print(edata.fixedParameters)
rdata = amici.runAmiciSimulation(model, solver, edata)
amici.plotting.plotObservableTrajectories(rdata)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Loading
Step2: For this example we want to specify the initial drug and kinase concentrations as experimental conditions. Accordingly, we specify them as fixedParameters. The meaning of fixedParameters is defined in the Glossary, which we display here for convenience.
Step3: The SBML model specifies a single observable named pPROT which describes the fraction of phosphorylated Protein. We load this observable using amici.assignmentRules2observables.
Step4: Now the model is ready for compilation using sbml2amici. Note that we here pass fixedParameters as arguments to constant_parameters, which ensures that amici is aware that we want to have them as fixedParameters
Step5: To simulate the model we need to create an instance via the getModel() method in the generated model module.
Step6: The only thing we need to simulate the model is a timepoint vector, which can be specified using the setTimepoints method. If we do not specify any additional options, the default values for fixedParameters and parameters that were specified in the SBML file will be used.
Step7: Simulation options can be specified either in the Model or in an ExpData instance. The ExpData instance can also carry experimental data. To initialize an ExpData instance from simulation results, amici offers some convenient constructors. In the following we will initialize an ExpData object from simulation results, but add noise with standard deviation 0.1 and specify the standard deviation accordingly. Moreover, we will specify custom values for DRUG_0=0 and KIN_0=2. If fixedParameter is specified in an ExpData instance, runAmiciSimulation will use those parameters instead of the ones specified in the Model instance.
Step8: For many biological systems, it is reasonable to assume that they start in a
Step9: The resulting trajectory is definitely not what one may expect. The problem is that DRUG_0 and KIN_0 set the initial conditions for species in the model. By default these initial conditions are only applied at the very beginning of the simulation, i.e., before the preequilibration. Accordingly, the fixedParameters that we specified do not have any effect. To fix this, we need to set the reinitializeFixedParameterInitialStates attribute to True, to specify that AMICI reinitializes all states that have fixedParameter-dependent initial states.
Step10: With this option activated, the kinase concentration will be reinitialized after the preequilibration and we will see the expected change in fractional phosphorylation
Step11: On top of preequilibration, we can also specify presimulation. This option can be used to specify pretreatments where the system is not assumed to reach steadystate. Presimulation can be activated by specifying t_presim and edata.fixedParametersPresimulation. If both fixedParametersPresimulation and fixedParametersPreequilibration are specified, preequilibration will be performed first, followed by presimulation, followed by regular simulation. For this example we specify DRUG_0=10 and KIN_0=0 for the presimulation and DRUG_0=10 and KIN_0=2 for the regular simulation. We do not overwrite the DRUG_0=3 and KIN_0=0 that was previously specified for preequilibration.
|
2,941
|
<ASSISTANT_TASK:>
Python Code:
red = tf.constant([255, 0, 0])
# The match_filenames_once will accept a regex but there is no need for this example.
image_filename = "./images/chapter-05-object-recognition-and-classification/working-with-images/test-input-image.jpg"
filename_queue = tf.train.string_input_producer(
tf.train.match_filenames_once(image_filename))
image_reader = tf.WholeFileReader()
_, image_file = image_reader.read(filename_queue)
image = tf.image.decode_jpeg(image_file)
# setup-only-ignore
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
sess.run(image)
# setup-only-ignore
filename_queue.close(cancel_pending_enqueues=True)
coord.request_stop()
coord.join(threads)
# Reuse the image from earlier and give it a fake label
image_label = b'\x01' # Assume the label data is in a one-hot representation (00000001)
# Convert the tensor into bytes, notice that this will load the entire image file
image_loaded = sess.run(image)
image_bytes = image_loaded.tobytes()
image_height, image_width, image_channels = image_loaded.shape
# Export TFRecord
writer = tf.python_io.TFRecordWriter("./output/training-image.tfrecord")
# Don't store the width, height or image channels in this Example file to save space but not required.
example = tf.train.Example(features=tf.train.Features(feature={
'label': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_label])),
'image': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes]))
}))
# This will save the example to a text file tfrecord
writer.write(example.SerializeToString())
writer.close()
# Load TFRecord
tf_record_filename_queue = tf.train.string_input_producer(
tf.train.match_filenames_once("./output/training-image.tfrecord"))
# Notice the different record reader, this one is designed to work with TFRecord files which may
# have more than one example in them.
tf_record_reader = tf.TFRecordReader()
_, tf_record_serialized = tf_record_reader.read(tf_record_filename_queue)
# The label and image are stored as bytes but could be stored as int64 or float64 values in a
# serialized tf.Example protobuf.
tf_record_features = tf.parse_single_example(
tf_record_serialized,
features={
'label': tf.FixedLenFeature([], tf.string),
'image': tf.FixedLenFeature([], tf.string),
})
# Using tf.uint8 because all of the channel information is between 0-255
tf_record_image = tf.decode_raw(
tf_record_features['image'], tf.uint8)
# Reshape the image to look like the image saved, not required
tf_record_image = tf.reshape(
tf_record_image,
[image_height, image_width, image_channels])
# Use real values for the height, width and channels of the image because it's required
# to reshape the input.
tf_record_label = tf.cast(tf_record_features['label'], tf.string)
# setup-only-ignore
sess.close()
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
sess.run(tf.equal(image, tf_record_image))
# Check that the label is still 0b00000001.
sess.run(tf_record_label)
# setup-only-ignore
tf_record_filename_queue.close(cancel_pending_enqueues=True)
coord.request_stop()
coord.join(threads)
sess.run(tf.image.central_crop(image, 0.1))
# This crop method only works on real value input.
real_image = sess.run(image)
bounding_crop = tf.image.crop_to_bounding_box(
real_image, offset_height=0, offset_width=0, target_height=2, target_width=1)
sess.run(bounding_crop)
# This padding method only works on real value input.
real_image = sess.run(image)
pad = tf.image.pad_to_bounding_box(
real_image, offset_height=0, offset_width=0, target_height=4, target_width=4)
sess.run(pad)
# This padding method only works on real value input.
real_image = sess.run(image)
crop_or_pad = tf.image.resize_image_with_crop_or_pad(
real_image, target_height=2, target_width=5)
sess.run(crop_or_pad)
top_left_pixels = tf.slice(image, [0, 0, 0], [2, 2, 3])
flip_horizon = tf.image.flip_left_right(top_left_pixels)
flip_vertical = tf.image.flip_up_down(flip_horizon)
sess.run([top_left_pixels, flip_vertical])
top_left_pixels = tf.slice(image, [0, 0, 0], [2, 2, 3])
random_flip_horizon = tf.image.random_flip_left_right(top_left_pixels)
random_flip_vertical = tf.image.random_flip_up_down(random_flip_horizon)
sess.run(random_flip_vertical)
example_red_pixel = tf.constant([254., 2., 15.])
adjust_brightness = tf.image.adjust_brightness(example_red_pixel, 0.2)
sess.run(adjust_brightness)
adjust_contrast = tf.image.adjust_contrast(image, -.5)
sess.run(tf.slice(adjust_contrast, [1, 0, 0], [1, 3, 3]))
adjust_hue = tf.image.adjust_hue(image, 0.7)
sess.run(tf.slice(adjust_hue, [1, 0, 0], [1, 3, 3]))
adjust_saturation = tf.image.adjust_saturation(image, 0.4)
sess.run(tf.slice(adjust_saturation, [1, 0, 0], [1, 3, 3]))
gray = tf.image.rgb_to_grayscale(image)
sess.run(tf.slice(gray, [0, 0, 0], [1, 3, 1]))
hsv = tf.image.rgb_to_hsv(tf.image.convert_image_dtype(image, tf.float32))
sess.run(tf.slice(hsv, [0, 0, 0], [3, 3, 3]))
rgb_hsv = tf.image.hsv_to_rgb(hsv)
rgb_grayscale = tf.image.grayscale_to_rgb(gray)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Each scalar can be changed to make the pixel another color or a mix of colors. The rank 1 tensor of a pixel is in the format of [red, green, blue] for an RGB color space. All the pixels in an image are stored in files on a disk which need to be read into memory so TensorFlow may operate on them.
Step2: The image, which is assumed to be located in a relative directory from where this code is ran. An input producer (tf.train.string_input_producer) finds the files and adds them to a queue for loading. Loading an image requires loading the entire file into memory (tf.WholeFileReader) and onces a file has been read (image_reader.read) the resulting image is decoded (tf.image.decode_jpeg).
Step3: Inspect the output from loading an image, notice that it's a fairly simple rank 3 tensor. The RGB values are found in 9 rank 1 tensors. The higher rank of the image should be familiar from earlier sections. The format of the image loaded in memory is now [batch_size, image_height, image_width, channels].
Step4: The label is in a format known as one-hot encoding which is a common way to work with label data for categorization of multi-class data. The Stanford Dogs Dataset is being treated as multi-class data because the dogs are being categorized as a single breed and not a mix of breeds. In the real world, a multilabel solution would work well to predict dog breeds because it'd be capable of matching a dog with multiple breeds.
Step5: At first, the file is loaded in the same way as any other file. The main difference is that the file is then read using a TFRecordReader. Instead of decoding the image, the TFRecord is parsed tf.parse_single_example and then the image is read as raw bytes (tf.decode_raw).
Step6: All of the attributes of the original image and the image loaded from the TFRecord file are the same. To be sure, load the label from the TFRecord file and check that it is the same as the one saved earlier.
Step7: Creating a file which stores both the raw image data and the expected output label will save complexities during training. It's not required to use TFRecord files but it's highly recommend when working with images. If it doesn't work well for a workflow, it's still recommended to preprocess images and save them before training. Manipulating an image each time it's loaded is not recommended.
Step8: The example code uses tf.image.central_crop to crop out 10% of the image and return it. This method always returns based on the center of the image being used.
Step9: The example code uses tf.image.crop_to_bounding_box in order to crop the image starting at the upper left pixel located at (0, 0). Currently, the function only works with a tensor which has a defined shape so an input image needs to be executed on the graph first.
Step10: This example code increases the images height by one pixel and its width by a pixel as well. The new pixels are all set to 0. Padding in this manner is useful for scaling up an image which is too small. This can happen if there are images in the training set with a mix of aspect ratios. TensorFlow has a useful shortcut for resizing images which don't match the same aspect ratio using a combination of pad and crop.
Step11: The real_image has been reduced in height to be 2 pixels tall and the width has been increased by padding the image with zeros. This function works based on the center of the image input.
Step12: This example code flips a subset of the image horizontally and then vertically. The subset is used with tf.slice because the original image flipped returns the same images (for this example only). The subset of pixels illustrates the change which occurs when an image is flipped. tf.image.flip_left_right and tf.image.flip_up_down both operate on tensors which are not limited to images. These will flip an image a single time, randomly flipping an image is done using a separate set of functions.
Step13: This example does the same logic as the example before except that the output is random. Every time this runs, a different output is expected. There is a parameter named seed which may be used to control how random the flipping occurs.
Step14: This example brightens a single pixel, which is primarily red, with a delta of 0.2. Unfortunately, in the current version of TensorFlow 0.8, this method doesn't work well with a tf.uint8 input. It's best to avoid using this when possible and preprocess brightness changes.
Step15: The example code changes the contrast by -0.5 which makes the new version of the image fairly unrecognizable. Adjusting contrast is best done in small increments to keep from blowing out an image. Blowing out an image means the same thing as saturating a neuron: it reaches its maximum value and can't be recovered. With contrast changes, an image can become completely white and completely black from the same adjustment.
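Conceptually, a contrast adjustment scales each pixel's distance from the mean. This NumPy sketch shows that idea (a simplified model of what `tf.image.adjust_contrast` does; the exact per-channel mean handling here is an assumption for illustration):

```python
import numpy as np

def adjust_contrast(image, factor):
    # Scale each pixel's deviation from the per-channel mean.
    # factor > 1 increases contrast, 0 < factor < 1 reduces it.
    mean = image.mean(axis=(0, 1), keepdims=True)
    return (image - mean) * factor + mean

img = np.array([[[0.2], [0.8]]])     # tiny 1x2 single-channel image, mean 0.5
low_contrast = adjust_contrast(img, 0.5)
```

With a large factor the scaled values overshoot the valid pixel range, which is exactly the "blowing out" effect described above.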
Step16: The example code adjusts the hue found in the image to make it more colorful. The adjustment accepts a delta parameter which controls the amount of hue to adjust in the image.
Step17: The code is similar to adjusting the contrast. It is common to oversaturate an image in order to identify edges because the increased saturation highlights changes in colors.
Step18: This example converted the RGB image into grayscale. The tf.slice operation took the top row of pixels out to investigate how their color has changed. The grayscale conversion is done by averaging all the color values for a pixel and setting the amount of grayscale to be the average.
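The channel-averaging conversion described above can be sketched directly in NumPy. (Note: `tf.image.rgb_to_grayscale` itself uses weighted luma coefficients; a plain channel average is the simplification the text describes.)

```python
import numpy as np

# Grayscale as the average of the color channels of each pixel.
rgb = np.array([[[0.0, 0.5, 1.0]]])          # one RGB pixel
gray = rgb.mean(axis=-1, keepdims=True)      # shape (1, 1, 1)
```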
Step19: RGB
|
2,942
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
import charts_function_list
from imdbpie import Imdb
imdb = Imdb()
imdb = Imdb(anonymize=True)
def title_search(title):
    # run the search once and build an integer index over the results
    results = imdb.search_for_title(title)
    return pd.DataFrame(results, index=range(len(results)))
titles_to_search=['Fast and Furious','Police Academy',
'Nightmare on Elm Street',
'Saw',
'Friday the 13th',
'Star Trek','Final Destination',
'Rocky','Step Up',
'Mission Impossible','Die Hard','Resident Evil','Child\'s Play','Halloween','Home Alone','House Party',
'Paranormal Activity','Predator','Terminator','Pirates of the Caribbean','Underworld',
'Death Wish','Godzilla','Scary Movie','Children of the Corn','Michael Myers','Zatoichi']
combined_titles = pd.concat([title_search(title) for title in titles_to_search],keys=titles_to_search)
#clean data
combined_titles.dropna(subset=['year'],inplace=True)
combined_titles['year']=combined_titles['year'].astype('int')
combined_titles = combined_titles[(combined_titles['year']<=2017)&(combined_titles['year']>1960)]
os.chdir(datafolder)
sequels = pd.read_csv('sequels_list_cleaned.csv')
sequels = sequels.sort_values(['Category','year']).set_index('Category')
sequels = charts_function_list.multi_numeric_index(sequels)
imdb = Imdb()
imdb = Imdb(anonymize=True)
title_list = sequels['imdb_id'].values
title_values = [imdb.get_title_by_id(title_) for title_ in title_list]
ratings_and_votes = pd.DataFrame({'rating':[title_values[item].rating for item in range(len(title_values))],
'votes':[title_values[item].votes for item in range(len(title_values))],
'imdb_id':sequels['imdb_id'].values})
sequels = sequels.reset_index()
sequels_with_ratings = pd.merge(sequels,ratings_and_votes,on='imdb_id',how='left')
sequels_with_ratings= sequels_with_ratings.set_index(['Category','level_1'],drop=True)
sequels_with_ratings.to_csv('sequels_with_rating.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pull ratings and votes using the imdb id
|
2,943
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('mesh', times=np.linspace(0,1,11), dataset='mesh01')
print(b['requiv@component'])
print(b['requiv_max@primary@component'])
print(b['requiv_max@primary@constraint'])
b.set_value('requiv@primary@component', 3)
b.set_value('sma@binary@component', 10)
print(b.run_checks())
b.set_value('sma@binary@component', 5)
print(b.run_checks())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Now let's add a mesh dataset at a few different times so that we can see how the potentials affect the surfaces of the stars.
Step3: Relevant Parameters
Step4: Critical Potentials and System Checks
Step5: At this time, if you were to call run_compute, an error would be thrown. An error isn't immediately thrown when setting requiv, however, since the overflow can be rectified by changing any of the other relevant parameters. For instance, let's change sma to be large enough to account for this value of rpole and you'll see that the error does not occur again.
Step6: These logger warnings are handy when running phoebe interactively, but in a script its also handy to be able to check whether the system is currently computable /before/ running run_compute.
|
2,944
|
<ASSISTANT_TASK:>
Python Code:
#The following line fetches you two datasets: images, usable for autoencoder training and attributes.
#Those attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind
from lfw_dataset import fetch_lfw_dataset
data,attrs = fetch_lfw_dataset()
import numpy as np
X_train = data[:10000].reshape((10000,-1))
print(X_train.shape)
X_val = data[10000:].reshape((-1,X_train.shape[1]))
print(X_val.shape)
image_h = data.shape[1]
image_w = data.shape[2]
X_train = np.float32(X_train)
X_train = X_train/255
X_val = np.float32(X_val)
X_val = X_val/255
%matplotlib inline
import matplotlib.pyplot as plt
def plot_gallery(images, h, w, n_row=3, n_col=6):
    """Helper function to plot a gallery of portraits"""
plt.figure(figsize=(1.5 * n_col, 1.7 * n_row))
plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)
for i in range(n_row * n_col):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(images[i].reshape((h, w, 3)), cmap=plt.cm.gray, vmin=-1, vmax=1, interpolation='nearest')
plt.xticks(())
plt.yticks(())
plot_gallery(X_train, image_h, image_w)
import theano
import theano.tensor as T
import lasagne
input_X = T.matrix("X")
input_shape = [None,image_h*image_w*3]
HU_encoder = 2000 #you can play with this values
HU_decoder = 2000
dimZ = 1000 #considering face reconstruction task, which size of representation seems reasonable?
# define the network
# use ReLU for hidden layers' activations
# GlorotUniform initialization for W
# zero initialization for biases
# it's also convenient to put sigmoid activation on output layer to get nice normalized pics
#l_input =
#l_enc =
#l_z =
#l_dec =
#l_out =
# create prediction variable
prediction = lasagne.layers.get_output(l_out)
# create loss function
loss = lasagne.objectives.squared_error(prediction, input_X).mean()
# create parameter update expressions
params = lasagne.layers.get_all_params(l_out, trainable=True)
updates = lasagne.updates.adam(loss, params, learning_rate=0.001)
# compile training function that updates parameters and returns training loss
# this will take a while
train_fn = theano.function([input_X], loss, updates=updates)
test_fn = theano.function([input_X], prediction)
def iterate_minibatches(inputs, batchsize, shuffle=True):
if shuffle:
indices = np.arange(len(inputs))
np.random.shuffle(indices)
for start_idx in range(0, len(inputs) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield inputs[excerpt]
# train your autoencoder
# visualize progress in reconstruction and loss decay
for batch in iterate_minibatches(X_val[:10], 1):
pred = test_fn(batch)
plot_gallery([batch[0],pred], image_h, image_w, n_row=1, n_col=2)
z_sample = T.matrix()
# Your code goes here:
# generated_x =
gen_fn = theano.function([z_sample], generated_x)
z = np.random.randn(25, dimZ)*0.5
output = gen_fn(np.asarray(z, dtype=theano.config.floatX))
plot_gallery(output, image_h, image_w, n_row=5, n_col=5)
import GS
#reload(GS)
# to compare with conventional AE, keep these hyperparameters
# or change them for the values that you used before
HU_encoder = 2000
HU_decoder = 2000
dimZ = 1000
# define the network
# you can start from https://github.com/Lasagne/Recipes/blob/master/examples/variational_autoencoder/variational_autoencoder.py
# or another example https://github.com/y0ast/Variational-Autoencoder/blob/master/VAE.py
# but remember that this is not your ground truth since the data is not MNIST
def KL_divergence(mu, logsigma):
return 0
def log_likelihood(x, mu, logsigma):
return 0
lasagne.layers.get_all_layers(l_output) # should be ~9 layers total
# create prediction variable
# prediction =
# create loss function
# ...
# loss = KL_divergence(..., ...) + log_likelihood(..., ..., ...)
# create parameter update expressions
# params =
# updates =
# compile training and testing functions
# train_fn =
# test_fn =
# train your autoencoder
# visualize progress in reconstruction and loss decay
attrs[:10]
#show top- and bottom-10 faces of sorted
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: For simplicity we want all values of the data to lie in the interval $[0,1]$
Step3: Autoencoder
Step4: Sampling
Step5: Can you visualize what the distribution of $z$ looks like? Is it dense? What properties would we expect from it? Can we perform interpolation in $z$ space?
Step6: Since our decoder is also a function that generates a distribution, we need to do the same splitting for the output layer. When testing the model we will look only at the mean values, so one of the outputs will be the actual autoencoder output.
Step7: And last but not least, the place in the code where most of the formulas go: the optimization objective. The objective for a VAE has its own name - the variational lower bound. As with any lower bound, our intention is to maximize it. Here it is (for one sample $z$ per input $x$)
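As a concrete reference for the Gaussian KL term of the lower bound, here is a NumPy sketch (not the Theano graph version the task asks for — the `(mu, logsigma)` signature mirrors the placeholder above, and the closed form assumes a diagonal Gaussian posterior against a standard-normal prior):

```python
import numpy as np

def kl_divergence(mu, logsigma):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ) summed over latent dims:
    # -0.5 * sum(1 + 2*logsigma - mu^2 - sigma^2)
    return -0.5 * np.sum(1 + 2 * logsigma - mu**2 - np.exp(2 * logsigma))

# Sanity checks: KL of the standard normal from itself is zero,
# and shifting the mean by 1 in one dimension costs 0.5 nats.
mu, logsigma = np.zeros(4), np.zeros(4)
```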
Step8: Now build the loss and training function
Step9: And train the model
Step10: Congrats!
|
2,945
|
<ASSISTANT_TASK:>
Python Code:
doc1="Electron acceleration in a post-flare decimetric continuum source Prasad Subramanian, S. M. White, M. Karlický, R. Sych, H. S. Sawant, S. Ananthakrishnan(Submitted on 23 Mar 2007)Aims: To calculate the power budget for electron acceleration and the efficiency of the plasma emission mechanism in a post-flare decimetric continuum source. Methods: We have imaged a high brightness temperature (∼109K) post-flare source at 1060 MHz with the Giant Metrewave Radio Telescope (GMRT). We use information from these images and the dynamic spectrum from the Hiraiso spectrograph together with the theoretical method described in Subramanian & Becker (2006) to calculate the power input to the electron acceleration process. The method assumes that the electrons are accelerated via a second-order Fermi acceleration mechanism. Results: We find that the power input to the nonthermal electrons is in the range 3×1025--1026 erg/s. The efficiency of the overall plasma emission process starting from electron acceleration and culminating in the observed emission could range from 2.87×10−9 to 2.38×10−8."
doc2="Local (shearing box) simulations of the nonlinear evolution of the magnetorotational instability in a collisionless plasma show that angular momentum transport by pressure anisotropy (p⊥≠p∥, where the directions are defined with respect to the local magnetic field) is comparable to that due to the Maxwell and Reynolds stresses. Pressure anisotropy, which is effectively a large-scale viscosity, arises because of adiabatic invariants related to p⊥ and p∥ in a fluctuating magnetic field. In a collisionless plasma, the magnitude of the pressure anisotropy, and thus the viscosity, is determined by kinetic instabilities at the cyclotron frequency. Our simulations show that ∼50 % of the gravitational potential energy is directly converted into heat at large scales by the viscous stress (the remaining energy is lost to grid-scale numerical dissipation of kinetic and magnetic energy). We show that electrons receive a significant fraction (∼[Te/Ti]1/2) of this dissipated energy. Employing this heating by an anisotropic viscous stress in one dimensional models of radiatively inefficient accretion flows, we find that the radiative efficiency of the flow is greater than 0.5% for M˙≳10−4M˙Edd. Thus a low accretion rate, rather than just a low radiative efficiency, is necessary to explain the low luminosity of many accreting black holes. For Sgr A* in the Galactic Center, our predicted radiative efficiencies imply an accretion rate of ≈3×10−8M⊙yr−1 and an electron temperature of ≈3×1010 K at ≈10 Schwarzschild radii; the latter is consistent with the brightness temperature inferred from VLBI observations."
doc3="We review the theory of electron-conduction opacity, a fundamental ingredient in the computation of low-mass stellar models; shortcomings and limitations of the existing calculations used in stellar evolution are discussed. We then present new determinations of the electron-conduction opacity in stellar conditions for an arbitrary chemical composition, that improve over previous works and, most importantly, cover the whole parameter space relevant to stellar evolution models (i.e., both the regime of partial and high electron degeneracy). A detailed comparison with the currently used tabulations is also performed. The impact of our new opacities on the evolution of low-mass stars is assessed by computing stellar models along both the H- and He-burning evolutionary phases, as well as Main Sequence models of very low-mass stars and white dwarf cooling tracks."
doc4="The best measurement of the cosmic ray positron flux available today was performed by the HEAT balloon experiment more than 10 years ago. Given the limitations in weight and power consumption for balloon experiments, a novel approach was needed to design a detector which could increase the existing data by more than a factor of 100. Using silicon photomultipliers for the readout of a scintillating fiber tracker and of an imaging electromagnetic calorimeter, the PEBS detector features a large geometrical acceptance of 2500 cm^2 sr for positrons, a total weight of 1500 kg and a power consumption of 600 W. The experiment is intended to measure cosmic ray particle spectra for a period of up to 20 days at an altitude of 40 km circulating the North or South Pole. A full Geant 4 simulation of the detector concept has been developed and key elements have been verified in a testbeam in October 2006 at CERN."
doc5="The fluorescence detection of ultra high energy (> 10^18 eV) cosmic rays requires a detailed knowledge of the fluorescence light emission from nitrogen molecules, which are excited by the cosmic ray shower particles along their path in the atmosphere. We have made a precise measurement of the fluorescence light spectrum excited by MeV electrons in dry air. We measured the relative intensities of 34 fluorescence bands in the wavelength range from 284 to 429 nm with a high resolution spectrograph. The pressure dependence of the fluorescence spectrum was also measured from a few hPa up to atmospheric pressure. Relative intensities and collisional quenching reference pressures for bands due to transitions from a common upper level were found in agreement with theoretical expectations. The presence of argon in air was found to have a negligible effect on the fluorescence yield. We estimated that the systematic uncertainty on the cosmic ray shower energy due to the pressure dependence of the fluorescence spectrum is reduced to a level of 1% by the AIRFLY results presented in this paper."
documents=[doc1,doc2,doc3,doc4,doc5]
from collections import defaultdict
import urllib.request
from nltk.corpus import stopwords
from gensim import corpora

# remove words from the stopwords and tokenize
#stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stopwords.words('english')] for document in documents]
# remove words that appear only once
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
#texts contain all the key words
texts = [[token for token in text if frequency[token] > 1] for text in texts]
#save texts as a dictionary
dictionary = corpora.Dictionary(texts)
dictionary.save('firstdic.dict') # store the dictionary, for future reference
#print(dictionary)
#look at the unique integer IDs for the 75 words
print(dictionary.token2id)
#test a new document
doc6="This paper presents the effects of electron-positron pair production on the linear growth of the resistive hose instability of a filamentary beam that could lead to snake-like distortion. For both the rectangular radial density profile and the diffuse profile reflecting the Bennett-type equilibrium for a self-collimating flow, the modified eigenvalue equations are derived from a Vlasov-Maxwell equation. While for the simple rectangular profile, current perturbation is localized at the sharp radial edge, for the realistic Bennett profile with an obscure edge, it is non-locally distributed over the entire beam, removing catastrophic wave-particle resonance. The pair production effects likely decrease the betatron frequency, and expand the beam radius to increase the resistive decay time of the perturbed current; these also lead to a reduction of the growth rate. It is shown that, for the Bennett profile case, the characteristic growth distance for a preferential mode can exceed the observational length-scale of astrophysical jets. This might provide the key to the problem of the stabilized transport of the astrophysical jets including extragalactic jets up to Mpc (∼3×1024 cm) scales."
new_vec = dictionary.doc2bow(doc6.lower().split())
print(new_vec)
#this is how to grab the summary from each api link
url = 'http://export.arxiv.org/api/query?search_query=all:electron&start=0&max_results=1'
data=urllib.request.urlopen(url).read()
#datastring=str(data,'utf-8')
datasummary=str(data,'utf-8').split("<summary>",1)[1].split('</summary',1)[0]
#convert the bytes to string, split out the summary
print(datasummary)
for articleN in range(0,10):
url = 'http://export.arxiv.org/api/query?search_query=all:electron&start='+str(articleN)+'&max_results=1'
print(url)
data=urllib.request.urlopen(url).read()
datasummary=str(data,'utf-8').split("<summary>",1)[1].split('</summary',1)[0]
vec=dictionary.doc2bow(datasummary.lower().split())
print(vec)
class MyCorpus(object):
    def __iter__(self):
for articleN in range(0,5):
url='http://export.arxiv.org/api/query?search_query=all:electron&start='+str(articleN)+'&max_results=1'
data=urllib.request.urlopen(url).read()
datasummary=str(data,'utf-8').split("<summary>",1)[1].split('</summary',1)[0]
yield dictionary.doc2bow(datasummary.lower().split())
corpus_memory_friendly=MyCorpus()
print(corpus_memory_friendly)
for vector in corpus_memory_friendly:
print(vector)
class MyCorpus(object):
def __iter__(self):
for line in open('mycorpus.txt'):
#assume there's one document per line, tokens separated by whitespace
yield dictionary.doc2bow(line.lower().split())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now test our small dictionary on a new abstract. It returns a vector of [word ID, frequency] pairs.
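For reference, `doc2bow` essentially maps each known token to its integer id and counts occurrences, silently dropping unseen tokens. A pure-Python sketch of that behavior (a simplification, not gensim's implementation):

```python
# Map tokens to (id, count) pairs using a fixed token->id mapping;
# tokens absent from the mapping are ignored, as doc2bow does.
def doc2bow(tokens, token2id):
    counts = {}
    for token in tokens:
        if token in token2id:
            token_id = token2id[token]
            counts[token_id] = counts.get(token_id, 0) + 1
    return sorted(counts.items())

token2id = {'electron': 0, 'beam': 1, 'plasma': 2}
vec = doc2bow('electron beam electron unknown'.split(), token2id)
```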
Step2: Now use the arxiv API instead of the manual scraping; try it on one abstract and see how it works
Step3: Now try our small dictionary on 10 abstracts
Step4: Now build a class to stream the corpus
|
2,946
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.cluster import vq
# create 100 coordinate pairs (i.e. two values), then add 5 to all of them
year_1 = np.random.randn(100, 2) + 5
# create 30 coordinatee pairs (i.e. two values), then subtract 5 to all of them
year_2 = np.random.randn(30, 2) - 5
# create 50 coordinatee pairs (i.e. two values)
year_3 = np.random.randn(50, 2)
print('year 1 battles:', year_1[0:3])
print('year 2 battles:', year_2[0:3])
print('year 3 battles:', year_3[0:3])
# vertically stack year_1, year_2, and year_3 elements
battles = np.vstack([year_1, year_2, year_3])
# calculate the centroid coordinates of each cluster
# and the variance of all the clusters
centroids, variance = vq.kmeans(battles, 3)
centroids
variance
identified, distance = vq.vq(battles, centroids)
identified
distance
cluster_1 = battles[identified == 0]
cluster_2 = battles[identified == 1]
cluster_3 = battles[identified == 2]
print(cluster_1[0:3])
print(cluster_2[0:3])
print(cluster_3[0:3])
# create a scatter plot there the x-axis is the first column of battles
# the y-axis is the second column of battles, the size is 100, and
# the color of each point is determined by the indentified variable
plt.scatter(battles[:,0], battles[:,1], s=100, c=identified)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create coordinates for battles for each year of the war
Step2: View the first 3 entries of each year of battles
Step3: Pool all three years of coordinates
Step4: Cluster the battle locations into three groups
Step5: View the centroid coordinate for each of the three clusters
Step6: View the variance of the clusters (they all share the same)
Step7: Seperate the battle data into clusters
Step8: View the cluster of each battle
Step9: View the distance of each individual battle from their cluster's centroid
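What `vq.vq` returns can be sketched in plain NumPy: for each observation, the index of the nearest centroid and the Euclidean distance to it (an illustrative re-implementation, not SciPy's code):

```python
import numpy as np

def assign_clusters(obs, centroids):
    # Pairwise distances, shape (n_obs, n_centroids), via broadcasting.
    diffs = obs[:, None, :] - centroids[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    labels = dists.argmin(axis=1)                       # nearest centroid index
    return labels, dists[np.arange(len(obs)), labels]   # and its distance

obs = np.array([[0.0, 0.0], [10.0, 10.0]])
cents = np.array([[0.0, 1.0], [10.0, 11.0]])
labels, dist = assign_clusters(obs, cents)
```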
Step10: Index the battles data by the cluster to which they belong
Step11: Print the first three coordinate pairs of each cluster
Step12: Plot all the battles, color each battle by cluster
|
2,947
|
<ASSISTANT_TASK:>
Python Code:
ifile = open("data/HOPG.txt", mode='r', encoding='utf-8')
text = ifile.read()
ifile.close()
print(text[:300], "...")
import nltk
text = text.lower()
print(text[:300], "...")
myFD = nltk.FreqDist(text)
print(myFD)
for x in ":,.-[];!'\"\t\n/ ?":
del myFD[x]
for x in myFD:
print(x, myFD[x])
myFD.values()
sum(myFD.values())
float(sum(myFD.values()))
total = float(sum(myFD.values()))
print(total)
relfrq = [ x/total for x in myFD.values() ]
print(relfrq)
from math import log
def entropy(p):
return -sum( [ x * log(x, 2) for x in p ] )
entropy([1/8, 1/16, 1/4, 1/8, 1/16, 1/16, 1/4, 1/16])
print(entropy(relfrq))
entdist = [ -x * log(x, 2) for x in relfrq ]
print(entdist)
tokens = nltk.word_tokenize(text)
tokens[:20]
myTokenFD = nltk.FreqDist(tokens)
print(myTokenFD)
for token in list(myTokenFD.items()):
print(token[0], token[1])
stopwords = """
i me my myself we our ours ourselves you you're you've you'll you'd
your yours yourself yourselves he him his himself she she's her hers
herself it it's its itself they them their theirs themselves what
which who whom this that that'll these those am is are was were be
been being have has had having do does did doing a an the and but if
or because as until while of at by for with about against between
into through during before after above below to from up down in out
on off over under again further then once here there when where why
how all any both each few more most other some such no nor not only
own same so than too very s t can will just don don't should
should've now d ll m o re ve y ain aren aren't couldn couldn't didn
didn't doesn doesn't hadn hadn't hasn hasn't haven haven't isn isn't
ma mightn mightn't mustn mustn't needn needn't shan shan't shouldn
shouldn't wasn wasn't weren weren't won won't wouldn wouldn't
"""
for x in stopwords.split():
del myTokenFD[x]
print(list(myTokenFD))
myTokenBigrams = nltk.ngrams(tokens, 2)
bigrams = list(myTokenBigrams)
print(bigrams[:20])
print(len(bigrams))
print(len(tokens))
myBigramFD = nltk.FreqDist(bigrams)
print(myBigramFD)
for bigram in list(myBigramFD.items())[:20]:
print(bigram[0], bigram[1])
print("...")
for ngram in list(myBigramFD.items()):
print(" ".join(ngram[0]), ngram[1])
print("...")
ngrams = [ (" ".join(ngram), myBigramFD[ngram]) for ngram in myBigramFD ]
print(ngrams[:100])
sortedngrams = sorted(ngrams, key=lambda x: x[1])
print(sortedngrams[:20])
print("...")
from operator import itemgetter
sortedngrams = sorted(ngrams, key=itemgetter(1))
print(sortedngrams[:20])
print("...")
sortedngrams = sorted(ngrams, key=itemgetter(1), reverse=True)
print(sortedngrams[:20])
print("...")
sortedngrams = sorted(ngrams, key=itemgetter(1), reverse=True)
for t in sortedngrams[:20]:
print(t[0], t[1])
print("...")
total = float(sum(myBigramFD.values()))
exceptions = ["]", "[", "--", ",", ".", "'s", "?", "!", "'", "'ye"]
myStopWords = stopwords.split()
results = []
for x in myBigramFD:
if x[0] in exceptions or x[1] in exceptions:
continue
if x[0] in myStopWords or x[1] in myStopWords:
continue
#print("%s\t%s\t%s" % (x[0], x[1], myBigramFD[x]/total))
results.append( (x[0], x[1], myBigramFD[x]/total) )
#print(results)
sortedresults = sorted(results, key=itemgetter(2), reverse=True)
for x in sortedresults[:20]:
print(x[0], x[1], x[2])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The optional parameters in the open function above define the mode of operations on the file and the encoding of the content. For example, setting the mode to r declares that reading from the file is the only permitted operation that we will perform in the following code. Setting the encoding to utf-8 declares that all characters will be encoded using the Unicode encoding schema UTF-8 for the content of the file.
Step2: We can now lowercase the text, which means normalizing all characters to lower case
Step3: To generate a frequency profile from the text file, we can use the NLTK function FreqDist
Step4: We can remove certain characters from the distribution, or alternatively replace these characters in the text variable. The following loop removes them from the frequency profile in myFD, which is a dictionary data structure in Python.
Step5: We can print out the frequency profile by looping through the returned data structure
Step6: To relativize the frequencies, we need to compute the total number of characters. This is assuming that we removed all punctuation symbols. The frequency distribution instance myFD provides a method to access the values associated with the individual characters. This will return a list of values, that is the frequencies associated with the characters.
Step7: The sum function can summarize these values in its list argument
Step8: To avoid type problems when we compute the relative frequency of characters, we can convert the total number of characters into a float. This will guarantee that the division in the following relativisation step will be a float as well.
Step9: We store the resulting number of characters in the total variable
Step10: We can now generate a probability distribution over characters. To convert the frequencies into relative frequencies we use list comprehension and divide every single list element by total. The resulting relative frequencies are stored in the variable relfreq
Step11: Let us compute the Entropy for the character distribution using the relative frequencies. We will need the logarithm function from the Python math module for that
Step12: We can define the Entropy function according to the equation $I = - \sum_x P(x) \log_2 P(x)$ as
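As a quick sanity check of this definition (using the same function as in the code), a uniform distribution over 8 outcomes should carry exactly 3 bits:

```python
from math import log

def entropy(p):
    # I = -sum( P(x) * log2 P(x) )
    return -sum(x * log(x, 2) for x in p)

uniform8 = [1 / 8] * 8   # maximum-entropy distribution over 8 symbols
```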
Step13: We can now compute the entropy of the character distribution
Step14: We might be interested in the point-wise entropy of the characters in this distribution, thus needing the entropy of each single character. We can compute that in the following way
Step15: We could now compute the variance over this point-wise entropy distribution or other properties of the frequency distribution, such as the median, mode, or standard deviation.
Step16: We can print out the first 20 tokens to verify our data structure is a list with lower-case strings
Step17: We can now generate a frequency profile from the token list, as we did with the characters above
Step19: The frequency profile can be printed out in the same way as above by looping over the tokens and their frequencies. Note that we restrict the loop to the first 20 tokens here just to keep the notebook smaller. You can remove the [
Step20: Counting N-grams
Step21: To store the bigrams in a list that we want to process and analyze further, we convert the Python generator object myTokenBigrams to a list
Step22: Let us verify that the resulting data structure is indeed a list of string tuples. We will print out the first 20 tuples from the bigram list
Step23: We can now verify the number of bigrams and check that there are exactly number of tokens - 1 = number of bigrams in the resulting list
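The tokens - 1 = bigrams property follows directly from how bigrams pair each token with its successor; a stdlib-only sketch of what `nltk.ngrams(tokens, 2)` yields (the example tokens are made up for illustration):

```python
# Pair each token with its successor; zip stops at the shorter
# sequence, so we get exactly len(tokens) - 1 bigrams.
tokens = ['call', 'me', 'ishmael', 'some', 'years', 'ago']
bigrams = list(zip(tokens, tokens[1:]))
```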
Step24: The frequency profile from these bigrams is generated in exactly the same way as from the token list in the examples above
Step25: If we would want to know some more general properties of the frequency distribution, we can print out information about it. The print statement for this bigram frequency distribution tells us that we have 17,766 types and 38,126 tokens
Step26: The bigrams and their corresponding frequencies can be printed using a for loop. We restrict the number of printed items to 20, just to keep this list reasonably long. If you would like to see the full frequency profile, remove the [
Step27: Pretty printing the bigrams is possible as well
Step28: You can remove the [
Step29: We can generate an increasing frequency profile using the sort function on the second element of the tuple list, that is on the frequency
Step30: We can increase the speed of this sorted call by using the itemgetter() function in the operator module. Let us import this function
Step31: We can now define the sort-key for sorted using the itemgetter function and selecting with 1 the second element in the tuple. Remember that the enumeration of elements in lists or tuples in Python starts at 0.
Step32: A decreasing frequency profile can be generated using another parameter to sorted
Step33: We can pretty-print the decreasing frequency profile
|
2,948
|
<ASSISTANT_TASK:>
Python Code:
import vaex
import vaex.jupyter.model as vjm
import numpy as np
import matplotlib.pyplot as plt
df = vaex.example()
df
E_axis = vjm.Axis(df=df, expression=df.E, shape=140)
Lz_axis = vjm.Axis(df=df, expression=df.Lz, shape=100)
Lz_axis
await vaex.jupyter.gather() # wait until Vaex is done with all background computation
Lz_axis # now min and max are computed, and bin_centers is set
data_array_widget = df.widget.data_array(axes=[Lz_axis, E_axis], selection=[None, 'default'])
data_array_widget # being the last expression in the cell, Jupyter will 'display' the widget
# NOTE: since the computations are done in the background, data_array_widget.model.grid is initially None.
# We can ask vaex-jupyter to wait till all executions are done using:
await vaex.jupyter.gather()
# get a reference to the xarray DataArray object
data_array = data_array_widget.model.grid
print(f"type:", type(data_array))
print("dims:", data_array.dims)
print("data:", data_array.data)
print("coords:", data_array.coords)
print("Lz's data:", data_array.coords['Lz'].data)
print("Lz's attrs:", data_array.coords['Lz'].attrs)
print("And displaying the xarray DataArray:")
display(data_array) # this is what the vaex.jupyter.view.DataArray uses
df.select(df.x > 0)
# NOTE: da is short for 'data array'
def plot2d(da):
plt.figure(figsize=(8, 8))
    ar = da.data[1]  # take the numpy data, and take the selection (index 1 = 'default')
print(f'imshow of a numpy array of shape: {ar.shape}')
plt.imshow(np.log1p(ar.T), origin='lower')
df.widget.data_array(axes=[Lz_axis, E_axis], display_function=plot2d, selection=[None, True])
df.select(df.id < 10)
def plot2d_with_labels(da):
plt.figure(figsize=(8, 8))
grid = da.data # take the numpy data
dim_x = da.dims[0]
dim_y = da.dims[1]
plt.title(f'{dim_y} vs {dim_x} - shape: {grid.shape}')
extent = [
da.coords[dim_x].attrs['min'], da.coords[dim_x].attrs['max'],
da.coords[dim_y].attrs['min'], da.coords[dim_y].attrs['max']
]
plt.imshow(np.log1p(grid.T), origin='lower', extent=extent, aspect='auto')
plt.xlabel(da.dims[0])
plt.ylabel(da.dims[1])
da_plot_view_nicer = df.widget.data_array(axes=[Lz_axis, E_axis], display_function=plot2d_with_labels)
da_plot_view_nicer
def plot2d_with_selections(da):
grid = da.data
# Create 1 row and #selections of columns of matplotlib axes
fig, axgrid = plt.subplots(1, grid.shape[0], sharey=True, squeeze=False)
for selection_index, ax in enumerate(axgrid[0]):
ax.imshow(np.log1p(grid[selection_index].T), origin='lower')
df.widget.data_array(axes=[Lz_axis, E_axis], display_function=plot2d_with_selections,
selection=[None, 'default', 'rest'])
df.select(df.id < 10) # select 10 objects
df.select(df.id >= 10, name='rest') # and the rest
FeH_axis = vjm.Axis(df=df, expression='FeH', min=-3, max=1, shape=5)
da_view = df.widget.data_array(axes=[E_axis, Lz_axis, FeH_axis], selection=[None, 'default'])
da_view
def plot_with_xarray(da):
da_log = np.log1p(da) # Note that an xarray DataArray is like a numpy array
da_log.plot(x='Lz', y='E', col='FeH', row='selection', cmap='viridis')
plot_view = df.widget.data_array([E_axis, Lz_axis, FeH_axis], display_function=plot_with_xarray,
selection=[None, 'default', 'rest'])
plot_view
selection_widget = df.widget.selection_expression()
selection_widget
await vaex.jupyter.gather()
w = df.widget.counter_selection('default', lazy=True)
w
x_axis = vjm.Axis(df=df, expression=df.Lz)
y_axis = vjm.Axis(df=df, expression=df.E)
da_xy_view = df.widget.data_array(axes=[x_axis, y_axis], display_function=plot2d_with_labels, shape=180)
da_xy_view
# wait for the previous plot to finish
await vaex.jupyter.gather()
# Change both the x and y axis
x_axis.expression = np.log(df.x**2)
y_axis.expression = df.y
# Note that both assignment will create 1 computation in the background (minimal amount of passes over the data)
await vaex.jupyter.gather()
# vaex computed the new min/max, and the xarray DataArray
# x_axis.min, x_axis.max, da_xy_view.model.grid
x_widget = df.widget.expression(x_axis.expression, label='X axis')
x_widget
from ipywidgets import link
link((x_widget, 'value'), (x_axis, 'expression'))
y_widget = df.widget.expression(y_axis, label='Y axis')
# vaex now does this for us, much shorter
# link((y_widget, 'value'), (y_axis, 'expression'))
y_widget
await vaex.jupyter.gather() # lets wait again till all calculations are finished
from vaex.jupyter.widgets import ContainerCard
ContainerCard(title='My plot',
subtitle="using vaex-jupyter",
main=da_xy_view,
controls=[x_widget, y_widget], show_controls=True)
y_axis.expression = df.vx
df = vaex.example() # we create the dataframe again, to leave all the plots above 'alone'
heatmap_xy = df.widget.heatmap(df.x, df.y, selection=[None, True])
heatmap_xy
heatmap_xy.model.x
x_widget = df.widget.expression(heatmap_xy.model.x, label='X axis')
y_widget = df.widget.expression(heatmap_xy.model.y, label='Y axis')
ContainerCard(title='My plot',
subtitle="using vaex-jupyter and bqplot",
main=heatmap_xy,
controls=[x_widget, y_widget, selection_widget],
show_controls=True,
card_props={'style': 'min-width: 800px;'})
heatmap_xy.tool = 'pan-zoom' # we can also do this programmatically.
heatmap_xy.model.x.expression = np.log10(df.x**2)
await vaex.jupyter.gather() # and we wait before we continue
histogram_Lz = df.widget.histogram(df.Lz, selection_interact='default')
histogram_Lz.tool = 'select-x'
histogram_Lz
# You can graphically select a particular region, in this case we do it programmatically
# for reproducibility of this notebook
histogram_Lz.plot.figure.interaction.selected = [1200, 1300]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want to build a 2 dimensional grid with the number counts in each bin. To do this, we first define two axis objects
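Before looking at vaex's machinery, here is a back-of-the-envelope sketch (plain numpy, with hypothetical random data) of the kind of count grid that `df.widget.data_array` builds over the `(Lz, E)` axes, using the same bin shapes as the `Axis` objects below:

```python
import numpy as np

# Hypothetical stand-ins for the Lz and E columns
rng = np.random.default_rng(0)
Lz = rng.normal(size=1000)
E = rng.normal(size=1000)

# shape=100 bins for Lz, shape=140 bins for E, as in the Axis objects
grid, lz_edges, e_edges = np.histogram2d(Lz, E, bins=(100, 140))
```

Every row lands in exactly one bin, so the grid sums back to the number of rows; vaex computes the same kind of grid, but lazily and out of core.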
Step2: When we inspect the Lz_axis object we see that the min, max, and bin centers are all None. This is because Vaex calculates them in the background, so the kernel stays interactive, meaning you can continue working in the notebook. We can ask Vaex to wait until all background calculations are done. Note that for billions of rows, this can take over a second.
Step3: Note that the Axis is a traitlets HasTrait object, similar to all ipywidget objects. This means that we can link all of its properties to an ipywidget and thus creating interactivity. We can also use observe to listen to any changes to our model.
Step4: Note
Step5: Note that data_array.coords['Lz'].data is the same as Lz_axis.bin_centers and data_array.coords['Lz'].attrs contains the same min/max as the Lz_axis.
Step6: If we scroll back, we see that the data_array_widget has updated itself, and now contains two selections! This is a very powerful feature that allows us to make interactive visualizations.
Step7: In the above figure, we choose index 1 along the selection axis, which refers to the 'default' selection. Choosing an index of 0 would correspond to the None selection, and all the data would be displayed. If we now change the selection, the figure will update itself
Step8: As xarray's DataArray is fully self describing, we can improve the plot by using the dimension names for labeling, and setting the extent of the figure's axes.
Step9: We can also create more sophisticated plots, for example one where we show all of the selections. Note that we can pre-emptively expect a selection and define it later
Step10: Modifying a selection will update the figure.
Step11: Another advantage of using xarray is its excellent plotting capabilities. It handles a lot of the boring stuff like axis labeling, and also provides a nice interface for slicing the data even more.
Step12: We can see that we now have a 4 dimensional grid, which we would like to visualize.
Step13: We only have to tell xarray which axis it should map to which 'aesthetic', speaking in Grammar of Graphics terms.
Step14: The counter_selection creates a widget which keeps track of the number of rows in a selection. In this case we ask it to be 'lazy', which means that it will not cause extra passes over the data, but will ride along if some user action triggers a calculation.
Step15: Axis control widgets
Step16: Again, we can change the expressions of the axes programmatically
Step17: But, if we want to create a dashboard with Voila, we need to have a widget that controls them
Step18: This widget will allow us to edit an expression, which will be validated by Vaex. How do we 'link' the value of the widget to the axis expression? Because both the Axis as well as the x_widget are HasTrait objects, we can link their traits together
Step19: Since this operation is so common, we can also directly pass the Axis object, and Vaex will set up the linking for us
Step20: A nice container
Step21: We can directly assign a Vaex expression to the x_axis.expression, or to x_widget.value since they are linked.
Step22: Interactive plots
Step23: Note that we passed expressions, and not axis objects. Vaex recognizes this and will create the axis objects for you. You can access them from the model
Step24: The heatmap itself is again a widget. Thus we can combine it with other widgets to create a more sophisticated interface.
Step25: By switching the tool in the toolbar (click the pan_tool icon, or change it programmatically in the next cell), we can zoom in. The plot's axis bounds are directly synced to the axis objects (the x_min is linked to the x_axis min, etc). Thus a zoom action causes the axis objects to be changed, which will trigger a recomputation.
Step26: Since we can access the Axis objects, we can also programmatically change the heatmap. Note that both the expression widget, the plot axis label and the heatmap it self is updated. Everything is linked together!
Step27: Another visualization based on bqplot is the interactive histogram. In the example below, we show all the data, but the selection interaction will affect/set the 'default' selection.
|
2,949
|
<ASSISTANT_TASK:>
Python Code:
import pixiedust
pixiedust.printAllPackages()
pixiedust.installPackage("graphframes:graphframes:0")
pixiedust.printAllPackages()
#import the Graphs example
from graphframes.examples import Graphs
#create the friends example graph
g=Graphs(sqlContext).friends()
#use the pixiedust display
display(g)
pixiedust.installPackage("org.apache.commons:commons-csv:0")
pixiedust.installPackage("https://github.com/ibm-watson-data-lab/spark.samples/raw/master/dist/streaming-twitter-assembly-1.6.jar")
pixiedust.uninstallPackage("org.apache.commons:commons-csv:0")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Add a package from spark-packages.org
Step2: Restart your kernel
Step3: Display a GraphFrames data sample
Step4: Install from maven
Step5: Install a jar file directly from a URL
Step6: Follow the tutorial
|
2,950
|
<ASSISTANT_TASK:>
Python Code:
#|export
class TerminateOnNaNCallback(Callback):
"A `Callback` that terminates training if loss is NaN."
order=-9
def after_batch(self):
"Test if `last_loss` is NaN and interrupts training."
if torch.isinf(self.loss) or torch.isnan(self.loss): raise CancelFitException
learn = synth_learner()
learn.fit(10, lr=100, cbs=TerminateOnNaNCallback())
assert len(learn.recorder.losses) < 10 * len(learn.dls.train)
for l in learn.recorder.losses:
assert not torch.isinf(l) and not torch.isnan(l)
#|export
class TrackerCallback(Callback):
"A `Callback` that keeps track of the best value in `monitor`."
order,remove_on_fetch,_only_train_loop = 60,True,True
def __init__(self, monitor='valid_loss', comp=None, min_delta=0., reset_on_fit=True):
if comp is None: comp = np.less if 'loss' in monitor or 'error' in monitor else np.greater
if comp == np.less: min_delta *= -1
self.monitor,self.comp,self.min_delta,self.reset_on_fit,self.best= monitor,comp,min_delta,reset_on_fit,None
def before_fit(self):
"Prepare the monitored value"
self.run = not hasattr(self, "lr_finder") and not hasattr(self, "gather_preds")
if self.reset_on_fit or self.best is None: self.best = float('inf') if self.comp == np.less else -float('inf')
assert self.monitor in self.recorder.metric_names[1:]
self.idx = list(self.recorder.metric_names[1:]).index(self.monitor)
def after_epoch(self):
"Compare the last value to the best up to now"
val = self.recorder.values[-1][self.idx]
if self.comp(val - self.min_delta, self.best): self.best,self.new_best = val,True
else: self.new_best = False
def after_fit(self): self.run=True
#|hide
class FakeRecords(Callback):
order=51
def __init__(self, monitor, values): self.monitor,self.values = monitor,values
def before_fit(self): self.idx = list(self.recorder.metric_names[1:]).index(self.monitor)
def after_epoch(self): self.recorder.values[-1][self.idx] = self.values[self.epoch]
class TestTracker(Callback):
order=61
def before_fit(self): self.bests,self.news = [],[]
def after_epoch(self):
self.bests.append(self.tracker.best)
self.news.append(self.tracker.new_best)
#|hide
learn = synth_learner(n_trn=2, cbs=TestTracker())
cbs=[TrackerCallback(monitor='valid_loss'), FakeRecords('valid_loss', [0.2,0.1])]
with learn.no_logging(): learn.fit(2, cbs=cbs)
test_eq(learn.test_tracker.bests, [0.2, 0.1])
test_eq(learn.test_tracker.news, [True,True])
#With a min_delta
cbs=[TrackerCallback(monitor='valid_loss', min_delta=0.15), FakeRecords('valid_loss', [0.2,0.1])]
with learn.no_logging(): learn.fit(2, cbs=cbs)
test_eq(learn.test_tracker.bests, [0.2, 0.2])
test_eq(learn.test_tracker.news, [True,False])
#|hide
#By default metrics have to be bigger at each epoch.
def tst_metric(out,targ): return F.mse_loss(out,targ)
learn = synth_learner(n_trn=2, cbs=TestTracker(), metrics=tst_metric)
cbs=[TrackerCallback(monitor='tst_metric'), FakeRecords('tst_metric', [0.2,0.1])]
with learn.no_logging(): learn.fit(2, cbs=cbs)
test_eq(learn.test_tracker.bests, [0.2, 0.2])
test_eq(learn.test_tracker.news, [True,False])
#This can be overwritten by passing `comp=np.less`.
learn = synth_learner(n_trn=2, cbs=TestTracker(), metrics=tst_metric)
cbs=[TrackerCallback(monitor='tst_metric', comp=np.less), FakeRecords('tst_metric', [0.2,0.1])]
with learn.no_logging(): learn.fit(2, cbs=cbs)
test_eq(learn.test_tracker.bests, [0.2, 0.1])
test_eq(learn.test_tracker.news, [True,True])
#|hide
#Setting reset_on_fit=False will maintain the "best" value over subsequent calls to fit
learn = synth_learner(n_val=2, cbs=TrackerCallback(monitor='tst_metric', reset_on_fit=False), metrics=tst_metric)
tracker_cb = learn.cbs.filter(lambda cb: isinstance(cb, TrackerCallback))[0]
with learn.no_logging(): learn.fit(1)
first_best = tracker_cb.best
with learn.no_logging(): learn.fit(1)
test_eq(tracker_cb.best, first_best)
#|hide
#A tracker callback is not run during an lr_find
from fastai.callback.schedule import *
learn = synth_learner(n_trn=2, cbs=TrackerCallback(monitor='tst_metric'), metrics=tst_metric)
learn.lr_find(num_it=15, show_plot=False)
assert not hasattr(learn, 'new_best')
#|export
class EarlyStoppingCallback(TrackerCallback):
"A `TrackerCallback` that terminates training when monitored quantity stops improving."
order=TrackerCallback.order+3
def __init__(self, monitor='valid_loss', comp=None, min_delta=0., patience=1, reset_on_fit=True):
super().__init__(monitor=monitor, comp=comp, min_delta=min_delta, reset_on_fit=reset_on_fit)
self.patience = patience
def before_fit(self): self.wait = 0; super().before_fit()
def after_epoch(self):
"Compare the value monitored to its best score and maybe stop training."
super().after_epoch()
if self.new_best: self.wait = 0
else:
self.wait += 1
if self.wait >= self.patience:
print(f'No improvement since epoch {self.epoch-self.wait}: early stopping')
raise CancelFitException()
learn = synth_learner(n_trn=2, metrics=F.mse_loss)
learn.fit(n_epoch=200, lr=1e-7, cbs=EarlyStoppingCallback(monitor='mse_loss', min_delta=0.1, patience=2))
learn.validate()
learn = synth_learner(n_trn=2)
learn.fit(n_epoch=200, lr=1e-7, cbs=EarlyStoppingCallback(monitor='valid_loss', min_delta=0.1, patience=2))
#|hide
test_eq(len(learn.recorder.values), 3)
#|export
class SaveModelCallback(TrackerCallback):
"A `TrackerCallback` that saves the model's best during training and loads it at the end."
order = TrackerCallback.order+1
def __init__(self, monitor='valid_loss', comp=None, min_delta=0., fname='model', every_epoch=False, at_end=False,
with_opt=False, reset_on_fit=True):
super().__init__(monitor=monitor, comp=comp, min_delta=min_delta, reset_on_fit=reset_on_fit)
assert not (every_epoch and at_end), "every_epoch and at_end cannot both be set to True"
# keep track of file path for loggers
self.last_saved_path = None
store_attr('fname,every_epoch,at_end,with_opt')
def _save(self, name): self.last_saved_path = self.learn.save(name, with_opt=self.with_opt)
def after_epoch(self):
"Compare the value monitored to its best score and save if best."
if self.every_epoch:
if (self.epoch%self.every_epoch) == 0: self._save(f'{self.fname}_{self.epoch}')
else: #every improvement
super().after_epoch()
if self.new_best:
print(f'Better model found at epoch {self.epoch} with {self.monitor} value: {self.best}.')
self._save(f'{self.fname}')
def after_fit(self, **kwargs):
"Load the best model."
if self.at_end: self._save(f'{self.fname}')
elif not self.every_epoch: self.learn.load(f'{self.fname}', with_opt=self.with_opt)
learn = synth_learner(n_trn=2, path=Path.cwd()/'tmp')
learn.fit(n_epoch=2, cbs=SaveModelCallback())
assert (Path.cwd()/'tmp/models/model.pth').exists()
learn = synth_learner(n_trn=2, path=Path.cwd()/'tmp')
learn.fit(n_epoch=2, cbs=SaveModelCallback(fname='end',at_end=True))
assert (Path.cwd()/'tmp/models/end.pth').exists()
learn.fit(n_epoch=2, cbs=SaveModelCallback(every_epoch=True))
for i in range(2): assert (Path.cwd()/f'tmp/models/model_{i}.pth').exists()
shutil.rmtree(Path.cwd()/'tmp')
learn.fit(n_epoch=4, cbs=SaveModelCallback(every_epoch=2))
for i in range(4):
if not i%2: assert (Path.cwd()/f'tmp/models/model_{i}.pth').exists()
else: assert not (Path.cwd()/f'tmp/models/model_{i}.pth').exists()
shutil.rmtree(Path.cwd()/'tmp')
#|export
class ReduceLROnPlateau(TrackerCallback):
"A `TrackerCallback` that reduces learning rate when a metric has stopped improving."
order=TrackerCallback.order+2
def __init__(self, monitor='valid_loss', comp=None, min_delta=0., patience=1, factor=10., min_lr=0, reset_on_fit=True):
super().__init__(monitor=monitor, comp=comp, min_delta=min_delta, reset_on_fit=reset_on_fit)
self.patience,self.factor,self.min_lr = patience,factor,min_lr
def before_fit(self): self.wait = 0; super().before_fit()
def after_epoch(self):
"Compare the value monitored to its best score and reduce LR by `factor` if no improvement."
super().after_epoch()
if self.new_best: self.wait = 0
else:
self.wait += 1
if self.wait >= self.patience:
old_lr = self.opt.hypers[-1]['lr']
for h in self.opt.hypers: h['lr'] = max(h['lr'] / self.factor, self.min_lr)
self.wait = 0
if self.opt.hypers[-1]["lr"] < old_lr:
print(f'Epoch {self.epoch}: reducing lr to {self.opt.hypers[-1]["lr"]}')
learn = synth_learner(n_trn=2)
learn.fit(n_epoch=4, lr=1e-7, cbs=ReduceLROnPlateau(monitor='valid_loss', min_delta=0.1, patience=2))
#|hide
test_eq(learn.opt.hypers[-1]['lr'], 1e-8)
learn = synth_learner(n_trn=2)
learn.fit(n_epoch=6, lr=5e-8, cbs=ReduceLROnPlateau(monitor='valid_loss', min_delta=0.1, patience=2, min_lr=1e-8))
#|hide
test_eq(learn.opt.hypers[-1]['lr'], 1e-8)
#|hide
from nbdev.export import notebook2script
notebook2script()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TrackerCallback -
Step2: When implementing a Callback that has behavior that depends on the best value of a metric or loss, subclass this Callback and use its best (for best value so far) and new_best (there was a new best value this epoch) attributes. If you want to maintain best over subsequent calls to fit (e.g., Learner.fit_one_cycle), set reset_on_fit = False.
Step3: EarlyStoppingCallback -
Step4: comp is the comparison operator used to determine if a value is better than another (defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise) and min_delta is an optional float that requires a new value to go over the current best (depending on comp) by at least that amount. patience is the number of epochs you're willing to wait without improvement.
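A framework-agnostic sketch of the patience/min_delta bookkeeping described above (this is not fastai's code; it mimics the monitored-loss case where comp is np.less, so a loss must drop by more than min_delta to count as a new best):

```python
def early_stop_epoch(losses, min_delta=0.1, patience=2):
    """Return the epoch at which training would stop, or None if it never stops."""
    best, wait = float('inf'), 0
    for epoch, val in enumerate(losses):
        if val + min_delta < best:   # improvement must exceed min_delta
            best, wait = val, 0
        else:
            wait += 1
            if wait >= patience:     # waited `patience` epochs with no new best
                return epoch
    return None
```

With a nearly flat loss curve this stops `patience` epochs after the last qualifying improvement, mirroring the behavior of EarlyStoppingCallback above.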
Step5: SaveModelCallback -
Step6: comp is the comparison operator used to determine if a value is better than another (defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise) and min_delta is an optional float that requires a new value to go over the current best (depending on comp) by at least that amount. The model will be saved in learn.path/learn.model_dir/name.pth — every epoch if every_epoch=True, every nth epoch if an integer is passed to every_epoch, or at each improvement of the monitored quantity otherwise.
Step7: ReduceLROnPlateau
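A toy sketch of the reduction rule applied by ReduceLROnPlateau above: once the monitored value plateaus for `patience` epochs, the learning rate is divided by `factor`, but never pushed below `min_lr`:

```python
def reduce_lr(lr, factor=10.0, min_lr=1e-8):
    # one plateau-triggered reduction step
    return max(lr / factor, min_lr)
```

These values match the notebook's own checks: starting from 1e-7 one reduction lands at 1e-8, and from 5e-8 the floor of min_lr=1e-8 kicks in.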
Step8: Each of these three derived TrackerCallbacks (SaveModelCallback, ReduceLROnPlateu, and EarlyStoppingCallback) all have an adjusted order so they can each run with each other without interference. That order is as follows
|
2,951
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Data generation obtained from http://cs231n.github.io/neural-networks-case-study/
def generate_data(N, K):
D = 2 # Dimensionality
X = np.zeros((N * K, D)) # Data matrix (each row = single example)
y = np.zeros(N * K, dtype='uint8') # Class labels
    for j in range(K):
ix = range(N * j, N * (j + 1))
r = np.linspace(0.0, 1, N) # radius
t = np.linspace(j * 8, (j + 1) * 8, N) + np.random.randn(N) * 0.2 # theta
X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
y[ix] = j
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral) # Visualize
plt.xlim([-1,1])
plt.ylim([-1,1])
return X, y
# Example:
generate_data(300, 3);
# Initialization
def initialize(num_inputs, num_hidden):
# +1 is added to account for the bias trick.
    W1 = np.random.randn(num_inputs + 1, num_hidden) * np.sqrt(2.0 / (num_inputs + 1))
W2 = np.random.randn(num_hidden + 1, num_classes) * np.sqrt(2.0 / (num_hidden + 1))
return W1, W2
# Forward propagate
def forw_prop(X, W1, W2):
# Hidden layer.
h11 = X.dot(W1)
h12 = np.maximum(0, h11) # ReLU nonlinearity
# Bias trick.
h12 = np.c_[h12, np.ones(h12.shape[0])]
# Final layer.
f = h12.dot(W2)
# Softmax transformation.
probs = np.exp(f)
prob_sums = probs.sum(axis=1, keepdims=True)
probs /= prob_sums
return probs, h11, h12
# Compute the softmax loss http://cs231n.github.io/linear-classify/#softmax
def calc_loss(probs, y, W1, W2, reg):
data_loss = -np.mean(np.log(probs[range(y.shape[0]), y]))
reg_loss = reg * 0.5 * (np.sum(W1 * W1) + np.sum(W2 * W2))
return data_loss + reg_loss
# Backpropagate
def back_prop(probs, X, y, h11, h12, W1, W2, reg):
# Partial derivatives at the final layer.
dL_df = probs
dL_df[range(y.shape[0]), y] -= 1
    dL_df /= y.shape[0]  # average over the batch (avoid relying on the global num_train)
# Propagate back to the weights, along with the regularization term.
dL_dW2 = h12.T.dot(dL_df) + reg * W2
# At the output of the hidden layer.
dL_dh12 = dL_df.dot(W2.T)
# Propagate back through nonlinearities to the input of the layer.
dL_dh11 = dL_dh12[:,:-1] # Account for bias trick.
dL_dh11[h11 < 0] = 0 # ReLU
dL_dW1 = X.T.dot(dL_dh11) + reg * W1
return dL_dW1, dL_dW2
def accuracy(X, y, W1, W2):
h = np.maximum(0, X.dot(W1))
h = np.c_[h, np.ones(h.shape[0])]
f = h.dot(W2)
return np.mean(np.argmax(f, axis=1) == y)
# Hyperparameters.
reg = 0.001
step_size = 0.1
num_hidden = 200
data_per_class = 300 # Number of points per class
num_classes = 3 # Number of classes
X, y = generate_data(data_per_class, num_classes)
num_inputs = X.shape[1]
W1, W2 = initialize(num_inputs, num_hidden)
# Preprocess the data.
# Split data into train and test data.
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33)
num_train = X_train.shape[0]
num_test = X_test.shape[0]
# The bias trick.
X_train = np.c_[X_train, np.ones(num_train)]
X_test = np.c_[X_test, np.ones(num_test)]
# Now we can perform gradient descent.
for i in range(5001):
probs, h11, h12 = forw_prop(X_train, W1, W2)
loss = calc_loss(probs, y_train, W1, W2, reg)
dW1, dW2 = back_prop(probs, X_train, y_train, h11, h12, W1, W2, reg)
W1 -= step_size * dW1
W2 -= step_size * dW2
if i % 500 == 0:
        print("Step %4d. Loss=%.3f, train accuracy=%.5f" % (i, loss, accuracy(X_train, y_train, W1, W2)))
print("Test accuracy=%.5f" % accuracy(X_test, y_test, W1, W2))
# Plot the resulting classifier on the test data.
h = 0.02
x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
h = np.maximum(0, np.dot(np.c_[xx.ravel(), yy.ravel(), np.ones(xx.ravel().shape)], W1))
h = np.c_[h, np.ones(h.shape[0])]
Z = np.dot(h, W2)
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max());
# Tinkering
a = np.array([[-1, 4, 5], [2, 8, 0]])
print(a)
print(np.sum(a, axis=1))
a / a.sum(axis=1, keepdims=True).astype(float)
print(np.maximum(0, a))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The initial variance is scaled by a factor of $\sqrt[]{\frac{2}{N}}$, where $N$ is the number of inputs to each neuron in the layer (as per http
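A quick empirical check of that scaling (with a hypothetical layer width N): after multiplying standard-normal draws by $\sqrt{2/N}$, the weights' standard deviation should come out near $\sqrt{2/N}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
# He-style initialization: standard normal scaled by sqrt(2/N)
W = rng.standard_normal((N, N)) * np.sqrt(2.0 / N)
```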
Step2: We use the notation $h_{i,1}$, $h_{i,2}$ to specify the output of $i-$th layer before and after a nonlinearity, respectively. For example, if the $i-$th hidden layer contains a sigmoid activation function, then $$h_{i,1}=h_{i-1,2}W_i$$ and $$h_{i,2}=\sigma({h_{i,1}}).$$ Additionally, the bias trick could be applied to the output with nonlinearity in order to implicitly account for the bias in the weights.
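The bias trick mentioned above fits in one line of numpy: append a column of ones to the layer's output (or input), so the bias becomes the last row of the next weight matrix instead of a separate parameter:

```python
import numpy as np

# Toy 2-sample, 2-feature input
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# Append a column of ones — the "bias trick"
Xb = np.c_[X, np.ones(X.shape[0])]
```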
Step3: The loss function is defined as the mean of sample losses $$ L_i = -f_i[y_i] + \log\Sigma_j\, e^{f_i[j]},\; \text{where }\; f_i=x_i^TW.$$ The final loss is then
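A worked single-sample version of that loss, using the equivalent (and numerically convenient) form $L_i = \log\Sigma_j\, e^{f_i[j]} - f_i[y_i]$, with made-up class scores:

```python
import numpy as np

f = np.array([2.0, 1.0, 0.1])  # hypothetical scores for one sample
y = 0                          # true class index
loss = np.log(np.exp(f).sum()) - f[y]
```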
Step4: During forward prop, we compose multiple functions to get the final output. Those functions could be simple dot products in case of weights, or complicated nonlinear functions within neurons. An important question when doing backpropagation then is w.r.t. what to differentiate when applying the chain rule?
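One practical answer is to sanity-check each hand-derived piece of the chain rule numerically. A tiny example (toy values): for $f(w) = \mathrm{relu}(x\,w)$ the derivative is $\partial f/\partial w = x$ whenever $x\,w > 0$, and a centered finite difference should agree:

```python
x, w, eps = 2.0, 0.5, 1e-6
analytic = x if x * w > 0 else 0.0
numeric = (max(0.0, x * (w + eps)) - max(0.0, x * (w - eps))) / (2 * eps)
```

The same check generalizes to full weight matrices, which is how backprop code like back_prop below is usually validated.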
Step5: Putting it all together
Step6: Training steps
Step7: Visualization
Step8: Tinkering with numpy
|
2,952
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (10, 20)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
from collections import Counter
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
# TODO: Implement Function
source_id_text = [[source_vocab_to_int[word] for word in source_sentence.split()] for source_sentence in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in (target_sentence + ' <EOS>').split()] for target_sentence in target_text.split('\n')]
return source_id_text, target_id_text
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_text_to_ids(text_to_ids)
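A toy illustration (with a made-up three-word vocabulary) of the conversion text_to_ids performs: each target sentence gets an `<EOS>` token appended before the word-to-id lookup.

```python
# Hypothetical vocabulary mapping — not the real one built by the helper
vocab = {'new': 0, 'jersey': 1, '<EOS>': 2}
sentence = 'new jersey'
ids = [vocab[w] for w in (sentence + ' <EOS>').split()]
```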
"""DON'T MODIFY ANYTHING IN THIS CELL"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""DON'T MODIFY ANYTHING IN THIS CELL"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate, keep probability)
    """
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None],name='input')
targets = tf.placeholder(tf.int32, [None, None],name='targets')
learning_rate = tf.placeholder(tf.float32,name='learning_rate')
keep_prob = tf.placeholder(tf.float32,name='keep_prob')
return inputs, targets, learning_rate, keep_prob
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_process_decoding_input(process_decoding_input)
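A plain-numpy picture (with toy token ids) of what process_decoding_input does: drop the last token of every target sequence and prepend the `<GO>` id, so the decoder is fed the sequence shifted right by one step.

```python
import numpy as np

go_id = 1  # hypothetical id for '<GO>'
batch = np.array([[4, 5, 6],
                  [7, 8, 9]])
ending = batch[:, :-1]  # the tf.strided_slice above
dec_input = np.concatenate([np.full((2, 1), go_id), ending], axis=1)
```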
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :return: RNN state
    """
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
enc_layers = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
output, enc_state = tf.nn.dynamic_rnn(enc_layers,rnn_inputs,dtype=tf.float32)
return enc_state
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Train Logits
    """
train_decoder = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
dec_cell_dropout = tf.contrib.rnn.DropoutWrapper(dec_cell,output_keep_prob = keep_prob)
train_pred,_,_ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell_dropout, train_decoder, dec_embed_input, sequence_length, scope=decoding_scope)
return output_fn(train_pred)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param maximum_length: The maximum allowed time steps to decode
    :param vocab_size: Size of vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
    :param output_fn: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: Inference Logits
    """
infer_decoder = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings ,
start_of_sequence_id,end_of_sequence_id,maximum_length,
vocab_size)
#dec_cell_dropout = tf.contrib.rnn.DropoutWrapper(dec_cell,output_keep_prob = keep_prob)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dec_layers = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, activation_fn = None, scope=decoding_scope)
train_logits = decoding_layer_train(encoder_state,dec_layers,dec_embed_input,sequence_length,decoding_scope,output_fn,keep_prob)
with tf.variable_scope("decoding" , reuse=True) as decoding_scope:
inference_logits = decoding_layer_infer(encoder_state,dec_layers,dec_embeddings,
target_vocab_to_int['<GO>'],target_vocab_to_int['<EOS>'],
sequence_length, vocab_size, decoding_scope, output_fn, keep_prob
)
return train_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.truncated_normal([target_vocab_size, dec_embedding_size],stddev=0.1))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state,
target_vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob)
return [train_logits, infer_logits]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 30
decoding_embedding_size = 30
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
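The gradient-clipping step above can be sketched outside of TensorFlow. This is a minimal NumPy illustration of the same element-wise clip; the gradient values below are hypothetical:

```python
import numpy as np

def clip_gradients(grads, clip_value=1.0):
    # Mirror tf.clip_by_value(grad, -1., 1.): clip each gradient
    # element-wise and drop entries whose gradient is None.
    return [np.clip(g, -clip_value, clip_value) for g in grads if g is not None]

grads = [np.array([0.5, -3.0, 2.0]), None, np.array([0.25])]
clipped = clip_gradients(grads)
print(clipped)  # [array([ 0.5, -1. ,  1. ]), array([0.25])]
```

Clipping keeps every gradient component inside [-1, 1], which prevents a single exploding gradient from destabilizing the update.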
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
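A minimal NumPy sketch of what get_accuracy does: pad both arrays to the longer sequence length, then compare the target ids against the argmax over the vocabulary axis. The toy batch below is hypothetical:

```python
import numpy as np

def padded_accuracy(target, logits):
    # Pad target ids and logits to a common sequence length with zeros,
    # then compare ids against argmax over the vocabulary axis.
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(logits, [(0, 0), (0, max_seq - logits.shape[1]), (0, 0)], 'constant')
    return np.mean(np.equal(target, np.argmax(logits, 2)))

# Toy batch: 1 sequence of 2 steps over a vocabulary of size 3.
# The logits predict ids [2, 0], matching the target exactly.
target = np.array([[2, 0]])
logits = np.array([[[0.1, 0.2, 0.7],
                    [0.9, 0.05, 0.05]]])
print(padded_accuracy(target, logits))  # 1.0
```

Note that zero-padding a shorter logits array makes its padded steps predict id 0, so padding quietly rewards targets that are themselves 0-padded.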
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
return [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>'] for word in sentence.lower().split()]
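A quick usage sketch of the same word-to-id lookup with a toy vocabulary; the vocabulary below is hypothetical, not the one loaded from the preprocessed data:

```python
# Hypothetical toy vocabulary for illustration
vocab_to_int = {'new': 0, 'jersey': 1, 'is': 2, '<UNK>': 3}

def to_seq(sentence, vocab_to_int):
    # Lowercase, split on whitespace, and map out-of-vocabulary words to <UNK>
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

print(to_seq('New Jersey is GREAT', vocab_to_int))  # [0, 1, 2, 3]
```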
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'new jersey is never rainy during the autumn , but it is sometimes mild in january .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
2,953
|
<ASSISTANT_TASK:>
Python Code:
%%capture
import sys
if not 'chempy' in sys.modules:
!pip install chempy
from chempy import ReactionSystem, Substance
from chempy.kinetics.ode import get_odesys
from collections import defaultdict
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 12}) # Feel free to change the font size in the plots.
from ipywidgets import interact
import datetime
import csv
##################################
# 1. Select the type of reaction #
# This is a required section. #
##################################
# Choose the type of polymerization you want to simulate
# by setting the value of Poly_type as:
# 'conven' for conventional radical polymerization initiated by a thermal initiator, such as AIBN;
# 'normal' for normal ATRP;
# 'arget' for ARGET and AGET ATRP;
# 'eatrp' for electrochemically mediated ATRP;
# 'sara' for SARA ATRP;
# 'cfa' for ATRP by continuous feeding of activators;
# 'icar' for ICAR ATRP.
Poly_type = 'normal'
############
# Optional #
############
Monomer = 'MA'
Solvent = 'MeCN' # For bulk polymerization, the value can be set as 'none'.
Ligand = 'TPMA' # For conventional radical polymerization, this value could be set as 'none'.
Initiator = 'EBrP' # For ICAR ATRP, this value could be set in the form of 'EBrP-AIBN'.
Temperature = 40
# The name of the monomer, ligand, initiator ,and the temperature will not be used in the simulation.
# They are used only as a record in the exported file.
##############################
# 2. Set the reaction time #
# This is a required section.#
##############################
# Set reaction time limit in seconds.
# For a 10 hour reaction, you can set the time as 36000 or 10*3600.
react_time = 18000
###########################################################
# 3. Set the initial concentrations and rate coefficients #
###########################################################
##############################
# 3.1. Monomer concentration #
# This is a required section.#
##############################
# Set the initial concentration of the monomer. All the concentrations are in mol/L unless otherwise specified.
c0_M = 5.5
# Set the molecular weight of the monomer.
MM = 86.09
##################################################################
# 3.2. Common reactions for all kinds of radical polymerizations #
# This is a required section. #
##################################################################
# Set the rate coefficients for propagation and termination (by coupling or by disproportionation).
# Propagation rate coefficients could be found here (http://www.ceic.unsw.edu.au/kpcalculator/kp.html)
k_p = 22000
k_tc = 1e8
k_td = 1e7
# Set the rate coefficients for the addition of the first monomer to the primary radical;
# the termination between primary radicals; the termination between a primary radical and a propagating radical.
k_p_R = 22000
k_t_R = 2.5e9
k_t_R_Pn = 1e8
# Set the rate coefficient for chain transfer to monomer.
k_tr_M = 0
# If a styrenic type of monomer is used,
# set the rate coefficient of the thermal initiation of the monomer.
# Leave this value as 0 when you do not want to consider thermal initiation.
# For styrene, the value can be calculated as 2.2e5*exp(-13810/T), where T is the temperature in K.
k_th_S = 0
##############################################################
# 3.3. For conventional radical polymerization and ICAR ATRP #
##############################################################
# If a thermal initiator (TI), e.g., AIBN, BPO, is used in your reaction,
# set the initial concentration of the initiator.
c0_TI = 0.0025
# Set the decomposition rate coefficient and the initiation efficiency for the initiator.
k_d_TI = 2.63e-5
f_TI = 0.8
##############################
# 3.4. For all kinds of ATRP #
##############################
# Set the initial concentration of RX.
c0_RX = 0.0275
# Set the initial concentration of Cu(I) and Cu(II).
# Assume there is a sufficient amount of ligand to coordinate with all Cu(I) and Cu(II),
# and the concentration of free ligand does not affect the reaction kinetics.
c0_CuI = 0.00275
c0_CuII = 0.00275
# Set the rate coefficients for activation of RX; deactivation of radical R;
k_a_0_atrp = 6
k_d_0_atrp = 1e8
# Set the rate coefficient for activation of a polymer with a halogen chain end;
# deactivation of a propagating radical.
k_a_atrp = 6
k_d_atrp = 1e8
##################################
# 3.4.1. For AGET and ARGET ATRP #
##################################
# Set the initial concentration of the reducing agent.
c0_Reduc = 2.5e-3
# Set the reduction rate coefficient
k_reduc = 1e-1
####################
# 3.4.2. For eATRP #
####################
# Set the rate coefficient of electronic reduction.
# This will mimic an eATRP process with constant voltage.
# CuII is converted to CuI at a rate of k_e_reduc*[CuII].
k_e_reduc = 1e-7
###################
# 3.4.3. For SARA #
###################
# set the initial concentration of Cu(0) in cm^2/mL (surface area per unit volume).
# Assume the Cu(0) wire or foil is thick enough that there is no apparent consumption during the polymerization.
c0_Cu0 = 2
# Set the rate coefficients of Cu(0) activation and comproportionation.
# Deactivation by Cu(I) and disproportionation are neglected.
k_comp = 1e-4
k_a_Cu0 = 1e-4
#######################################################
# 3.4.4. For ATRP by continuous feeding of Activators #
#######################################################
# Set the rate coefficient to mimic different feeding rates.
# CuI concentration will increase at a constant rate of k_cfa.
# Volume change is neglected.
k_cfa = 1.4e-8
# Initiate the reaction system with null value.
# The rsys_orig is the system of the actual reactions.
# The rsys_pseudo includes the pseudo reactions with pseudo species
# which are used to introduce 1st and 2nd order moments
# and to adjust the reactions to take into account the initiation efficiency, etc.
rsys_orig = ReactionSystem.from_string(
, substance_factory=Substance)
rsys_pseudo = ReactionSystem.from_string(
, substance_factory=Substance)
# Initial concentrations of monomer, dead chains, radicals, and pseudo species of moments.
c0 = defaultdict(float, {'M': c0_M, 'D': 0, 'PnD': 0, 'PnDPn': 0, 'R': 0, 'Pn': 0, 'M1_Pn': 0, \
'M1_PnX': 0, 'M1_PnD': 0, 'M1_PnDPn': 0, 'M2_total': 0})
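The moment bookkeeping above feeds the molecular-weight calculations later in the notebook: the number-average Mn is the 1st moment over the 0th moment times the monomer molar mass, and Mw is the 2nd over the 1st. A numeric sketch with made-up moment values (only MM matches the setup above; the moments are hypothetical):

```python
MM = 86.09    # monomer molar mass (g/mol), matching the setup above
mu0 = 0.01    # hypothetical 0th moment: total chain concentration (mol/L)
mu1 = 0.50    # hypothetical 1st moment: monomer units bound in chains (mol/L)
mu2 = 30.0    # hypothetical 2nd moment of the chain-length distribution

Mn = mu1 / mu0 * MM       # number-average molecular weight
Mw = mu2 / mu1 * MM       # weight-average molecular weight
dispersity = Mw / Mn      # Mw/Mn, the breadth of the distribution
print(Mn, Mw, dispersity)
```

With these values the average degree of polymerization is 50, so Mn is about 4304 g/mol with a dispersity of 1.2.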
# Add propagation and termination reactions to the reaction system.
rsys_orig += ReactionSystem.from_string(f
R + M -> Pn; {k_p_R}
Pn + M -> Pn; {k_p}
R + R -> D + D; {k_t_R}
Pn + R -> PnD + D; {k_t_R_Pn}
Pn + Pn -> PnD + PnD; {k_td}
Pn + Pn -> PnDPn; {k_tc}
, substance_factory=Substance)
rsys_pseudo += ReactionSystem.from_string(f
R + M -> R + M + M1_Pn + M2_total; {k_p_R}
Pn + M -> Pn + M + M1_Pn + M2_total; {k_p}
M1_Pn + M -> M1_Pn + M + M2_total; {2*k_p}
M1_Pn + R -> M1_PnD + R; {k_t_R_Pn}
M1_Pn + Pn -> M1_PnD + Pn; {2*k_td}
M1_Pn + Pn -> M1_PnDPn + Pn; {2*k_tc}
M1_Pn + M1_Pn -> M1_Pn + M1_Pn + M2_total; {2*k_tc}
, substance_factory=Substance)
# To include chain transfer to monomer
if k_tr_M !=0:
rsys_orig += ReactionSystem.from_string(f
Pn + M -> PnD + R; {k_tr_M}
, substance_factory=Substance)
rsys_pseudo += ReactionSystem.from_string(f
M1_Pn + M -> M1_PnD + M; {k_tr_M}
, substance_factory=Substance)
# To include the thermal initiation for styrenic monomers
if k_th_S != 0:
rsys_orig += ReactionSystem.from_string(f
M + M + M -> R + R; {k_th_S}
, substance_factory=Substance)
# For all kinds of ATRP
if Poly_type != 'conven':
c0.update({'RX': c0_RX, 'PnX':0, 'CuI': c0_CuI, 'CuII': c0_CuII})
rsys_orig += ReactionSystem.from_string(f
CuI + RX -> CuII + R; {k_a_0_atrp}
CuII + R -> CuI + RX; {k_d_0_atrp}
CuI + PnX -> CuII + Pn; {k_a_atrp}
CuII + Pn -> CuI + PnX; {k_d_atrp}
, substance_factory=Substance)
rsys_pseudo += ReactionSystem.from_string(f
M1_PnX + CuI -> M1_Pn + CuI; {k_a_atrp}
M1_Pn + CuII -> M1_PnX + CuII; {k_d_atrp}
, substance_factory=Substance)
# For AGET and ARGET ATRP
if Poly_type == 'arget':
c0.update({'Reduc': c0_Reduc, 'ReducX':0})
rsys_orig += ReactionSystem.from_string(f
Reduc + CuII -> ReducX + CuI; {k_reduc}
, substance_factory=Substance)
# For eATRP
if Poly_type == 'eatrp':
c0.update({'elec': 1})
rsys_orig += ReactionSystem.from_string(f
elec + CuII -> CuI + elec; {k_e_reduc}
, substance_factory=Substance)
# For SARA ATRP
if Poly_type == 'sara':
c0.update({'Cu0': c0_Cu0})
rsys_orig += ReactionSystem.from_string(f
Cu0 + CuII -> CuI + CuI + Cu0; {k_comp}
Cu0 + RX -> CuI + R + Cu0; {k_a_Cu0}
, substance_factory=Substance)
# For ATRP by continuous feeding of activators
if Poly_type == 'cfa':
c0.update({'CuIsour': 1})
rsys_orig += ReactionSystem.from_string(f
CuIsour -> CuI + CuIsour; {k_cfa}
, substance_factory=Substance)
# For conventional radical polymerization and ICAR ATRP
# The thermal initiator is consumed with a rate coefficient k_d_TI;
# however, not all of the primary radicals will be involved in propagation.
# To take into account the initiation efficiency, pseudo reactions and a pseudo species 'PR' are introduced.
if Poly_type == 'conven' or Poly_type == 'icar':
c0.update({'TI': c0_TI, 'PR': 0})
rsys_orig += ReactionSystem.from_string(f
TI -> R + R; {k_d_TI}
, substance_factory=Substance)
rsys_pseudo += ReactionSystem.from_string(f
TI -> R + R; {-k_d_TI}
TI -> PR + PR; {k_d_TI}
TI -> TI + R + R; {f_TI*k_d_TI}
, substance_factory=Substance)
# Show the reactions and the rate coefficients in the system
rsys_orig
# List the initial concentrations of reagents
for key in c0:
if c0[key] != 0:
print(key, ': ', c0[key])
# Combine the actual and the pseudo reaction systems
rsys = rsys_orig + rsys_pseudo
# Get the differential equation system from the reactions.
odesys, extra = get_odesys(rsys)
# List the differential equations
for index, exp in enumerate(odesys.exprs):
if odesys.names[index] != 'PR':
print(odesys.names[index], ': ', f'dy_{index}/dt', '= ', exp)
# Integration
tout = sorted(np.concatenate((np.linspace(0, react_time), np.logspace(0, np.floor(np.log10(react_time))))))
result = odesys.integrate(tout, c0, integrator='scipy', method='lsoda', atol=1e-12, rtol=1e-6)
# Plot the concentrations of species in the reaction system vs time.
# The change of monomer concentration is not included here since monomer conversion will be plotted later.
labels=['(a)','(b)','(c)']
i=0
fig1, axes = plt.subplots(1, 2, figsize=(12, 5))
for ax in axes:
_ = result.plot(names=[k for k in rsys_orig.substances if k != 'CuIsour' \
and k != 'M' and k!= 'elec' and k != 'Cu0'], ax=ax)
_ = ax.legend(loc='best',prop={'size': 12}) # Set the font size of the legend here.
_ = ax.set_xlabel('Time (s)')
_ = ax.set_ylabel('Concentration')
_ = ax.text(-0.1, 1, labels[i], transform=ax.transAxes, fontweight='bold', va='top', ha='right')
i+=1
axes[0].ticklabel_format(axis="x", style="sci", scilimits=(0,0))
_ = axes[1].set_ylim([1e-10, 1e1])
_ = axes[1].set_xscale('log')
_ = axes[1].set_yscale('log')
_ = fig1.tight_layout()
# Get concentrations and calculate conversion, Mn and Mw/Mn.
ConcM = result[1][:,result.odesys.names.index('M')]
ConcD = result[1][:,result.odesys.names.index('D')]
ConcPnD = result[1][:,result.odesys.names.index('PnD')]
ConcPnDPn = result[1][:,result.odesys.names.index('PnDPn')]
ConcPn = result[1][:,result.odesys.names.index('Pn')]
ConcM2_total = result[1][:,result.odesys.names.index('M2_total')]
if Poly_type != 'conven':
ConcPnX = result[1][:,result.odesys.names.index('PnX')]
else:
ConcPnX = np.zeros(len(result[0]))
ConvM =(ConcM[0]-ConcM)/ConcM[0]
LnM0_M = np.log(ConcM[0]/ConcM)
Mn = np.zeros(len(result[0]))
Mw = np.zeros(len(result[0]))
Mw_Mn = np.ones(len(result[0]))
Mn[1:] = (ConcM[0]-ConcM[1:])/(ConcPnX[1:] + ConcPnD[1:] + ConcPnDPn[1:] + ConcPn[1:])*MM
Mw[1:] = ConcM2_total[1:]/(ConcM[0]-ConcM[1:])*MM
Mw_Mn[1:] = Mw[1:]/Mn[1:]
# Get mole percent of end group loss, i.e., Tmol%.
if Poly_type != 'conven':
x=result.odesys.names.index('RX')
Tmol = 100*(ConcD + ConcPnD + 2*ConcPnDPn)/result[1][0,x]
else:
Tmol = 100*np.ones(len(result[0]))
result_cal = [result[0],ConcM,ConvM,LnM0_M,Mn,Mw_Mn,Tmol]
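As a sanity reference for the semilog plot below: with a constant radical concentration the monomer follows pseudo-first-order kinetics, dM/dt = -k_app*[M], so ln([M]0/[M]) = k_app*t is a straight line. The apparent rate constant here is hypothetical, not fitted to the simulation:

```python
import numpy as np

k_app = 1e-4                      # hypothetical apparent rate constant (1/s)
t = np.linspace(0, 18000, 5)      # reaction time grid (s)
M = 5.5 * np.exp(-k_app * t)      # monomer concentration, starting from c0_M = 5.5 mol/L
ln_ratio = np.log(5.5 / M)
# For ideal pseudo-first-order kinetics this equals k_app * t,
# which is why a linear ln([M]0/[M]) vs. time plot indicates a
# constant radical concentration.
```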
# Monomer conversion vs. time and first order kinetic plots.
fig2, axes = plt.subplots(1, 2, figsize=(10, 5))
i=0
for ax in axes:
ax.ticklabel_format(axis="x", style="sci", scilimits=(0,0))
_ = ax.plot(result_cal[0], result_cal[i+2])
_ = ax.text(-0.1, 1, labels[i], transform=ax.transAxes, fontweight='bold', va='top', ha='right')
_ = ax.grid()
i += 1
_ = axes[0].set(xlabel = 'Time (s)', ylabel='Conversion')
_ = axes[1].set(xlabel = 'Time (s)', ylabel='Ln([M]0/[M])')
_ = fig2.tight_layout()
# Plot Mn, Mw/Mn and Tmol% vs. conversion.
fig3, axes = plt.subplots(1, 3, figsize=(15, 5))
i=0
for ax in axes:
_ = ax.plot(result_cal[2][1:], result_cal[i+4][1:])
_ = ax.text(-0.1, 1, labels[i], transform=ax.transAxes, fontweight='bold', va='top', ha='right')
_ = ax.grid()
i += 1
_ = axes[0].set(xlabel = 'Conversion', ylabel='Mn')
axes[0].ticklabel_format(axis="y", style="sci", scilimits=(0,0))
_ = axes[1].set(xlabel = 'Conversion', ylabel='Mw/Mn')
_ = axes[2].set(xlabel = 'Conversion', ylabel='Tmol% (%)')
_ = fig3.tight_layout()
if Poly_type == 'conven':
print('Tmol% does not apply to conventional radical polymerization.')
# Export the result to a CSV file.
# If you run this program on your own computer, the CSV file is saved in the same folder as this ipynb file.
# If you run this program online at https://colab.research.google.com/,
# you can find a menu bar on the left side.
# Click on the fourth one called "Files" and you will see the exported files.
now = datetime.datetime.now()
filename = f'{now.strftime("%Y-%m-%d-%Hh%Mm%Ss")}-ATRP-Simulation-{Poly_type}-{Monomer}-{Solvent}-{Ligand}-{Initiator}-{Temperature}C.csv'
with open(filename, 'w', newline='') as f:
thewriter = csv.writer(f)
for rxn in rsys_orig.rxns:
thewriter.writerow([rxn])
if Poly_type == 'conven' or Poly_type == 'icar':
thewriter.writerow([f'The initiation efficiency of the thermal initiator is {f_TI}.'])
for index, exp in enumerate(odesys.exprs):
if odesys.names[index] != 'PR':
thewriter.writerow([f'{odesys.names[index]}:',f'dy_{index}/dt = {exp}'])
if Poly_type == 'conven':
thewriter.writerow(['Tmol% does not apply to conventional radical polymerization.'])
thewriter.writerow(['time (s)']+[k for k in rsys.substances if k != 'PR']+['conversion']+['ln([M]0/[M])']\
+['Mn']+['Mw/Mn']+['Tmol% (%)'])
i=0
for concen in result[1]:
thewriter.writerow([result_cal[0][i]]+[concen[result.odesys.names.index(k)] for k in rsys.substances \
if k != 'PR'] +[ConvM[i]]+[LnM0_M[i]]+[Mn[i]]+[Mw_Mn[i]]+[Tmol[i]])
i+=1
# Export the plots as pdf and png files.
fig1.savefig(f'{now.strftime("%Y-%m-%d-%Hh%Mm%Ss")}-concentration.pdf', bbox_inches='tight')
fig1.savefig(f'{now.strftime("%Y-%m-%d-%Hh%Mm%Ss")}-concentration.png', bbox_inches='tight', dpi=600)
fig2.savefig(f'{now.strftime("%Y-%m-%d-%Hh%Mm%Ss")}-conversion.pdf', bbox_inches='tight')
fig2.savefig(f'{now.strftime("%Y-%m-%d-%Hh%Mm%Ss")}-conversion.png', bbox_inches='tight', dpi=600)
fig3.savefig(f'{now.strftime("%Y-%m-%d-%Hh%Mm%Ss")}-molecular_weight.pdf', bbox_inches='tight')
fig3.savefig(f'{now.strftime("%Y-%m-%d-%Hh%Mm%Ss")}-molecular_weight.png', bbox_inches='tight', dpi=600)
from IPython.display import HTML
HTML(
<iframe
src="https://docs.google.com/forms/d/e/1FAIpQLSf76pXZ_WKhBv8CZ7z4T2TUs5iQYyyfT_oXzmqOpd28bi9XfA/viewform?embedded=true"
width="60%"
height="1000px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Reaction conditions
Step17: 3. Construct the reaction system
Step18: Show the reaction equations and the initial concentrations of reagents
Step19: 4. Simulation
Step20: The differential equation system includes not only the real species, i.e., those that appear in the actual reactions, but also the pseudo species, e.g., the 1st and 2nd order moments.
Step21: 5. Results
Step22: Meanings of the species produced during the polymerization
Step23: 5.3. Molecular weight, molecular weight distribution and mole percent loss of chain-ends.
Step24: Note
Step26: 7. Feedback
|
2,954
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('data/running_example.csv', sep=';')
df
# number of cases
len(df['case_id'].unique())
# number of events
len(df)
import pm4py
log = pm4py.format_dataframe(df, case_id='case_id',activity_key='activity',
timestamp_key='timestamp')
log
pm4py.get_start_activities(log)
pm4py.get_end_activities(log)
log_xes = pm4py.read_xes('data/running_example.xes')
pm4py.get_start_activities(log_xes)
pm4py.get_end_activities(log_xes)
log.to_csv('running_example_exported.csv')
pm4py.write_xes(log, 'running_example_csv_exported_as_xes.xes')
df = pm4py.convert_to_dataframe(log_xes)
df.to_csv('running_example_xes_exported_as_csv.csv')
pm4py.write_xes(log_xes, 'running_example_exported.xes')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's inspect the small event log.
Step2: Formatting Data Frames
Step3: Observe that the column names are updated as expected.
Step4: The pm4py.get_start_activities() and pm4py.get_end_activities() both return a dictionary containing the activities
Step5: Exporting Event Data
Step6: Storing a Pandas DataFrame as a .xes file
Step7: Storing an Event Log object as a .csv file
Step8: Storing an Event Log Object as a .xes File
|
2,955
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
import tweepy
import numpy as np
import pandas as pd
from collections import Counter
from datetime import datetime
# Turn on retina mode for high-quality inline plot resolution
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('retina')
# Version of Python
import platform
platform.python_version()
# Import Twitter API keys
from credentials import *
# Helper function to connect to Twitter API
def twitter_setup():
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth)
return api
# Extract Twitter data
extractor = twitter_setup()
# Twitter user
twitter_handle = 'fastforwardlabs'
# Get most recent two hundred tweets
tweets = extractor.user_timeline(screen_name=twitter_handle, count=200)
print('Number of tweets extracted: {}.\n'.format(len(tweets)))
# Inspect attributes of tweepy object
print(dir(tweets[0])) # look at the first element/record
# What format is it in? answer: GMT, according to Twitter API
print(tweets[0].created_at)
# Create datetime index: convert to GMT then to Eastern daylight time EDT
tweet_dates = pd.DatetimeIndex([tweet.created_at for tweet in tweets], tz='GMT').tz_convert('US/Eastern')
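The GMT-to-Eastern conversion above can be checked with a couple of hand-picked timestamps. The dates below are hypothetical; during daylight saving time, US/Eastern is GMT-4:

```python
import pandas as pd
from datetime import datetime

# Two naive timestamps, treated as GMT the way the Twitter API reports them
stamps = [datetime(2017, 6, 1, 14, 30), datetime(2017, 6, 1, 23, 5)]
eastern = pd.DatetimeIndex(stamps, tz='GMT').tz_convert('US/Eastern')
print(list(eastern.hour))  # [10, 19]
```

Localizing to GMT first and then converting keeps the underlying instants unchanged while shifting the wall-clock hours used for binning.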
# Count the number of tweets per hour
num_per_hour = pd.DataFrame( { 'counts': Counter(tweet_dates.hour) })
# Create hours data frame
hours = pd.DataFrame({'hours': np.arange(24)})
# Merge data frame objects on common index, perform left outer join and fill NaN with zero-values
hour_counts = pd.merge(hours, num_per_hour, left_index=True, right_index=True, how='left').fillna(0)
hour_counts
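The left-join-plus-fillna pattern above is what guarantees a row for every hour even when no tweets fell in it; a minimal sketch with hypothetical tweet hours:

```python
import numpy as np
import pandas as pd
from collections import Counter

tweet_hours = [9, 9, 14]                       # hypothetical tweet hours
counts = pd.DataFrame({'counts': Counter(tweet_hours)})
hours = pd.DataFrame({'hours': np.arange(24)})
# Left outer join on the index keeps all 24 hours; missing hours become 0
filled = pd.merge(hours, counts, left_index=True,
                  right_index=True, how='left').fillna(0)
print(filled.loc[[0, 9, 14], 'counts'].tolist())  # [0.0, 2.0, 1.0]
```

Without the left join, hours with zero tweets would simply be missing from the plot's x-axis instead of showing as zero.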
# Count the number of tweets by day of the week
num_per_day = pd.DataFrame( { 'counts': Counter(tweet_dates.weekday) })
# Create days data frame
days = pd.DataFrame({'day': np.arange(7)})
# Merge data frame objects on common index, perform left outer join and fill NaN with zero-values
daily_counts = pd.merge(days, num_per_day, left_index=True, right_index=True, how='left').fillna(0)
# Flag the weekend from weekday tweets
weekend = np.where(tweet_dates.weekday < 5, 'weekday', 'weekend')
# Construct multiply-indexed DataFrame obj indexed by weekday/weekend and by hour
by_time = pd.DataFrame([tweet.created_at for tweet in tweets],
columns=['counts'],
index=tweet_dates).groupby([weekend, tweet_dates.hour]).count()
# Optionally, set the names attribute of the index
by_time.index.names=['daytype', 'hour']
# Show two-dimensional view of multiply-indexed DataFrame
by_time.unstack()
# Merge DataFrame on common index, perform left outer join and fill NaN with zero-values
by_time = pd.merge(hours, by_time.unstack(level=0), left_index=True, right_index=True, how='left').fillna(0)
# Show last five records
by_time.tail()
# Optional: Create xtick labels in Standard am/pm time format
xticks = pd.date_range('00:00', '23:00', freq='H', tz='US/Eastern').map(lambda x: pd.datetime.strftime(x, '%I %p'))
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
# Plot
ax = hour_counts.plot(x='hours', y='counts', kind='line', figsize=(12, 8))
ax.set_xticks(np.arange(24))
#ax.set_xticklabels(xticks, rotation=50)
#ax.set_title('Number of Tweets per hour')
#ax.set_xlabel('Hour')
#ax.set_ylabel('No. of Tweets')
#ax.set_yticklabels(labels=['0 ', '5 ', '10 ', '15 ', '20 ', '25 ', '30 ', '35 ', '40 '])
ax.tick_params(axis='both', which='major', labelsize=14)
ax.axhline(y=0, color='black', linewidth=1.3, alpha=0.7)
ax.set_xlim(left=-1, right=24)
ax.xaxis.label.set_visible(False)
now = datetime.strftime(datetime.now(), '%a, %Y-%b-%d at %I:%M %p EDT')
ax.text(x=-2.25, y=-5.5,
s = u"\u00A9" + 'THE_KLEI {} Source: Twitter, Inc. '.format(now),
fontsize=14, color='#f0f0f0', backgroundcolor='grey')
ax.text(x=-2.35, y=44, s="When does @{} tweet? - time of the day".format(twitter_handle),
fontsize=26, weight='bold', alpha=0.75)
ax.text(x=-2.35, y=42,
s='Number of Tweets per hour based-on 200 most-recent tweets as of {}'.format(datetime.strftime(datetime.now(), '%b %d, %Y')),
fontsize=19, alpha=0.85)
plt.show()
# Plot
daily_counts.index = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun']
daily_counts['counts'].plot(title='Daily tweet counts', figsize=(12, 8), legend=True)
plt.show()
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
# Plot
fig, ax = plt.subplots(2, 1, figsize=(14, 12))
# weekdays
by_time.loc[:, [('counts', 'weekday')]].plot(ax=ax[0], title='Weekdays', kind='line')
# weekends
by_time.loc[:, [('counts', 'weekend')]].plot(ax=ax[1], title='Weekend', kind='line')
ax[0].set_xticks(np.arange(24))
#ax[0].set_xticklabels(xticks, rotation=50)
ax[1].set_xticks(np.arange(24))
#ax[1].set_xticklabels(xticks, rotation=50)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tweet activity
Step2: Hmmm, what's this created_at attribute?
Step3: Hourly counts
Step4: Because there are hours of the day where there are no tweets, one must explicitly add any zero-count hours to the index.
Step5: Day of the week counts
Step6: Weekday vs weekend hourly counts
Step7: Visualize tweet counts
Step8: Let's see if we can "fancy-it-up" a bit by making it 538 blog-like. Note
Step9: By day of the week
Step10: By weekday and weekend
|
2,956
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
loans = graphlab.SFrame('lending-club-data.gl/')
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
loans
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are fewer risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
print "Number of features (after binarizing categorical variables) = %s" % len(features)
loans_data['grade.A']
print "Total number of grade.A loans : %s" % loans_data['grade.A'].sum()
print "Expected answer : 6422"
train_data, test_data = loans_data.random_split(.8, seed=1)
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
## YOUR CODE HERE
num_positive = sum([x == 1 for x in labels_in_node])
# Count the number of -1's (risky loans)
## YOUR CODE HERE
num_negative = sum([x == -1 for x in labels_in_node])
# Return the number of mistakes that the majority classifier makes.
## YOUR CODE HERE
return min(num_positive, num_negative)
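The same majority-class mistake count can be sketched with plain NumPy (a hypothetical stand-in for the SArray version above):

```python
import numpy as np

def num_mistakes(labels):
    # The majority classifier errs on every member of the minority class.
    if len(labels) == 0:
        return 0
    labels = np.asarray(labels)
    num_positive = int(np.sum(labels == 1))
    num_negative = int(np.sum(labels == -1))
    return min(num_positive, num_negative)

print(num_mistakes([-1, -1, 1, 1, 1]))  # minority class has 2 members
```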
# Test case 1
example_labels = graphlab.SArray([-1, -1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 1 failed... try again!'
# Test case 2
example_labels = graphlab.SArray([-1, -1, 1, 1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 2 failed... try again!'
# Test case 3
example_labels = graphlab.SArray([-1, -1, -1, -1, -1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 3 failed... try again!'
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
    # Note: Since error is always <= 1, we should initialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split['safe_loans'])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split['safe_loans'])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = float(left_mistakes + right_mistakes) / len(data)
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_feature, best_error = feature, error
return best_feature # Return the best feature we found
if best_splitting_feature(train_data, features, 'safe_loans') == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1 ## YOUR CODE HERE
else:
leaf['prediction'] = -1 ## YOUR CODE HERE
# Return the leaf node
return leaf
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1
# (Check if there are mistakes at current node.
# Recall you wrote a function intermediate_node_num_mistakes to compute this.)
if intermediate_node_num_mistakes(target_values) == 0: ## YOUR CODE HERE
print "Stopping condition 1 reached."
        # If there are no mistakes at the current node, make it a leaf node
return create_leaf(target_values)
# Stopping condition 2 (check if there are remaining features to consider splitting on)
if remaining_features == []: ## YOUR CODE HERE
print "Stopping condition 2 reached."
# If there are no remaining features to consider, make current node a leaf node
return create_leaf(target_values)
# Additional stopping condition (limit tree depth)
if current_depth >= max_depth: ## YOUR CODE HERE
print "Reached maximum depth. Stopping for now."
# If the max tree depth has been reached, make current node a leaf node
return create_leaf(target_values)
# Find the best splitting feature (recall the function best_splitting_feature implemented above)
## YOUR CODE HERE
splitting_feature = best_splitting_feature(data, remaining_features, 'safe_loans')
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1] ## YOUR CODE HERE
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target])
if len(right_split) == len(data):
print "Creating leaf node."
## YOUR CODE HERE
return create_leaf(right_split[target])
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
small_data_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 3)
if count_nodes(small_data_decision_tree) == 13:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there : 13'
# Make sure to cap the depth at 6 by using max_depth = 6
my_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth=6)
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
if annotate:
print 'left split'
return classify(tree['left'], x, annotate)
else:
if annotate:
print 'right split'
return classify(tree['right'], x, annotate)
### YOUR CODE HERE
test_data[0]
print 'Predicted class: %s ' % classify(my_decision_tree, test_data[0])
classify(my_decision_tree, test_data[0], annotate=True)
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
return (prediction != data['safe_loans']).sum() / float(len(data))
evaluate_classification_error(my_decision_tree, test_data)
def print_stump(tree, name = 'root'):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('.')
print ' %s' % name
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0] [{0} == 1] '.format(split_name)
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
print_stump(my_decision_tree)
print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature'])
print_stump(my_decision_tree['right'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['right']['right'], my_decision_tree['right']['splitting_feature'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the lending club dataset
Step2: Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical
Step4: Let's explore what the dataset looks like.
Step5: Subsample dataset to make sure classes are balanced
Step6: Note
Step7: Let's see what the feature columns look like now
Step8: Let's explore what one of these columns looks like
Step9: This column is set to 1 if the loan grade is A and 0 otherwise.
Step10: Train-test split
Step11: Decision tree implementation
Step12: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
Step13: Function to pick best feature to split on
Step14: To test your best_splitting_feature function, run the following code
Step15: Building the tree
Step16: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions
Step17: Here is a recursive function to count the nodes in your tree
Step18: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step19: Build the tree!
Step20: Making predictions with a decision tree
Step21: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
Step22: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class
Step23: Quiz question
Step24: Now, let's use this function to evaluate the classification error on the test set.
Step25: Quiz Question
Step26: Quiz Question
Step27: Exploring the left subtree of the left subtree
Step28: Quiz question
|
2,957
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
import numpy as np
import pandas as pd
import random
import thinkstats2
import thinkplot
transactions = pd.read_csv('mj-clean.csv', parse_dates=[5])
transactions.head()
def GroupByDay(transactions, func=np.mean):
    Groups transactions by day and computes the daily mean ppg.
transactions: DataFrame of transactions
returns: DataFrame of daily prices
grouped = transactions[['date', 'ppg']].groupby('date')
daily = grouped.aggregate(func)
daily['date'] = daily.index
start = daily.date[0]
one_year = np.timedelta64(1, 'Y')
daily['years'] = (daily.date - start) / one_year
return daily
def GroupByQualityAndDay(transactions):
Divides transactions by quality and computes mean daily price.
    transactions: DataFrame of transactions
returns: map from quality to time series of ppg
groups = transactions.groupby('quality')
dailies = {}
for name, group in groups:
dailies[name] = GroupByDay(group)
return dailies
dailies = GroupByQualityAndDay(transactions)
import matplotlib.pyplot as plt
thinkplot.PrePlot(rows=3)
for i, (name, daily) in enumerate(dailies.items()):
thinkplot.SubPlot(i+1)
title = 'Price per gram ($)' if i == 0 else ''
thinkplot.Config(ylim=[0, 20], title=title)
thinkplot.Scatter(daily.ppg, s=10, label=name)
if i == 2:
plt.xticks(rotation=30)
thinkplot.Config()
else:
thinkplot.Config(xticks=[])
import statsmodels.formula.api as smf
def RunLinearModel(daily):
model = smf.ols('ppg ~ years', data=daily)
results = model.fit()
return model, results
from IPython.display import display
for name, daily in dailies.items():
model, results = RunLinearModel(daily)
print(name)
display(results.summary())
def PlotFittedValues(model, results, label=''):
Plots original data and fitted values.
model: StatsModel model object
results: StatsModel results object
years = model.exog[:,1]
values = model.endog
thinkplot.Scatter(years, values, s=15, label=label)
thinkplot.Plot(years, results.fittedvalues, label='model', color='#ff7f00')
def PlotLinearModel(daily, name):
Plots a linear fit to a sequence of prices, and the residuals.
daily: DataFrame of daily prices
name: string
model, results = RunLinearModel(daily)
PlotFittedValues(model, results, label=name)
thinkplot.Config(title='Fitted values',
xlabel='Years',
xlim=[-0.1, 3.8],
ylabel='Price per gram ($)')
name = 'high'
daily = dailies[name]
PlotLinearModel(daily, name)
series = np.arange(10)
pd.rolling_mean(series, 3)
def PlotRollingMean(daily, name):
Plots rolling mean.
daily: DataFrame of daily prices
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
roll_mean = pd.rolling_mean(reindexed.ppg, 30)
thinkplot.Plot(roll_mean, label='rolling mean', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='price per gram ($)')
PlotRollingMean(daily, name)
def PlotEWMA(daily, name):
    Plots an exponentially-weighted moving average.
daily: DataFrame of daily prices
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
roll_mean = pd.ewma(reindexed.ppg, 30)
thinkplot.Plot(roll_mean, label='EWMA', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='price per gram ($)')
PlotEWMA(daily, name)
def FillMissing(daily, span=30):
Fills missing values with an exponentially weighted moving average.
Resulting DataFrame has new columns 'ewma' and 'resid'.
daily: DataFrame of daily prices
span: window size (sort of) passed to ewma
returns: new DataFrame of daily prices
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
ewma = pd.ewma(reindexed.ppg, span=span)
resid = (reindexed.ppg - ewma).dropna()
fake_data = ewma + thinkstats2.Resample(resid, len(reindexed))
reindexed.ppg.fillna(fake_data, inplace=True)
reindexed['ewma'] = ewma
reindexed['resid'] = reindexed.ppg - ewma
return reindexed
def PlotFilled(daily, name):
Plots the EWMA and filled data.
daily: DataFrame of daily prices
filled = FillMissing(daily, span=30)
thinkplot.Scatter(filled.ppg, s=15, alpha=0.2, label=name)
thinkplot.Plot(filled.ewma, label='EWMA', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='Price per gram ($)')
PlotFilled(daily, name)
def SerialCorr(series, lag=1):
xs = series[lag:]
ys = series.shift(lag)[lag:]
corr = thinkstats2.Corr(xs, ys)
return corr
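A minimal NumPy-only version of the lag-k serial correlation (assuming a series with no missing values, so `thinkstats2.Corr` can be replaced by `np.corrcoef`) looks like:

```python
import numpy as np

def serial_corr(series, lag=1):
    # Correlate the series with a lagged copy of itself.
    series = np.asarray(series, dtype=float)
    xs = series[lag:]
    ys = series[:-lag]
    return np.corrcoef(xs, ys)[0, 1]

# A steadily increasing series is almost perfectly autocorrelated.
trend = np.arange(100, dtype=float)
print(serial_corr(trend, lag=1))
```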
filled_dailies = {}
for name, daily in dailies.items():
filled_dailies[name] = FillMissing(daily, span=30)
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.ppg, lag=1)
print(name, corr)
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.resid, lag=1)
print(name, corr)
rows = []
for lag in [1, 7, 30, 365]:
print(lag, end='\t')
for name, filled in filled_dailies.items():
corr = SerialCorr(filled.resid, lag)
print('%.2g' % corr, end='\t')
print()
import statsmodels.tsa.stattools as smtsa
filled = filled_dailies['high']
acf = smtsa.acf(filled.resid, nlags=365, unbiased=True)
print('%0.2g, %.2g, %0.2g, %0.2g, %0.2g' %
(acf[0], acf[1], acf[7], acf[30], acf[365]))
def SimulateAutocorrelation(daily, iters=1001, nlags=40):
Resample residuals, compute autocorrelation, and plot percentiles.
daily: DataFrame
iters: number of simulations to run
nlags: maximum lags to compute autocorrelation
# run simulations
t = []
for _ in range(iters):
filled = FillMissing(daily, span=30)
resid = thinkstats2.Resample(filled.resid)
acf = smtsa.acf(resid, nlags=nlags, unbiased=True)[1:]
t.append(np.abs(acf))
high = thinkstats2.PercentileRows(t, [97.5])[0]
low = -high
lags = range(1, nlags+1)
thinkplot.FillBetween(lags, low, high, alpha=0.2, color='gray')
def PlotAutoCorrelation(dailies, nlags=40, add_weekly=False):
Plots autocorrelation functions.
dailies: map from category name to DataFrame of daily prices
nlags: number of lags to compute
add_weekly: boolean, whether to add a simulated weekly pattern
thinkplot.PrePlot(3)
daily = dailies['high']
SimulateAutocorrelation(daily)
for name, daily in dailies.items():
if add_weekly:
daily = AddWeeklySeasonality(daily)
filled = FillMissing(daily, span=30)
acf = smtsa.acf(filled.resid, nlags=nlags, unbiased=True)
lags = np.arange(len(acf))
thinkplot.Plot(lags[1:], acf[1:], label=name)
def AddWeeklySeasonality(daily):
Adds a weekly pattern.
daily: DataFrame of daily prices
returns: new DataFrame of daily prices
fri_or_sat = (daily.index.dayofweek==4) | (daily.index.dayofweek==5)
fake = daily.copy()
fake.ppg.loc[fri_or_sat] += np.random.uniform(0, 2, fri_or_sat.sum())
return fake
axis = [0, 41, -0.2, 0.2]
PlotAutoCorrelation(dailies, add_weekly=False)
thinkplot.Config(axis=axis,
loc='lower right',
ylabel='correlation',
xlabel='lag (day)')
PlotAutoCorrelation(dailies, add_weekly=True)
thinkplot.Config(axis=axis,
loc='lower right',
xlabel='lag (days)')
def GenerateSimplePrediction(results, years):
Generates a simple prediction.
results: results object
years: sequence of times (in years) to make predictions for
returns: sequence of predicted values
n = len(years)
inter = np.ones(n)
d = dict(Intercept=inter, years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict = results.predict(predict_df)
return predict
def PlotSimplePrediction(results, years):
predict = GenerateSimplePrediction(results, years)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.2, label=name)
thinkplot.plot(years, predict, color='#ff7f00')
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)',
loc='upper right')
name = 'high'
daily = dailies[name]
_, results = RunLinearModel(daily)
years = np.linspace(0, 5, 101)
PlotSimplePrediction(results, years)
def SimulateResults(daily, iters=101, func=RunLinearModel):
Run simulations based on resampling residuals.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
_, results = func(daily)
fake = daily.copy()
result_seq = []
for _ in range(iters):
fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
def GeneratePredictions(result_seq, years, add_resid=False):
Generates an array of predicted values from a list of model results.
When add_resid is False, predictions represent sampling error only.
When add_resid is True, they also include residual error (which is
more relevant to prediction).
result_seq: list of model results
years: sequence of times (in years) to make predictions for
add_resid: boolean, whether to add in resampled residuals
returns: sequence of predictions
n = len(years)
d = dict(Intercept=np.ones(n), years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict_seq = []
for fake_results in result_seq:
predict = fake_results.predict(predict_df)
if add_resid:
predict += thinkstats2.Resample(fake_results.resid, n)
predict_seq.append(predict)
return predict_seq
def PlotPredictions(daily, years, iters=101, percent=90, func=RunLinearModel):
Plots predictions.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
result_seq = SimulateResults(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100-p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.3, color='gray')
predict_seq = GeneratePredictions(result_seq, years, add_resid=False)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.5, color='gray')
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)')
def SimulateIntervals(daily, iters=101, func=RunLinearModel):
Run simulations based on different subsets of the data.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
result_seq = []
starts = np.linspace(0, len(daily), iters).astype(int)
for start in starts[:-2]:
subset = daily[start:]
_, results = func(subset)
fake = subset.copy()
for _ in range(iters):
fake.ppg = (results.fittedvalues +
thinkstats2.Resample(results.resid))
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
def PlotIntervals(daily, years, iters=101, percent=90, func=RunLinearModel):
Plots predictions based on different intervals.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
result_seq = SimulateIntervals(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100-p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.2, color='gray')
name = 'high'
daily = dailies[name]
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotIntervals(daily, years)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
name = 'high'
daily = dailies[name]
filled = FillMissing(daily)
diffs = filled.ppg.diff()
thinkplot.plot(diffs)
plt.xticks(rotation=30)
thinkplot.Config(ylabel='Daily change in price per gram ($)')
filled['slope'] = pd.ewma(diffs, span=365)
thinkplot.plot(filled.slope[-365:])
plt.xticks(rotation=30)
thinkplot.Config(ylabel='EWMA of diff ($)')
# extract the last EWMA value (the intercept) and the mean of the last 30 slopes
start = filled.index[-1]
inter = filled.ewma[-1]
slope = filled.slope[-30:].mean()
start, inter, slope
# reindex the DataFrame, adding a year to the end
dates = pd.date_range(filled.index.min(),
filled.index.max() + np.timedelta64(365, 'D'))
predicted = filled.reindex(dates)
# generate predicted values and add them to the end
predicted['date'] = predicted.index
one_day = np.timedelta64(1, 'D')
predicted['days'] = (predicted.date - start) / one_day
predict = inter + slope * predicted.days
predicted.ewma.fillna(predict, inplace=True)
# plot the actual values and predictions
thinkplot.Scatter(daily.ppg, alpha=0.1, label=name)
thinkplot.Plot(predicted.ewma, color='#ff7f00')
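The extrapolation step above (intercept plus slope times elapsed days) can be sketched standalone; the values for `inter` and `slope` below are made up for illustration:

```python
import numpy as np

inter = 10.0    # hypothetical last EWMA value ($/g)
slope = -0.002  # hypothetical mean daily change ($/g per day)
days = np.arange(1, 366)  # one year ahead

# Linear extrapolation from the last observed level.
predict = inter + slope * days
print(predict[0], predict[-1])
```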
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Time series analysis
Step3: The following function takes a DataFrame of transactions and compute daily averages.
Step5: The following function returns a map from quality name to a DataFrame of daily averages.
Step6: dailies is the map from quality name to DataFrame.
Step7: The following plots the daily average price for each quality.
Step8: We can use statsmodels to run a linear model of price as a function of time.
Step9: Here's what the results look like.
Step11: Now let's plot the fitted model with the data.
Step13: The following function plots the original data and the fitted curve.
Step14: Here are results for the high quality category
Step15: Moving averages
Step16: With a "window" of size 3, we get the average of the previous 3 elements, or nan when there are fewer than 3.
Step18: The following function plots the rolling mean.
Step19: Here's what it looks like for the high quality category.
Step21: The exponentially-weighted moving average gives more weight to more recent points.
Step24: We can use resampling to generate missing values with the right amount of noise.
Step25: Here's what the EWMA model looks like with missing values filled.
Step26: Serial correlation
Step27: Before computing correlations, we'll fill missing values.
Step28: Here are the serial correlations for raw price data.
Step29: It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.
Step30: Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
Step31: The strongest correlation is a weekly cycle in the medium quality category.
Step33: To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
Step35: The following function plots the actual autocorrelation for lags up to 40 days.
Step37: To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Friday and Saturdays.
Step38: Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
Step39: Here's what it would look like if there were a weekly cycle.
Step41: Prediction
Step42: Here's what the prediction looks like for the high quality category, using the linear model.
Step44: When we generate predictions, we want to quantify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to generate fake datasets. It fits the same model to each fake dataset and returns a list of results.
Step46: To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.
Step48: To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
Step49: Here are the results for the high quality category.
Step51: But there is one more source of uncertainty
Step53: And this function plots the results.
Step54: Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
Step55: Exercises
Step56: Exercise
Step57: Worked example
|
2,958
|
<ASSISTANT_TASK:>
Python Code:
l = [] # List
d = {} # Dict
t = () # tuples
s = '' # strings
# ...
fobj = open('test.dat', mode='w')
import numpy as np
ary = np.linspace(0, 2*np.pi, 200)
from matplotlib import pyplot as plt
import numpy as np
%matplotlib inline
fig = plt.figure(figsize=(20,5)) # Figure object
ax0 = fig.add_subplot(121) # Axes object
ax1 = fig.add_subplot(122) # Axes object
x = np.linspace(0, 2*np.pi, 200)
line, = ax0.plot(x, np.sin(x)) # Line1d object
x = np.linspace(0, 2*np.pi, 200)
y = np.linspace(0, 2*np.pi, 200)
img = ax1.imshow(np.random.random((10,10))) # Image object
fig.tight_layout()
# Containing data
ary = np.random.random(100)
ary[0:10]
# Methods
ary.max() # Maximum element
ary.sum() # Sum of all elements
# ...
# Operators
ary + 2 # Arithmetic
ary[2] # Indexing
ary[3:4] # Slicing
# ...
class MyObject:
pass
a = MyObject()
b = MyObject()
c = MyObject()
a, b, c
import datetime
class Person:
def __init__(self, first_name, last_name, birthday, email_address):
self.first_name = first_name
self.last_name = last_name
self.birthday = birthday
self.email_address = email_address
def age(self):
today = datetime.date.today()
born = datetime.date(*[int(field) for field in self.birthday.split('/')])
return today.year - born.year - ((today.month, today.day) < (born.month, born.day))
def email(self, message):
print('Sending email to {} {} ({})'.format(self.first_name, self.last_name, self.email_address))
print(message)
harry = Person('Harry', 'Potter', '1980/7/31', 'harry@hogwarts.uk')
harry.email('Hello Harry')
harry.age()
contact_book = []
contact_book.append(Person('Harry', 'Potter', '1980/7/31', 'harry@hogwarts.uk'))
contact_book.append(Person('Albert', 'Einstein', '1879/03/14', 'albert@einstein.com'))
contact_book.append(Person('Ernst', 'August', '1771/6/5', 'ernst@koeningreich_hannover.de'))
contact_book.append(Person('Karl', 'Marx', '1818/5/5', 'marx@revolution.su'))
for contact in contact_book:
if contact.age() >= 40:
spam = 'You are {} years old.\nClick here to buy some blue pills.\n'.format(contact.age())
contact.email(spam)
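The birthday-to-age computation in `Person.age` relies on a tuple-comparison trick: subtract one year whenever this year's birthday hasn't happened yet. A standalone sketch (with a fixed, hypothetical "today" so the result is reproducible):

```python
import datetime

def age_on(birthday, today):
    # Subtract one if this year's birthday hasn't happened yet;
    # (month, day) tuples compare lexicographically.
    born = datetime.date(*[int(f) for f in birthday.split('/')])
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

print(age_on('1980/7/31', datetime.date(2000, 7, 30)))  # day before 20th birthday -> 19
print(age_on('1980/7/31', datetime.date(2000, 7, 31)))  # on the birthday -> 20
```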
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: File objects
Step2: Numpy arrays
Step3: Plots contain many objects
Step4: But what is Object-oriented programming ?
Step5: How to create objects
Step6: Objects are instances of classes
Step7: Building something more useful
Step8: Contact Book
Step9: Send some spam
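Building on the contact-book idea above, a small hedged sketch (the Contact class below is illustrative, not the Person class used earlier) of how defining `__repr__` makes instances print readably inside lists instead of as bare object addresses:

```python
# Minimal sketch: a Contact class with __repr__ so that printing a
# contact book shows readable entries. Illustrative only; it does not
# replace the Person class defined above.
class Contact:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def __repr__(self):
        # repr() is what list printing uses for each element
        return 'Contact({!r}, {!r})'.format(self.first_name, self.last_name)

book = [Contact('Harry', 'Potter'), Contact('Karl', 'Marx')]
```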
|
2,959
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 150
from skdaccess.framework.param_class import *
from skdaccess.finance.timeseries.stream import DataFetcher
stock_ap_list = AutoList(['SPY'])
stockdf = DataFetcher([stock_ap_list], 'daily', '2017-06-01')
dw = stockdf.output()
label, data = next(dw.getIterator())
data.columns
ax = data['4. close'].plot(title='SPY daily closing price');
ax.set_ylabel('price');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Select which symbol to retrieve
Step2: Create a data fetcher
Step3: Access the data
Step4: List the columns of the data
Step5: Plot the closing price
|
2,960
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
import numpy as np
from sklearn.naive_bayes import BernoulliNB
# Create three binary features
X = np.random.randint(2, size=(100, 3))
# Create a binary target vector
y = np.random.randint(2, size=(100, 1)).ravel()
# View first ten observations
X[0:10]
# Create Bernoulli Naive Bayes object with prior probabilities of each class
clf = BernoulliNB(class_prior=[0.25, 0.5])
# Train model
model = clf.fit(X, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Binary Feature And Target Data
Step2: View Feature Data
Step3: Train Bernoulli Naive Bayes Classifier
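As a hedged follow-up (the data is random, so the predictions themselves are not meaningful), a self-contained sketch showing how a fitted BernoulliNB classifier is typically queried with `predict` and `predict_proba`:

```python
# Sketch: after fitting, predict gives hard class labels and
# predict_proba gives one probability column per class.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.RandomState(0)
X = rng.randint(2, size=(100, 3))
y = rng.randint(2, size=100)

model = BernoulliNB(class_prior=[0.25, 0.5]).fit(X, y)

new_obs = np.array([[0, 1, 1]])
pred = model.predict(new_obs)         # hard class label
proba = model.predict_proba(new_obs)  # probabilities, one column per class
```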
|
2,961
|
<ASSISTANT_TASK:>
Python Code:
import tsvopener
import pandas as pd
import numpy as np
from nltk import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from scipy.sparse import csr_matrix, vstack
from sklearn.semi_supervised import LabelPropagation, LabelSpreading
regex_categorized = tsvopener.open_tsv("categorized.tsv")
human_categorized = tsvopener.open_tsv("human_categorized.tsv")
# Accuracy Check
#
# match = 0
# no_match = 0
# for key in human_categorized:
# if human_categorized[key] == regex_categorized[key]:
# match += 1
# else:
# no_match += 1
#
# print("accuracy of regex data in {} human-categorized words".format(
# len(human_categorized)))
# print(match/(match+no_match))
#
# accuracy of regex data in 350 human-categorized words
# 0.7857142857142857
# set up targets for the human-categorized data
targets = pd.DataFrame.from_dict(human_categorized, 'index')
targets[0] = pd.Categorical(targets[0])
targets['code'] = targets[0].cat.codes
# form: | word (label) | language | code (1-5)
tmp_dict = {}
for key in human_categorized:
tmp_dict[key] = tsvopener.etymdict[key]
supervised_sents = pd.DataFrame.from_dict(tmp_dict, 'index')
all_sents = pd.DataFrame.from_dict(tsvopener.etymdict, 'index')
vectorizer = CountVectorizer(stop_words='english', max_features=10000)
all_sents.index.get_loc("anyways (adv.)")
# vectorize the unsupervised vectors.
vectors = vectorizer.fit_transform(all_sents.values[:,0])
print(vectors.shape)
# supervised_vectors = vectorizer.fit_transform(supervised_data.values[:,0])
# add labels
# initialize to -1
all_sents['code'] = -1
supervised_vectors = csr_matrix((len(human_categorized),
vectors.shape[1]),
dtype=vectors.dtype)
j = 0
for key in supervised_sents.index:
    all_sents.loc[key, 'code'] = targets.loc[key, 'code']  # avoid chained assignment (SettingWithCopy)
i = all_sents.index.get_loc(key)
supervised_vectors[j] = vectors[i]
j += 1
# supervised_vectors = csr_matrix((len(human_categorized),
# unsupervised_vectors.shape[1]),
# dtype=unsupervised_vectors.dtype)
# j = 0
# for key in supervised_data.index:
# i = unsupervised_data.index.get_loc(key)
# supervised_vectors[j] = unsupervised_vectors[i]
# j += 1
all_sents.loc['dicky (n.)']
num_points = 1000
num_test = 50
x = vstack([vectors[:num_points], supervised_vectors]).toarray()
t = all_sents['code'][:num_points].append(targets['code'])
x_test = x[-num_test:]
t_test = t[-num_test:]
x = x[:-num_test]
t = t[:-num_test]
label_prop_model = LabelSpreading(kernel='knn')
from time import time
print("fitting model")
timer_start = time()
label_prop_model.fit(x, t)
print("runtime: %0.3fs" % (time()-timer_start))
print("done!")
# unsupervised_data['code'].iloc[:1000]
import pickle
# with open("classifiers/labelspreading_knn_all_but_100.pkl", 'bw') as writefile:
# pickle.dump(label_prop_model, writefile)
import smtplib
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login("trevortds3@gmail.com", "Picardy3")
msg = "Job's done!"
server.sendmail("trevortds3@gmail.com", "trevortds@gmail.com", msg)
server.quit()
targets
from sklearn.metrics import precision_score, accuracy_score, f1_score, recall_score
t_pred = label_prop_model.predict(x_test)
print("Metrics based on 50 hold-out points")
print("Macro")
print("accuracy: %f" % accuracy_score(t_test, t_pred))
print("precision: %f" % precision_score(t_test, t_pred, average='macro'))
print("recall: %f" % recall_score(t_test, t_pred, average='macro'))
print("f1: %f" % f1_score(t_test, t_pred, average='macro'))
print("\n\nMicro")
print("accuracy: %f" % accuracy_score(t_test, t_pred))
print("precision: %f" % precision_score(t_test, t_pred, average='micro'))
print("recall: %f" % recall_score(t_test, t_pred, average='micro'))
print("f1: %f" % f1_score(t_test, t_pred, average='micro'))
from sklearn import metrics
import matplotlib.pyplot as pl
labels = ["English", "French", "Greek", "Latin","Norse", "Other"]
labels_digits = [0, 1, 2, 3, 4, 5]
cm = metrics.confusion_matrix(t_test, t_pred, labels_digits)
fig = pl.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
pl.title("Label Spreading with KNN kernel (k=7)")
fig.colorbar(cax)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
pl.xlabel('Predicted')
pl.ylabel('True')
pl.show()
supervised_vectors
import matplotlib.pyplot as pl
u, s, v = np.linalg.svd(supervised_vectors.toarray())
pca = np.dot(u[:,0:2], np.diag(s[0:2]))
english = np.empty((0,2))
french = np.empty((0,2))
greek = np.empty((0,2))
latin = np.empty((0,2))
norse = np.empty((0,2))
other = np.empty((0,2))
for i in range(pca.shape[0]):
if targets[0].iloc[i] == "English":
english = np.vstack((english, pca[i]))
elif targets[0].iloc[i] == "French":
french = np.vstack((french, pca[i]))
elif targets[0].iloc[i] == "Greek":
greek = np.vstack((greek, pca[i]))
elif targets[0].iloc[i] == "Latin":
latin = np.vstack((latin, pca[i]))
elif targets[0].iloc[i] == "Norse":
norse = np.vstack((norse, pca[i]))
elif targets[0].iloc[i] == "Other":
other = np.vstack((other, pca[i]))
pl.plot( english[:,0], english[:,1], "ro",
french[:,0], french[:,1], "bs",
greek[:,0], greek[:,1], "g+",
latin[:,0], latin[:,1], "c^",
norse[:,0], norse[:,1], "mD",
other[:,0], other[:,1], "kx")
pl.axis([-5,0,-2, 5])
pl.show()
print (s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare Vectors
Step2: Use Scikit's semisupervised learning
Step3: Measuring effectiveness.
Step4: PCA
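The PCA step uses the SVD directly; a minimal library-free sketch of the same projection on toy data (the shapes and names below are illustrative):

```python
# Library-free PCA-via-SVD sketch. Rows are observations; projecting
# onto the top two singular directions gives the 2-D coordinates used
# for scatter plots.
import numpy as np

rng = np.random.RandomState(0)
data = rng.random_sample((50, 10))

u, s, vt = np.linalg.svd(data, full_matrices=False)
pca = np.dot(u[:, :2], np.diag(s[:2]))   # shape (50, 2)
```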
|
2,962
|
<ASSISTANT_TASK:>
Python Code:
# for item in my_list:
# if item.flavor == 'banana':
# break
# else:
# raise ValueError('No banana flavor found!')
# try:
# dangerous_call()
# after_call()
# except OSError:
# log('OSError...')
# try:
# dangerous_call()
# except OSError:
# log('OSError...')
# else:
# after_call()
with open('with.ipynb') as fp:
src = fp.read(60)
len(src)
fp
fp.closed, fp.encoding
# fp is still usable, but it can no longer perform I/O,
# because at the end of the with block TextIOWrapper.__exit__ closed the file
fp.read(60)
class LookingGlass:
    def __enter__(self):  # __enter__ takes only the self argument
        import sys
        self.original_write = sys.stdout.write  # save the original for later use
        sys.stdout.write = self.reverse_write  # monkey-patch: swap in our own method
        return 'JABBERWOCKY'  # this string is bound to the with statement's "as" variable
    def reverse_write(self, text):  # replaces sys.stdout.write; reverses text
        self.original_write(text[::-1])
    # normally called with None, None, None; on error the exception info is passed in
    def __exit__(self, exc_type, exc_value, traceback):
        import sys  # re-importing is cheap: Python caches imported modules
        sys.stdout.write = self.original_write  # restore the original sys.stdout.write
        if exc_type is ZeroDivisionError:  # on a division-by-zero exception, print a message
            print('Please DO NOT divide by zero')
            return True  # return True to tell the interpreter the exception was handled
# if __exit__ returns None or any value other than True, exceptions raised in the with block propagate
with LookingGlass() as what:
    print('Alice, Kitty and Snowdrop')  # the printed output comes out reversed
    print(what)
# with the block finished, we can see the value returned by __enter__ -- stored in what -- is 'JABBERWOCKY'
what
print('Back to normal')  # the output is no longer reversed
# In [2]: manager = LookingGlass()
# ...: manager
# ...:
# Out[2]: <__main__.LookingGlass at 0x7f586d4aa1d0>
# In [3]: monster = manager.__enter__()
# In [4]: monster == 'JABBERWOCKY'
# Out[4]: eurT
# In [5]: monster
# Out[5]: 'YKCOWREBBAJ'
# In [6]: manager.__exit__(None, None, None)
# In [7]: monster
# Out[7]: 'JABBERWOCKY'
import contextlib
@contextlib.contextmanager
def looking_glass():
import sys
original_write = sys.stdout.write
def reverse_write(text):
original_write(text[::-1])
sys.stdout.write = reverse_write
    # yield a value that will be bound to the target variable in the with statement's as clause;
    # while the with block's body executes, this function is paused at this point
    yield 'JABBERWOCKY'
    # once control leaves the with block, execution continues after the yield statement
sys.stdout.write = original_write
with looking_glass() as what:
print('Alice, Kitty and Snowdrop')
print(what)
import contextlib
@contextlib.contextmanager
def looking_glass():
import sys
original_write = sys.stdout.write
def reverse_write(text):
original_write(text[::-1])
sys.stdout.write = reverse_write
msg = ''
try:
yield 'JABBERWOCKY'
except ZeroDivisionError:
msg = 'Please DO NOT divide by zero'
finally:
sys.stdout.write = original_write
if msg:
print(msg)
# import csv
# with inplace(csvfilename, 'r', newline='') as (infh, outfh):
# reader = csv.reader(infh)
# writer = csv.writer(outfh)
# for row in reader:
# row += ['new', 'columns']
# writer.writerow(row)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: At first you might think there is no need for an else clause in try/except -- after all, in the code below after_call() only runs if dangerous_call() does not raise
Step2: However, after_call() should not be inside the try block. For clarity and precision, the try block should contain only the statements that may raise the expected exceptions, so the following is better:
Step3: Now it is clear that the try block is there to catch exceptions from dangerous_call().
Step4: The as clause of with is optional; for open it is required, so that we can get a reference to the file. Some context managers, however, return None because they have no useful object to give the user
Step5: In practice, a program that takes over standard output may swap sys.stdout for another file-like object and later switch back to the original. The contextlib.redirect_stdout context manager does exactly that
Step6: The above was run on the command line, because output in a Jupyter notebook sometimes has puzzling bugs
Step7: In fact, the contextlib.contextmanager decorator wraps the function in a class that implements the __enter__ and __exit__ methods
Step8: As mentioned before, __exit__ returns True to tell the interpreter that the exception has been handled, in which case the interpreter suppresses it. If __exit__ does not explicitly return a value, the interpreter gets None and the exception bubbles up. With the @contextmanager decorator the default behavior is the opposite: the __exit__ method supplied by the decorator assumes that every exception sent into the generator has been handled and should be suppressed. If you do not want @contextmanager to suppress an exception, you must explicitly re-raise it inside the decorated function
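The suppression behavior described above can be sketched with a minimal, self-contained manager (all names below are illustrative): catching one exception type inside the generator suppresses it, while any other type propagates to the caller.

```python
# Sketch: a @contextmanager-based manager that suppresses ValueError
# (like __exit__ returning True) and lets every other exception escape,
# with cleanup guaranteed by the finally clause.
import contextlib

state = {'clean': False}

@contextlib.contextmanager
def managed():
    try:
        yield 'resource'
    except ValueError:
        pass                      # handled here, so it is suppressed
    finally:
        state['clean'] = True     # cleanup always runs

with managed():
    raise ValueError('suppressed')    # swallowed by the manager

propagated = False
try:
    with managed():
        raise KeyError('not handled')  # not caught inside, so it escapes
except KeyError:
    propagated = True
```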
|
2,963
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
a = np.arange(9).reshape(3, 3)
a
' '.join(str(i) for i in a.ravel(order='C'))
' '.join(str(i) for i in a.ravel(order='F'))
volume = np.load('data/F3_volume_3x3_16bit.npy')
volume.shape
idx = np.indices(volume.shape)
idx.shape
from matplotlib.font_manager import FontProperties
annot = ['data[2, :, :]', 'data[:, 2, :]', 'data[:, :, 2]']
mono = FontProperties()
mono.set_family('monospace')
fig, axs = plt.subplots(ncols=3, figsize=(15,3), facecolor='w')
for i, ax in enumerate(axs):
data = idx[i, :5, :5, :5].ravel(order='C')
ax.plot(data, c=f'C{i}')
ax.scatter(np.where(data==2), data[data==2], color='r', s=10, zorder=10)
ax.text(65, 4.3, f'axis {i}', color=f'C{i}', size=15, ha='center')
ax.text(65, -0.7, annot[i], color='red', size=12, ha='center', fontproperties=mono)
ax.set_ylim(-1, 5)
_ = plt.suptitle("C order", size=18)
plt.savefig('/home/matt/Pictures/3d-array-corder.png')
fig, axs = plt.subplots(ncols=3, figsize=(15,3), facecolor='w')
for i, ax in enumerate(axs):
data = idx[i, :5, :5, :5].ravel(order='F')
ax.plot(data, c=f'C{i}')
ax.scatter(np.where(data==2), data[data==2], color='r', s=10, zorder=10)
ax.text(65, 4.3, f'axis {i}', color=f'C{i}', size=15, ha='center')
ax.text(65, -0.7, annot[i], color='red', size=12, ha='center', fontproperties=mono)
ax.set_ylim(-1, 5)
_ = plt.suptitle("Fortran order", size=18)
plt.savefig('/home/matt/Pictures/3d-array-forder.png')
plt.figure(figsize=(15,3))
plt.plot(idx[0, :5, :5, :5].ravel(), zorder=10)
plt.plot(idx[1, :5, :5, :5].ravel(), zorder=9)
plt.plot(idx[2, :5, :5, :5].ravel(), zorder=8)
volume.strides
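To make the stride arithmetic concrete, here is a self-contained check (independent of the seismic volume loaded above) of the C-order and Fortran-order stride formulas for a 190×190×190 int16 array:

```python
# For a C-ordered array of shape (n0, n1, n2) with itemsize b bytes,
# strides are (n1*n2*b, n2*b, b); Fortran order reverses the pattern.
import numpy as np

a = np.zeros((190, 190, 190), dtype=np.int16)   # itemsize = 2 bytes
c_strides = a.strides                            # expected (72200, 380, 2)
f_strides = np.asfortranarray(a).strides         # expected (2, 380, 72200)
```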
fig, axs = plt.subplots(ncols=2, figsize=(10,3), facecolor='w')
for i, ax in enumerate(axs):
data = idx[i, :3, :3, 0].ravel(order='C')
ax.plot(data, 'o-', c='gray')
ax.text(0, 1.8, f'axis {i}', color='gray', size=15, ha='left')
plt.savefig('/home/matt/Pictures/2d-array-corder.png')
volume = volume[:190, :190, :190].copy()
def get_slice_3d(volume, x, axis, n=None):
    """Naive function... but only works on 3 dimensions.
    NB Using ellipses slows down last axis.
    """
# Force cube shape
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n, :n]
if axis == 1:
data = volume[:n, x, :n]
if axis == 2:
data = volume[:n, :n, x]
return data + 1
%timeit get_slice_3d(volume, 150, axis=0)
%timeit get_slice_3d(volume, 150, axis=1)
%timeit get_slice_3d(volume, 150, axis=2)
volumef = np.asfortranarray(volume)
%timeit get_slice_3d(volumef, 150, axis=0)
%timeit get_slice_3d(volumef, 150, axis=1)
%timeit get_slice_3d(volumef, 150, axis=2)
from scipy.signal import welch
%timeit s = [welch(tr, fs=500) for tr in volume[:, 10]]
%timeit s = [welch(tr, fs=500) for tr in volumef[:, 10]]
del(volume)
del(volumef)
def makend(n, s, equal=True, rev=False, fortran=False):
    """Make an n-dimensional hypercube of randoms."""
if equal:
incr = np.zeros(n, dtype=int)
elif rev:
incr = list(reversed(np.arange(n)))
else:
incr = np.arange(n)
shape = incr + np.ones(n, dtype=int)*s
a = np.random.random(shape)
m = f"Shape: {tuple(shape)} "
m += f"Memory: {a.nbytes/1e6:.0f}MB "
m += f"Order: {'F' if fortran else 'C'}"
print (m)
if fortran:
return np.asfortranarray(a)
else:
return a
def get_slice_2d(volume, x, axis, n=None):
    """Naive function... but only works on 2 dimensions."""
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n]
if axis == 1:
data = volume[:n, x]
return data + 1
dim = 2
v = makend(dim, 6000, fortran=False)
for n in range(dim):
%timeit get_slice_2d(v, 3001, n)
del v
dim = 2
v = makend(dim, 6000, fortran=True)
for n in range(dim):
%timeit get_slice_2d(v, 3001, n)
del v
def convolve(data, kernel=np.arange(10), axis=0):
func = lambda tr: np.convolve(tr, kernel, mode='same')
return np.apply_along_axis(func, axis=axis, arr=data)
dim = 2
v = makend(dim, 6000, fortran=False)
%timeit convolve(v, axis=0)
%timeit convolve(v, axis=1)
del v
dim = 2
v = makend(dim, 6000, fortran=True)
%timeit convolve(v, axis=0)
%timeit convolve(v, axis=1)
del v
a = [[ 2, 4],
[10, 20]]
np.mean(a, axis=0), np.mean(a, axis=1)
dim = 2
v = makend(dim, 6000, fortran=False)
%timeit np.mean(v, axis=0)
%timeit np.mean(v, axis=1)
del v
dim = 2
v = makend(dim, 6000, fortran=True)
%timeit np.mean(v, axis=0)
%timeit np.mean(v, axis=1)
del v
v = makend(dim, 6000, fortran=False)
%timeit np.median(v, axis=0)
%timeit np.median(v, axis=1)
del v
v = makend(dim, 6000, fortran=False)
%timeit v.mean(axis=0)
%timeit v.mean(axis=1)
del v
dim = 3
v = makend(dim, 600)
for n in range(dim):
%timeit get_slice_3d(v, 201, n)
del v
dim = 3
v = makend(dim, 600, equal=False, rev=True)
for n in range(dim):
%timeit get_slice_3d(v, 201, n)
del v
dim = 3
v = makend(dim, 600, fortran=True)
for n in range(dim):
%timeit get_slice_3d(v, 201, n)
del v
def get_slice_4d(volume, x, axis, n=None):
    """Naive function... but only works on 4 dimensions."""
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n, :n, :n]
if axis == 1:
data = volume[:n, x, :n, :n]
if axis == 2:
data = volume[:n, :n, x, :n]
if axis == 3:
data = volume[:n, :n, :n, x]
return data + 1
dim = 4
v = makend(dim, 100, equal=True)
for n in range(dim):
%timeit get_slice_4d(v, 51, n)
del v
dim = 4
v = makend(dim, 100, equal=True, fortran=True)
for n in range(dim):
%timeit get_slice_4d(v, 51, n)
del v
def get_slice_5d(volume, x, axis, n=None):
    """Naive function... but only works on 5 dimensions."""
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n, :n, :n, :n]
if axis == 1:
data = volume[:n, x, :n, :n, :n]
if axis == 2:
data = volume[:n, :n, x, :n, :n]
if axis == 3:
data = volume[:n, :n, :n, x, :n]
if axis == 4:
data = volume[:n, :n, :n, :n, x]
return data + 1
dim = 5
v = makend(dim, 40)
for n in range(dim):
%timeit get_slice_5d(v, 21, n)
del v
dim = 5
v = makend(dim, 40, fortran=True)
for n in range(dim):
%timeit get_slice_5d(v, 21, n)
del v
dim = 5
v = makend(dim, 40, fortran=True)
for n in range(dim):
%timeit np.mean(v, axis=n)
del v
def get_slice_6d(volume, x, axis, n=None):
    """Naive function... but only works on 6 dimensions."""
if n is None and not np.sum(np.diff(volume.shape)):
n = np.min(volume.shape)
if axis == 0:
data = volume[x, :n, :n, :n, :n, :n]
if axis == 1:
data = volume[:n, x, :n, :n, :n, :n]
if axis == 2:
data = volume[:n, :n, x, :n, :n, :n]
if axis == 3:
data = volume[:n, :n, :n, x, :n, :n]
if axis == 4:
data = volume[:n, :n, :n, :n, x, :n]
if axis == 5:
data = volume[:n, :n, :n, :n, :n, x]
return data + 1
dim = 6
v = makend(dim, 23)
for n in range(dim):
%timeit get_slice_6d(v, 12, n)
del v
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A tiny example
Step2: A seismic volume
Step3: Let's look at how the indices vary
Step4: We can't easily look at the indices for 190 × 190 × 190 samples (6 859 000 samples). So let's look at a small subset
Step5: At the risk of making it more confusing, it might help to look at the plots together. Shown here is the C ordering
Step6: This organization is reflected in ndarray.strides, which tells us how many bytes must be traversed to get to the next index in each axis. Each 2-byte step through memory gets me to the next index in axis 2, but I must stride 72200 bytes to get to the next index of axis 0
Step7: Aside
Step9: Accessing the seismic data
Step10: Let's check that changing the memory layout to Fortran ordering makes the last dimension fastest
Step11: Axes 0 and 1 are > 10 times faster than axis 2.
Step12: No practical difference. Hm.
Step14: Fake data in n dimensions
Step16: I tried implementing this as a context manager, so you wouldn't have to delete the volume each time after using it. I tried the @contextmanager decorator, and I tried making a class with __enter__() and __exit__() methods. Each time, I tried putting the del command as part of the exit routine. They both worked fine... except they did not delete the volume from memory.
Step17: This has been between 3.3 and 12 times faster.
Step18: Speed is double on fast axis, i.e. second axis on default C order.
Step19: Let's see how this looks on our data
Step20: We'd expect the difference to be even more dramatic with median because it has to sort every row or column
Step21: 3D arrays
Step22: Non-equal axes doesn't matter.
Step23: Fortran order results in a fast last axis, as expected. But the middle axis is pretty fast too.
Step25: For C ordering, the last dimension is more than 20x slower than the other two.
Step27: 5 dimensions
Step28: What about when we're doing something like getting the mean on an array?
Step30: 6 dimensions and beyond
|
2,964
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
from visualize import drawCoefficient
from gridlod import interp, coef, util, fem, world, linalg
from gridlod.world import World
import femsolverCoarse
import pg_rand
import buildcoef2d
def result(pglod, world, A, R, f, k, String):
    print("------------------ " + String + " ------------------------")
NWorldFine = world.NWorldFine
NWorldCoarse = world.NWorldCoarse
NCoarseElement = world.NCoarseElement
boundaryConditions = world.boundaryConditions
NpFine = np.prod(NWorldFine+1)
NpCoarse = np.prod(NWorldCoarse+1)
#new Coefficient
ANew = R.flatten()
Anew = coef.coefficientFine(NWorldCoarse, NCoarseElement, ANew)
# tolerance = 0
vis, eps = pglod.updateCorrectors(Anew, 0, f, 1, Computing = False)
elemente = np.arange(np.prod(NWorldCoarse))
plt.figure("Error indicators")
plt.plot(elemente,eps,label=String)
plt.ylabel('$e_{u,T}$')
plt.xlabel('Element')
plt.subplots_adjust(left=0.09,bottom=0.09,right=0.99,top=0.95,wspace=0.2,hspace=0.2)
plt.legend(loc='upper right') #Legende
plt.grid()
bg = 0.05 #background
val = 1 #values
#fine World
NWorldFine = np.array([256, 256])
NpFine = np.prod(NWorldFine+1)
#coarse World
NWorldCoarse = np.array([16,16])
NpCoarse = np.prod(NWorldCoarse+1)
#ratio between Fine and Coarse
NCoarseElement = NWorldFine//NWorldCoarse  # integer ratio (floor division so the result stays int)
boundaryConditions = np.array([[0, 0],
[0, 0]])
world = World(NWorldCoarse, NCoarseElement, boundaryConditions)
#righthandside
f = np.ones(NpCoarse)
CoefClass = buildcoef2d.Coefficient2d(NWorldFine,
bg = 0.05,
val = 1,
length = 8,
thick = 8,
space = 8,
probfactor = 1,
right = 1,
down = 0,
diagr1 = 0,
diagr2 = 0,
diagl1 = 0,
diagl2 = 0,
LenSwitch = None,
thickSwitch = None,
equidistant = True,
ChannelHorizontal = None,
ChannelVertical = None,
BoundarySpace = True)
A = CoefClass.BuildCoefficient()
ABase = A.flatten()
plt.figure("Original Coefficient")
drawCoefficient(NWorldFine, ABase)
plt.title("Original Coefficient")
plt.show()
numbers = [2,70,97,153,205]
value1 = 3
R1 = CoefClass.SpecificValueChange( ratio = value1,
Number = numbers,
probfactor = 1,
randomvalue = None,
negative = None,
ShapeRestriction = True,
ShapeWave = None,
Original = True)
plt.figure("Change in value to 3")
drawCoefficient(NWorldFine, R1)
plt.title("Change in value to 3")
plt.show()
value2 = 50
R2 = CoefClass.SpecificValueChange( ratio = value2,
Number = numbers,
probfactor = 1,
randomvalue = None,
negative = None,
ShapeRestriction = True,
ShapeWave = None,
Original = True)
plt.figure("Change in value to 50")
drawCoefficient(NWorldFine, R2)
plt.title("Change in value to 50")
plt.show()
D = CoefClass.SpecificVanish( Number = numbers,
probfactor = 1,
PartlyVanish = None,
Original = True)
plt.figure("Disappearance")
drawCoefficient(NWorldFine, D)
plt.title("Disappearance")
plt.show()
E2 = CoefClass.SpecificMove( Number = numbers,
steps = 3,
randomstep = None,
randomDirection = None,
Right = 0,
BottomRight = 0,
Bottom = 0,
BottomLeft = 0,
Left = 0,
TopLeft = 1,
Top = 0,
TopRight = 0,
Original = True)
plt.figure("Shift one step")
drawCoefficient(NWorldFine, E2)
plt.title("Shift one step")
plt.show()
E3 = CoefClass.SpecificMove( Number = numbers,
steps = 7,
randomstep = None,
randomDirection = None,
Right = 0,
BottomRight = 0,
Bottom = 0,
BottomLeft = 0,
Left = 0,
TopLeft = 1,
Top = 0,
TopRight = 0,
Original = True)
plt.figure("Shift two steps")
drawCoefficient(NWorldFine, E3)
plt.title("Shift two steps")
plt.show()
NWorldFine = world.NWorldFine
NWorldCoarse = world.NWorldCoarse
NCoarseElement = world.NCoarseElement
boundaryConditions = world.boundaryConditions
NpFine = np.prod(NWorldFine+1)
NpCoarse = np.prod(NWorldCoarse+1)
#interpolant
IPatchGenerator = lambda i, N: interp.L2ProjectionPatchMatrix(i, N, NWorldCoarse, NCoarseElement, boundaryConditions)
#old Coefficient
Aold = coef.coefficientFine(NWorldCoarse, NCoarseElement, ABase)
k = 5
pglod = pg_rand.VcPetrovGalerkinLOD(Aold, world, k, IPatchGenerator, 0)
pglod.originCorrectors(clearFineQuantities=False)
# Change in value 1
result(pglod ,world, A, R1, f, k, 'Change in value to ' + str(value1))
# Change in value 2
result(pglod ,world, A, R2, f, k, 'Change in value to ' + str(value2))
# Disappearance
result(pglod, world, A, D, f, k, 'Disappearance')
# Shift one step
result(pglod, world, A, E2, f, k, 'Shift one step')
# Shift two steps
result(pglod, world, A, E3, f, k, 'Shift two steps')
plt.title('Error indicator')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot function
Step2: Coefficient
Step3: Perturbations
Step4: Change in value to 50
Step5: Disappearance
Step6: Shift one step
Step7: Shift two steps
Step8: Precomputations
Step9: Results
|
2,965
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 150
from skdaccess.astro.tess.simulated.cache import DataFetcher as TESS_DF
from skdaccess.framework.param_class import *
import numpy as np
tess_fetcher = TESS_DF([AutoList([376664523])])
tess_dw = tess_fetcher.output()
label, data = next(tess_dw.getIterator())
valid_index = data['PDCSAP_FLUX'] != 0.0
data.loc[valid_index, 'RELATIVE_PDCSAP_FLUX'] = data.loc[valid_index, 'PDCSAP_FLUX'] / np.median(data.loc[valid_index, 'PDCSAP_FLUX'])
plt.gcf().set_size_inches(6,2);
plt.scatter(data.loc[valid_index, 'TIME'], data.loc[valid_index, 'RELATIVE_PDCSAP_FLUX'], s=2, edgecolor='none');
plt.xlabel('Time');
plt.ylabel('Relative PDCSAP Flux');
plt.title('Simulated Data TID: ' + str(int(label)));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TESS End-to-End 6 Simulated Light Curve Time Series<br>
Step2: Normalize flux
Step3: Plot Relative PDCSAP Flux vs time
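The normalization step above can be sketched library-free on made-up numbers (the flux values below are illustrative; zeros mark gaps, as in the TESS data):

```python
# Divide by the median of the valid (non-zero) samples so the
# quiescent flux level sits at 1.0; gaps are left as NaN.
import numpy as np

flux = np.array([0.0, 98.0, 100.0, 102.0, 0.0, 101.0])
valid = flux != 0.0
relative = np.full_like(flux, np.nan)
relative[valid] = flux[valid] / np.median(flux[valid])
```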
|
2,966
|
<ASSISTANT_TASK:>
Python Code:
# Import the pydna package functions
from pydna.all import *
# Give your email address to Genbank, so they can contact you.
# This is a requirement for using their services
gb=Genbank("bjornjobb@gmail.com")
# download the SAAT CDS from Genbank
# We know from inspecting the
saat = gb.nucleotide("AF193791 REGION: 78..1895")
# The representation of the saat Dseqrecord object contains a link to Genbank
saat
# design two new primers for SAAT
saat_amplicon = primer_design(saat)
fw="aa"+saat_amplicon.forward_primer
rv=saat_amplicon.reverse_primer
# We can set the primer identities to something descriptive
fw.id, rv.id = "fw_saat_cds", "rv_saat_cds"
saat_pcr_prod = pcr(fw,rv, saat)
# The result is an object of the Amplicon class
saat_pcr_prod
# The object has several useful methods like .figure()
# which shows how the primers anneal
saat_pcr_prod.figure()
# read the cloning vector from a local file
pYPKa=read("pYPKa.gb")
# This is a GenbankFile object, its representation include a link to the local file:
pYPKa
# import the restriction enzyme AjiI from Biopython
from Bio.Restriction import AjiI
# cut the vector with the .linearize method. This will give an error is more than one
# fragment is formed
pYPKa_AjiI = pYPKa.linearize(AjiI)
# The result from the digestion is a linear Dseqrecord object
pYPKa_AjiI
# clone the PCR product by adding the linearized vector to the insert
# and close it using the .looped() method.
pYPKa_A_saat = ( pYPKa_AjiI + saat_pcr_prod ).looped()
pYPKa_A_saat
# read promoter vector from a local file
pYPKa_Z_prom = read("pYPKa_Z_TEF1.gb")
# read terminator vector from a local file
pYPKa_E_term = read("pYPKa_E_TPI1.gb")
pYPKa_Z_prom
pYPKa_E_term
[pYPKa_Z_prom, pYPKa_E_term]
# Standard primers
p567,p577,p468,p467,p568,p578 = parse_primers('''
>567_pCAPsAjiIF (23-mer)
GTcggctgcaggtcactagtgag
>577_crp585-557 (29-mer)
gttctgatcctcgagcatcttaagaattc
>468_pCAPs_release_fw (25-mer)
gtcgaggaacgccaggttgcccact
>467_pCAPs_release_re (31-mer)
ATTTAAatcctgatgcgtttgtctgcacaga
>568_pCAPsAjiIR (22-mer)
GTGCcatctgtgcagacaaacg
>578_crp42-70 (29-mer)
gttcttgtctcattgccacattcataagt''')
p567
# Promoter amplified using p577 and p567
p = pcr(p577, p567, pYPKa_Z_prom)
# Gene amplified using p468 and p467
g = pcr(p468, p467, pYPKa_A_saat)
# Terminator amplified using p568 and p578
t = pcr(p568, p578, pYPKa_E_term)
# Yeast backbone vector read from a local file
pYPKpw = read("pYPKpw.gb")
from Bio.Restriction import ZraI
# Vector linearized with ZraI
pYPKpw_lin = pYPKpw.linearize(ZraI)
# Assembly simulation between four linear DNA fragments:
# plasmid, promoter, gene and terminator
# Only one circular product is formed (8769 bp)
asm = Assembly( (pYPKpw_lin, p, g, t) )
asm
# Inspect the only circular product
candidate = asm.assemble_circular()[0]
candidate.figure()
# Synchronize vectors
pYPK0_TDH3_FaPDC_TEF1 = candidate.synced(pYPKa)
# Write new vector to local file
pYPK0_TDH3_FaPDC_TEF1.write("pYPK0_TDH3_FaPDC_TPI1.gb")
from Bio.Restriction import PvuI
#PYTEST_VALIDATE_IGNORE_OUTPUT
%matplotlib inline
from pydna.gel import Gel, weight_standard_sample
standard = weight_standard_sample('1kb+_GeneRuler')
Gel( [ standard,
pYPKpw.cut(PvuI),
pYPK0_TDH3_FaPDC_TEF1.cut(PvuI) ] ).run()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the cell below, primers relevant to the Yeast Pathway Kit are read into six sequence objects. These are similar to the ones created in cell [3]
Step2: The final vector pYPKa_TDH3_FaPDC_TEF1 has 8769 bp.
|
2,967
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
logger = phoebe.logger(clevel='WARNING')
b = phoebe.default_binary()
b.set_value(qualifier='teff', component='primary', value=6500)
b.add_dataset('lc', compute_times=phoebe.linspace(0,1,101))
b.run_compute()
print(b.get_value(qualifier='fluxes', context='model'))
afig, mplfig = b.plot(show=True)
b = phoebe.default_binary()
b
b.filter(context='compute')
b.contexts
b.filter(context='compute').components
b.filter(context='compute').filter(component='primary')
b.filter(context='compute', component='primary').qualifiers
b.filter(context='compute', component='primary', qualifier='ntriangles')
b.filter(context='compute', component='primary', qualifier='ntriangles').get_parameter()
b.get_parameter(context='compute', component='primary', qualifier='ntriangles')
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').get_value()
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').get_description()
print(b.filter(context='compute', component='primary').info)
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').get_limits()
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').set_value(2000)
b.get_parameter(context='compute', component='primary', qualifier='ntriangles')
b.get_parameter(context='compute', component='primary', qualifier='distortion_method')
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_value()
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_description()
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_choices()
try:
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').set_value('blah')
except Exception as e:
print(e)
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').set_value('rotstar')
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_value()
b.filter(context='compute', component='primary')
b['primary@compute']
b['compute@primary']
b.filter(context='compute', component='primary', qualifier='distortion_method')
b['distortion_method@primary@compute']
b['distortion_method@primary@compute'] = 'roche'
print(b['distortion_method@primary@compute'])
print(b['value@distortion_method@primary@compute'])
print(b['description@distortion_method@primary@compute'])
b['compute'].twigs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's get started with some basic imports
Step2: If running in IPython notebooks, you may see a "ShimWarning" depending on the version of Jupyter you are using - this is safe to ignore.
Step3: All of these arguments are optional and will default to clevel='WARNING' if not provided. There is therefore no need to provide a filename if you don't provide a value for flevel.
Step4: This object holds all the parameters and their respective values. We'll see in this tutorial and the next tutorial on constraints how to search through these parameters and set their values.
Step5: Next, we need to define our datasets via b.add_dataset. This will be the topic of the following tutorial on datasets.
Step6: We'll then want to run our forward model to create a synthetic model of the observables defined by these datasets using b.run_compute, which will be the topic of the computing observables tutorial.
Step7: We can access the value of any parameter, including the arrays in the synthetic model just generated. To export arrays to a file, we could call b.export_arrays
Step8: We can then plot the resulting model with b.plot, which will be covered in the plotting tutorial.
Step9: And then lastly, if we wanted to solve the inverse problem and "fit" parameters to observational data, we may want to add distributions to our system so that we can run estimators, optimizers, or samplers.
Step10: The Bundle is just a collection of Parameter objects along with some callable methods. Here we can see that the default binary Bundle consists of over 100 individual parameters.
Step11: If we want to view or edit a Parameter in the Bundle, we first need to know how to access it. Each Parameter object has a number of tags which can be used to filter (similar to a database query). When filtering the Bundle, a ParameterSet is returned - this is essentially just a subset of the Parameters in the Bundle and can be further filtered until eventually accessing a single Parameter.
Step12: Here we filtered on the context tag for all Parameters with context='compute' (i.e. the options for computing a model). If we want to see all the available options for this tag in the Bundle, we can use the plural form of the tag as a property on the Bundle or any ParameterSet.
Step13: Although there is no strict hierarchy or order to the tags, it can be helpful to think of the context tag as the top-level tag, and it is often useful to filter by the appropriate context first.
Step14: This then tells us what can be used to filter further.
Step15: The qualifier tag is the shorthand name of the Parameter itself. If you don't know what you're looking for, it is often useful to list all the qualifiers of the Bundle or a given ParameterSet.
Step16: Now that we know the options for the qualifier within this filter, we can choose to filter on one of those. Let's filter by the 'ntriangles' qualifier.
Step17: Once we filter far enough to get to a single Parameter, we can use get_parameter to return the Parameter object itself (instead of a ParameterSet).
Step18: As a shortcut, get_parameter also takes filtering keywords. So the above line is also equivalent to the following
Step19: Each Parameter object contains several keys that provide information about that Parameter. The keys "description" and "value" are always included, with additional keys available depending on the type of Parameter.
Step20: We can also see a top-level view of the filtered parameters and descriptions (note
Step21: Since the Parameter for ntriangles is a FloatParameter, it also includes a key for the allowable limits.
Step22: In this case, we're looking at the Parameter called ntriangles with the component tag set to 'primary'. This Parameter therefore defines how many triangles should be created when creating the mesh for the star named 'primary'. By default, this is set to 1500 triangles, with allowable values above 100.
Step23: If we choose the distortion_method qualifier from that same ParameterSet, we'll see that it has a few different keys in addition to description and value.
Step24: Since the distortion_method Parameter is a ChoiceParameter, it contains a key for the allowable choices.
Step25: We can only set a value if it is contained within this list - if you attempt to set a non-valid value, an error will be raised.
Step26: Parameter types include
Step27: However, this dictionary-style twig access will never return a ParameterSet with a single Parameter; instead, it will return the Parameter itself. This can be seen in the different output between the following two lines
Step28: Because of this, this dictionary-style twig access can also set the value directly
Step29: And can even provide direct access to the keys/attributes of the Parameter (value, description, limits, etc)
Step30: As with the tags, you can call .twigs on any ParameterSet to see the "smallest unique twigs" of the contained Parameters
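The tag-based filtering described above can be illustrated with a toy sketch in plain Python. This is not PHOEBE's implementation; the parameter dicts and the `filter_params` helper are invented purely to show how repeated filtering narrows a collection of tagged parameters down to one.

```python
# Toy illustration of tag-based filtering (not PHOEBE itself): parameters
# carry tags, and repeated filtering narrows a collection down to one item.
params = [
    {"context": "compute", "component": "primary", "qualifier": "ntriangles", "value": 1500},
    {"context": "compute", "component": "secondary", "qualifier": "ntriangles", "value": 1500},
    {"context": "compute", "component": "primary", "qualifier": "distortion_method", "value": "roche"},
]

def filter_params(items, **tags):
    """Return the subset of items whose tags all match the given keywords."""
    return [p for p in items if all(p.get(k) == v for k, v in tags.items())]

subset = filter_params(params, context="compute", component="primary")
(param,) = filter_params(subset, qualifier="ntriangles")  # down to one "Parameter"
```

Filtering by context and component leaves two parameters; adding the qualifier tag isolates a single one, mirroring the filter → get_parameter workflow.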
|
2,968
|
<ASSISTANT_TASK:>
Python Code:
import faps as fp
import numpy as np
import pandas as pd
print("Created using FAPS version {}.".format(fp.__version__))
np.random.seed(867)
allele_freqs = np.random.uniform(0.3,0.5,50)
adults = fp.make_parents(100, allele_freqs, family_name='a')
progeny = fp.make_sibships(adults, 0, [1,2,3], 5, 'x')
mothers = adults.subset(progeny.mothers)
patlik = fp.paternity_array(progeny, mothers, adults, mu = 0.0015, missing_parents=0.01)
sc = fp.sibship_clustering(patlik)
%timeit fp.sibship_clustering(patlik, ndraws=100)
%timeit fp.sibship_clustering(patlik, ndraws=1000)
%timeit fp.sibship_clustering(patlik, ndraws=10000)
from scipy.cluster.hierarchy import dendrogram
import matplotlib.pyplot as plt
dendrogram(sc.linkage_matrix)
plt.show()
sc.partitions
sc.partitions[2]
print(sc.lik_partitions) # log likelihood of each partition
print(np.exp(sc.prob_partitions)) # probabilities of each partition
np.exp(sc.prob_partitions).sum()
sc.mlpartition
sibmat = sc.full_sib_matrix()
plt.pcolor(np.exp(sibmat))
plt.colorbar()
plt.show()
# Lists indexing sires and dams
sires = [1]*5 + [2]*4 + [3]*3 + [4]*2 +[5]*1
dam = [0] * len(sires)
np.random.seed(542)
allele_freqs = np.random.uniform(0.3,0.5,30)
adults = fp.make_parents(1000, allele_freqs, family_name='a')
progeny = fp.make_offspring(adults, dam_list=dam, sire_list=sires)
mothers = adults.subset(progeny.mothers)
patlik = fp.paternity_array(progeny, mothers, adults, mu= 0.0015, missing_parents=0.01)
sc = fp.sibship_clustering(patlik)
sc.nfamilies()
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
#ax.bar(np.arange(0.5, len(sc.nfamilies())+0.5), sc.nfamilies())
ax.bar(np.arange(1,16), sc.nfamilies())
ax.set_xlabel('Number of full sibships')
ax.set_ylabel('Posterior probability')
plt.show()
sc.family_size()
fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(np.arange(len(sires))+0.5, sc.family_size())
plt.show()
np.exp(
pd.DataFrame({
"Before clustering" : patlik.prob_array()[:, 1],
"After clustering" : sc.posterior_paternity_matrix()[:, 1]
})
)
sc.sires()
np.unique(patlik.fathers, return_counts=True)
sc.paternity()
progeny.fathers
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: FAPS uses information in a paternityArray to generate plausible full-sibship configurations. This information is stored as a sibshipCluster object, and is the basis for almost all inference about biological processes in FAPS.
Step2: We take the first individual as the mother and mate her to three males, to create three full sibships of five offspring. We then generate a paternityArray based on the genotype data.
Step3: It is straightforward to cluster offspring into full sibships. For now we'll stick with the default number of Monte Carlo draws.
Step4: The default number of Monte Carlo draws is 1000, which seems to work in most cases. I have dropped it to 100 in cases where I wanted to call sibship_clustering many times, such as in an MCMC loop, when finding every possible candidate wasn't a priority. You could also use more draws if you really wanted to be sure you had completely sampled the space of compatible candidate fathers. Speeds are unlikely to increase linearly with number of draws
Step5: We discussed this in figure 5 of the FAPS paper should you be interested in more on this.
Step6: Offspring individuals are labelled by their index in the array. Since full sibships are of size five we should expect to see clusters of {0,1,2,3,4}, {5,6,7,8,9} and {10,11,12,13,14}. This is indeed what we do see. What is difficult to see on the dendrogram are the branches connecting full siblings at the very bottom of the plot. If we bisect this dendrogram at different places on the y-axis we can infer different ways to partition the offspring into full siblings.
Step7: What is key about partition structures is that each symbol represents a unique but arbitrary family identifier. For example in the third row we see the true partition structure, with individuals grouped into three groups of five individuals.
Step8: Beyond denoting who is in a family with whom, the labels are arbitrary, with no meaningful order. This partition would be identical to [0,0,0,0,0,1,1,1,1,1,2,2,2,2,2] or [10,10,10,10,10,7,7,7,7,7,22,22,22,22,22] for example.
Step9: We also see that the first and second partitions have non-zero, but small likelihoods. Partitions 5-8 have negative infinity log likelihood - they are incompatible with the data. These partitions split up true full siblings, and there is no way to reconcile this with the data. In real world situations such partitions might have non-zero likelihoods if there were an unrelated candidate male compatible with one or more offspring through chance alone.
Step10: You can directly call the most likely partition. This is somewhat against the spirit of fractional analyses though...
Step11: For information about fine scale relationships, sc.full_sib_matrix() returns an $n \times n$ matrix, where $n$ is the number of offspring. Each element describes the (log) probability that a pair of individuals are full siblings, averaged over partition structures and paternity configurations. If we plot this using a heatmap you can clearly see the five full sibships jump out as blocks of yellow (>90% probability of being full siblings) against a sea of purple (near zero probability of being full siblings).
Step12: Note that real datasets seldom look this tidy!
Step13: Number of families
Step14: We could show the same information graphically. It's clear that almost all the probability density is around $x=5$ families.
Step15: Family size
Step16: Plotting this shows that we are roughly equally likely to observe a family of sizes one, two, three, four and five.
Step17: Identifying fathers
Step18: Since posterior_paternity_matrix returns a matrix, some wrangling is required to get the output into a format that is useful for a human to read. The following sections describe two helper functions that use posterior_paternity_matrix to create (1) a summary of probable mating events and (2) a summary of the paternity of each offspring.
Step19: The columns in the output tell you several bits of information. The most interesting of these are
Step20: The first five rows of the table above show that these fathers have posterior probabilities of paternity of one or close to one, and seem to have sired the correct numbers of offspring each. Of note is that although a_1 to a_4 have posterior probabilities of exactly one, the posterior probability for a_5 is slightly less than one. This is because the first four fathers sired multiple offspring, and there is shared information between siblings about the father, but this is not the case for father a_5.
Step21: In this case, the data are simulated, so we know who the real fathers are, and can print them with progeny.fathers. We see that the top candidate for all offspring is indeed the true father, with strong support (log probabilities close to zero).
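The relationship between log likelihoods of partitions and their normalised probabilities described above can be sketched with a log-sum-exp normalisation in plain Python. The log-likelihood values below are made up for illustration, and faps' internals may differ; the point is only that probabilities sum to one and impossible partitions get exactly zero.

```python
import math

# Hypothetical log likelihoods for four candidate partitions (made-up values);
# the last partition is incompatible with the data (log likelihood of -inf).
log_liks = [-10.2, -12.7, -9.8, float("-inf")]

# Log-sum-exp over the finite entries for numerical stability.
finite = [l for l in log_liks if l != float("-inf")]
m = max(finite)
log_norm = m + math.log(sum(math.exp(l - m) for l in finite))

# Incompatible partitions get probability exactly zero.
probs = [0.0 if l == float("-inf") else math.exp(l - log_norm) for l in log_liks]
```

By construction the probabilities sum to one, and the partition with the highest log likelihood receives the largest share.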
|
2,969
|
<ASSISTANT_TASK:>
Python Code:
import jax
import jax.numpy as jnp
x = jnp.arange(10)
print(x)
x
long_vector = jnp.arange(int(1e7))
%timeit jnp.dot(long_vector, long_vector).block_until_ready()
def sum_of_squares(x):
return jnp.sum(x**2)
sum_of_squares_dx = jax.grad(sum_of_squares)
x = jnp.asarray([1.0, 2.0, 3.0, 4.0])
print(sum_of_squares(x))
print(sum_of_squares_dx(x))
def sum_squared_error(x, y):
return jnp.sum((x-y)**2)
sum_squared_error_dx = jax.grad(sum_squared_error)
y = jnp.asarray([1.1, 2.1, 3.1, 4.1])
print(sum_squared_error_dx(x, y))
jax.grad(sum_squared_error, argnums=(0, 1))(x, y) # Find gradient wrt both x & y
jax.value_and_grad(sum_squared_error)(x, y)
def squared_error_with_aux(x, y):
return sum_squared_error(x, y), x-y
jax.grad(squared_error_with_aux)(x, y)
jax.grad(squared_error_with_aux, has_aux=True)(x, y)
import numpy as np
x = np.array([1, 2, 3])
def in_place_modify(x):
x[0] = 123
return None
in_place_modify(x)
x
in_place_modify(jnp.array(x)) # Raises error when we cast input to jnp.ndarray
def jax_in_place_modify(x):
return x.at[0].set(123)
y = jnp.array([1, 2, 3])
jax_in_place_modify(y)
y
import numpy as np
import matplotlib.pyplot as plt
xs = np.random.normal(size=(100,))
noise = np.random.normal(scale=0.1, size=(100,))
ys = xs * 3 - 1 + noise
plt.scatter(xs, ys);
def model(theta, x):
"""Computes wx + b on a batch of input x."""
w, b = theta
return w * x + b
def loss_fn(theta, x, y):
prediction = model(theta, x)
return jnp.mean((prediction-y)**2)
def update(theta, x, y, lr=0.1):
return theta - lr * jax.grad(loss_fn)(theta, x, y)
theta = jnp.array([1., 1.])
for _ in range(1000):
theta = update(theta, xs, ys)
plt.scatter(xs, ys)
plt.plot(xs, model(theta, xs))
w, b = theta
print(f"w: {w:<.2f}, b: {b:<.2f}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So far, everything is just like NumPy. A big appeal of JAX is that you don't need to learn a new API. Many common NumPy programs would run just as well in JAX if you substitute jnp for np. However, there are some important differences which we touch on at the end of this section.
Step2: One useful feature of JAX is that the same code can be run on different backends -- CPU, GPU and TPU.
Step3: Tip
Step4: Applying jax.grad to sum_of_squares will return a different function, namely the gradient of sum_of_squares with respect to its first parameter x.
Step5: You can think of jax.grad by analogy to the $\nabla$ operator from vector calculus. Given a function $f(x)$, $\nabla f$ represents the function that computes $f$'s gradient, i.e.
Step6: To find the gradient with respect to a different argument (or several), you can set argnums
Step7: Does this mean that when doing machine learning, we need to write functions with gigantic argument lists, with an argument for each model parameter array? No. JAX comes equipped with machinery for bundling arrays together in data structures called 'pytrees', on which more in a later guide. So, most often, use of jax.grad looks like this
Step8: which returns a tuple of, you guessed it, (value, grad). To be precise, for any f,
Step9: This is because jax.grad is only defined on scalar functions, and our new function returns a tuple. But we need to return a tuple to return our intermediate results! This is where has_aux comes in
Step10: has_aux signifies that the function returns a pair, (out, aux). It makes jax.grad ignore aux, passing it through to the user, while differentiating the function as if only out was returned.
Step11: The side-effectful function modifies its argument, but returns a completely unrelated value. The modification is a side-effect.
Step12: Helpfully, the error points us to JAX's side-effect-free way of doing the same thing via the jax.numpy.ndarray.at index update operators (note that the older jax.ops.index_* functions are deprecated). They are analogous to in-place modification by index, but create a new array with the corresponding modifications made
Step13: Note that the old array was untouched, so there is no side-effect
Step14: Side-effect-free code is sometimes called functionally pure, or just pure.
Step16: Therefore, our model is $\hat y(x; \theta) = wx + b$.
Step17: The loss function is $J(x, y; \theta) = (\hat y - y)^2$.
Step18: How do we optimize a loss function? Using gradient descent. At each update step, we will find the gradient of the loss w.r.t. the parameters, and take a small step in the direction of steepest descent
Step19: In JAX, it's common to define an update() function that is called every step, taking the current parameters as input and returning the new parameters. This is a natural consequence of JAX's functional nature, and is explained in more detail in The Problem of State.
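The update() pattern described above can be sketched without JAX by writing the gradient of the mean squared error out analytically. This is a toy re-implementation in plain Python, not jax.grad; the `loss_grad` helper and the data below are invented for illustration, but the functional structure (parameters in, new parameters out, no mutation) is the same.

```python
# Toy re-implementation of the update() pattern in plain Python: the gradient
# of mean((w*x + b - y)**2) is written out analytically instead of via jax.grad.
def loss_grad(theta, xs, ys):
    """Gradient of the mean squared error with respect to (w, b)."""
    w, b = theta
    n = len(xs)
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return dw, db

def update(theta, xs, ys, lr=0.1):
    """One gradient-descent step: return new parameters, never mutate old ones."""
    dw, db = loss_grad(theta, xs, ys)
    w, b = theta
    return (w - lr * dw, b - lr * db)

xs = [i / 10 - 1.0 for i in range(21)]  # 21 points in [-1, 1]
ys = [3 * x - 1 for x in xs]            # noise-free line y = 3x - 1
theta = (1.0, 1.0)
for _ in range(1000):
    theta = update(theta, xs, ys)
```

With noise-free data the fit recovers w close to 3 and b close to -1, just as in the JAX version above.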
|
2,970
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import poisson, norm, chi2
from scipy.optimize import minimize, brentq
import warnings; warnings.simplefilter('ignore') # ignore some numerical errors
E = np.logspace(0,2,30+1)
E = (E[1:] + E[:-1]) / 2 # bin centers
def signal_shape(E=E, mu=10, var=2):
return norm.pdf(E, mu, var)
def background_shape(E=E, gamma=-1):
return E**gamma
def generate_data(bkg_norm, sig_norm, sig_mu=10, sig_var=2, E=E, seed=None):
np.random.seed(seed)
return np.random.poisson(sig_norm * signal_shape(E, mu=sig_mu, var=sig_var) +
bkg_norm * background_shape(E))
def visualise_model(bkg_norm, sig_norm, sig_mu=10, ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
x = np.logspace(0,2,200)
b = bkg_norm*background_shape(x)
s = sig_norm*signal_shape(x, mu=sig_mu)
ax.plot(x, b, label='Background')
ax.plot(x, s, label='Signal')
ax.plot(x, s+b, color='black', linestyle='dotted', label='S+B')
N = generate_data(bkg_norm, sig_norm, sig_mu=sig_mu)
ax.errorbar(E, N, yerr=np.sqrt(N), fmt='o', color='grey', label='Data')
ax.set_ylim(0.4, 2*np.maximum(s.max(), b.max()))
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('E')
ax.set_ylabel('dN/dE')
ax.set_title(title)
ax.legend(frameon=False)
return N
fig, axes = plt.subplots(ncols=2, figsize=(10,4))
data_small_sig = visualise_model(bkg_norm=1000, sig_norm=10, ax=axes[0], title='Small signal');
data_large_sig = visualise_model(bkg_norm=1000, sig_norm=600, ax=axes[1], title='Large signal');
def lnLike(bkg_norm, sig_norm, data, gamma=-1, mu=10, var=2):
s = sig_norm*signal_shape(mu=mu, var=var)
b = bkg_norm*background_shape(gamma=gamma)
return np.log(poisson.pmf(data, mu=s+b)).sum()
def TS_sig(sig_norm, data, bestfit=None):
numerator = minimize(lambda b: -2*lnLike(b, sig_norm, data), 1e3)
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
return numerator.fun - bestfit.fun
def visualise_TS_sig(data, siglim=(0, 1000), ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
x = np.linspace(*siglim, 100)
ts = np.array([TS_sig(s, data, bestfit=bestfit) for s in x])
ax.plot(x, ts)
ax.set_ylim(0,10)
ax.set_xlim(*siglim)
ax.set_title(title)
ax.set_xlabel('Signal Normalisation')
ax.set_ylabel('TS')
fig, axes = plt.subplots(ncols=2, figsize=(10,4))
visualise_TS_sig(data_small_sig, siglim=(-90,130), ax=axes[0], title='Small signal')
axes[0].scatter(10, 0.5, color='black', marker='x')
visualise_TS_sig(data_large_sig, siglim=(400,720), ax=axes[1], title='Large signal');
axes[1].scatter(600, 0.5, color='black', marker='x');
def TS_2d(bkg_norm, sig_norm, data, bestfit=None):
numerator = -2*lnLike(bkg_norm, sig_norm, data)
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
return numerator - bestfit.fun
def visualise_TS_2d(data, siglim=(-100, 1000), bkglim=(800,1200), ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
bkg_norms = np.linspace(*bkglim, 100)
sig_norms = np.linspace(*siglim, 100)
ts = [[TS_2d(b, s, data, bestfit=bestfit) for s in sig_norms] for b in bkg_norms]
X, Y = np.meshgrid(bkg_norms, sig_norms)
Z = np.array(ts).T
r = ax.contourf(X, Y, Z, 100, cmap='Blues_r')
plt.colorbar(r, label='TS', ax=ax)
ax.contour(X, Y, Z, colors='white', levels=[5.991])
ax.set_xlim(*bkglim)
ax.set_ylim(*siglim)
ax.set_xlabel('Background Normalisation')
ax.set_ylabel('Signal Normalisation')
ax.set_title(title)
fig, axes = plt.subplots(ncols=2, figsize=(12,4))
visualise_TS_2d(data_small_sig, ax=axes[0], title='Small signal')
axes[0].scatter(1000, 10, color='white', marker='x')
visualise_TS_2d(data_large_sig, ax=axes[1], title='Large signal')
axes[1].scatter(1000, 600, color='white', marker='x');
from functools import partial
def threshold(cl, cdf):
return brentq(lambda x: cl-cdf(x), 0, 10)
threshold_1d = partial(threshold, cdf=partial(chi2.cdf, df=1))
threshold_2d = partial(threshold, cdf=partial(chi2.cdf, df=2))
print('68%% and 95%% thresholds for 1D: %.3f and %.3f'
% tuple([threshold_1d(x) for x in [0.68, 0.95]]))
print('68%% and 95%% thresholds for 2D: %.3f and %.3f'
% tuple([threshold_2d(x) for x in [0.68, 0.95]]))
def confidence_interval(data, CL=0.68, bestfit=None,
ts=TS_sig, threshold_fun=threshold_1d):
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
threshold = threshold_fun(CL)
# Simple way to find starting points for the root finder.
# We need (a, b) for which TS-ts_threshold have different sign.
step = 10+bestfit.x[1]/2
u = bestfit.x[1] + step
while ts(u, data, bestfit=bestfit) <= threshold:
u += step
# The TS tend to be symmetrical which we can use to do a better initial guess.
l = 2*bestfit.x[1] - u
while ts(l, data, bestfit=bestfit) <= threshold:
l -= step
upper_bound = brentq(lambda x: ts(x, data, bestfit=bestfit) - threshold,
bestfit.x[1], u)
lower_bound = brentq(lambda x: ts(x, data, bestfit=bestfit) - threshold,
l, bestfit.x[1])
return lower_bound, upper_bound
print(confidence_interval(data_small_sig))
print(confidence_interval(data_large_sig))
def coverage_check(sig_norm, CL=0.68, bkg_norm=1000, n=100):
covered = 0
for _ in range(n):
d = generate_data(bkg_norm, sig_norm)
l, u = confidence_interval(d, CL=CL)
if l < sig_norm and u > sig_norm:
covered += 1
return covered/n
print('Coverage small signal: %.3f' % coverage_check(10))
print('Coverage large signal: %.3f' % coverage_check(600))
def mc(sig_norm, bkg_norm=1000, n=100):
ts = []
for _ in range(n):
d = generate_data(bkg_norm, sig_norm)
bf = minimize(lambda x: -2*lnLike(x[0], x[1], d), (1e3,1e3))
ts.append(minimize(lambda s: TS_sig(s, d, bestfit=bf), sig_norm).fun)
return np.array(ts)
mc_small_signal = mc(sig_norm=10)
x = np.linspace(np.min(mc_small_signal), np.max(mc_small_signal), 100)
plt.hist(mc_small_signal, bins=20, density=True, alpha=0.5, label='MC')
plt.plot(x, chi2.pdf(x, df=1), lw=4, label='chi2 df=1')
plt.legend(frameon=False)
plt.xlabel('TS');
def TS_upper_limit(sig_norm, data, bestfit=None):
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
if sig_norm < bestfit.x[1]:
return 0.0
else:
return TS_sig(sig_norm, data, bestfit=bestfit)
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data_small_sig), (1e3,1e3))
x = np.linspace(-100, 100, 30)
y = [TS_sig(s, data_small_sig, bestfit=bestfit) for s in x]
plt.plot(x, y);
y = [TS_upper_limit(s, data_small_sig, bestfit=bestfit) for s in x]
plt.plot(x, y)
plt.xlabel('Signal Normalisation')
plt.ylabel('TS');
threshold_ul = partial(threshold, cdf = lambda x: 0.5 + 0.5*chi2.cdf(x, df=1))
print('Threshold for 90%% CL upper limit: %.3f' % threshold_ul(0.90))
def upper_limit(data, bestfit=None, CL=0.90):
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
threshold = threshold_ul(CL)
return brentq(lambda x: TS_upper_limit(x, data, bestfit=bestfit)-threshold,
-1000, 1000)
print('90%% CL upper limit for small signal: %.2f' % upper_limit(data_small_sig))
print('90%% CL upper limit for large signal: %.2f' % upper_limit(data_large_sig))
def coverage_check_ul(sig_norm, CL=0.90, bkg_norm=1000, n=100):
upper_limits = []
for _ in range(n):
d = generate_data(bkg_norm, sig_norm)
upper_limits.append(upper_limit(d, CL=CL))
upper_limits = np.array(upper_limits)
not_excluded = (upper_limits >= sig_norm).sum()
return not_excluded/n
print('Coverage small signal: %.3f' % coverage_check_ul(10))
print('Coverage large signal: %.3f' % coverage_check_ul(600))
def find_bestfit(data):
N = 20
bkgs = np.linspace(0, 2000, N)
sigs = np.linspace(0, 2000, N)
pos = np.linspace(0, 100, N)
points = np.array(np.meshgrid(bkgs, sigs, pos)).T.reshape(-1,3)
ts = list(map(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), points))
start = points[np.argmin(ts),:]
return minimize(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), start)
def TS_pos(sig_norm, sig_pos, data, bestfit=None):
numerator = minimize(lambda b: -2*lnLike(b, sig_norm, data, mu=sig_pos), 1e3)
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]),
(1e3, 1e3, 1e1))
return numerator.fun - bestfit.fun
def visualise_TS_pos(data, signormlim=(500, 1500), sigposlim=(1,100), ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
def starting_point(data):
N = 20
bkgs = np.linspace(0, 2000, N)
sigs = np.linspace(0, 2000, N)
pos = np.linspace(0, 100, N)
points = np.array(np.meshgrid(bkgs, sigs, pos)).T.reshape(-1,3)
ts = list(map(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), points))
return points[np.argmin(ts),:]
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), starting_point(data))
print(bestfit)
sig_pos = np.logspace(*np.log10(sigposlim), 50)
sig_norms = np.linspace(*signormlim, 100)
ts = [[TS_pos(n, p, data, bestfit=bestfit) for n in sig_norms] for p in sig_pos]
X, Y = np.meshgrid(sig_pos, sig_norms)
Z = np.array(ts).T
r = ax.contourf(X, Y, Z, 100, cmap='Blues_r')
plt.colorbar(r, label='TS', ax=ax)
ax.contour(X, Y, Z, colors='white', levels=[5.991])
ax.set_xlim(*sigposlim)
ax.set_ylim(*signormlim)
ax.set_xscale('log')
ax.set_xlabel('Signal Position')
ax.set_ylabel('Signal Normalisation')
ax.set_title(title)
fig, axes = plt.subplots(ncols=3, nrows=2, figsize=(14,8))
for i, p in enumerate([3, 10, 20]):
d = visualise_model(bkg_norm=1000, sig_norm=1000, sig_mu=p,
ax=axes[0,i], title='Peak at E=%i' % p)
visualise_TS_pos(d, ax=axes[1,i], title='')
def find_starting_point(data):
N = 20
bkgs = np.linspace(0, 2000, N)
sigs = np.linspace(0, 2000, N)
pos = np.linspace(0, 100, N)
points = np.array(np.meshgrid(bkgs, sigs, pos)).T.reshape(-1,3)
ts = list(map(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), points))
i = np.argmin(ts)
return (i, points[i,:], np.min(ts))
find_starting_point(d)
#bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], d, mu=x[2]), (1e3,1e3,2e1),
# bounds=[(0, 2000)]*2+[(1,100)])
#print(bestfit)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need something to place a limit on, and in our first example we are looking for a bump in an otherwise smooth background. This could be a gamma-ray line from dark matter annihilation, or maybe a particle resonance at the LHC.
Step2: We define a function to visualise two realisations of this model.
Step3: On the left the signal is buried below the background and is completely invisible, and placing an upper exclusion limit on the signal normalisation $\theta_s$ would be natural. In the right panel however the signal is large enough to be visible in the data, and an upper limit would not be the thing to do.
Step4: Our test statistic of choice is the logarithm of the maximum likelihood ratio, which we defined as
Step5: The optional bestfit argument is there to spare us the burden of finding the minima every time we evaluate the test statistic. As always, it is useful to visualise, and as examples we take the two realisations we already plotted.
Step6: Where the black x marks the true parameter values of each of the data sets. We see that the test statistic is small in the region of the true values, though not perfectly so, since we have statistical noise.
Step7: where the white x marks the true parameter values of each of the data sets. We see that, as expected, the test statistic is smaller closer to the true value.
Step8: With this we can now find the 68% CL intervals of $\theta_s$ for our two examples.
Step9: Check coverage
Step10: Not bad, with more experiments this should be better. Try it yourself!
Step11: As we can see, in this case the empirical distribution is well-described by a $\chi_{k=1}^2$ distribution.
Step12: This is now distributed as
Step13: We can now use a root finder to find for which $\theta_s$ our new TS has this threshold value, and this is our upper limit!
Step14: Check coverage for the upper limit
Step15: Again, not bad for the number of trials.
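The chi-squared threshold lookup used throughout the section can be sketched with only the standard library, using the identity that the CDF of a chi-squared distribution with one degree of freedom is erf(sqrt(x/2)), inverted by bisection. This is a sketch of the same computation scipy's chi2.cdf and brentq perform; the `threshold_df1` helper and its bracket values are choices made here for illustration.

```python
import math

def chi2_df1_cdf(x):
    # CDF of a chi-squared distribution with one degree of freedom:
    # P(X <= x) = erf(sqrt(x / 2)).
    return math.erf(math.sqrt(x / 2.0))

def threshold_df1(cl, lo=0.0, hi=50.0, tol=1e-10):
    """Find x such that chi2_df1_cdf(x) == cl, by bisection on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if chi2_df1_cdf(mid) < cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

This reproduces the familiar values: roughly 0.99 for a 68% interval and 3.84 for 95%, matching the thresholds printed by the scipy-based code above.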
|
2,971
|
<ASSISTANT_TASK:>
Python Code:
#@title Imports & Utils
# Imports
!pip install -q git+https://www.github.com/google/jax-md
import numpy as onp
from jax.config import config ; config.update('jax_enable_x64', True)
import jax.numpy as np
from jax import random
from jax import jit
from jax import vmap
from jax import lax
vectorize = np.vectorize
from functools import partial
from collections import namedtuple
import base64
import IPython
from google.colab import output
import os
from jax_md import space, smap, energy, minimize, quantity, simulate, partition, util
from jax_md.util import f32
# Plotting
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
dark_color = [56 / 256] * 3
light_color = [213 / 256] * 3
axis_color = 'white'
def format_plot(x='', y='', grid=True):
ax = plt.gca()
ax.spines['bottom'].set_color(axis_color)
ax.spines['top'].set_color(axis_color)
ax.spines['right'].set_color(axis_color)
ax.spines['left'].set_color(axis_color)
ax.tick_params(axis='x', colors=axis_color)
ax.tick_params(axis='y', colors=axis_color)
ax.yaxis.label.set_color(axis_color)
ax.xaxis.label.set_color(axis_color)
ax.set_facecolor(dark_color)
plt.grid(grid)
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().patch.set_facecolor(dark_color)
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
# Progress Bars
from IPython.display import HTML, display
import time
def ProgressIter(iter_fun, iter_len=0):
if not iter_len:
iter_len = len(iter_fun)
out = display(progress(0, iter_len), display_id=True)
for i, it in enumerate(iter_fun):
yield it
out.update(progress(i + 1, iter_len))
def progress(value, max):
return HTML('''
<progress
value='{value}'
max='{max}'
style='width: 45%'
>
{value}
</progress>
'''.format(value=value, max=max))
normalize = lambda v: v / np.linalg.norm(v, axis=1, keepdims=True)
# Rendering
renderer_code = IPython.display.HTML('''
<canvas id="canvas"></canvas>
<script>
Rg = null;
Ng = null;
var current_scene = {
R: null,
N: null,
is_loaded: false,
frame: 0,
frame_count: 0,
boid_vertex_count: 0,
boid_buffer: [],
predator_vertex_count: 0,
predator_buffer: [],
disk_vertex_count: 0,
disk_buffer: null,
box_size: 0
};
google.colab.output.setIframeHeight(0, true, {maxHeight: 5000});
async function load_simulation() {
buffer_size = 400;
max_frame = 800;
result = await google.colab.kernel.invokeFunction(
'notebook.GetObstacles', [], {});
data = result.data['application/json'];
if(data.hasOwnProperty('Disk')) {
current_scene = put_obstacle_disk(current_scene, data.Disk);
}
for (var i = 0 ; i < max_frame ; i += buffer_size) {
console.log(i);
result = await google.colab.kernel.invokeFunction(
'notebook.GetBoidStates', [i, i + buffer_size], {});
data = result.data['application/json'];
current_scene = put_boids(current_scene, data);
}
current_scene.is_loaded = true;
result = await google.colab.kernel.invokeFunction(
'notebook.GetPredators', [], {});
data = result.data['application/json'];
if (data.hasOwnProperty('R'))
current_scene = put_predators(current_scene, data);
result = await google.colab.kernel.invokeFunction(
'notebook.GetSimulationInfo', [], {});
current_scene.box_size = result.data['application/json'].box_size;
}
function initialize_gl() {
const canvas = document.getElementById("canvas");
canvas.width = 640;
canvas.height = 640;
const gl = canvas.getContext("webgl2");
if (!gl) {
alert('Unable to initialize WebGL.');
return;
}
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
gl.clearColor(0.2, 0.2, 0.2, 1.0);
gl.enable(gl.DEPTH_TEST);
const shader_program = initialize_shader(
gl, VERTEX_SHADER_SOURCE_2D, FRAGMENT_SHADER_SOURCE_2D);
const shader = {
program: shader_program,
attribute: {
vertex_position: gl.getAttribLocation(shader_program, 'vertex_position'),
},
uniform: {
screen_position: gl.getUniformLocation(shader_program, 'screen_position'),
screen_size: gl.getUniformLocation(shader_program, 'screen_size'),
color: gl.getUniformLocation(shader_program, 'color'),
},
};
gl.useProgram(shader_program);
const half_width = 200.0;
gl.uniform2f(shader.uniform.screen_position, half_width, half_width);
gl.uniform2f(shader.uniform.screen_size, half_width, half_width);
gl.uniform4f(shader.uniform.color, 0.9, 0.9, 1.0, 1.0);
return {gl: gl, shader: shader};
}
var loops = 0;
function update_frame() {
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
if (!current_scene.is_loaded) {
window.requestAnimationFrame(update_frame);
return;
}
var half_width = current_scene.box_size / 2.;
gl.uniform2f(shader.uniform.screen_position, half_width, half_width);
gl.uniform2f(shader.uniform.screen_size, half_width, half_width);
if (current_scene.frame >= current_scene.frame_count) {
if (!current_scene.is_loaded) {
window.requestAnimationFrame(update_frame);
return;
}
loops++;
current_scene.frame = 0;
}
gl.enableVertexAttribArray(shader.attribute.vertex_position);
gl.bindBuffer(gl.ARRAY_BUFFER, current_scene.boid_buffer[current_scene.frame]);
gl.uniform4f(shader.uniform.color, 0.0, 0.35, 1.0, 1.0);
gl.vertexAttribPointer(
shader.attribute.vertex_position,
2,
gl.FLOAT,
false,
0,
0
);
gl.drawArrays(gl.TRIANGLES, 0, current_scene.boid_vertex_count);
if(current_scene.predator_buffer.length > 0) {
gl.bindBuffer(gl.ARRAY_BUFFER, current_scene.predator_buffer[current_scene.frame]);
gl.uniform4f(shader.uniform.color, 1.0, 0.35, 0.35, 1.0);
gl.vertexAttribPointer(
shader.attribute.vertex_position,
2,
gl.FLOAT,
false,
0,
0
);
gl.drawArrays(gl.TRIANGLES, 0, current_scene.predator_vertex_count);
}
if(current_scene.disk_buffer) {
gl.bindBuffer(gl.ARRAY_BUFFER, current_scene.disk_buffer);
gl.uniform4f(shader.uniform.color, 0.9, 0.9, 1.0, 1.0);
gl.vertexAttribPointer(
shader.attribute.vertex_position,
2,
gl.FLOAT,
false,
0,
0
);
gl.drawArrays(gl.TRIANGLES, 0, current_scene.disk_vertex_count);
}
current_scene.frame++;
if ((current_scene.frame_count > 1 && loops < 5) ||
(current_scene.frame_count == 1 && loops < 240))
window.requestAnimationFrame(update_frame);
if (current_scene.frame_count > 1 && loops == 5 && current_scene.frame < current_scene.frame_count - 1)
window.requestAnimationFrame(update_frame);
}
function put_boids(scene, boids) {
const R = decode(boids['R']);
const R_shape = boids['R_shape'];
const theta = decode(boids['theta']);
const theta_shape = boids['theta_shape'];
function index(i, b, xy) {
return i * R_shape[1] * R_shape[2] + b * R_shape[2] + xy;
}
var steps = R_shape[0];
var boids = R_shape[1];
var dimensions = R_shape[2];
if(dimensions != 2) {
alert('Can only deal with two-dimensional data.')
}
// First flatten the data.
var buffer_data = new Float32Array(boids * 6);
var size = 8.0;
for (var i = 0 ; i < steps ; i++) {
var buffer = gl.createBuffer();
for (var b = 0 ; b < boids ; b++) {
var xi = index(i, b, 0);
var yi = index(i, b, 1);
var ti = i * boids + b;
var Nx = size * Math.cos(theta[ti]); //N[xi];
var Ny = size * Math.sin(theta[ti]); //N[yi];
buffer_data.set([
R[xi] + Nx, R[yi] + Ny,
R[xi] - Nx - 0.5 * Ny, R[yi] - Ny + 0.5 * Nx,
R[xi] - Nx + 0.5 * Ny, R[yi] - Ny - 0.5 * Nx,
], b * 6);
}
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, buffer_data, gl.STATIC_DRAW);
scene.boid_buffer.push(buffer);
}
scene.boid_vertex_count = boids * 3;
scene.frame_count += steps;
return scene;
}
function put_predators(scene, boids) {
// TODO: Unify this with the put_boids function.
const R = decode(boids['R']);
const R_shape = boids['R_shape'];
const theta = decode(boids['theta']);
const theta_shape = boids['theta_shape'];
function index(i, b, xy) {
return i * R_shape[1] * R_shape[2] + b * R_shape[2] + xy;
}
var steps = R_shape[0];
var boids = R_shape[1];
var dimensions = R_shape[2];
if(dimensions != 2) {
alert('Can only deal with two-dimensional data.')
}
// First flatten the data.
var buffer_data = new Float32Array(boids * 6);
var size = 18.0;
for (var i = 0 ; i < steps ; i++) {
var buffer = gl.createBuffer();
for (var b = 0 ; b < boids ; b++) {
var xi = index(i, b, 0);
var yi = index(i, b, 1);
var ti = theta_shape[1] * i + b;
var Nx = size * Math.cos(theta[ti]);
var Ny = size * Math.sin(theta[ti]);
buffer_data.set([
R[xi] + Nx, R[yi] + Ny,
R[xi] - Nx - 0.5 * Ny, R[yi] - Ny + 0.5 * Nx,
R[xi] - Nx + 0.5 * Ny, R[yi] - Ny - 0.5 * Nx,
], b * 6);
}
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, buffer_data, gl.STATIC_DRAW);
scene.predator_buffer.push(buffer);
}
scene.predator_vertex_count = boids * 3;
return scene;
}
function put_obstacle_disk(scene, disk) {
const R = decode(disk.R);
const R_shape = disk.R_shape;
const radius = decode(disk.D);
const radius_shape = disk.D_shape;
const disk_count = R_shape[0];
const dimensions = R_shape[1];
if (dimensions != 2) {
alert('Can only handle two-dimensional data.');
}
if (radius_shape[0] != disk_count) {
alert('Inconsistent disk radius count found.');
}
const segments = 32;
function index(o, xy) {
return o * R_shape[1] + xy;
}
// TODO(schsam): Use index buffers here.
var buffer_data = new Float32Array(disk_count * segments * 6);
for (var i = 0 ; i < disk_count ; i++) {
var xi = index(i, 0);
var yi = index(i, 1);
for (var s = 0 ; s < segments ; s++) {
const th = 2 * s / segments * Math.PI;
const th_p = 2 * (s + 1) / segments * Math.PI;
const rad = radius[i] * 0.8;
buffer_data.set([
R[xi], R[yi],
R[xi] + rad * Math.cos(th), R[yi] + rad * Math.sin(th),
R[xi] + rad * Math.cos(th_p), R[yi] + rad * Math.sin(th_p),
], i * segments * 6 + s * 6);
}
}
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, buffer_data, gl.STATIC_DRAW);
scene.disk_vertex_count = disk_count * segments * 3;
scene.disk_buffer = buffer;
return scene;
}
// SHADER CODE
const VERTEX_SHADER_SOURCE_2D = `
// Vertex Shader Program.
attribute vec2 vertex_position;
uniform vec2 screen_position;
uniform vec2 screen_size;
void main() {
vec2 v = (vertex_position - screen_position) / screen_size;
gl_Position = vec4(v, 0.0, 1.0);
}
`;
const FRAGMENT_SHADER_SOURCE_2D = `
precision mediump float;
uniform vec4 color;
void main() {
gl_FragColor = color;
}
`;
function initialize_shader(
gl, vertex_shader_source, fragment_shader_source) {
const vertex_shader = compile_shader(
gl, gl.VERTEX_SHADER, vertex_shader_source);
const fragment_shader = compile_shader(
gl, gl.FRAGMENT_SHADER, fragment_shader_source);
const shader_program = gl.createProgram();
gl.attachShader(shader_program, vertex_shader);
gl.attachShader(shader_program, fragment_shader);
gl.linkProgram(shader_program);
if (!gl.getProgramParameter(shader_program, gl.LINK_STATUS)) {
alert(
'Unable to initialize shader program: ' +
gl.getProgramInfoLog(shader_program)
);
return null;
}
return shader_program;
}
function compile_shader(gl, type, source) {
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
alert('An error occurred compiling shader: ' + gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
// SERIALIZATION UTILITIES
function decode(sBase64, nBlocksSize) {
var chrs = atob(atob(sBase64));
var array = new Uint8Array(new ArrayBuffer(chrs.length));
for(var i = 0 ; i < chrs.length ; i++) {
array[i] = chrs.charCodeAt(i);
}
return new Float32Array(array.buffer);
}
// RUN CELL
load_simulation();
gl_and_shader = initialize_gl();
var gl = gl_and_shader.gl;
var shader = gl_and_shader.shader;
update_frame();
</script>
''')
def encode(R):
return base64.b64encode(onp.array(R, onp.float32).tobytes())
def render(box_size, states, obstacles=None, predators=None):
if isinstance(states, Boids):
R = np.reshape(states.R, (1,) + states.R.shape)
theta = np.reshape(states.theta, (1,) + states.theta.shape)
elif isinstance(states, list):
if all([isinstance(x, Boids) for x in states]):
R, theta = zip(*states)
R = onp.stack(R)
theta = onp.stack(theta)
if isinstance(predators, list):
R_predators, theta_predators, *_ = zip(*predators)
R_predators = onp.stack(R_predators)
theta_predators = onp.stack(theta_predators)
def get_boid_states(start, end):
R_, theta_ = R[start:end], theta[start:end]
return IPython.display.JSON(data={
"R_shape": R_.shape,
"R": encode(R_),
"theta_shape": theta_.shape,
"theta": encode(theta_)
})
output.register_callback('notebook.GetBoidStates', get_boid_states)
def get_obstacles():
if obstacles is None:
return IPython.display.JSON(data={})
else:
return IPython.display.JSON(data={
'Disk': {
'R': encode(obstacles.R),
'R_shape': obstacles.R.shape,
'D': encode(obstacles.D),
'D_shape': obstacles.D.shape
}
})
output.register_callback('notebook.GetObstacles', get_obstacles)
def get_predators():
if predators is None:
return IPython.display.JSON(data={})
else:
return IPython.display.JSON(data={
'R': encode(R_predators),
'R_shape': R_predators.shape,
'theta': encode(theta_predators),
'theta_shape': theta_predators.shape
})
output.register_callback('notebook.GetPredators', get_predators)
def get_simulation_info():
return IPython.display.JSON(data={
'frames': R.shape[0],
'box_size': box_size
})
output.register_callback('notebook.GetSimulationInfo', get_simulation_info)
return renderer_code
Boids = namedtuple('Boids', ['R', 'theta'])
# Simulation Parameters:
box_size = 800.0 # A float specifying the side-length of the box.
boid_count = 200 # An integer specifying the number of boids.
dim = 2 # The spatial dimension in which we are simulating.
# Create RNG state to draw random numbers (see LINK).
rng = random.PRNGKey(0)
# Define periodic boundary conditions.
displacement, shift = space.periodic(box_size)
# Initialize the boids.
rng, R_rng, theta_rng = random.split(rng, 3)
boids = Boids(
R = box_size * random.uniform(R_rng, (boid_count, dim)),
theta = random.uniform(theta_rng, (boid_count,), maxval=2. * np.pi)
)
display(render(box_size, boids))
@vmap
def normal(theta):
return np.array([np.cos(theta), np.sin(theta)])
def dynamics(energy_fn, dt, speed):
@jit
def update(_, state):
R, theta = state['boids']
dstate = quantity.force(energy_fn)(state)
dR, dtheta = dstate['boids']
n = normal(state['boids'].theta)
state['boids'] = Boids(shift(R, dt * (speed * n + dR)),
theta + dt * dtheta)
return state
return update
update = dynamics(energy_fn=lambda state: 0., dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
def align_fn(dR, N_1, N_2, J_align, D_align, alpha):
dR = lax.stop_gradient(dR)
dr = space.distance(dR) / D_align
energy = J_align / alpha * (1. - dr) ** alpha * (1 - np.dot(N_1, N_2)) ** 2
return np.where(dr < 1.0, energy, 0.)
#@title Alignment Energy
N_1 = np.array([1.0, 0.0])
angles = np.linspace(0, np.pi, 60)
N_2 = vmap(lambda theta: np.array([np.cos(theta), np.sin(theta)]))(angles)
distances = np.linspace(0, 1, 5)
dRs = vmap(lambda r: np.array([r, 0.]))(distances)
fn = partial(align_fn, J_align=1., D_align=1., alpha=2.)
energy = vmap(vmap(fn, (None, None, 0)), (0, None, None))(dRs, N_1, N_2)
for d, e in zip(distances, energy):
plt.plot(angles, e, label='r = {}'.format(d), linewidth=3)
plt.xlim([0, np.pi])
format_plot('$\\theta$', '$E(r, \\theta)$')
plt.legend()
finalize_plot()
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=0.5, D_align=45., alpha=3.)
# Map the align energy over all pairs of boids. While both applications
# of vmap map over the displacement matrix, each acts on only one normal.
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
return 0.5 * np.sum(E_align(dR, N, N))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
def avoid_fn(dR, J_avoid, D_avoid, alpha):
dr = space.distance(dR) / D_avoid
return np.where(dr < 1.,
J_avoid / alpha * (1 - dr) ** alpha,
0.)
#@title Avoidance Energy
dr = np.linspace(0, 2., 60)
dR = vmap(lambda r: np.array([0., r]))(dr)
Es = vmap(partial(avoid_fn, J_avoid=1., D_avoid=1., alpha=3.))(dR)
plt.plot(dr, Es, 'r', linewidth=3)
plt.xlim([0, 2])
format_plot('$r$', '$E$')
finalize_plot()
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=1., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
# New Avoidance Code
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
#
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
return 0.5 * np.sum(E_align(dR, N, N) + E_avoid(dR))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
def cohesion_fn(dR, N, J_cohesion, D_cohesion, eps=1e-7):
dR = lax.stop_gradient(dR)
dr = np.linalg.norm(dR, axis=-1, keepdims=True)
mask = dr < D_cohesion
N_com = np.where(mask, 1.0, 0)
dR_com = np.where(mask, dR, 0)
dR_com = np.sum(dR_com, axis=1) / (np.sum(N_com, axis=1) + eps)
dR_com = dR_com / np.linalg.norm(dR_com + eps, axis=1, keepdims=True)
return f32(0.5) * J_cohesion * (1 - np.sum(dR_com * N, axis=1)) ** 2
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=1., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
# New Cohesion Code
E_cohesion = partial(cohesion_fn, J_cohesion=0.005, D_cohesion=40.)
#
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
return (0.5 * np.sum(E_align(dR, N, N) + E_avoid(dR)) +
np.sum(E_cohesion(dR, N)))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=1., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
E_cohesion = partial(cohesion_fn, J_cohesion=0.1, D_cohesion=40.)  # Raised from 0.005.
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
return (0.5 * np.sum(E_align(dR, N, N) + E_avoid(dR)) +
np.sum(E_cohesion(dR, N)))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
def field_of_view_mask(dR, N, theta_min, theta_max):
dr = space.distance(dR)
dR_hat = dR / dr
ctheta = np.dot(dR_hat, N)
# Cosine is monotonically decreasing on [0, pi].
return np.logical_and(ctheta > np.cos(theta_max),
ctheta < np.cos(theta_min))
def cohesion_fn(dR, N, mask, # New mask parameter.
J_cohesion, D_cohesion, eps=1e-7):
dR = lax.stop_gradient(dR)
dr = space.distance(dR)
mask = np.reshape(mask, mask.shape + (1,))
dr = np.reshape(dr, dr.shape + (1,))
# Updated Masking Code
mask = np.logical_and(dr < D_cohesion, mask)
#
N_com = np.where(mask, 1.0, 0)
dR_com = np.where(mask, dR, 0)
dR_com = np.sum(dR_com, axis=1) / (np.sum(N_com, axis=1) + eps)
dR_com = dR_com / np.linalg.norm(dR_com + eps, axis=1, keepdims=True)
return f32(0.5) * J_cohesion * (1 - np.sum(dR_com * N, axis=1)) ** 2
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
# New FOV code.
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
# As before, we have to vmap twice over the displacement matrix, but only once
# over the normal.
fov = vmap(vmap(fov, (0, None)))
mask = fov(dR, N)
#
return (0.5 * np.sum(E_align(dR, N, N) * mask + E_avoid(dR)) +
np.sum(E_cohesion(dR, N, mask)))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
Obstacle = namedtuple('Obstacle', ['R', 'D'])
N_obstacle = 5
R_rng, D_rng = random.split(random.PRNGKey(5))
obstacles = Obstacle(
box_size * random.uniform(R_rng, (N_obstacle, 2)),
random.uniform(D_rng, (N_obstacle,), minval=30.0, maxval=100.0)
)
def obstacle_fn(dR, N, D, J_obstacle):
dr = space.distance(dR)
dR = dR / np.reshape(dr, dr.shape + (1,))
return np.where(dr < D,
J_obstacle * (1 - dr / D) ** 2 * (1 + np.dot(N, dR)) ** 2,
0.)
def energy_fn(state):
boids = state['boids']
d = space.map_product(displacement)
E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)
dR = d(boids.R, boids.R)
N = normal(boids.theta)
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
fov = vmap(vmap(fov, (0, None)))
mask = fov(dR, N)
# New obstacle code
obstacles = state['obstacles']
dR_o = -d(boids.R, obstacles.R)
D = obstacles.D
E_obstacle = partial(obstacle_fn, J_obstacle=1000.)
E_obstacle = vmap(vmap(E_obstacle, (0, 0, None)), (0, None, 0))
#
return (0.5 * np.sum(E_align(dR, N, N) * mask + E_avoid(dR)) +
np.sum(E_cohesion(dR, N, mask)) + np.sum(E_obstacle(dR_o, N, D)))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids,
'obstacles': obstacles
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer, obstacles))
Predator = namedtuple('Predator', ['R', 'theta'])
predators = Predator(R=np.array([[box_size / 2., box_size /2.]]),
theta=np.array([0.0]))
def boid_predator_fn(R_boid, N_boid, R_predator, J, D, alpha):
N = N_boid
dR = displacement(lax.stop_gradient(R_predator), R_boid)
dr = np.linalg.norm(dR, keepdims=True)
dR_hat = dR / dr
return np.where(dr < D,
J / alpha * (1 - dr / D) ** alpha * (1 + np.dot(dR_hat, N)),
0.)
def predator_boid_fn(R_predator, N_predator, R_boids, J, D, eps=1e-7):
# It is most convenient to define the predator_boid energy function
# for a single predator and a whole flock of boids. As such we expect shapes,
# R_predator : (spatial_dim,)
# N_predator : (spatial_dim,)
# R_boids : (n, spatial_dim,)
N = N_predator
# As such, we need to vectorize over the boids.
d = vmap(displacement, (0, None))
dR = d(lax.stop_gradient(R_boids), R_predator)
dr = space.distance(dR)
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
# Here as well.
fov = vmap(fov, (0, None))
mask = np.logical_and(dr < D, fov(dR, N))
mask = mask[:, np.newaxis]
boid_count = np.where(mask, 1.0, 0)
dR_com = np.where(mask, dR, 0)
dR_com = np.sum(dR_com, axis=0) / (np.sum(boid_count, axis=0) + eps)
dR_com = dR_com / np.linalg.norm(dR_com + eps, keepdims=True)
return f32(0.5) * J * (1 - np.dot(dR_com, N)) ** 2
def dynamics(energy_fn, dt, boid_speed, predator_speed):
# We extract common movement functionality into a `move` function.
def move(boids, dboids, speed):
R, theta, *_ = boids
dR, dtheta = dboids
n = normal(theta)
return (shift(R, dt * (speed * n + dR)),
theta + dt * dtheta)
@jit
def update(_, state):
dstate = quantity.force(energy_fn)(state)
state['boids'] = Boids(*move(state['boids'], dstate['boids'], boid_speed))
state['predators'] = Predator(*move(state['predators'],
dstate['predators'],
predator_speed))
return state
return update
def energy_fn(state):
boids = state['boids']
d = space.map_product(displacement)
E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)
dR = d(boids.R, boids.R)
N = normal(boids.theta)
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
fov = vmap(vmap(fov, (0, None)))
mask = fov(dR, N)
obstacles = state['obstacles']
dR_bo = -d(boids.R, obstacles.R)
D = obstacles.D
E_obstacle = partial(obstacle_fn, J_obstacle=1000.)
E_obstacle = vmap(vmap(E_obstacle, (0, 0, None)), (0, None, 0))
# New predator code.
predators = state['predators']
E_boid_predator = partial(boid_predator_fn, J=256.0, D=75.0, alpha=3.)
E_boid_predator = vmap(vmap(E_boid_predator, (0, 0, None)), (None, None, 0))
N_predator = normal(predators.theta)
E_predator_boid = partial(predator_boid_fn, J=0.1, D=95.0)
E_predator_boid = vmap(E_predator_boid, (0, 0, None))
dR_po = -d(predators.R, obstacles.R)
#
E_boid = (0.5 * np.sum(E_align(dR, N, N) * mask + E_avoid(dR)) +
np.sum(E_cohesion(dR, N, mask)) + np.sum(E_obstacle(dR_bo, N, D)) +
np.sum(E_boid_predator(boids.R, N, predators.R)))
E_predator = (np.sum(E_obstacle(dR_po, N_predator, D)) +
np.sum(E_predator_boid(predators.R, N_predator, boids.R)))
return E_boid + E_predator
update = dynamics(energy_fn=energy_fn, dt=1e-1, boid_speed=1., predator_speed=.85)
boids_buffer = []
predators_buffer = []
state = {
'boids': boids,
'obstacles': obstacles,
'predators': predators
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
predators_buffer += [state['predators']]
display(render(box_size, boids_buffer, obstacles, predators_buffer))
Predator = namedtuple('Predator', ['R', 'theta', 'dt'])
predators = Predator(R=np.array([[box_size / 2., box_size /2.]]),
theta=np.array([0.0]),
dt=np.array([0.]))
def dynamics(energy_fn, dt, boid_speed, predator_speed):
# We extract common movement functionality into a `move` function.
def move(boids, dboids, speed):
R, theta, *_ = boids
dR, dtheta, *_ = dboids
n = normal(theta)
return (shift(R, dt * (speed * n + dR)),
theta + dt * dtheta)
@jit
def update(_, state):
dstate = quantity.force(energy_fn)(state)
state['boids'] = Boids(*move(state['boids'], dstate['boids'], boid_speed))
# New code to accelerate the predators.
D_sprint = 65.
T_sprint = 300.
tau_sprint = 50.
sprint_speed = 2.0
# First we find the distance from each predator to the nearest boid.
d = space.map_product(space.metric(displacement))
predator = state['predators']
dr_min = np.min(d(state['boids'].R, predator.R), axis=1)
# Check whether there is a near enough boid to bother sprinting and if
# enough time has elapsed since the last sprint.
mask = np.logical_and(dr_min < D_sprint, predator.dt > T_sprint)
predator_dt = np.where(mask, 0., predator.dt + dt)
# Adjust the speed according to whether or not we're sprinting.
speed = predator_speed + sprint_speed * np.exp(-predator_dt / tau_sprint)
predator_R, predator_theta = move(state['predators'],
dstate['predators'],
speed)
state['predators'] = Predator(predator_R, predator_theta, predator_dt)
#
return state
return update
update = dynamics(energy_fn=energy_fn, dt=1e-1, boid_speed=1., predator_speed=.85)
boids_buffer = []
predators_buffer = []
state = {
'boids': boids,
'obstacles': obstacles,
'predators': predators
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
predators_buffer += [state['predators']]
display(render(box_size, boids_buffer, obstacles, predators_buffer))
# Simulation Parameters.
box_size = 2400.0 # A float specifying the side-length of the box.
boid_count = 2000 # An integer specifying the number of boids.
obstacle_count = 10 # An integer specifying the number of obstacles.
predator_count = 10 # An integer specifying the number of predators.
dim = 2 # The spatial dimension in which we are simulating.
# Create RNG state to draw random numbers.
rng = random.PRNGKey(0)
# Define periodic boundary conditions.
displacement, shift = space.periodic(box_size)
# Initialize the boids.
# To generate normal vectors that are uniformly distributed on S^N note that
# one can generate a random normal vector in R^N and then normalize it.
rng, R_rng, theta_rng = random.split(rng, 3)
boids = Boids(
R = box_size * random.uniform(R_rng, (boid_count, dim)),
theta = random.uniform(theta_rng, (boid_count,), maxval=2 * np.pi)
)
rng, R_rng, D_rng = random.split(rng, 3)
obstacles = Obstacle(
R = box_size * random.uniform(R_rng, (obstacle_count, dim)),
D = random.uniform(D_rng, (obstacle_count,), minval=100, maxval=300.)
)
rng, R_rng, theta_rng = random.split(rng, 3)
predators = Predator(
R = box_size * random.uniform(R_rng, (predator_count, dim)),
theta = random.uniform(theta_rng, (predator_count,), maxval=2 * np.pi),
dt = np.zeros((predator_count,))
)
neighbor_fn = partition.neighbor_list(displacement,
box_size,
r_cutoff=45.,
dr_threshold=10.,
capacity_multiplier=1.5,
format=partition.Sparse)
neighbors = neighbor_fn.allocate(boids.R)
print(neighbors.idx.shape)
from jax import ops
def cohesion_fn(dR, N, mask, # New mask parameter.
neighbor, J_cohesion, D_cohesion, eps=1e-7):
dR = lax.stop_gradient(dR)
dr = space.distance(dR)
count = len(neighbor.reference_position)
mask = np.reshape(mask, mask.shape + (1,))
dr = np.reshape(dr, dr.shape + (1,))
mask = (dr < D_cohesion) & mask
N_com = np.where(mask, 1.0, 0)
dR_com = np.where(mask, dR, 0)
dR_com = ops.segment_sum(dR_com, neighbor.idx[0], count)
dR_com /= (ops.segment_sum(N_com, neighbor.idx[0], count) + eps)
dR_com = dR_com / np.linalg.norm(dR_com + eps, axis=1, keepdims=True)
return f32(0.5) * J_cohesion * (1 - np.sum(dR_com * N, axis=1)) ** 2
def energy_fn(state, neighbors):
boids = state['boids']
d = space.map_product(displacement)
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
fov = vmap(fov)
E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)
E_align = vmap(E_align)
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(E_avoid)
E_cohesion = partial(cohesion_fn, neighbor=neighbors,
J_cohesion=0.05, D_cohesion=40.)
# New code to extract displacement vector to neighbors and normals.
senders, receivers = neighbors.idx
Ra = boids.R[senders]
Rb = boids.R[receivers]
dR = -space.map_bond(displacement)(Ra, Rb)
N = normal(boids.theta)
Na, Nb = N[senders], N[receivers]
#
# New code to add a mask over neighbors as well as field-of-view.
neighbor_mask = partition.neighbor_list_mask(neighbors)
fov_mask = np.logical_and(neighbor_mask, fov(dR, Na))
#
obstacles = state['obstacles']
dR_bo = -d(boids.R, obstacles.R)
D = obstacles.D
E_obstacle = partial(obstacle_fn, J_obstacle=1000.)
E_obstacle = vmap(vmap(E_obstacle, (0, 0, None)), (0, None, 0))
predators = state['predators']
E_boid_predator = partial(boid_predator_fn, J=256.0, D=75.0, alpha=3.)
E_boid_predator = vmap(vmap(E_boid_predator, (0, 0, None)), (None, None, 0))
N_predator = normal(predators.theta)
E_predator_boid = partial(predator_boid_fn, J=0.1, D=95.0)
E_predator_boid = vmap(E_predator_boid, (0, 0, None))
dR_po = -d(predators.R, obstacles.R)
E_boid = (0.5 * np.sum(E_align(dR, Na, Nb) * fov_mask +
E_avoid(dR) * neighbor_mask) +
np.sum(E_cohesion(dR, N, fov_mask)) +
np.sum(E_obstacle(dR_bo, N, D)) +
np.sum(E_boid_predator(boids.R, N, predators.R)))
E_predator = (np.sum(E_obstacle(dR_po, N_predator, D)) +
np.sum(E_predator_boid(predators.R, N_predator, boids.R)))
return E_boid + E_predator
def dynamics(energy_fn, dt, boid_speed, predator_speed):
# We extract common movement functionality into a `move` function.
def move(boids, dboids, speed):
R, theta, *_ = boids
dR, dtheta, *_ = dboids
n = normal(theta)
return (shift(R, dt * (speed * n + dR)),
theta + dt * dtheta)
@jit
def update(_, state_and_neighbors):
state, neighbors = state_and_neighbors
# New code to update neighbor list.
neighbors = neighbors.update(state['boids'].R)
dstate = quantity.force(energy_fn)(state, neighbors)
state['boids'] = Boids(*move(state['boids'], dstate['boids'], boid_speed))
# Predator acceleration.
D_sprint = 65.
T_sprint = 300.
tau_sprint = 50.
sprint_speed = 2.0
d = space.map_product(space.metric(displacement))
predator = state['predators']
dr_min = np.min(d(state['boids'].R, predator.R), axis=1)
mask = np.logical_and(dr_min < D_sprint, predator.dt > T_sprint)
predator_dt = np.where(mask, 0., predator.dt + dt)
speed = predator_speed + sprint_speed * np.exp(-predator_dt / tau_sprint)
speed = speed[:, np.newaxis]
predator_R, predator_theta = move(state['predators'],
dstate['predators'],
speed)
state['predators'] = Predator(predator_R, predator_theta, predator_dt)
#
return state, neighbors
return update
update = dynamics(energy_fn=energy_fn, dt=1e-1, boid_speed=1., predator_speed=.85)
boids_buffer = []
predators_buffer = []
state = {
'boids': boids,
'obstacles': obstacles,
'predators': predators
}
for i in ProgressIter(range(800)):
new_state, neighbors = lax.fori_loop(0, 50, update, (state, neighbors))
# If the neighbor list can't fit in the allocation, rebuild it but bigger.
if neighbors.did_buffer_overflow:
print('REBUILDING')
neighbors = neighbor_fn.allocate(state['boids'].R)
state, neighbors = lax.fori_loop(0, 50, update, (state, neighbors))
assert not neighbors.did_buffer_overflow
else:
state = new_state
boids_buffer += [state['boids']]
predators_buffer += [state['predators']]
display(render(box_size, boids_buffer, obstacles, predators_buffer))
print(neighbors.idx.shape)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a href="https
Step2: Warning
Step3: We can instantiate a collection of boids randomly in a box of side length $L$. We will use periodic boundary conditions for our simulation which means that boids will be able to wrap around the sides of the box. To do this we will use the space.periodic command in JAX, MD.
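The behavior provided by `space.periodic` can be sketched in plain NumPy (the function names below are illustrative, not the JAX MD API): displacements follow the minimum-image convention, and shifts wrap positions back into the box.

```python
import numpy as np

def periodic_displacement(Ra, Rb, box_size):
    # Minimum-image displacement from Rb to Ra in a periodic box.
    dR = Ra - Rb
    return np.mod(dR + box_size / 2.0, box_size) - box_size / 2.0

def periodic_shift(R, dR, box_size):
    # Move positions by dR and wrap them back into [0, box_size).
    return np.mod(R + dR, box_size)
```

For example, two boids at 1.0 and 9.0 in a box of side 10.0 are separated by 2.0 through the boundary, not 8.0 across the interior.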
Step4: Dynamics
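The dynamics used throughout (each boid advances along its heading at a fixed speed, while positions and angles additionally follow the force) amount to one Euler step per update. A minimal NumPy sketch, with `shift` replaced by plain addition:

```python
import numpy as np

def euler_update(R, theta, dR, dtheta, dt=0.1, speed=1.0):
    # One Euler step: drift along the heading, then rotate the heading.
    n = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    return R + dt * (speed * n + dR), theta + dt * dtheta
```

With zero forces, a boid heading along the x-axis simply drifts a distance `speed * dt` per step.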
Step5: Now we can run a simulation and save the boid positions to a boids_buffer which will just be a list.
Step6: Alignment
Step7: We can plot the energy for different alignments as well as different distances between boids. We see that the energy goes to zero for large distances and when the boids are aligned.
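Both limits are easy to check with a scalar NumPy version of the alignment energy (the same formula as `align_fn`, without the `vmap` machinery):

```python
import numpy as np

def align_energy(dR, N1, N2, J=1.0, D=1.0, alpha=2.0):
    # Zero beyond the cutoff D and zero when the normals are aligned.
    dr = np.linalg.norm(dR) / D
    if dr >= 1.0:
        return 0.0
    return J / alpha * (1.0 - dr) ** alpha * (1.0 - np.dot(N1, N2)) ** 2
```

At zero separation with anti-aligned normals the energy reaches its maximum, $J/\alpha \cdot (1 - (-1))^2 = 2$ for $J = 1$, $\alpha = 2$.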
Step8: We can now run our simulation with the alignment energy alone.
Step9: Now the boids align with one another and already the simulation is displaying interesting behavior!
Step10: Plotting the energy we see that it is highest when boids are overlapping and then goes to zero smoothly until $||\Delta R|| = D_{\text{Align}}$.
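A hedged sketch of an avoidance energy with this shape (illustrative only — the notebook's actual functional form may differ):

```python
def avoid_energy(dr, cutoff=1.0, eps=1.0):
    # Largest when two boids overlap (dr == 0), decaying smoothly
    # to zero at the cutoff distance and staying zero beyond it.
    if dr >= cutoff:
        return 0.0
    return eps * (1.0 - dr / cutoff) ** 2

print(avoid_energy(0.0))  # maximal at zero separation
print(avoid_energy(1.5))  # zero beyond the cutoff
```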
Step11: We can now run a version of our simulation with both alignment and avoidance.
Step12: The avoidance term in the energy stops the boids from collapsing on top of one another.
Step13: Now the boids travel in tighter, more cohesive, packs. By tuning the range of the cohesive interaction and its strength you can change how strongly the boids attempt to stick together. However, if we raise it too high it can have some undesirable consequences.
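The effect of the cohesion strength can be illustrated with a toy energy that penalizes pointing away from the local centre of mass (an assumption-laden sketch — `cohesion_energy` and `eps_cohere` are invented names, and the notebook's actual term may differ):

```python
import math

def cohesion_energy(theta, x, y, neighbors, eps_cohere=1.0):
    # Energy is zero when the boid points straight at the centre of mass
    # of its neighbors and maximal when it points directly away;
    # eps_cohere controls how strongly the flock sticks together.
    if not neighbors:
        return 0.0
    cx = sum(nx for nx, _ in neighbors) / len(neighbors)
    cy = sum(ny for _, ny in neighbors) / len(neighbors)
    dx, dy = cx - x, cy - y
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return 0.0
    cos_angle = (math.cos(theta) * dx + math.sin(theta) * dy) / norm
    return eps_cohere * (1.0 - cos_angle)

print(cohesion_energy(0.0, 0.0, 0.0, [(1.0, 0.0), (2.0, 0.0)]))  # pointing at the flock: 0
```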
Step14: Looking Ahead
Step15: We can then adapt the cohesion function to incorporate an arbitrary mask,
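A minimal sketch of such a visibility mask (hedged — the notebook's mask operates on whole arrays via `vmap`, while this scalar version only shows the geometry): a neighbour is visible when the angle between the boid's heading and the displacement to the neighbour is below half the field of view.

```python
import math

def in_field_of_view(theta, dx, dy, half_fov=math.pi * 0.75):
    # Heading unit vector of the boid.
    heading = (math.cos(theta), math.sin(theta))
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return False
    # Cosine of the angle between the heading and the displacement.
    cos_angle = (heading[0] * dx + heading[1] * dy) / norm
    return cos_angle >= math.cos(half_fov)

print(in_field_of_view(0.0, 1.0, 0.0))   # straight ahead: visible
print(in_field_of_view(0.0, -1.0, 0.0))  # directly behind: hidden
```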
Step16: And finally run a simulation incorporating the field of view.
Step17: Extras
Step18: Then we can instantiate some obstacles.
Step19: In a similar spirit to the energy functions above, we would like an energy function that encourages the boids to avoid obstacles. For this purpose we will pick an energy function that is similar in form to the alignment function above,
Step20: Now we can run a simulation that includes obstacles.
Step21: The boids are now successfully navigating obstacles in their environment.
Step22: The predators will also follow similar dynamics to the boids, swimming in whatever direction they are pointing at some speed that we can choose. Unlike in the previous versions of the simulation, predators naturally introduce some asymmetry to the system. In particular, we would like the boids to flee from the predators, but we want the predators to chase the boids. To achieve this behavior, we will consider a system reminiscent of a two-player game in which the boids move to minimize an energy,
Step23: For the predator-boid function we can borrow the cohesion energy that we developed above to have predators that turn towards the center-of-mass of boids in their field of view.
Step24: Now we can modify our dynamics to also update predators.
Step25: Finally, we can put everything together and run the simulation.
Step26: We see that our predator now moves around chasing the boids.
Step27: Scaling Up
Step28: We see that despite having 2000 boids, they only have about 13 neighbors apiece at the start of the simulation. Of course this will grow over time and we will have to rebuild the neighbor list as it does. Next we make some minimal modifications to our energy function to rewrite the energy of our simulation to operate on neighbors. This mostly involves changing some of the vectorization patterns with vmap and creating a mask of which neighbors in the $n\times n_\text{max neighbors}$ arrays are filled. Finally, we have to make a slightly updated version of the cohesion function that takes into account the neighbors.
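The padded-neighbor masking idea can be sketched in plain Python (an illustrative stand-in for the jax-md pattern; in the real code the sentinel-padded index array and the mask are JAX arrays of shape $n \times n_\text{max neighbors}$):

```python
# A neighbor array is padded with the sentinel index n (the number of
# particles); the mask marks which slots hold real neighbors so that
# padded slots contribute nothing to the energy.
n = 4  # number of boids; index n itself is the padding sentinel
neighbor_idx = [
    [1, 2, 4, 4],  # boid 0 has neighbors 1 and 2; two padded slots
    [0, 4, 4, 4],
    [0, 4, 4, 4],
    [4, 4, 4, 4],  # boid 3 has no neighbors
]

def masked_neighbor_count(row, sentinel=n):
    mask = [idx != sentinel for idx in row]
    return sum(mask)

counts = [masked_neighbor_count(row) for row in neighbor_idx]
print(counts)  # [2, 1, 1, 0]
```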
Step29: Next we have to update our simulation to use and update the neighbor list.
Step30: And now we can conduct our larger simulation.
Step31: At the end of the simulation we can see how large our neighbor list had to be to accomodate all of the boids.
|
2,972
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
url = "http://ec.europa.eu/eurostat/estat-navtree-portlet-prod/BulkDownloadListing?file=data/"
file = "demo_mlifetable.tsv.gz"
import pyensae.datasource
local = pyensae.datasource.download_data("demo_mlifetable.tsv.gz", url=url)
local = local[0]+".gz"
import gzip
with gzip.open(local, 'rb') as f:
file_content = f.read()
content = str(file_content, encoding="utf8")
with open("mortalite.txt", "w", encoding="utf8") as f:
f.write(content)
import pandas
dff = pandas.read_csv("mortalite.txt", sep="\t", encoding="utf8")
dff.head()
dff.shape
def format_age(s):
    if s.startswith("Y_"):
        if s.startswith("Y_LT"): s = "Y00_LT" + s[4:]
        elif s.startswith("Y_GE"): s = "Y85_GE" + s[4:]
        else: raise ValueError(s)
        return s
    else:
        i = int(s.strip("Y"))
        return "Y%02d" % i
def format_value(s):
if s.strip() == ":" : return -1
else : return float(s.strip(" ebp"))
if False: # on the full dataset this is rather slow; reduce its size to try things out
    dfsmall = dff.head(n = 1000) # reduce the size in order to
    df = dfsmall                 # prototype the transformation
else:
    df = dff
print("step 1", df.shape)
dfi = df.reset_index().set_index("indic_de,sex,age,geo\\time")
dfi = dfi.drop('index', axis=1)
dfs = dfi.stack()
dfs = pandas.DataFrame({"valeur": dfs } )
print("step 2", dfs.shape)
dfs["valeur"] = dfs["valeur"].astype(str)
dfs["valeur"] = dfs["valeur"].apply( format_value )
dfs = dfs[ dfs.valeur >= 0 ].copy()
dfs = dfs.reset_index()
dfs.columns = ["index", "annee", "valeur"]
print("step 3", dfs.shape)
dfs["age"] = dfs["index"].apply ( lambda i : format_age(i.split(",")[2]))
dfs["indicateur"] = dfs["index"].apply ( lambda i : i.split(",")[0])
dfs["genre"] = dfs["index"].apply ( lambda i : i.split(",")[1])
dfs["pays"] = dfs["index"].apply ( lambda i : i.split(",")[3])
print("step 4")
dfy = dfs.drop('index', axis=1)
dfy.to_csv("mortalite_5column.txt", sep="\t", encoding="utf8", index=False)
dfy.head()
view = dfs [ (dfs.pays=="FR") &
(dfs.age == "Y80") &
(dfs.indicateur == "DEATHRATE") &
(dfs.genre == "T") ]
view = view.sort_values("annee")
view.plot(x="annee", y="valeur");
import sqlite3
con = sqlite3.connect("mortalite_sqlite3_y2.db3")
dfy.to_sql("table_mortalite",con)
con.close() # the database must be closed, otherwise it stays open as long as the notebook
            # is open --> it cannot be modified by anything other than this notebook
import os
[ _ for _ in os.listdir(".") if "sqlite3" in _ ]
con = sqlite3.connect("mortalite_sqlite3_y2.db3")
view = pandas.read_sql("""SELECT * FROM table_mortalite WHERE pays=="FR"
                       AND age == "Y80"
                       AND indicateur == "DEATHRATE"
                       AND genre == "T"
                       ORDER BY annee""", con)
con.close()
view.plot(x="annee", y="valeur");
from pyensae.sql import import_flatfile_into_database
import_flatfile_into_database("mortalite.db3", "mortalite_5column.txt")
sql = """SELECT * FROM mortalite_5column WHERE pays=="FR"
            AND age == "Y80"
            AND indicateur == "DEATHRATE"
            AND genre == "T"
            ORDER BY annee"""
from pyensae.sql import Database
db = Database("mortalite.db3", LOG = lambda *l : None)
db.connect()
view = db.to_df(sql)
view
view.plot(x="annee", y="valeur")
import pandas
df = pandas.read_csv("mortalite_5column.txt", sep="\t", encoding="utf8")
df.shape
df.head(n=2)
cols = [_ for _ in df.columns if _ != "valeur"]
cols
df.shape
df = df.dropna()
df.shape
dup = df.groupby(cols).count().sort_values("valeur", ascending=False)
dup = dup[dup.valeur > 1]
dup.head(n=2)
dup.shape
dfi = df.set_index(cols, verify_integrity=True)
dfi.head(n=2)
type(dfi.index)
import xarray
cube = xarray.Dataset.from_dataframe(dfi)  # or dfi.to_xarray()
cube
back_to_pandas = cube.to_dataframe().reset_index(drop=True)
back_to_pandas.head(n=2)
cube.max(dim=["age", "annee", "pays"]).to_dataframe().reset_index().pivot("indicateur", "genre", "valeur")
cube.to_dataframe().groupby('indicateur').max()
try:
cube.groupby("indicateur").max().to_dataframe().head()
except ValueError as e:
# It used to be working in 0.12 but not in 0.13...
print(e)
cube["valeur"].sel(indicateur="LIFEXP", genre="F", annee=slice(1980, 1985)).to_dataframe().head()
cube["valeur"].max(dim=["age"]).to_dataframe().head()
cube["max_valeur"] = cube["valeur"] / cube["valeur"].max(dim=["age"])
cube.to_dataframe().head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving the data
Step2: Then we load it as a dataframe
Step3: The first column aggregates several fields. We want to transform this table so that it has a reduced number of columns
Step4: Plotting a slice of the mortality table
Step5: SQLite
Step7: We use an SQL query to retrieve the data, equivalent to the pandas code quoted below
Step8: version 2
Step10: Next, the data can easily be browsed with SQLiteSpy (on Windows) or the sqlite-manager extension for Firefox on any platform. For this exercise, we run
Step11: Visually, this gives
Step12: Data cube
Step13: We move every column except valeur into the index.
Step14: We drop the missing values.
Step15: We check that there are no duplicates: the cube conversion fails in that case, because two values would be indexed with the same coordinates.
Step16: We check that xarray is installed.
Step17: And we convert to a cube.
Step18: And we take the maximum per indicator and gender.
Step19: We add a column with a ratio, dividing by the maximum over each age group.
|
2,973
|
<ASSISTANT_TASK:>
Python Code:
%%bash
pip freeze | grep tensor
!pip3 install tensorflow-hub==0.7.0
!pip3 install --upgrade tensorflow==1.15.3
!pip3 install google-cloud-bigquery==1.10
import os
import tensorflow as tf
import numpy as np
import tensorflow_hub as hub
import shutil
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.15.3'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
categories_list = open("categories.txt").read().splitlines()
authors_list = open("authors.txt").read().splitlines()
content_ids_list = open("content_ids.txt").read().splitlines()
mean_months_since_epoch = 523
embedded_title_column = hub.text_embedding_column(
key="title",
module_spec="https://tfhub.dev/google/nnlm-de-dim50/1",
trainable=False)
content_id_column = tf.feature_column.categorical_column_with_hash_bucket(
key="content_id",
hash_bucket_size= len(content_ids_list) + 1)
embedded_content_column = tf.feature_column.embedding_column(
categorical_column=content_id_column,
dimension=10)
author_column = tf.feature_column.categorical_column_with_hash_bucket(key="author",
hash_bucket_size=len(authors_list) + 1)
embedded_author_column = tf.feature_column.embedding_column(
categorical_column=author_column,
dimension=3)
category_column_categorical = tf.feature_column.categorical_column_with_vocabulary_list(
key="category",
vocabulary_list=categories_list,
num_oov_buckets=1)
category_column = tf.feature_column.indicator_column(category_column_categorical)
months_since_epoch_boundaries = list(range(400,700,20))
months_since_epoch_column = tf.feature_column.numeric_column(
key="months_since_epoch")
months_since_epoch_bucketized = tf.feature_column.bucketized_column(
source_column = months_since_epoch_column,
boundaries = months_since_epoch_boundaries)
crossed_months_since_category_column = tf.feature_column.indicator_column(tf.feature_column.crossed_column(
keys = [category_column_categorical, months_since_epoch_bucketized],
hash_bucket_size = len(months_since_epoch_boundaries) * (len(categories_list) + 1)))
feature_columns = [embedded_content_column,
embedded_author_column,
category_column,
embedded_title_column,
crossed_months_since_category_column]
record_defaults = [["Unknown"], ["Unknown"],["Unknown"],["Unknown"],["Unknown"],[mean_months_since_epoch],["Unknown"]]
column_keys = ["visitor_id", "content_id", "category", "title", "author", "months_since_epoch", "next_content_id"]
label_key = "next_content_id"
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column,record_defaults=record_defaults)
features = dict(zip(column_keys, columns))
label = features.pop(label_key)
return features, label
# Create list of files that match pattern
file_list = tf.io.gfile.glob(filename)
# TODO 1: Create dataset from file list
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
def model_fn(features, labels, mode, params):
net = tf.feature_column.input_layer(features, params['feature_columns'])
for units in params['hidden_units']:
net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
# Compute logits (1 per class).
logits = tf.layers.dense(net, params['n_classes'], activation=None)
predicted_classes = tf.argmax(logits, 1)
from tensorflow.python.lib.io import file_io
with file_io.FileIO('content_ids.txt', mode='r') as ifp:
content = tf.constant([x.rstrip() for x in ifp])
predicted_class_names = tf.gather(content, predicted_classes)
if mode == tf.estimator.ModeKeys.PREDICT:
predictions = {
'class_ids': predicted_classes[:, tf.newaxis],
'class_names' : predicted_class_names[:, tf.newaxis],
'probabilities': tf.nn.softmax(logits),
'logits': logits,
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
table = tf.contrib.lookup.index_table_from_file(vocabulary_file="content_ids.txt")
labels = table.lookup(labels)
# Compute loss.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# TODO 2: Compute evaluation metrics.
accuracy = tf.metrics.accuracy(labels=labels,
predictions=predicted_classes,
name='acc_op')
top_10_accuracy = tf.metrics.mean(tf.nn.in_top_k(predictions=logits,
targets=labels,
k=10))
metrics = {
'accuracy': accuracy,
'top_10_accuracy' : top_10_accuracy}
tf.summary.scalar('accuracy', accuracy[1])
tf.summary.scalar('top_10_accuracy', top_10_accuracy[1])
if mode == tf.estimator.ModeKeys.EVAL:
return tf.estimator.EstimatorSpec(
mode, loss=loss, eval_metric_ops=metrics)
# Create training op.
assert mode == tf.estimator.ModeKeys.TRAIN
optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
outdir = 'content_based_model_trained'
shutil.rmtree(outdir, ignore_errors = True) # start fresh each time
#tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
estimator = tf.estimator.Estimator(
model_fn=model_fn,
model_dir = outdir,
params={
'feature_columns': feature_columns,
'hidden_units': [200, 100, 50],
'n_classes': len(content_ids_list)
})
# TODO 3: Provide input data for training
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset("training_set.csv", tf.estimator.ModeKeys.TRAIN),
max_steps = 2000)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset("test_set.csv", tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 30,
throttle_secs = 60)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
%%bash
head -5 training_set.csv > first_5.csv
head first_5.csv
awk -F "\"*,\"*" '{print $2}' first_5.csv > first_5_content_ids
output = list(estimator.predict(input_fn=read_dataset("first_5.csv", tf.estimator.ModeKeys.PREDICT)))
import numpy as np
recommended_content_ids = [np.asscalar(d["class_names"]).decode('UTF-8') for d in output]
content_ids = open("first_5_content_ids").read().splitlines()
from google.cloud import bigquery
recommended_title_sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
  UNNEST(hits) AS hits
WHERE
  # only include hits on pages
  hits.type = "PAGE"
  AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\"
LIMIT 1""".format(recommended_content_ids[0])
current_title_sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
  UNNEST(hits) AS hits
WHERE
  # only include hits on pages
  hits.type = "PAGE"
  AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\"
LIMIT 1""".format(content_ids[0])
recommended_title = bigquery.Client().query(recommended_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip()
current_title = bigquery.Client().query(current_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip()
print("Current title: {} ".format(current_title))
print("Recommended title: {}".format(recommended_title))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's make sure you install the necessary version of tensorflow-hub. After doing the pip install below, click "Restart the kernel" on the notebook so that the Python environment picks up the new packages.
Step2: Note
Step3: Build the feature columns for the model
Step4: In the cell below you'll define the feature columns to use in your model. If necessary, remind yourself of the various feature columns available.
Step5: Create the input function
Step6: Create the model and train/evaluate
Step7: Train and Evaluate
Step8: This takes a while to complete but in the end, you will get about 30% top 10 accuracies.
Step9: Recall that to make predictions with the trained model, you pass a list of examples through the input function. Complete the code below to make predictions on the examples contained in the "first_5.csv" file you created above.
Step12: Finally, you map the content id back to the article title. Let's compare your model's recommendation for the first example. This can be done in BigQuery. Look through the query below and make sure it is clear what is being returned.
|
2,974
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q flax
from typing import Sequence
import matplotlib.pyplot as plt
import jax
import jax.numpy as jnp
try:
import flax.linen as nn
except ModuleNotFoundError:
%pip install -qq flax
import flax.linen as nn
from flax.training import train_state
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
import functools
import scipy as sp
import math
rng = jax.random.PRNGKey(0)
@functools.partial(jax.jit, static_argnums=(1,))
def real_data(rng, batch_size):
mog_mean = jnp.array(
[
[1.50, 1.50],
[1.50, 0.50],
[1.50, -0.50],
[1.50, -1.50],
[0.50, 1.50],
[0.50, 0.50],
[0.50, -0.50],
[0.50, -1.50],
[-1.50, 1.50],
[-1.50, 0.50],
[-1.50, -0.50],
[-1.50, -1.50],
[-0.50, 1.50],
[-0.50, 0.50],
[-0.50, -0.50],
[-0.50, -1.50],
]
)
temp = jnp.tile(mog_mean, (batch_size // 16 + 1, 1))
mus = temp[0:batch_size, :]
return mus + 0.02 * jax.random.normal(rng, shape=(batch_size, 2))
def plot_on_ax(ax, values, contours=None, bbox=None, xlabel="", ylabel="", title="", cmap="Blues"):
kernel = sp.stats.gaussian_kde(values.T)
ax.axis(bbox)
ax.set_aspect(abs(bbox[1] - bbox[0]) / abs(bbox[3] - bbox[2]))
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xticks([])
ax.set_yticks([])
xx, yy = jnp.mgrid[bbox[0] : bbox[1] : 300j, bbox[2] : bbox[3] : 300j]
positions = jnp.vstack([xx.ravel(), yy.ravel()])
f = jnp.reshape(kernel(positions).T, xx.shape)
cfset = ax.contourf(xx, yy, f, cmap=cmap)
if contours is not None:
x = jnp.arange(-2.0, 2.0, 0.1)
y = jnp.arange(-2.0, 2.0, 0.1)
cx, cy = jnp.meshgrid(x, y)
new_set = ax.contour(
cx, cy, contours.squeeze().reshape(cx.shape), levels=20, colors="k", linewidths=0.8, alpha=0.5
)
ax.set_title(title)
class MLP(nn.Module):
features: Sequence[int]
@nn.compact
def __call__(self, x):
for feat in self.features[:-1]:
x = jax.nn.relu(nn.Dense(features=feat)(x))
x = nn.Dense(features=self.features[-1])(x)
return x
@jax.jit
def discriminator_step(disc_state, gen_state, latents, real_examples):
def loss_fn(disc_params):
fake_examples = gen_state.apply_fn(gen_state.params, latents)
real_logits = disc_state.apply_fn(disc_params, real_examples)
fake_logits = disc_state.apply_fn(disc_params, fake_examples)
disc_real = -jax.nn.log_sigmoid(real_logits)
# log(1 - sigmoid(x)) = log_sigmoid(-x)
disc_fake = -jax.nn.log_sigmoid(-fake_logits)
return jnp.mean(disc_real + disc_fake)
disc_loss, disc_grad = jax.value_and_grad(loss_fn)(disc_state.params)
disc_state = disc_state.apply_gradients(grads=disc_grad)
return disc_state, disc_loss
@jax.jit
def generator_step(disc_state, gen_state, latents):
def loss_fn(gen_params):
fake_examples = gen_state.apply_fn(gen_params, latents)
fake_logits = disc_state.apply_fn(disc_state.params, fake_examples)
disc_fake = -jax.nn.log_sigmoid(fake_logits)
return jnp.mean(disc_fake)
gen_loss, gen_grad = jax.value_and_grad(loss_fn)(gen_state.params)
gen_state = gen_state.apply_gradients(grads=gen_grad)
return gen_state, gen_loss
@jax.jit
def train_step(disc_state, gen_state, latents, real_examples):
disc_state, disc_loss = discriminator_step(disc_state, gen_state, latents, real_examples)
gen_state, gen_loss = generator_step(disc_state, gen_state, latents)
return disc_state, gen_state, disc_loss, gen_loss
batch_size = 512
latent_size = 32
discriminator = MLP(features=[25, 25, 1])
generator = MLP(features=[25, 25, 2])
# Initialize parameters for the discriminator and the generator
latents = jax.random.normal(rng, shape=(batch_size, latent_size))
real_examples = real_data(rng, batch_size)
disc_params = discriminator.init(rng, real_examples)
gen_params = generator.init(rng, latents)
# Plot real examples
bbox = [-2, 2, -2, 2]
plot_on_ax(plt.gca(), real_examples, bbox=bbox, title="Data")
plt.tight_layout()
plt.savefig("gan_gmm_data.pdf")
plt.show()
# Create train states for the discriminator and the generator
lr = 0.05
disc_state = train_state.TrainState.create(
apply_fn=discriminator.apply, params=disc_params, tx=optax.sgd(learning_rate=lr)
)
gen_state = train_state.TrainState.create(apply_fn=generator.apply, params=gen_params, tx=optax.sgd(learning_rate=lr))
# x and y grid for plotting discriminator contours
x = jnp.arange(-2.0, 2.0, 0.1)
y = jnp.arange(-2.0, 2.0, 0.1)
X, Y = jnp.meshgrid(x, y)
pairs = jnp.stack((X, Y), axis=-1)
pairs = jnp.reshape(pairs, (-1, 2))
# Latents for testing generator
test_latents = jax.random.normal(rng, shape=(batch_size * 10, latent_size))
num_iters = 20001
n_save = 2000
draw_contours = False
history = []
for i in range(num_iters):
rng_iter = jax.random.fold_in(rng, i)
data_rng, latent_rng = jax.random.split(rng_iter)
# Sample minibatch of examples
real_examples = real_data(data_rng, batch_size)
# Sample minibatch of latents
latents = jax.random.normal(latent_rng, shape=(batch_size, latent_size))
# Update both the generator
disc_state, gen_state, disc_loss, gen_loss = train_step(disc_state, gen_state, latents, real_examples)
if i % n_save == 0:
print(f"i = {i}, Discriminator Loss = {disc_loss}, " + f"Generator Loss = {gen_loss}")
# Generate examples using the test latents
fake_examples = gen_state.apply_fn(gen_state.params, test_latents)
if draw_contours:
real_logits = disc_state.apply_fn(disc_state.params, pairs)
disc_contour = -real_logits + jax.nn.log_sigmoid(real_logits)
else:
disc_contour = None
history.append((i, fake_examples, disc_contour, disc_loss, gen_loss))
# Plot generated examples from history
for i, hist in enumerate(history):
iter, fake_examples, contours, disc_loss, gen_loss = hist
plot_on_ax(
plt.gca(),
fake_examples,
contours=contours,
bbox=bbox,
xlabel=f"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}",
title=f"Samples at Iteration {iter}",
)
plt.tight_layout()
plt.savefig(f"gan_gmm_iter_{iter}.pdf")
plt.show()
cols = 3
rows = math.ceil((len(history) + 1) / cols)
bbox = [-2, 2, -2, 2]
fig, axs = plt.subplots(rows, cols, figsize=(cols * 3, rows * 3), dpi=200)
axs = axs.flatten()
# Plot real examples
plot_on_ax(axs[0], real_examples, bbox=bbox, title="Data")
# Plot generated examples from history
for i, hist in enumerate(history):
iter, fake_examples, contours, disc_loss, gen_loss = hist
plot_on_ax(
axs[i + 1],
fake_examples,
contours=contours,
bbox=bbox,
xlabel=f"Disc Loss: {disc_loss:.3f} | Gen Loss: {gen_loss:.3f}",
title=f"Samples at Iteration {iter}",
)
# Remove extra plots from the figure
for i in range(len(history) + 1, len(axs)):
axs[i].remove()
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Generation
Step2: Plotting
Step3: Models and Training
Step4: The loss function for the discriminator is
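The formula itself appears to have been dropped from this excerpt. Consistent with the code's `discriminator_step` (which averages `-log_sigmoid(real_logits)` and `-log_sigmoid(-fake_logits)`), the standard binary-cross-entropy form is:

```latex
L_D(\phi, \theta)
  = -\,\mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D_\phi(x)\right]
    \;-\; \mathbb{E}_{z \sim \mathcal{N}(0, I)}\!\left[\log\!\left(1 - D_\phi(G_\theta(z))\right)\right]
```

where $D_\phi$ is the discriminator (a sigmoid applied to its logits) and $G_\theta$ is the generator.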
Step5: The loss function for the generator is
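Likewise, matching the code's `generator_step` (which averages `-log_sigmoid(fake_logits)`), the non-saturating generator loss is:

```latex
L_G(\phi, \theta) = -\,\mathbb{E}_{z \sim \mathcal{N}(0, I)}\!\left[\log D_\phi(G_\theta(z))\right]
```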
Step6: Perform a training step by first updating the discriminator parameters $\phi$ using the gradient $\nabla_\phi L_D (\phi, \theta)$ and then updating the generator parameters $\theta$ using the gradient $\nabla_\theta L_G (\phi, \theta)$.
Step7: Plot Results
|
2,975
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
####################
### Set the hyperparameters in you myanswers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
|
2,976
|
<ASSISTANT_TASK:>
Python Code:
import time
import os
from package_api import download_data
import xarray as xr
from netCDF4 import Dataset, num2date
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
import matplotlib
import datetime
import warnings
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
API_key = open('APIKEY').read().strip()
def get_start_end(days):
date = datetime.datetime.now() - datetime.timedelta(days=days)
time_start = date.strftime('%Y-%m-%d') + 'T16:00:00'
time_end = date.strftime('%Y-%m-%d') + 'T21:00:00'
return time_start,time_end
latitude_south = 15.9; latitude_north = 69.5
longitude_west = -17.6; longitude_east = 38.6
area = 'europe'
days = 6
time_start,time_end = get_start_end(days)
dataset_key = 'nasa_smap_spl4smau'
variable = 'Analysis_Data__sm_surface_analysis'
folder = os.path.realpath('.') + '/'
def make_image(lon,lat,data,date,latitude_north, latitude_south,longitude_west, longitude_east,unit,**kwargs):
m = Basemap(projection='merc', lat_0 = 55, lon_0 = -4,
resolution = 'i', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(lon,lat)
lonmap,latmap = m(lons,lats)
if len(kwargs) > 0:
fig=plt.figure(figsize=(10,8))
plt.subplot(221)
m.drawcoastlines()
m.drawcountries()
c = m.pcolormesh(lonmap,latmap,data,vmin = 0.01,vmax = 0.35)
plt.title(date)
plt.subplot(222)
m.drawcoastlines()
m.drawcountries()
plt.title(kwargs['date_later'])
m.pcolormesh(lonmap,latmap,kwargs['data_later'],vmin = 0.01,vmax = 0.35)
else:
fig=plt.figure(figsize=(9,7))
m.drawcoastlines()
m.drawcountries()
c = m.pcolormesh(lonmap,latmap,data,vmin = 0.01,vmax = 0.35)
plt.title(date)
cbar = plt.colorbar(c)
cbar.set_label(unit)
plt.show()
try:
package_key = download_data(folder,dataset_key,API_key,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,variable,area)
except:
days = 7
time_start,time_end = get_start_end(days)
package_key = download_data(folder,dataset_key,API_key,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,variable,area)
filename_europe = package_key + '.nc'
data = xr.open_dataset(filename_europe)
surface_soil_moisture_data = data.Analysis_Data__sm_surface_analysis
unit = surface_soil_moisture_data.units
surface_soil_moisture = data.Analysis_Data__sm_surface_analysis.values[0,:,:]
surface_soil_moisture = np.ma.masked_where(np.isnan(surface_soil_moisture),surface_soil_moisture)
latitude = data.lat; longitude = data.lon
lat = latitude.values
lon = longitude.values
date = str(data.time.values[0])[:-10]
make_image(lon,lat,surface_soil_moisture,date,latitude_north, latitude_south,longitude_west, longitude_east,unit)
iberia_west = -10; iberia_east = 3.3
iberia_south = 35; iberia_north = 45
lon_ib = longitude.sel(lon=slice(iberia_west,iberia_east)).values
lat_ib = latitude.sel(lat=slice(iberia_north,iberia_south)).values
soil_ib = surface_soil_moisture_data.sel(lat=slice(iberia_north,iberia_south),lon=slice(iberia_west,iberia_east)).values[0,:,:]
soil_ib = np.ma.masked_where(np.isnan(soil_ib),soil_ib)
days2 = days - 1
time_start, time_end = get_start_end(days2)
try:
package_key_iberia = download_data(folder,dataset_key,API_key,iberia_west,iberia_east,iberia_south,iberia_north,time_start,time_end,variable,area)
except:
days2 = days - 2
time_start, time_end = get_start_end(days2)
package_key_iberia = download_data(folder,dataset_key,API_key,iberia_west,iberia_east,iberia_south,iberia_north,time_start,time_end,variable,area)
filename_iberia = package_key_iberia + '.nc'
data_later = xr.open_dataset(filename_iberia)
soil_data_later = data_later.Analysis_Data__sm_surface_analysis
soil_later = data_later.Analysis_Data__sm_surface_analysis.values[0,:,:]
soil_later = np.ma.masked_where(np.isnan(soil_later),soil_later)
latitude_ib = data_later.lat; longitude_ib = data_later.lon
lat_ibl = latitude_ib.values
lon_ibl = longitude_ib.values
date_later = str(data_later.time.values[0])[:-10]
make_image(lon_ib,lat_ib,soil_ib,date,iberia_north, iberia_south,iberia_west, iberia_east, unit, data_later = soil_later,date_later = date_later)
if os.path.exists(filename_europe):
os.remove(filename_europe)
if os.path.exists(filename_iberia):
os.remove(filename_iberia)
<END_TASK>
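As an aside, the `np.ma.masked_where(np.isnan(...), ...)` step used above can be illustrated in isolation. The toy array below is made up for demonstration; masked cells are skipped by plotting routines such as `pcolormesh` and ignored by reductions:

```python
import numpy as np

# Toy 2x3 field with two missing (NaN) cells, standing in for the soil moisture grid.
field = np.array([[0.10, np.nan, 0.30],
                  [0.25, 0.15, np.nan]])

# Mask every NaN so plotting routines skip those cells
# and reductions such as max() ignore them.
masked = np.ma.masked_where(np.isnan(field), field)

print(masked.mask.sum())  # 2 masked cells
print(masked.max())       # 0.3 (NaN cells are ignored)
```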
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key!</font>
Step2: Here we define the area we are interested in, the time range from which we want the data, the dataset key to use, and the variable name.
Step3: This one generates the working directory - we need it to know where we are going to save the data. No worries, we will delete the files after using them!
Step4: Now we are making a function for creating images.
Step5: Here we are downloading data using the Package API. If you are interested in how the data is downloaded, find the file named package_api.py in the notebook folder.
Step6: Now we have the data and we are reading it in using xarray.
Step7: Here we are making an image using the function defined above.
Step8: So let's look at Portugal and Spain a little more closely. For that we need to define the area, and we will slice the data from this area.
Step9: Let's also import some data from a later date as well.
Step10: Now we are making two images of the same area - Portugal and Spain.
Step11: Finally, let's delete the files we downloaded.
|
2,977
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-hh', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
2,978
|
<ASSISTANT_TASK:>
Python Code:
from functions import connect
connect()  # Execute by pressing Shift + Enter
from functions import forward, stop  # click here and press Shift + Enter
from time import sleep               # to run this block of commands
forward()
sleep(1)
stop()
from functions import disconnect, next_notebook
disconnect()
next_notebook('moviments')
<END_TASK>
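The forward/sleep/stop pattern above generalizes to a small timed-movement helper. This is only a sketch: `forward` and `stop` below are hypothetical stand-ins for the workshop's `functions` module, so the snippet runs without the robot connected.

```python
from time import sleep

log = []

def forward():  # stand-in for functions.forward
    log.append("forward")

def stop():     # stand-in for functions.stop
    log.append("stop")

def move_for(seconds):
    """Drive forward for `seconds` seconds, then stop (an assumed helper, not part of the workshop API)."""
    forward()
    sleep(seconds)
    stop()

move_for(0.01)
print(log)  # ['forward', 'stop']
```

With the real `functions` module connected, the same helper would drive the robot forward for the given duration.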
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If a confirmation message appears, congratulations: everything works. If not, there is a problem and you should call over the classroom teacher.
Step2: All good? Great! During the workshop you will see more commands and learn to combine them into more complicated programs. Now, before moving on to the next page, you must disconnect the robot from this page's program (only one connection is allowed at a time).
|
2,979
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)               # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
print(predictions_train)
print(predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
<END_TASK>
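As a quick sanity check on the He scaling (an illustrative aside, not part of the graded exercise), the empirical standard deviation of weights drawn as `randn * sqrt(2/fan_in)` should land close to `sqrt(2/fan_in)`:

```python
import numpy as np

np.random.seed(0)
fan_in = 400
# He initialization: scale standard normal draws by sqrt(2 / fan_in).
W_he = np.random.randn(100, fan_in) * np.sqrt(2.0 / fan_in)

expected_std = np.sqrt(2.0 / fan_in)  # about 0.0707 for fan_in = 400
print(abs(W_he.std() - expected_std) < 0.01)  # True
```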
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: You would like a classifier to separate the blue dots from the red dots.
Step4: 2 - Zero initialization
Step5: Expected Output
Step6: The performance is really bad, and the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Lets look at the details of the predictions and the decision boundary
Step8: The model is predicting 0 for every example.
Step9: Expected Output
Step10: If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes.
Step12: Observations
Step13: Expected Output
|
2,980
|
<ASSISTANT_TASK:>
Python Code:
t4 = [
[3],
[7, 4],
[2, 4, 6],
[8, 5, 9, 3],
]
t4
t15 = [
[75],
[95, 64],
[17, 47, 82],
[18, 35, 87, 10],
[20, 4, 82, 47, 65],
[19, 1, 23, 75, 3, 34],
[88, 2, 77, 73, 7, 63, 67],
[99, 65, 4, 28, 6, 16, 70, 92],
[41, 41, 26, 56, 83, 40, 80, 70, 33],
[41, 48, 72, 33, 47, 32, 37, 16, 94, 29],
[53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14],
[70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57],
[91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48],
[63, 66, 4, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31],
[ 4, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 4, 23],
]
len(t15)
from copy import deepcopy
def foo(t):
t = deepcopy(t)
for i in range(len(t))[::-1]:
r = t[i]
try:
nr = t[i+1]
except IndexError:
for j in range(len(t[i])):
t[i][j] = (t[i][j], None)
else:
for j in range(len(t[i])):
dir = (t[i+1][j+1][0] > t[i+1][j+0][0])
t[i][j] = (t[i][j] + t[i+1][j+dir][0], dir)
return t[0][0][0]
n = t4
%timeit foo(n)
foo(n)
n = t15
%timeit foo(n)
foo(n)
def foo(t):
old_row = []
for row in t:
stagger_max = map(max, zip([0] + old_row, old_row + [0]))
old_row = list(map(sum, zip(stagger_max, row)))
return max(old_row)
n = t4
%timeit foo(n)
foo(n)
n = t15
%timeit foo(n)
foo(n)
def foo(t):
old_row = tuple()
for row in t:
stagger_max = map(max, zip((0,) + old_row, old_row + (0,)))
old_row = tuple(map(sum, zip(stagger_max, row)))
return max(old_row)
n = t4
%timeit foo(n)
foo(n)
n = t15
%timeit foo(n)
foo(n)
t4 = tuple(tuple(row) for row in t4)
t15 = tuple(tuple(row) for row in t15)
def foo(t):
old_row = tuple()
for row in t:
stagger_max = map(max, zip((0,) + old_row, old_row + (0,)))
old_row = tuple(map(sum, zip(stagger_max, row)))
return max(old_row)
n = t4
%timeit foo(n)
foo(n)
n = t15
%timeit foo(n)
foo(n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's try a somewhat functional approach.
Step2: Try tuples instead of lists.
Step3: Convert t4 and t15 to be tuples instead of lists.
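The row-by-row reduction used in `foo` can also be written as a single fold, which makes the "functional approach" explicit (a sketch equivalent to the notebook's version, not a replacement for it):

```python
from functools import reduce

def max_path(triangle):
    # Fold each row onto a running best-sum row: each cell adds the better
    # of its two parents, with the edges padded by 0.
    def step(best, row):
        padded = list(zip([0] + best, best + [0]))
        return [v + max(a, b) for v, (a, b) in zip(row, padded)]
    return max(reduce(step, triangle, []))

t4 = [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]
print(max_path(t4))  # 23, matching foo(t4) above
```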
|
2,981
|
<ASSISTANT_TASK:>
Python Code:
# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
sentence_orange = 'Having a tough time finding an orange juice press machine?'.lower().split()
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
# Remove stopwords.
stop_words = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stop_words]
sentence_president = [w for w in sentence_president if w not in stop_words]
sentence_orange = [w for w in sentence_orange if w not in stop_words]
# Prepare a dictionary and a corpus.
from gensim import corpora
documents = [sentence_obama, sentence_president, sentence_orange]
dictionary = corpora.Dictionary(documents)
# Convert the sentences into bag-of-words vectors.
sentence_obama = dictionary.doc2bow(sentence_obama)
sentence_president = dictionary.doc2bow(sentence_president)
sentence_orange = dictionary.doc2bow(sentence_orange)
%%time
import gensim.downloader as api
from gensim.models import WordEmbeddingSimilarityIndex
from gensim.similarities import SparseTermSimilarityMatrix
w2v_model = api.load("glove-wiki-gigaword-50")
similarity_index = WordEmbeddingSimilarityIndex(w2v_model)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary)
similarity = similarity_matrix.inner_product(sentence_obama, sentence_president, normalized=True)
print('similarity = %.4f' % similarity)
similarity = similarity_matrix.inner_product(sentence_obama, sentence_orange, normalized=True)
print('similarity = %.4f' % similarity)
%%time
from itertools import chain
import json
from re import sub
from os.path import isfile
import gensim.downloader as api
from gensim.utils import simple_preprocess
from nltk.corpus import stopwords
from nltk import download
download("stopwords") # Download stopwords list.
stopwords = set(stopwords.words("english"))
def preprocess(doc):
doc = sub(r'<img[^<>]+(>|$)', " image_token ", doc)
doc = sub(r'<[^<>]+(>|$)', " ", doc)
doc = sub(r'\[img_assist[^]]*?\]', " ", doc)
doc = sub(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', " url_token ", doc)
return [token for token in simple_preprocess(doc, min_len=0, max_len=float("inf")) if token not in stopwords]
corpus = list(chain(*[
chain(
[preprocess(thread["RelQuestion"]["RelQSubject"]), preprocess(thread["RelQuestion"]["RelQBody"])],
[preprocess(relcomment["RelCText"]) for relcomment in thread["RelComments"]])
for thread in api.load("semeval-2016-2017-task3-subtaskA-unannotated")]))
print("Number of documents: %d" % len(corpus))
%%time
from multiprocessing import cpu_count
from gensim.corpora import Dictionary
from gensim.models import TfidfModel
from gensim.models import Word2Vec
from gensim.models import WordEmbeddingSimilarityIndex
from gensim.similarities import SparseTermSimilarityMatrix
dictionary = Dictionary(corpus)
tfidf = TfidfModel(dictionary=dictionary)
w2v_model = Word2Vec(corpus, workers=cpu_count(), min_count=5, size=300, seed=12345)
similarity_index = WordEmbeddingSimilarityIndex(w2v_model.wv)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary, tfidf, nonzero_limit=100)
datasets = api.load("semeval-2016-2017-task3-subtaskBC")
from math import isnan
from time import time
from gensim.similarities import MatrixSimilarity, WmdSimilarity, SoftCosineSimilarity
import numpy as np
from sklearn.model_selection import KFold
from wmd import WMD
def produce_test_data(dataset):
for orgquestion in datasets[dataset]:
query = preprocess(orgquestion["OrgQSubject"]) + preprocess(orgquestion["OrgQBody"])
documents = [
preprocess(thread["RelQuestion"]["RelQSubject"]) + preprocess(thread["RelQuestion"]["RelQBody"])
for thread in orgquestion["Threads"]]
relevance = [
thread["RelQuestion"]["RELQ_RELEVANCE2ORGQ"] in ("PerfectMatch", "Relevant")
for thread in orgquestion["Threads"]]
yield query, documents, relevance
def cossim(query, documents):
# Compute cosine similarity between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = MatrixSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
num_features=len(dictionary))
similarities = index[query]
return similarities
def softcossim(query, documents):
# Compute Soft Cosine Measure between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = SoftCosineSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
similarity_matrix)
similarities = index[query]
return similarities
def wmd_gensim(query, documents):
# Compute Word Mover's Distance as implemented in PyEMD by William Mayner
# between the query and the documents.
index = WmdSimilarity(documents, w2v_model)
similarities = index[query]
return similarities
def wmd_relax(query, documents):
# Compute Word Mover's Distance as implemented in WMD by Source{d}
# between the query and the documents.
words = [word for word in set(chain(query, *documents)) if word in w2v_model.wv]
indices, words = zip(*sorted((
(index, word) for (index, _), word in zip(dictionary.doc2bow(words), words))))
query = dict(tfidf[dictionary.doc2bow(query)])
query = [
(new_index, query[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in query]
documents = [dict(tfidf[dictionary.doc2bow(document)]) for document in documents]
documents = [[
(new_index, document[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in document] for document in documents]
embeddings = np.array([w2v_model.wv[word] for word in words], dtype=np.float32)
nbow = dict(((index, list(chain([None], zip(*document)))) for index, document in enumerate(documents)))
nbow["query"] = tuple([None] + list(zip(*query)))
distances = WMD(embeddings, nbow, vocabulary_min=1).nearest_neighbors("query")
similarities = [-distance for _, distance in sorted(distances)]
return similarities
strategies = {
"cossim" : cossim,
"softcossim": softcossim,
"wmd-gensim": wmd_gensim,
"wmd-relax": wmd_relax}
def evaluate(split, strategy):
# Perform a single round of evaluation.
results = []
start_time = time()
for query, documents, relevance in split:
similarities = strategies[strategy](query, documents)
assert len(similarities) == len(documents)
precision = [
(num_correct + 1) / (num_total + 1) for num_correct, num_total in enumerate(
num_total for num_total, (_, relevant) in enumerate(
sorted(zip(similarities, relevance), reverse=True)) if relevant)]
average_precision = np.mean(precision) if precision else 0.0
results.append(average_precision)
return (np.mean(results) * 100, time() - start_time)
def crossvalidate(args):
# Perform a cross-validation.
dataset, strategy = args
test_data = np.array(list(produce_test_data(dataset)))
kf = KFold(n_splits=10)
samples = []
for _, test_index in kf.split(test_data):
samples.append(evaluate(test_data[test_index], strategy))
return (np.mean(samples, axis=0), np.std(samples, axis=0))
%%time
from multiprocessing import Pool
args_list = [
(dataset, technique)
for dataset in ("2016-test", "2017-test")
for technique in ("softcossim", "wmd-gensim", "wmd-relax", "cossim")]
with Pool() as pool:
results = pool.map(crossvalidate, args_list)
from IPython.display import display, Markdown
output = []
baselines = [
(("2016-test", "**Winner (UH-PRHLT-primary)**"), ((76.70, 0), (0, 0))),
(("2016-test", "**Baseline 1 (IR)**"), ((74.75, 0), (0, 0))),
(("2016-test", "**Baseline 2 (random)**"), ((46.98, 0), (0, 0))),
(("2017-test", "**Winner (SimBow-primary)**"), ((47.22, 0), (0, 0))),
(("2017-test", "**Baseline 1 (IR)**"), ((41.85, 0), (0, 0))),
(("2017-test", "**Baseline 2 (random)**"), ((29.81, 0), (0, 0)))]
table_header = ["Dataset | Strategy | MAP score | Elapsed time (sec)", ":---|:---|:---|---:"]
for row, ((dataset, technique), ((mean_map_score, mean_duration), (std_map_score, std_duration))) \
in enumerate(sorted(chain(zip(args_list, results), baselines), key=lambda x: (x[0][0], -x[1][0][0]))):
if row % (len(strategies) + 3) == 0:
output.extend(chain(["\n"], table_header))
map_score = "%.02f ±%.02f" % (mean_map_score, std_map_score)
duration = "%.02f ±%.02f" % (mean_duration, std_duration) if mean_duration else ""
output.append("%s|%s|%s|%s" % (dataset, technique, map_score, duration))
display(Markdown('\n'.join(output)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: The first two sentences have very similar content, and as such the SCM should be large. Before we compute the SCM, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
Step3: Now, as we mentioned earlier, we will be using some downloaded pre-trained embeddings. Note that the embeddings we have chosen here require a lot of memory. We will use the embeddings to construct a term similarity matrix that will be used by the inner_product method.
Step4: Let's compute SCM using the inner_product method.
Step5: Let's try the same thing with two completely unrelated sentences. Notice that the similarity is smaller.
Step6: Part 2
Step7: Using the corpus we have just build, we will now construct a dictionary, a TF-IDF model, a word2vec model, and a term similarity matrix.
Step8: Evaluation
Step9: Finally, we will perform an evaluation to compare three unsupervised similarity measures – the Soft Cosine Measure, two different implementations of the Word Mover's Distance, and standard cosine similarity. We will use the Mean Average Precision (MAP) as an evaluation measure and 10-fold cross-validation to get an estimate of the variance of MAP for each similarity measure.
Step10: The table below shows the pointwise estimates of means and standard variances for MAP scores and elapsed times. Baselines and winners for each year are displayed in bold. We can see that the Soft Cosine Measure gives a strong performance on both the 2016 and the 2017 dataset.
|
2,982
|
<ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = 252*2 + 28
STARTING_DAYS_AHEAD = 20
POSSIBLE_FRACTIONS = np.arange(0.0, 1.1, 0.1).round(decimals=3).tolist()
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=0,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
import pickle
with open('../../data/simple_q_learner_fast_learner_10_actions.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
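The agent starts with `random_actions_rate=0.98` and `random_actions_decrease=0.999`; the exploration schedule this implies can be sketched as a simple multiplicative decay (the exact update rule inside `Agent` is an assumption here):

```python
def decayed_epsilon(initial_rate, decay, steps):
    # Multiplicative epsilon decay: after `steps` actions the probability
    # of a random (exploratory) move is initial_rate * decay ** steps.
    return initial_rate * decay ** steps

for steps in (0, 1000, 5000):
    print(steps, round(decayed_epsilon(0.98, 0.999, steps), 4))
```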
|
2,983
|
<ASSISTANT_TASK:>
Python Code:
def sumatoria_recursiva(n):
    # Base case
if n == 1:
return 1
    # General case
return n + sumatoria_recursiva(n-1)
sumatoria_recursiva(20)
20*21/2
def contar_texto(texto, st):
    # Base case
if len(texto) < len(st):
return 0
    # Two branches for the general case: the text either starts with st or not; both consume characters
if texto.startswith(st):
return 1 + contar_texto(texto[len(st):], st)
else:
return contar_texto(texto[1:], st)
contar_texto("universidade del desarrollo", "de")
def pascal(n):
if n == 0:
return [1]
lista = pascal(n-1)
res = [1]
for i in range(len(lista)-1):
res.append(lista[i]+lista[i+1])
res += [1]
return res
pascal(4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we implement the contar_texto function, which counts how many times the text "st" appears inside "texto".
Step2: Finally, a few classes ago we built Pascal's triangle iteratively; now we will do it recursively.
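For comparison with the recursive `pascal(n)` above, a single row of Pascal's triangle can also be built directly with the multiplicative binomial recurrence (a sketch of an alternative, not part of the original exercise):

```python
def pascal_row(n):
    # Row n of Pascal's triangle via C(n, k+1) = C(n, k) * (n - k) // (k + 1),
    # avoiding recursion entirely.
    row, c = [], 1
    for k in range(n + 1):
        row.append(c)
        c = c * (n - k) // (k + 1)
    return row

print(pascal_row(4))  # [1, 4, 6, 4, 1], matching pascal(4)
```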
|
2,984
|
<ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import google.cloud.aiplatform as aip
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
# Both TF1 and TF2 images use the same naming scheme, so no version branch is needed.
if TRAIN_GPU:
    TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
    TRAIN_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
if DEPLOY_GPU:
! docker pull tensorflow/serving:latest-gpu
DEPLOY_IMAGE = "gcr.io/" + PROJECT_ID + "/tf_serving:gpu"
else:
! docker pull tensorflow/serving:latest
DEPLOY_IMAGE = "gcr.io/" + PROJECT_ID + "/tf_serving"
! docker tag tensorflow/serving $DEPLOY_IMAGE
! docker push $DEPLOY_IMAGE
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
job = aip.CustomTrainingJob(
display_name="cifar10_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
local_model.evaluate(x_test, y_test)
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
rescale = tf.cast(resized / 255.0, tf.float32)
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(local_model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
local_model, model_path_to_deploy, signatures={"serving_default": serving_fn}
)
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
DEPLOYED_NAME = "cifar10-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
test_image = x_test[0]
test_label = y_test[0]
print(test_image.shape)
import base64
import cv2
cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8))
bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{serving_input: {"b64": b64str}}]
prediction = endpoint.predict(instances=instances)
print(prediction)
endpoint.undeploy_all()
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline trainig job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom trainig job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Initialize Vertex SDK for Python
Step12: Set hardware accelerators
Step13: Set pre-built containers
Step14: Container (Docker) image for prediction
Step15: Set machine type
Step16: Tutorial
Step17: Task.py contents
Step18: Store training script on your Cloud Storage bucket
Step19: Create and run custom training job
Step20: Prepare your command-line arguments
Step21: Run the custom training job
Step22: Load the saved model
Step23: Evaluate the model
Step24: Perform the model evaluation
Step25: Serving function for image data
Step26: Get the serving function signature
Step27: Deploy the model
Step28: Get test item
Step29: Prepare the request content
Step30: Make the prediction
Step31: Undeploy the model
Step32: Cleaning up
|
2,985
|
<ASSISTANT_TASK:>
Python Code:
data_train.info()
data_train.describe()
#每个/多个 属性和最后的Survived之间有着什么样的关系
#中文乱码:http://blog.csdn.net/heloowird/article/details/46343519
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif']=['SimHei'] #用来正常显示中文标签
plt.rcParams['axes.unicode_minus']=False #用来正常显示负号
fig = plt.figure()
fig.set(alpha=0.2) # 设定图表颜色alpha参数
plt.subplot2grid((2,3),(0,0)) # 在一张大图里分列几个小图
data_train.Survived.value_counts().plot(kind='bar')# 柱状图
plt.title("获救情况 (1为获救)") # 标题
plt.ylabel("人数")
plt.subplot2grid((2,3),(0,1))
data_train.Pclass.value_counts().plot(kind="bar")
plt.ylabel(u"人数")
plt.title(u"乘客等级分布")
plt.subplot2grid((2,3),(0,2))
plt.scatter(data_train.Survived, data_train.Age)
plt.ylabel(u"年龄")  # y-axis label
plt.grid(b=True, which='major', axis='y')
plt.title(u"按年龄看获救分布 (1为获救)")
plt.subplot2grid((2,3),(1,0), colspan=2)
data_train.Age[data_train.Pclass == 1].plot(kind='kde')
data_train.Age[data_train.Pclass == 2].plot(kind='kde')
data_train.Age[data_train.Pclass == 3].plot(kind='kde')
plt.xlabel(u"年龄")  # plots the x-axis label
plt.ylabel(u"密度")
plt.title(u"各等级的乘客年龄分布")
plt.legend((u'头等舱', u'2等舱',u'3等舱'),loc='best') # sets our legend for our graph.
plt.subplot2grid((2,3),(1,2))
data_train.Embarked.value_counts().plot(kind='bar')
plt.title(u"各登船口岸上船人数")
plt.ylabel(u"人数")
plt.tight_layout()
plt.show()
# Survival by passenger class
fig = plt.figure()
fig.set(alpha=0.2)  # set the figure alpha
Survived_0 = data_train.Pclass[data_train.Survived == 0].value_counts()
Survived_1 = data_train.Pclass[data_train.Survived == 1].value_counts()
df=pd.DataFrame({u'获救':Survived_1, u'未获救':Survived_0})
df.plot(kind='bar', stacked=True)
plt.title(u"各乘客等级的获救情况")
plt.xlabel(u"乘客等级")
plt.ylabel(u"人数")
plt.show()
# Survival by sex
fig = plt.figure()
fig.set(alpha=0.2)  # set the figure alpha
Survived_m = data_train.Survived[data_train.Sex == 'male'].value_counts()
Survived_f = data_train.Survived[data_train.Sex == 'female'].value_counts()
df=pd.DataFrame({u'男性':Survived_m, u'女性':Survived_f})
df.plot(kind='bar', stacked=True)
plt.title(u"按性别看获救情况")
plt.xlabel(u"性别")
plt.ylabel(u"人数")
plt.show()
# Next, survival by sex within each cabin class
fig=plt.figure()
fig.set(alpha=0.65)  # figure alpha; not important here
plt.title(u"根据舱等级和性别的获救情况")
ax1=fig.add_subplot(141)
data_train.Survived[data_train.Sex == 'female'][data_train.Pclass != 3].value_counts().plot(kind='bar', label="female highclass", color='#FA2479')
ax1.set_xticklabels([u"获救", u"未获救"], rotation=0)
ax1.legend([u"女性/高级舱"], loc='best')
ax2=fig.add_subplot(142, sharey=ax1)
data_train.Survived[data_train.Sex == 'female'][data_train.Pclass == 3].value_counts().plot(kind='bar', label='female, low class', color='pink')
ax2.set_xticklabels([u"未获救", u"获救"], rotation=0)
plt.legend([u"女性/低级舱"], loc='best')
ax3=fig.add_subplot(143, sharey=ax1)
data_train.Survived[data_train.Sex == 'male'][data_train.Pclass != 3].value_counts().plot(kind='bar', label='male, high class',color='lightblue')
ax3.set_xticklabels([u"未获救", u"获救"], rotation=0)
plt.legend([u"男性/高级舱"], loc='best')
ax4=fig.add_subplot(144, sharey=ax1)
data_train.Survived[data_train.Sex == 'male'][data_train.Pclass == 3].value_counts().plot(kind='bar', label='male low class', color='steelblue')
ax4.set_xticklabels([u"未获救", u"获救"], rotation=0)
plt.legend([u"男性/低级舱"], loc='best')
plt.tight_layout()
plt.show()
# Survival by port of embarkation
fig = plt.figure()
fig.set(alpha=0.2)  # set the figure alpha
Survived_0 = data_train.Embarked[data_train.Survived == 0].value_counts()
Survived_1 = data_train.Embarked[data_train.Survived == 1].value_counts()
df=pd.DataFrame({u'获救':Survived_1, u'未获救':Survived_0})
df.plot(kind='bar', stacked=True)
plt.title(u"各登船港口的获救情况")
plt.xlabel(u"登船港口")
plt.ylabel(u"人数")
plt.show()
# Survival by number of siblings/spouses aboard (SibSp)
fig = plt.figure()
fig.set(alpha=0.2)  # set the figure alpha
Survived_0 = data_train.SibSp[data_train.Survived == 0].value_counts()
Survived_1 = data_train.SibSp[data_train.Survived == 1].value_counts()
df=pd.DataFrame({u'获救':Survived_1, u'未获救':Survived_0})
df.plot(kind='bar', stacked=True)
plt.title(u"堂兄妹的获救情况")
plt.xlabel(u"堂兄妹数")
plt.ylabel(u"人数")
plt.show()
# Survival by number of parents/children aboard (Parch)
fig = plt.figure()
fig.set(alpha=0.2)  # set the figure alpha
Survived_0 = data_train.Parch[data_train.Survived == 0].value_counts()
Survived_1 = data_train.Parch[data_train.Survived == 1].value_counts()
df=pd.DataFrame({u'获救':Survived_1, u'未获救':Survived_0})
df.plot(kind='bar', stacked=True)
plt.title(u"父母孩子数的获救情况")
plt.xlabel(u"父母孩子数")
plt.ylabel(u"人数")
plt.show()
# Ticket is the ticket number, presumably unique, so it should carry little signal; leave it out of the feature set for now.
# Cabin is present for only 204 passengers; let's look at its distribution first.
data_train.Cabin.value_counts()
fig = plt.figure()
fig.set(alpha=0.2)  # set the figure alpha
Survived_cabin = data_train.Survived[pd.notnull(data_train.Cabin)].value_counts()
Survived_nocabin = data_train.Survived[pd.isnull(data_train.Cabin)].value_counts()
df=pd.DataFrame({u'有':Survived_cabin, u'无':Survived_nocabin}).transpose()
df.plot(kind='bar', stacked=True)
plt.title(u"按Cabin有无看获救情况")
plt.xlabel(u"Cabin有无")
plt.ylabel(u"人数")
plt.show()
from sklearn.ensemble import RandomForestRegressor
### Use a RandomForestRegressor to fill in the missing Age values
def set_missing_ages(df):
    # Take the numeric features and feed them into a Random Forest regressor
    age_df = df[['Age','Fare', 'Parch', 'SibSp', 'Pclass']]
    # Split the passengers into known-age and unknown-age groups
    known_age = age_df[age_df.Age.notnull()].values    # .as_matrix() was removed in modern pandas
    unknown_age = age_df[age_df.Age.isnull()].values
    # y is the target age
    y = known_age[:, 0]
    # X holds the feature values
    X = known_age[:, 1:]
    # Fit a RandomForestRegressor
    rfr = RandomForestRegressor(random_state=0, n_estimators=2000, n_jobs=-1)
    rfr.fit(X, y)
    # Predict the unknown ages with the fitted model
    predictedAges = rfr.predict(unknown_age[:, 1:])
    # Fill the original missing values with the predictions
    df.loc[ (df.Age.isnull()), 'Age' ] = predictedAges
    return df, rfr
def set_Cabin_type(df):
df.loc[ (df.Cabin.notnull()), 'Cabin' ] = "Yes"
df.loc[ (df.Cabin.isnull()), 'Cabin' ] = "No"
return df
data_train, rfr = set_missing_ages(data_train)
data_train = set_Cabin_type(data_train)
print(data_train.head())
# Logistic regression needs numeric inputs, so we first one-hot encode the categorical features.
dummies_Cabin = pd.get_dummies(data_train['Cabin'], prefix= 'Cabin')
dummies_Embarked = pd.get_dummies(data_train['Embarked'], prefix= 'Embarked')
dummies_Sex = pd.get_dummies(data_train['Sex'], prefix= 'Sex')
dummies_Pclass = pd.get_dummies(data_train['Pclass'], prefix= 'Pclass')
df = pd.concat([data_train, dummies_Cabin, dummies_Embarked, dummies_Sex, dummies_Pclass], axis=1)
df.drop(['Pclass', 'Name', 'Sex', 'Ticket', 'Cabin', 'Embarked'], axis=1, inplace=True)
df
import sklearn.preprocessing as preprocessing
scaler = preprocessing.StandardScaler()
# StandardScaler expects 2-D input in modern scikit-learn, so pass one-column frames
df['Age_scaled'] = scaler.fit_transform(df[['Age']])
df['Fare_scaled'] = scaler.fit_transform(df[['Fare']])
df
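Standardization itself is simple enough to sketch by hand. The following stdlib-only cell illustrates what `StandardScaler` computes; the sample values are made up, loosely Fare-like:

```python
# Standardize a feature by hand: subtract the mean, divide by the
# (population) standard deviation, so features on very different
# scales end up comparable for gradient descent.
def standardize(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / var ** 0.5 for x in xs]

scaled = standardize([7.25, 71.28, 7.92, 53.1])
print(round(sum(scaled), 6))   # mean is ~0 after scaling
```

After scaling, every feature has mean roughly 0 and unit variance, which is exactly why Age and Fare stop dominating the gradient steps.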
# Fit a logistic regression model
from sklearn import linear_model
# Use a regex to pick out the columns we need
train_df = df.filter(regex='Survived|Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass_.*')
train_np = train_df.values  # .as_matrix() was removed in modern pandas
# y is the Survived column
y = train_np[:, 0]
# X holds the feature values
X = train_np[:, 1:]
# Fit a LogisticRegression (the l1 penalty needs the liblinear solver in modern scikit-learn)
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6, solver='liblinear')
clf.fit(X, y)
clf
pd.DataFrame({"columns":list(train_df.columns)[1:], "coef":list(clf.coef_.T)})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overall the data contains 891 passenger records; only 714 of them have an age, and the cabin field is missing for most passengers.
Step2: From the above, first-class passengers are relatively few (under 25%), the average age is below 30 (a young crowd), and the average number of family members aboard is below 1.
Step3: At this point we may already have some hypotheses:
Step4: First-class passengers clearly had a noticeably higher chance of being rescued.
Step5: "Ladies first" was clearly honored: women survived at much higher rates, so sex must go into the final model as an important feature.
Step6: Data preprocessing
Step7: We are close, but look at Age and Fare: their value ranges differ enormously. With logistic regression and gradient descent, features on very different scales hurt convergence badly, or prevent it altogether, so we first standardize these two columns with scikit-learn's preprocessing module (see the feature-engineering and data-preprocessing reference mentioned in the original notes).
|
2,986
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
init_printing()
import numpy as np
import mpmath  # used later for odefun and numeric evaluation
import matplotlib.pyplot as plt
%matplotlib inline
x = Symbol('x', real=True)
A = Matrix(3,3, [x,x,0,0,x,x,0,0,x])
A
A.exp()
A = Matrix(4,4,[0,1,0,0,-1,0,1,0,0,0,0,1,1,0,-3, 0])
A
A.eigenvals()
%time Phi = (x*A).exp() # fundamental system for the ODE system
%time len(latex(Phi))
t = Symbol('t', real=True)
mu = list(A.eigenvals())
mu
phi = [exp(mm*t) for mm in mu]
phi
def element(i, j):
f = phi[j]
return f.diff(t, i)
Phi = Matrix(4, 4, element)
Phi
P1 = Phi**(-1)
len(latex(P1))
P4 = Phi.inv()
len(latex(P4))
A3 = P1*Phi
A3[0,0].n()
A4 = simplify(A3)
A4[0,0].n()
A3[0,0].simplify()
Out[44].n()
len(latex(A3[0,0]))
A2 = simplify(P1*Phi)
A2[0,0]
A2[0,0].n()
P2 = simplify(P1.expand())
len(latex(P2))
P2
(P2*Phi).simplify()
A = Out[31]
A[0,0].n()
B = Matrix([0, cos(2*t), 0, 0])
B
P2*B
P3 = Integral(P2*B, t).doit()
P3
tmp = (Phi*P3)[0]
tmp = tmp.simplify()
expand(tmp).collect([sin(2*t), cos(2*t)])
psi2 = (Phi*P3)[2]
psi2.simplify().expand()
im(psi2.simplify()).expand()
M = Matrix([0,1,t])
Integral(M, t).doit()
x = Symbol('x')
y = Function('y')
dgl = Eq(y(x).diff(x,2), -sin(y(x)))
dgl
#dsolve(dgl) # NotImplementedError
def F(x, y):
y0, y1 = y
w0 = y1
w1 = -mpmath.sin(y0)
return [w0, w1]
F(0,[0,1])
ab = [mpmath.pi/2, 0]
x0 = 0
phi = mpmath.odefun(F, x0, ab)
phi(1)
xn = np.linspace(0, 25, 200)
wn = [phi(xx)[0] for xx in xn]
dwn = [phi(xx)[1] for xx in xn]
plt.plot(xn, wn, label="$y$")
plt.plot(xn, dwn, label="$y'$")
plt.legend();
%time phi(50)
%time phi(60)
%time phi(40)
dgl
eta = Symbol('eta')
y0 = Symbol('y0')
H = Integral(-sin(eta), eta).doit()
H
E = y(x).diff(x)**2/2 - H.subs(eta, y(x)) # energy
E
E.diff(x)
E.diff(x).subs(dgl.lhs, dgl.rhs)
E0 = E.subs({y(x): y0, y(x).diff(x): 0})
E0
dgl_E = Eq(E, E0)
dgl_E
# dsolve(dgl_E) # aborted: takes too long
Lsg = solve(dgl_E, y(x).diff(x))
Lsg
h = Lsg[0].subs(y(x), eta)
h
I1 = Integral(1/h, eta).doit()
I1
I2 = Integral(1/h, (eta, y0, -y0))
I2
def T(ypsilon0):
return 2*re(I2.subs(y0, ypsilon0).n())
T(pi/2)
phi(T(pi/2)), mpmath.pi/2
xn = np.linspace(0.1, .95*np.pi, 5)
wn = [T(yy) for yy in xn]
plt.plot(xn, wn);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The matrix exponential
Step2: Coupled pendulums
Step3: Fundamental system
Step4: Unfortunately the fundamental system becomes too complicated
Step5: Numerical solutions
Step6: The function mpmath.odefun solves the differential equation $[y_0', \dots, y_n'] = F(x, [y_0, \dots, y_n])$.
Step7: Results are cached internally
Step8: The pendulum equation
Step9: We solve the initial value problem $y'' = -\sin(y)$, $y(0) = y_0$, $y'(0) = 0$.
Step10: The energy is a conserved quantity.
Step11: We solve by the method of separation of variables.
Step12: Indeed not integrable in elementary terms.
|
2,987
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from sklearn.model_selection import train_test_split
my_color_map = ['green','aqua','pink','blue','red','black','yellow','teal','orange','grey']
tr_data = pd.read_csv('../input/train.csv')
te_data = pd.read_csv('../input/test.csv')
print('train shape is: {}\ntest shape is: {}'.format(tr_data.shape, te_data.shape))
tr_data.describe()
#set number of rows and columns to see
pd.options.display.max_rows = 200
pd.options.display.max_columns = 50
#use transposed view of the features
desc = tr_data.describe().T
desc['count_nonzero'] = np.count_nonzero(tr_data,axis=0)[:-1]
desc['num_uniques'] = [len(tr_data[x].unique()) for x in tr_data.columns[:-1]]
desc['uniques'] = [(tr_data[x].unique()) for x in tr_data.columns[:-1]]
desc
print('the value counts of the target are:')
print(tr_data.iloc[:,-1].value_counts())
print(tr_data.iloc[:,-1].value_counts().plot(kind = 'bar'))
for i,feat in enumerate(tr_data.columns[1:-1]): #we start from the second feature as the first one is the item id
print('the value counts of feature {} are:'.format(feat))
print(tr_data[feat].value_counts())
def value_counts_plots(dat,rows = 4, cols = 4):
_,ax = plt.subplots(rows,cols,sharey='row',sharex='col',figsize = (cols*5,rows*5))
for i,feat in enumerate(dat.columns[:(rows*cols)]):
dat[feat].value_counts().iloc[:20].plot(kind = 'bar',ax=ax[int(i/cols), int(i%cols)],title='value_counts {}'.format(feat))
value_counts_plots(tr_data.iloc[:,1:9],2,4)
tr_data['parsed_target'] = [int(x.split('_')[1]) for x in tr_data.target]
tr_data.drop('target',axis=1,inplace=True)
def target_bar_plots(dat,cols = 4, rows = 4):
_,ax = plt.subplots(rows,cols,sharey='row',sharex='col',figsize = (cols*5,rows*5))
for i,feat in enumerate(dat.columns[:(rows*cols)]):
try:
dat.pivot_table(index=['parsed_target'],values=dat.columns[i],aggfunc=np.count_nonzero).plot(
kind = 'bar' ,ax=ax[int(i/cols), int(i%cols)],title =
'non_zero values by category for {}'.format(feat))
except:
pass
target_bar_plots(tr_data,4,4)
tr_data['source'] = 'train'
te_data['source'] = 'test'
all_data = pd.concat([tr_data,te_data],axis=0)
tr_data.drop('source',axis=1,inplace=True)
te_data.drop('source',axis=1,inplace=True)
molten = pd.melt(all_data, id_vars = 'source',value_vars = ['feat_'+str(x) for x in [13,14,27]])
plt.subplots(figsize = (20,8))
sns.violinplot(data=molten, x= 'variable',y='value',hue='source',split = True,palette=my_color_map)
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(tr_data.iloc[:,1:-1],tr_data.parsed_target,test_size = 0.2,random_state =2017)
%%time
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, log_loss
knn = KNeighborsClassifier(n_jobs=4,n_neighbors=4)
knn.fit(X_train,y_train)
knn4_pred = knn.predict(X_val)
knn4_pred_proba = knn.predict_proba(X_val)
print(confusion_matrix(y_pred=knn4_pred,y_true=y_val))
print('log loss: {}'.format(log_loss(y_pred=np.clip(knn4_pred_proba,a_max=0.999,a_min=0.001),y_true=pd.get_dummies(y_val-1))))
sns.heatmap(xticklabels=range(1,10),yticklabels=range(1,10),data = confusion_matrix(y_pred=knn4_pred,y_true=y_val),cmap='Greens')
from sklearn.metrics import classification_report
print('classification report results:\r\n' + classification_report(y_pred=knn4_pred,y_true=y_val))
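The `np.clip(..., a_min=0.001, a_max=0.999)` in the cell above exists because log loss diverges when the model assigns probability 0 to the true class. A minimal stdlib sketch of the same idea (the function name is ours, not scikit-learn's):

```python
import math

def clipped_log_loss(p_true_class, eps=1e-3):
    # p_true_class: probability the model gave to the correct class,
    # one value per object; clip away from 0 and 1 before taking logs
    # so a single confident mistake cannot make the loss infinite.
    return -sum(math.log(min(max(p, eps), 1 - eps))
                for p in p_true_class) / len(p_true_class)

print(clipped_log_loss([0.9, 0.8, 0.0]))  # the 0.0 is treated as 0.001
```

Without the clip, the third prediction alone would make the average loss infinite; with it, the loss stays large but finite.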
%%time
#this will give higher importance to successfully classifying the 4th class items
class_weights = {1:8,2:1,3:2,4:5,5:5,6:1,7:5,8:2,9:3}
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier(#class_weight=class_weights,
max_depth=15,max_features=92,min_samples_split=2,random_state=12345)
dtc.fit(X_train,y_train)
tree_pred = dtc.predict(X_val)
tree_pred_proba = dtc.predict_proba(X_val)
print(confusion_matrix(y_pred=tree_pred,y_true=y_val))
print('log loss: {}'.format(log_loss(y_pred=np.clip(tree_pred_proba,a_max=0.999,a_min=0.001),y_true=pd.get_dummies(y_val-1))))
sns.heatmap(confusion_matrix(y_pred=tree_pred,y_true=y_val),cmap='Greens',xticklabels=range(1,10),yticklabels=range(1,10))
print('classification report results:\r\n' + classification_report(y_pred=tree_pred,y_true=y_val))
from sklearn.svm import SVC
svc = SVC(kernel='linear',C=0.1,max_iter=100,random_state=12345)
svc.fit(X_train,y_train)
svc_pred = svc.predict(X_val)
print(confusion_matrix(y_pred=svc_pred,y_true=y_val))
sns.heatmap(confusion_matrix(y_pred=svc_pred,y_true=y_val),cmap='Greens',xticklabels=range(1,10),
yticklabels=range(1,10))
print('classification report results:\r\n' + classification_report(y_pred=svc_pred,y_true=y_val))
#this cell takes some time to run its not relevant for the rest of the notebook
from sklearn.preprocessing import MinMaxScaler
svc = SVC(kernel='linear',C=0.1,max_iter=10000,random_state=12345)
mms = MinMaxScaler()
mms.fit(X_train)
X_train_scaled = mms.transform(X_train)
X_val_scaled = mms.transform(X_val)
svc.fit(X_train_scaled,y_train)
svc_pred = svc.predict(X_val_scaled)
print(confusion_matrix(y_pred=svc_pred,y_true=y_val))
sns.heatmap(confusion_matrix(y_pred=svc_pred,y_true=y_val),cmap='Greens',xticklabels=range(1,10),
yticklabels=range(1,10))
print('classification report results:\r\n' + classification_report(y_pred=svc_pred,y_true=y_val))
%%time
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_jobs=4,n_estimators=100)
rfc.fit(X_train,y_train)
rfc_pred = rfc.predict(X_val)
rfc_pred_proba = rfc.predict_proba(X_val)
print(confusion_matrix(y_pred=rfc_pred,y_true=y_val))
print('log loss: {}'.format(log_loss(y_pred=np.clip(rfc_pred_proba,a_max=0.999,a_min=0.001),y_true=pd.get_dummies(y_val-1))))
sns.heatmap(confusion_matrix(y_pred=rfc_pred,y_true=y_val),cmap='Greens',xticklabels=range(1,10),yticklabels=range(1,10))
print('classification report results:\r\n' + classification_report(y_pred=rfc_pred,y_true=y_val))
%%time
from sklearn.ensemble import GradientBoostingClassifier
gbc = GradientBoostingClassifier(n_estimators=100,max_depth=6)
gbc.fit(X_train,y_train)
gbc_pred = gbc.predict(X_val)
gbc_pred_proba = gbc.predict_proba(X_val)
print(confusion_matrix(y_pred=gbc_pred,y_true=y_val))
print('log loss: {}'.format(log_loss(y_pred=np.clip(gbc_pred_proba,a_max=0.999,a_min=0.001),y_true=pd.get_dummies(y_val-1))))
sns.heatmap(confusion_matrix(y_pred=gbc_pred,y_true=y_val),cmap='Greens',xticklabels=range(1,10),yticklabels=range(1,10))
print('classification report results:\r\n' + classification_report(y_pred=gbc_pred,y_true=y_val))
%%time
import xgboost as xgb
dtrain = xgb.DMatrix(data=X_train,label=y_train-1) #xgb classes starts from zero
dval = xgb.DMatrix(data=X_val,label=y_val-1) #xgb classes starts from zero
watchlist = [ (dtrain,'train'),(dval,'eval'),]
xgb_params = {
'eta': 0.05,
'max_depth': 7,
'subsample': 0.9,
'colsample_bytree': 0.9,
'colsample_bylevel': 0.7,
'lambda':0.1,
'objective': 'multi:softmax',
'eval_metric': 'mlogloss',
'min_child_weight':2,
'num_class' : 9
}
bst = xgb.train(params=xgb_params,dtrain=dtrain,num_boost_round=400,evals=watchlist,verbose_eval=10)
xgb_pred = bst.predict(dval)
xgb_params = {
'eta': 0.05,
'max_depth': 7,
'subsample': 0.9,
'colsample_bytree': 0.9,
'colsample_bylevel': 0.7,
'lambda':0.1,
'objective': 'multi:softprob',
'eval_metric': 'mlogloss',
'min_child_weight':2,
'num_class' : 9
}
bst = xgb.train(params=xgb_params,dtrain=dtrain,num_boost_round=400,evals=watchlist,verbose_eval=10)
xgb_pred_proba = bst.predict(dval)
print(confusion_matrix(y_pred=xgb_pred,y_true=y_val))
print('log loss: {}'.format(log_loss(y_pred=np.clip(xgb_pred_proba,a_max=0.999,a_min=0.001),y_true=pd.get_dummies(y_val-1))))
sns.heatmap(confusion_matrix(y_pred=xgb_pred+1,y_true=y_val),cmap='Greens',xticklabels=range(1,10),yticklabels=range(1,10))
print('classification report results:\r\n' + classification_report(y_pred=xgb_pred+1,y_true=y_val))
test_pred = bst.predict(xgb.DMatrix(te_data.iloc[:,1:]))
subm = pd.DataFrame(test_pred)
subm.columns = ['class_'+ str(x) for x in range(1,10)]
subm.index = te_data.id
subm.to_csv('../subm/xgboost_classification_submission.csv')
#lets make sure our prediction fits the desired format:
print(subm.head())
print('submission shape: {}'.format(subm.shape))
print('')
print("great! we're good to go on and submit our results")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: now lets load our data set for this tutorial
Step2: pandas has lots of great features that can help us get insights to the data with very little effort
Step3: this format is somewhat problematic since
Step4: another great feature of the pandas package is the simplisity
Step5: while examining these plots we can already make some assumptions
Step6: as we can see our assamption was indeed correct - categories 6,8 and 2 are those easiest to predict
Step7: lets see if support vector machines will do any better
Step8: we can see that we get less accurate results on the whole,
Step9: yes!
Step10: wow! we got an average F1 score of 82% this looks great!
|
2,988
|
<ASSISTANT_TASK:>
Python Code:
def solve(n):
    initial_array = [[0 for i in range(n - 1)] for j in range(n - 1)]
    final_array = [[0 for i in range(n)] for j in range(n)]
    # first row of the (n-1) x (n-1) block: 1..n-1
    for i in range(n - 1):
        initial_array[0][i] = i + 1
    # each later row is the previous row cyclically shifted left by one
    for i in range(1, n - 1):
        for j in range(n - 1):
            initial_array[i][j] = initial_array[i - 1][(j + 1) % (n - 1)]
    for i in range(n - 1):
        for j in range(n - 1):
            final_array[i][j] = initial_array[i][j]
    # border the block with zeros in the last row and column
    for i in range(n):
        final_array[i][n - 1] = final_array[n - 1][i] = 0
    # swap each diagonal entry with the last column, mirroring it into the last row
    for i in range(n):
        diagonal = final_array[i][i]
        final_array[i][i] = final_array[i][n - 1]
        final_array[i][n - 1] = diagonal
        final_array[n - 1][i] = diagonal
    for i in range(n):
        for j in range(n):
            print(final_array[i][j], end=" ")
        print()

if __name__ == '__main__':
    n = 5
    solve(n)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
2,989
|
<ASSISTANT_TASK:>
Python Code:
# inline plots
%matplotlib inline
# import matplotlib as plt acronym
import matplotlib.pylab as plt
# import numpy as np acronym
import numpy as np
# synthetic data
x = np.linspace(-10, 10, 100)**3
# plotting
plt.plot(x)
plt.show()
# synthetic data
x = np.random.normal(0, 2, 20)
y = np.random.normal(0, 2, 20)
# plotting
plt.plot(x, y, "o")
plt.show()
# synthetic data
x = np.linspace(-10, 10, 100)
y1 = x**3
y2 = x**2
# plotting
plt.figure(figsize=(12,5)) # create figure with size in inches
plt.plot(x, y1, label="$y=x^3$") # plot y1
plt.plot(x, y2, label="$y=x^3$") # plot y2
plt.title("$y=f(x)$") # main title
plt.xlabel("x [-]") # x axis label
plt.ylabel("y [-]") # y axis label
plt.xlim(-7.5, 10) # limits of x axis
plt.ylim(-750, 750) # limits of y axis
plt.grid() # show grid
plt.legend() # show legend
plt.show()
# synthetic data
x = np.linspace(-10, 10, 25)
y1 = x**3
y2 = x**2
y3 = x**4 / 5
# plotting
plt.figure(figsize=(12,7)) # set size
plt.plot(x, y1, "ro", label="$y=x^3$") # plot y1
plt.plot(x, y2, "b^-", linewidth=6, markersize=15, label="$y=x^3$") # plot y2
plt.plot(x, y3, "k:", linewidth=5, label="$y=x^4/5$") # plot y3
plt.legend() # show legend
plt.show()
# synthetic data
values = [121, 56, 41, 31] # values of bars
years = [2015, 2016, 2017, 2018] # position of bars
# plotting
plt.bar(years, values, align='center')
plt.xticks(years, years)
plt.show()
# synthetic data with normal distribution
x = np.random.normal(0, 2, 1000)
# create and plot histogram
plt.hist(x, bins=20)
plt.show()
# synthetic data with normal distribution
x = np.random.normal(0, 2, 1000)
y = np.random.normal(1, 1, 1000)
# basic plot
plt.boxplot([x,y], labels=["x", "y"])
plt.show()
labels = ['apples', 'oranges', 'pears'] # classes
values = [121, 56, 41] # values for classes
plt.pie(values, labels=labels) # pie chart
plt.legend()
plt.axis('equal') # unscale to 1:1
plt.show()
# synthetic data
x = np.random.normal(0, 2, 20)
y = np.random.normal(0, 2, 20)
plt.figure(figsize=(8,5)) # set size
# plotting
plt.subplot(121)
plt.plot(x, y, "ob")
plt.subplot(122)
plt.plot(y, x, "or")
plt.show()
# synthetic data
xb = np.random.normal(0, 2, 1000)
yb = np.random.normal(1, 1, 1000)
x0 = np.random.normal(0, 2, 20)
y0 = np.random.normal(0, 2, 20)
plt.figure(figsize=(10,10)) # set size
# first row
plt.subplot(311)
plt.plot(x0, "-xk")
plt.xlabel("x [-]")
plt.ylabel("y [-]")
# second row
plt.subplot(323)
plt.hist(xb, bins=20, color="b")
plt.xlabel("x [-]")
plt.ylabel("y [-]")
plt.subplot(324)
plt.hist(yb, bins=20, color="r")
plt.xlabel("x [-]")
plt.ylabel("y [-]")
# third row
plt.subplot(337)
plt.pie(values, autopct='%1.1f%%',
shadow=True,startangle=140)
plt.subplot(338)
plt.boxplot([x,y], labels=["x", "y"])
plt.subplot(339)
plt.pie(values, labels=labels)
# adjust plot placement to make it nicer
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
plt.show()
# set style ggplot
plt.style.use('ggplot')
# syntetic data
x = np.linspace(-10, 10, 100)
y1 = x**3
y2 = x**2
# plotting
plt.figure(figsize=(12,5)) # create figure with size in inches
plt.plot(x, y1, label="$y=x^3$") # plot y1
plt.plot(x, y2, label="$y=x^3$") # plot y2
plt.title("$y=f(x)$") # main title
plt.xlabel("x [-]") # x axis label
plt.ylabel("y [-]") # y axis label
plt.xlim(-7.5, 10) # limits of x axis
plt.ylim(-750, 750) # limits of y axis
plt.legend() # show legend
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Drawing points and series
Step2: Labels, legend, title, size, axis limits, grid
Step3: Marker and line styles
Step4: Other chart types
Step5: Histogram
Step6: Boxplot
Step7: Pie chart
Step8: Several plots in one figure
Step9: The next example varies the subplot grid's row and column counts step by step to build a more complex layout.
Step10: Note
|
2,990
|
<ASSISTANT_TASK:>
Python Code:
dic = {'name': 'Dabar', 'name': 'Dabar New', 'band': 'Honey'}
print(dic)
print(len(dic))
print(dic.items())
for i, j in {"a": "test", "b": "test2"}.items():
print(i, j)
# Progs and their albums
progs = {'Yes': ['Close To The Edge', 'Fragile'],
'Genesis': ['Foxtrot', 'The Nursery Crime'],
'ELP': ['Brain Salad Surgery']}
# More progs
progs['King Crimson'] = ['Red', 'Discipline']
# items() returns a list of
# tuples with key and value
for singer, album in progs.items():
print(singer, ":=>", album)
for albums in progs.values():
print(albums)
for prog in progs:
print(prog, "=>", progs[prog])
# If there is 'ELP', removes
if 'ELP' in progs:
del progs['ELP']
print(progs)
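A related idiom worth knowing: instead of the `in`-check followed by `del`, `dict.pop` with a default removes a key if present and never raises:

```python
# pop returns the removed value, or the default when the key is absent
progs = {'Yes': ['Fragile'], 'ELP': ['Brain Salad Surgery']}
removed = progs.pop('ELP', None)   # key existed: its value is returned
missing = progs.pop('ELP', None)   # key already gone: default is returned
print(removed, missing, 'ELP' in progs)
```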
multid = {'school': 'DMS',
'students_details': {
1001: {
"name": "Mayank",
"age": 41
},
1002: {
"name" : "Vishal",
"age": 42
},
1003: {
"name": "Rajeev Chaturvedi",
"age": 41
}
}
}
print(multid)
print(multid['students_details'][1001])
print(multid['students_details'][1002]['name'])
print(multid['students_details'][1004]['name'])
multid = {'school': 'DMS',
'students_details': {
"students":
[
"Mayank",
"Vishal",
"Rajeev"
]
}}
print(multid)
dupli = {
"meme" : "mjmj",
"test" : "TESt value",
"meme" : "wewe"
}
print(dupli)
for k in dupli:
print(k)
# Matrix in form of string
matrix = '''0 0 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 4 0 0
0 0 0 0 0 0 0 3 0 0 0 0
0 0 0 0 0 0 5 0 0 0 0 0
0 0 0 0 6 0 0 0 0 0 0 0'''
mat = {}
# split the matrix in lines
for row, line in enumerate(matrix.splitlines()):
# Splits the line int cols
for col, column in enumerate(line.split()):
column = int(column)
# Places the column in the result,
# if it is differente from zero
if column:
mat[row, col] = column
print (mat)
# The counting starts with zero
print ('Complete matrix size:', (row + 1) * (col + 1))
print ('Sparse matrix size:', len(mat))
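Reading values back out of this tuple-keyed representation is symmetric: positions that were never stored are implicit zeros, which `dict.get` with a default expresses directly. The small matrix is re-stated here so the cell runs on its own:

```python
mat = {(1, 0): 9, (2, 9): 4, (3, 7): 3, (4, 6): 5, (5, 4): 6}

def entry(m, row, col):
    # stored positions return their value; everything else is zero
    return m.get((row, col), 0)

print(entry(mat, 1, 0), entry(mat, 0, 0))  # 9 0
```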
names = dict(mayank="johri", ashwini="johri", Rahul="Johri")
print(names)
names = dict([("mayank","johri"), ("ashwini", "johri"), ("Rahul","Johri")])
print(names)
d = dict()
d[10.1] = "TEST"
d[10] = "test"
d[10.5] = "really testing"
d[20] = "Testing completed"
print(d)
d = dict()
d[10.0] = "TEST"
d[10] = "test"
d[10.5] = "really testing"
d[20] = "Testing completed"
print(d)
hash(10.0) == hash(10)
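Because `10 == 10.0` and `hash(10) == hash(10.0)`, an int and the equal float are the same dictionary key: the second assignment above overwrites the first instead of adding an entry, and the originally inserted key object is the one that stays. A compact check:

```python
d = {10.0: "float"}
d[10] = "int"                # same key: updates the value in place
print(d)                     # {10.0: 'int'} -- the float key object remains
print(type(next(iter(d))))
```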
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NOTE
Step2: NOTE
Step3: Lets check below two examples and see what is happening
Step4: NOTE
|
2,991
|
<ASSISTANT_TASK:>
Python Code:
# Ваш ответ здесь
import numpy as np
def bagging(sample, sample_answers, subsamples_count):
'''
    Draw subsamples_count bootstrap samples (with replacement) from sample,
    together with the matching entries of sample_answers.
'''
subsamples = np.empty([subsamples_count, sample.shape[0], sample.shape[1]])
subsamples_answers = np.empty([subsamples_count, sample_answers.shape[0]])
for i in np.arange(subsamples_count):
subsample_indexes = np.random.choice(sample.shape[0], sample.shape[0], replace=True)
subsample = sample[subsample_indexes]
subsample_answers = sample_answers[subsample_indexes]
subsamples[i] = subsample
subsamples_answers[i] = subsample_answers
return subsamples, subsamples_answers
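The same bootstrap idea without NumPy, for intuition (a stdlib-only sketch; the helper name is ours): each subsample has the original length, indices are drawn with replacement, and features stay aligned with their answers.

```python
import random

def bootstrap(sample, answers, n_subsamples, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(n_subsamples):
        # draw len(sample) indices with replacement, keep X and y aligned
        idx = [rng.randrange(len(sample)) for _ in range(len(sample))]
        out.append(([sample[i] for i in idx], [answers[i] for i in idx]))
    return out

subs = bootstrap([10, 20, 30, 40], ['a', 'b', 'c', 'd'], 3)
print(len(subs), len(subs[0][0]))  # 3 subsamples, each of size 4
```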
import sklearn
from sklearn.tree import DecisionTreeClassifier
class RandomForest(sklearn.base.BaseEstimator, sklearn.base.ClassifierMixin):
    # Default max_depth=None (instead of np.inf) for compatibility with sklearn.tree.DecisionTreeClassifier
def __init__(self, trees_count, max_features=None, max_depth=None, criterion='gini'):
self.trees_count = trees_count
self.max_features = max_features
self.max_depth = max_depth
self.criterion = criterion
self.trees = np.empty(self.trees_count, dtype=DecisionTreeClassifier)
def fit(self, train_sample, train_sample_answers):
'''
        Build the trees using bagging and the random subspaces method.
        '''
        train_subsamples, train_subsamples_answers = bagging(train_sample, train_sample_answers, self.trees_count)
for i in np.arange(self.trees_count):
classifier = DecisionTreeClassifier(max_features=self.max_features, max_depth=self.max_depth, criterion=self.criterion)
classifier.fit(train_subsamples[i], train_subsamples_answers[i])
self.trees[i] = classifier
return self
def predict(self, test_sample):
'''
        Predict classes using the trained trees.
'''
test_sample_answers = np.empty([self.trees_count, test_sample.shape[0]], dtype=np.int64)
for i in np.arange(self.trees_count):
test_sample_answers[i] = self.trees[i].predict(test_sample)
# https://stackoverflow.com/a/19202117
bincount = lambda tree_test_sample_answers: np.bincount(tree_test_sample_answers, minlength=4)
bins_by_column = np.apply_along_axis(bincount, 0, test_sample_answers)
classes = bins_by_column.argmax(axis=0)
return classes
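The bincount-and-argmax step in `predict` is just a column-wise majority vote over the trees. In plain Python the same aggregation looks like this (a stdlib sketch, not part of the class above):

```python
from collections import Counter

def majority_vote(per_tree_predictions):
    # per_tree_predictions: one row of class labels per tree; columns
    # are test objects. Pick the most common label in each column.
    return [Counter(column).most_common(1)[0][0]
            for column in zip(*per_tree_predictions)]

votes = [[1, 2, 2],   # tree 0
         [1, 3, 2],   # tree 1
         [2, 3, 2]]   # tree 2
print(majority_vote(votes))  # [1, 3, 2]
```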
import pandas as pd
from sklearn.model_selection import train_test_split
raw_dataframe = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
sample = raw_dataframe.values[:, 1:]
sample_answers = raw_dataframe.values[:, 0]
train_sample, test_sample, train_sample_answers, test_sample_answers = train_test_split(sample, sample_answers,
test_size=0.2,
random_state=42)
classifier = RandomForest(trees_count=10)
classifier.fit(train_sample, train_sample_answers)
print(classifier.predict(test_sample) == test_sample_answers)
from sklearn.metrics import accuracy_score
# Could be increased to 200 with the same result; keeping it small to avoid cluttering the output.
for trees_count in np.arange(1, 40):
classifier = RandomForest(trees_count=trees_count)
classifier.fit(train_sample, train_sample_answers)
    print('With {:2} trees the accuracy is {:.3f}.'.format(trees_count, accuracy_score(classifier.predict(test_sample), test_sample_answers)))
from sklearn.ensemble import RandomForestClassifier
for trees_count in np.arange(1, 40):
my_classifier = RandomForest(trees_count=trees_count)
my_classifier.fit(train_sample, train_sample_answers)
their_classifier = RandomForestClassifier(n_estimators=trees_count)
their_classifier.fit(train_sample, train_sample_answers)
print('При {:2} деревьях точность моего составляет {:.3f}, а точность библиотечного — {:.3f}.'\
.format(trees_count,
accuracy_score(my_classifier.predict(test_sample), test_sample_answers),
accuracy_score(their_classifier.predict(test_sample), test_sample_answers)))
import sklearn
from sklearn.tree import DecisionTreeClassifier
class ModifiedRandomForest(sklearn.base.BaseEstimator, sklearn.base.ClassifierMixin):
# Сделал по умолчанию max_depth=None вместо np.inf для совместимости с sklearn.tree.DecisionTreeClassifier
def __init__(self, trees_count, max_features=None, max_depth=None, criterion='gini'):
self.trees_count = trees_count
self.max_features = max_features
self.max_depth = max_depth
self.criterion = criterion
self.trees = np.empty(self.trees_count, dtype=DecisionTreeClassifier)
self.feature_indexes = np.empty([self.trees_count, self.max_features], dtype=np.int64)
def fit(self, train_sample, train_sample_answers):
'''
        Build the trees using bagging and the random subspaces method.
        '''
        raw_train_subsamples, train_subsamples_answers = bagging(train_sample, train_sample_answers, self.trees_count)
for i in np.arange(self.trees_count):
if self.max_features is not None:
self.feature_indexes[i] = np.random.choice(raw_train_subsamples.shape[2], self.max_features)
else:
self.feature_indexes[i] = np.arange(raw_train_subsamples.shape[2])
train_subsamples = raw_train_subsamples[:, :, self.feature_indexes[i]]
classifier = DecisionTreeClassifier(max_features=None, max_depth=self.max_depth, criterion=self.criterion)
classifier.fit(train_subsamples[i], train_subsamples_answers[i])
self.trees[i] = classifier
return self
def predict(self, test_sample):
'''
Predict the class using the trained trees
'''
test_sample_answers = np.empty([self.trees_count, test_sample.shape[0]], dtype=np.int64)
for i in np.arange(self.trees_count):
test_sample_answers[i] = self.trees[i].predict(test_sample[:, self.feature_indexes[i]])
# https://stackoverflow.com/a/19202117
bincount = lambda tree_test_sample_answers: np.bincount(tree_test_sample_answers, minlength=4)
bins_by_column = np.apply_along_axis(bincount, 0, test_sample_answers)
classes = bins_by_column.argmax(axis=0)
return classes
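The per-column majority vote used in `predict` above can be checked in isolation. This is a minimal sketch with made-up tree predictions, assuming four classes as implied by the `minlength=4` call:

```python
import numpy as np

# Rows are trees, columns are samples: each tree votes for a class per sample.
votes = np.array([[0, 1, 2],
                  [0, 1, 1],
                  [3, 2, 1]])

# Count the votes for each class (0..3) in every column, then take the argmax.
bincount = lambda col: np.bincount(col, minlength=4)
bins_by_column = np.apply_along_axis(bincount, 0, votes)
classes = bins_by_column.argmax(axis=0)
print(classes)  # majority class per sample: [0 1 1]
```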
average_accuracy = 0
average_modified_accuracy = 0
iterations_count = 40
max_trees_count = 40
for i in np.arange(iterations_count):
for trees_count in np.arange(1, max_trees_count):
classifier = RandomForest(trees_count=trees_count, max_features=10)
classifier.fit(train_sample, train_sample_answers)
average_accuracy += accuracy_score(classifier.predict(test_sample), test_sample_answers) / iterations_count / max_trees_count
modified_classifier = ModifiedRandomForest(trees_count=trees_count, max_features=10)
modified_classifier.fit(train_sample, train_sample_answers)
average_modified_accuracy += accuracy_score(modified_classifier.predict(test_sample), test_sample_answers) / iterations_count / max_trees_count
if i == 0:
print('With {:2} trees, my forest reaches accuracy {:.3f} and the modified one reaches {:.3f}.'\
.format(trees_count,
accuracy_score(classifier.predict(test_sample), test_sample_answers),
accuracy_score(modified_classifier.predict(test_sample), test_sample_answers)))
print('On average the plain forest reaches accuracy {:.7f} and the modified one reaches {:.7f}.'\
.format(average_accuracy, average_modified_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 3 (10% of the points)
Step2: Testing (15%)
Step3: Show how the accuracy metric changed as the num_trees parameter increased. Are there any signs of overfitting?
Step4: No signs of overfitting are visible.
Step5: Here my ensemble turned out better, though this may change with other parameter settings.
|
2,992
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import sys
import matplotlib.pyplot as plt
import sympy as sym
import pandas as pd
import magnitude as mag
from magnitude import mg
mag.new_mag('lbm', mag.Magnitude(0.45359237, kg=1))
mag.new_mag('lbf', mg(4.4482216152605, 'N'))
mag.new_mag('mph', mg(0.44704, 'm/s'))
from IPython.display import display, Markdown, Latex
def printBig(*message): # like print(), but with header2 formatting
display(Markdown( '## ' + ' '.join([str(x) for x in message]) ))
# drag force:
def dragf(rho, A, Cd, v): return(mg(1/2)*rho*A*Cd*v**2)
# second area moment of an annulus:
def I_annulus(OD, ID): return( mg(np.pi/4)*( (OD/mg(2))**4 - (ID/mg(2))**4 ) )
# area of an annulus:
def annularArea(ID, OD): return(mg(np.pi)*(OD**2/mg(4)-ID**2/mg(4)))
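The helpers above wrap everything in `magnitude` objects; as a quick sanity check of the formulas themselves, here is a plain-float SI version (an illustration, not the notebook's unit-aware code):

```python
import numpy as np

# Plain-float drag force: F = 1/2 * rho * A * Cd * v^2 (SI units).
def drag_force(rho, area, cd, v):
    return 0.5 * rho * area * cd * v**2

# 1.225 kg/m^3 air, 2 m^2 frontal area, Cd = 1.2, 10 m/s wind:
force = drag_force(1.225, 2.0, 1.2, 10.0)
print(force)  # 147.0 N

# Second area moment of an annulus; with ID = 0 it reduces to the
# solid-disc formula pi/4 * r^4.
def second_moment_annulus(od, id_):
    return np.pi / 4 * ((od / 2)**4 - (id_ / 2)**4)
```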
print(sys.version)
%matplotlib inline
sym.init_printing()
FS= mg(1) # factor of safety (NOT SURE IF BUILDING CODES HAVE ONE BUILT IN)
g= mg(9.81,'m/s2') # gravitational acceleration
rho_air= mg(1.225,'kg/m3') # density of air at 0 C
mu_air= mg(1.7e-5,'Pa s') # dynamic viscosity of air at 0 C
nu_air= mu_air/rho_air # kinematic viscosity of air at 0 C
rho_steel= mg(7.8e3,'kg/m3') # density of steel
rho_al= mg(2.7e3,'kg/m3') # density of aluminum
rho_fg= mg(1520,'kg/m3') # density of fiberglass
rho_concrete= mg(2300,'kg/m3') # density of concrete
sigma_y_al= mg(276,'MPa') # tensile yield strength of 6061-T6 aluminum
Cd= mg(1.2) # Drag coefficient of a cylinder/antenna at relevant speeds, (THIS IS A GUESS)
# Fg= mg(300,'lbf') # weight of the ground station (THIS IS A GUESS)
v_wind_actual= mg(100,'mph') # max expected wind speed (WHAT'S THE JUSTIFICATION HERE?)
v_wind= v_wind_actual*FS # max expected wind speed (WHAT'S THE JUSTIFICATION HERE?)
# legs
Lxy_leg= mg(77,'inch') # length of a leg projected in the XY plane
L_leg= mg(78.75,'inch') # length of a leg
L_truss= mg(48.75,'inch') # length of the beam fixing a leg
# feet
Lxy_foot= mg(18.5,'inch') # side length of the foot's footprint
Lz_foot= mg(6,'inch') # height of the feet
# antennas
A_70cm= mg(4.0,'ft2') # frontal area of the 70 cm antenna
# A_70cm= mg(0,'ft2')
m_70cm= mg(7.5,'lbm') # mass of the 70 cm antenna
A_2m= mg(5.0,'ft2') # frontal area of the 2 m antenna
# A_2m= mg(0,'ft2')
m_2m= mg(9.5, 'lbm') # mass of the 2 m antenna
# mast
L_mast= mg(180,'inch') # length of the mast
OD_mast= mg(1.9,'inch') # outer diameter of the mast
ID_mast= mg(1.48,'inch') # inner diameter of the mast
# cross boom
OD_crossBoom= mg(2.0,'inch') # outer diameter of the cross boom
ID_crossBoom= mg(1.5,'inch') # inner diameter of the cross boom
L_crossBoom= mg(60+60+10.75,'inch') # length of the cross boom
rho_crossBoom= rho_fg # density of the cross boom
# legs
OD_leg= mg(1.0,'inch') # outer diameter of the legs
ID_leg= OD_leg-mg(2*0.2,'inch') # inner diameter of the legs (GUESSING A 0.2" wall)
# geometry
# L_leg^2 = Lz_leg^2 + Lxy_leg^2
Lz_leg= np.sqrt(L_leg**2 - Lxy_leg**2)
Lx_leg= Lxy_leg*mg(np.cos(np.pi/6)) # length of a leg projected in the X axis
Ly_leg= Lxy_leg*mg(np.cos(np.pi/3)) # length of a leg projected in the Y axis
# frontal areas
A_mast= L_mast*OD_mast
A_crossBoom= L_crossBoom*OD_crossBoom
A_legs= mg(2)*(Lx_leg*OD_leg) +(Lz_leg*OD_leg)
A_total= A_70cm+A_2m+A_mast+A_crossBoom+A_legs
# masses
m_foot= rho_concrete*Lxy_foot**2*Lz_foot
m_mast= rho_al*L_mast*annularArea(ID=ID_mast, OD=OD_mast)
m_leg= rho_steel*L_leg*annularArea(ID=ID_leg, OD=OD_leg)
m_crossBoom= rho_fg*L_crossBoom*annularArea(ID=ID_crossBoom, OD=OD_crossBoom)
m_structure= m_mast+mg(3)*m_leg+m_crossBoom+m_70cm+m_2m
# centers of mass
Lz_CoM_70cm= L_mast
Lz_CoM_2m= L_mast
Lz_CoM_mast= L_mast/mg(2)
Lz_CoM_foot= Lz_foot/mg(2)
Lz_CoM_leg= Lz_foot+Lz_leg/mg(2)
Lz_CoM_crossBoom= L_mast
print(
'area contributions',
'70cm:', A_70cm.ounit('m2'),
'2m:', A_2m.ounit('m2'),
'mast:', A_mast,
'cross boom:', A_crossBoom,
'legs:', A_legs,
'total:', A_total,
sep='\n')
# center of pressure for the whole station
# assuming the CoP for each element is at the CoM (no lift on elem, constant density of elem)
Lz_CoP= (
A_mast*Lz_CoM_mast
+A_70cm*Lz_CoM_70cm
+A_2m*Lz_CoM_2m
+A_crossBoom*Lz_CoM_crossBoom
+A_legs*Lz_CoM_leg
)/A_total
# height of the CoM of the whole station
# (only useful for finding critical tipping angle)
Lz_CoM_total= (
Lz_CoM_mast*m_mast
+Lz_CoM_2m*m_2m
+Lz_CoM_70cm*m_70cm
+Lz_CoM_crossBoom*m_crossBoom
+Lz_CoM_leg*m_leg*mg(3)
)/m_structure
# drag on the station
#D_wind= mg(1/2)*rho_air*A_total*Cd*v_wind**2
D_wind= dragf(rho=rho_air, A=A_total, Cd=Cd, v=v_wind)
# Moments exerted on the station
M_wind= -D_wind*(Lz_CoP-Lz_foot) # negative sign from CW direction
M_structure= m_structure*g*Lx_leg
M_foot= m_foot*g*(Lx_leg+Lxy_leg)
# 0 == M_wind + M_structure + M_foot + M_balast
M_ballast= -M_wind-M_structure-M_foot
# M_ballast == m_ballast*g*(Lx_leg+Lxy_leg)
m_ballast= M_ballast/g/(Lx_leg+Lxy_leg) # ballast on the foot
# m_ballast= M_ballast/g/(Lx_leg) # ballast on the mast
# # I= pi/4*(r_o^2 - r_i^2)
# I_mast= I_annulus(OD= OD_mast, ID= ID_mast)
# M_bending= D_wind*(Lz_CoP - Lz_leg)
# # sigma_max= M*y/I
# # max stress in the mast, assuming it's anchored at the height of the foot
# sigma_max_mast= M_bending*(OD_mast/mg(2))/I_mast
# print(
# 'tension from bending, if the mast is welded at the height of the feet:',
# sigma_max_mast.ounit('MPa')
# )
# print('tensile yield stress of the mast:', sigma_y_al.ounit('MPa'))
print('m_ballast:', m_ballast.ounit('lbm'))
if (m_ballast.val <= 0):
print('No ballast needed. (Wind will not tip station)')
else:
print('mass of the required ballast per foot:', m_ballast.ounit('lbm'))
print('mass of a foot (for comparison):', m_foot.ounit('lbm'))
F_p_legsDrag= mg(1/2)*dragf(rho=rho_air, A=A_legs, Cd=Cd, v=v_wind)
F_p_boom= dragf(
rho=rho_air, Cd=Cd, v=v_wind,
A= A_70cm+A_2m+A_crossBoom
)
F_d_mast= dragf(rho=rho_air, Cd=Cd, A=A_mast, v=v_wind) # distributed over the mast
# about the base:
# sum(M) == 0 == (F_p_legsReact-F_p_legsDrag)*Lz_leg - F_p_boom*L_mast - F_d_mast*L_mast/2
# sum(F) == 0 == -F_p_legsReact + F_p_legsDrag + F_p_boom + F_p_baseReact + F_d_mast
F_p_legsReact= mg(1)/Lz_leg*( F_p_boom*L_mast + F_d_mast*L_mast/mg(2) ) + F_p_legsDrag
F_p_baseReact= F_p_legsReact - F_p_legsDrag - F_p_boom - F_d_mast # from sum(F) == 0 above
# x = sym.symbols('x')
# print(F_p_boom.ounit('N'))
# # load per length, as a function of length along the mast:
# Fdist_expr_mast= F_d_mast/L_mast \
# + F_p_baseReact*mg(sym.DiracDelta(x),'/m') \
# + (-F_p_legsReact+F_p_legsDrag)*mg(sym.DiracDelta(x-Lz_leg.toval(ounit='m')),'/m') \
# + F_p_boom*mg(sym.DiracDelta(x-L_mast.toval(ounit='m')),'/m')
# print('load distribution expression:\n', Fdist_expr_mast.toval(ounit='N/m'), 'N/m')
# # shear load, as a function of length along the mast:
# F_expr_mast= mg(sym.integrate(Fdist_expr_mast.toval(ounit='N/m'), x),'N')
# print('shear load expression:\n', F_expr_mast.toval(ounit='N'), 'N')
# # bending moment, as a function of length along the mast:
# M_expr_mast= mg(sym.integrate(F_expr_mast.toval(ounit='N'),x), 'N m')
# print('bending load expression:\n', M_expr_mast.toval(ounit='N m'), 'N m')
x = sym.symbols('x')
# load per length, as a function of length along the mast:
Fdist_expr_mast= F_d_mast.toval('N')/L_mast.toval('m') \
+ F_p_baseReact.toval('N')*sym.DiracDelta(x) \
+ (-F_p_legsReact.toval('N')+F_p_legsDrag.toval('N'))*sym.DiracDelta(x-Lz_leg.toval(ounit='m')) \
+ F_p_boom.toval('N')*sym.DiracDelta(x-L_mast.toval(ounit='m'))
print('load distribution expression:\n', Fdist_expr_mast, 'N/m') # Fdist_expr_mast is already a plain sympy expression here
# shear load, as a function of length along the mast:
F_expr_mast= mg(sym.integrate(Fdist_expr_mast, x),'N')
print('shear load expression:\n', F_expr_mast.toval(ounit='N'), 'N')
# bending moment, as a function of length along the mast:
M_expr_mast= mg(sym.integrate(F_expr_mast.toval(ounit='N'),x), 'N m')
print('bending load expression:\n', M_expr_mast.toval(ounit='N m'), 'N m')
xs= np.linspace(-1e-6, L_mast.toval(ounit='m')+1e-6, 300)
Fsf= sym.lambdify(x,F_expr_mast.toval(ounit='N'), ['numpy','sympy'])
Fs= mg(np.array([Fsf(x) for x in xs]),'N')
Vs= Fs/annularArea(OD=OD_mast, ID=ID_mast)
plt.plot(xs, Vs.toval(ounit='MPa'))
print(F_d_mast, F_p_baseReact, F_p_legsReact, F_p_legsDrag, F_p_boom, sep='\n')
Msf= sym.lambdify(x, M_expr_mast.toval(ounit='N m'), ['numpy', 'sympy'])
Ms= mg(np.array([Msf(x) for x in xs]),'N m')
plt.figure()
plt.plot(xs, Ms.toval(ounit='N m'))
I_mast= I_annulus(OD= OD_mast, ID= ID_mast) # needed here since the earlier definition is commented out
sigmas= Ms*(OD_mast/mg(2))/I_mast
plt.figure()
plt.plot(xs, sigmas.toval(ounit='MPa'))
#OD_mast_new=
#ID_mast_new=
print(D_wind.ounit('lbf'))
print(Lz_CoP.ounit('ft'))
print(m_ballast.ounit('lbm'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Diagram
Step2: physical quantities
Step3: ground station parameters
Step4: secondary parameters
Step5: Calculations
Step6: balance moments and find required ballast
Step7: bending on a welded mast
Step8: report results
Step9: Bending
Step10: symbolically find loading on the mast
Step11:
|
2,993
|
<ASSISTANT_TASK:>
Python Code:
# Import modules
import time
import math
import numpy as np
import scipy
import matplotlib.pyplot as plt
def linear_congruential_generator(x, a, b, m):
x = (a * x + b) % m
u = x / m
return u, x, a, b, m
def stdrand(x):
return linear_congruential_generator(x, pow(7, 5), 0, pow(2, 31) - 1)[:2]
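`stdrand` above is the Park-Miller "minimal standard" generator (a = 7^5, m = 2^31 - 1, no offset). A classic self-check from the original paper: starting from seed 1, the state after 10,000 steps should be 1043618065.

```python
# Park-Miller "minimal standard" LCG self-check.
a, m = 7**5, 2**31 - 1
x = 1
for _ in range(10000):
    x = (a * x) % m
print(x)  # 1043618065 (the test value from Park & Miller, CACM 1988)
```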
def halton(p, n):
b = np.zeros(math.ceil(math.log(n + 1) / math.log(p)))
u = np.zeros(n)
for j in range(n):
i = 0
b[0] = b[0] + 1
while b[i] > p - 1 + np.finfo(float).eps:
b[i] = 0
i += 1
b[i] += 1
u[j] = 0
for k in range(1, b.size + 1):
u[j] = u[j] + b[k-1] * pow(p, -k)
return u
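The base-2 Halton (van der Corput) sequence starts 1/2, 1/4, 3/4, 1/8, which gives a quick check of the generator above; the function is reproduced here so the snippet is self-contained:

```python
import math
import numpy as np

def halton(p, n):
    # Van der Corput / Halton sequence in base p, first n points.
    b = np.zeros(math.ceil(math.log(n + 1) / math.log(p)))
    u = np.zeros(n)
    for j in range(n):
        i = 0
        b[0] += 1
        # Propagate the base-p carry through the digit counters.
        while b[i] > p - 1 + np.finfo(float).eps:
            b[i] = 0
            i += 1
            b[i] += 1
        # Reflect the digits around the radix point.
        u[j] = sum(b[k - 1] * p**-k for k in range(1, b.size + 1))
    return u

print(halton(2, 4))  # [0.5   0.25  0.75  0.125]
```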
def monte_carlo_process_std(toss):
x = time.time()
hit = 0
for i in range(toss):
u1, x = stdrand(x)
u2, x = stdrand(x)
if pow(u1, 2) + pow(u2, 2) < 1.0:
hit += 1
return hit * 4.0 / toss
pi = monte_carlo_process_std(2000000)
print('pi = %.10f, err = %.10f' %(pi, abs(pi - np.pi)))
def monte_carlo_process_customized(toss):
x0 = time.time()
args = (x0, 13, 0, 31)
hit = 0
for i in range(toss):
u1, *args = linear_congruential_generator(*args)
u2, *args = linear_congruential_generator(*args)
if pow(u1, 2) + pow(u2, 2) < 1.0:
hit += 1
return hit * 4.0 / toss
pi = monte_carlo_process_customized(2000000)
print('pi = %.10f, err = %.10f' %(pi, abs(pi - np.pi)))
def monte_carlo_process_quasi(toss):
hit = 0
px = halton(2, toss)
py = halton(3, toss)
for i in range(toss):
u1 = px[i]
u2 = py[i]
if pow(u1, 2) + pow(u2, 2) < 1.0:
hit += 1
return hit * 4.0 / toss
pi = monte_carlo_process_quasi(2000000)
print('pi = %.10f, err = %.10f' %(pi, abs(pi - np.pi)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Necesaary Function For Monte Carlo Simulation
Step2: Monte Carlo Simulation (with Minimal standard random number generator)
Step3: Monte Carlo Simulation (with LCG where multiplier = 13, offset = 0 and modulus = 31)
Step4: Monte Carlo Simulation (with Quasi-random numbers)
|
2,994
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Tal Linzen <linzen@nyu.edu>
# Denis A. Engemann <denis.engemann@gmail.com>
# Jona Sassenhagen <jona.sassenhagen@gmail.com>
#
# License: BSD (3-clause)
import pandas as pd
import mne
from mne.stats import linear_regression, fdr_correction
from mne.viz import plot_compare_evokeds
from mne.datasets import kiloword
# Load the data
path = kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
print(epochs.metadata.head())
name = "Concreteness"
df = epochs.metadata
df[name] = pd.cut(df[name], 11, labels=False) / 10
colors = {str(val): val for val in df[name].unique()}
epochs.metadata = df.assign(Intercept=1) # Add an intercept for later
evokeds = {val: epochs[name + " == " + val].average() for val in colors}
plot_compare_evokeds(evokeds, colors=colors, split_legend=True,
cmap=(name + " Percentile", "viridis"))
names = ["Intercept", name]
res = linear_regression(epochs, epochs.metadata[names], names=names)
for cond in names:
res[cond].beta.plot_joint(title=cond, ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
reject_H0, fdr_pvals = fdr_correction(res["Concreteness"].p_val.data)
evoked = res["Concreteness"].beta
evoked.plot_image(mask=reject_H0, time_unit='s')
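`fdr_correction` applies a Benjamini-Hochberg step-up procedure; a minimal numpy sketch of that idea (an illustration, not MNE's implementation):

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    # Benjamini-Hochberg: reject the k smallest p-values, where k is the
    # largest rank i with p_(i) <= alpha * i / m.
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

print(fdr_bh([0.001, 0.008, 0.039, 0.041, 0.042, 0.06]))
# [ True  True False False False False]
```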
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Psycholinguistically relevant word characteristics are continuous. I.e.,
Step2: We observe that there appears to be a monotonic dependence of EEG on
Step3: Because the linear_regression function also estimates p values, we can --
|
2,995
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('orb', times=np.linspace(0,10,1000), dataset='orb01', component=['primary', 'secondary'])
b.add_dataset('lc', times=np.linspace(0,10,1000), dataset='lc01')
b.add_compute('legacy', compute='legacybackend')
print(b.get_compute('legacybackend'))
b.add_compute('phoebe', compute='phoebebackend')
print(b.get_compute('phoebebackend'))
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.run_compute('legacybackend', model='legacyresults')
b.set_value_all('enabled@lc01@phoebebackend', False)
#b.set_value_all('enabled@orb01@legacybackend', False) # don't need this since legacy NEVER computes orbits
print(b.filter(qualifier='enabled'))
b.run_compute(['phoebebackend', 'legacybackend'], model='mixedresults')
print(b['mixedresults'].computes)
b['mixedresults@phoebebackend'].datasets
b['mixedresults@legacybackend'].datasets
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: And we'll attach some dummy datasets. See Datasets for more details.
Step3: Available Backends
Step4: Using Alternate Backends
Step5: Running Compute
Step6: Running Multiple Backends Simultaneously
Step7: The parameters inside the returned model even remember which set of compute options (and therefore, in this case, which backend) were used to compute them.
|
2,996
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../..')
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import pandas as pd
from batchflow import NumpySampler as NS
# truncated normal and uniform
ns1 = NS('n', dim=2).truncate(2.0, 0.8, lambda m: np.sum(np.abs(m), axis=1)) + 4
ns2 = 2 * NS('u', dim=2).truncate(1, expr=lambda m: np.sum(m, axis=1)) - (1, 1)
ns3 = NS('n', dim=2).truncate(1.5, expr=lambda m: np.sum(np.square(m), axis=1)) + (4, 0)
ns4 = ((NS('n', dim=2).truncate(2.5, expr=lambda m: np.sum(np.square(m), axis=1)) * 4)
.apply(lambda m: m.astype(int)) / 4 + (0, 3))
# a mixture of all four
ns = 0.4 & ns1 | 0.2 & ns2 | 0.39 & ns3 | 0.01 & ns4
# take a look at the heatmap of our sampler:
h = np.histogramdd(ns.sample(int(1e6)), bins=100, density=True)
plt.imshow(h[0])
from batchflow import NumpySampler as NS
ns = NS('n', dim=2)
smp = ns.sample(size=200)
plt.scatter(*np.transpose(smp))
from batchflow import ScipySampler as SS
ss = SS('mvn', mean=[0, 0], cov=[[2, 1], [1, 2]]) # note also that you can pass the same params as in
smp = ss.sample(2000) # scipy.sample.multivariate_normal, such as `mean` and `cov`
plt.scatter(*np.transpose(smp))
from batchflow import HistoSampler as HS
histo = np.histogramdd(ss.sample(1000000))
hs = HS(histo)
plt.scatter(*np.transpose(hs.sample(150)))
hs = HS(edges=2 * [np.linspace(-4, 4)])
hs.update(ss.sample(1000000))
plt.imshow(hs.bins, interpolation='bilinear')
# blur using "+"
u = NS('u', dim=2)
noise = NS('n', dim=2)
blurred = u + noise * 0.2 # decrease the magnitude of the noise
both = blurred | u + (2, 2)
plt.imshow(np.histogramdd(both.sample(1000000), bins=100)[0])
n = NS('n', dim=2).truncate(3, 0.3, expr=lambda m: np.sum(m**2, axis=1))
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0])
n = (4 * NS('n', dim=2)).apply(lambda m: m.astype(int)).truncate([6, 6], [-6, -6])
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0])
bm = lambda vec2: np.sqrt(-2 * np.log(vec2[:, 0:1])) * np.concatenate([np.cos(2 * np.pi * vec2[:, 1:2]),
np.sin(2 * np.pi * vec2[:, 1:2])], axis=1)
n = NS('u', dim=2).apply(bm)
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0])
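The `bm` lambda above is the Box-Muller transform, mapping uniform (0, 1) pairs to independent standard-normal pairs. A standalone check of its output statistics (plain numpy, no sampler objects):

```python
import numpy as np

def box_muller(vec2):
    # Maps uniform (0,1) pairs to independent standard-normal pairs.
    r = np.sqrt(-2 * np.log(vec2[:, 0:1]))
    return r * np.concatenate([np.cos(2 * np.pi * vec2[:, 1:2]),
                               np.sin(2 * np.pi * vec2[:, 1:2])], axis=1)

rng = np.random.default_rng(0)
z = box_muller(rng.random((100000, 2)))
print(z.mean(), z.std())  # both close to 0 and 1 respectively
```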
n, u = NS('n'), SS('u') # initialize one-dimensional notrmal and uniform samplers
s = n & u # stack them together
s.sample(3)
ns1 = NS('n', dim=2).truncate(2.0, 0.8, lambda m: np.sum(np.abs(m), axis=1)) + 4
ns2 = 2 * NS('u', dim=2).truncate(1, expr=lambda m: np.sum(m, axis=1)) - (1, 1)
ns3 = NS('n', dim=2).truncate(1.5, expr=lambda m: np.sum(np.square(m), axis=1)) + (4, 0)
ns4 = ((NS('n', dim=2).truncate(2.5, expr=lambda m: np.sum(np.square(m), axis=1)) * 4)
.apply(lambda m: m.astype(int)) / 4 + (0, 3))
ns = 0.4 & ns1 | 0.2 & ns2 | 0.39 & ns3 | 0.01 & ns4
plt.imshow(np.histogramdd(ns.sample(int(1e6)), bins=100, density=True)[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Intro
Step2: Building Samplers
Step3: take a look at a sample generated by our sampler
Step4: The same goes for ScipySampler based on scipy.stats-distributions, or SS ("mvn" stands for multivariate-normal)
Step5: 2. HistoSampler as an estimate of a distribution generating a cloud of points
Step6: ...or you can specify empty bins and estimate its weights using a method HS.update and a cloud of points
Step7: 3. Algebra of Samplers; operations on Samplers
Step8: You may also want to truncate a sampler's distribution so that sampling points belong to a specific region. The common use-case is to sample normal points inside a box.
Step9: Not infrequently you need to obtain "normal" sample in integers. For this you can use Sampler.apply method
Step10: Note that Sampler.apply-method allows you to add an arbitrary transformation to a sampler. For instance, Box-Muller transform
Step11: Another useful thing is coordinate stacking ("&" stands for multiplication of distribution functions)
Step12: 4. Alltogether
|
2,997
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
#    "open shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
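For illustration, here is roughly how one of the TODO property blocks above would be completed. The real ES-DOC `DOC` helper (set up earlier in this generated notebook) is not available here, so a minimal mock stands in for it purely to show the `set_id` / `set_value` call pattern; the real helper also validates values against the CMIP6 controlled vocabulary.

```python
# Minimal stand-in for the ES-DOC "DOC" helper, for illustration only.
class _MockDoc:
    def __init__(self):
        self.content = {}
        self._current = None

    def set_id(self, prop_id):
        # Select which CMIP6 property subsequent set_value() calls apply to.
        self._current = prop_id

    def set_value(self, value):
        # Record a value for the currently selected property.
        self.content.setdefault(self._current, []).append(value)

DOC = _MockDoc()
DOC.set_id('cmip6.land.lakes.wetlands.description')
DOC.set_value("Example description of how wetlands are represented.")
```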
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
2,998
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import dateutil.parser

#df = pd.read_csv("small-311-2015.csv")
df = pd.read_csv("311-2014.csv", nrows=200000)
df.head(2)
df.info()
def parse_date (str_date):
return dateutil.parser.parse(str_date)
df['created_dt']= df['Created Date'].apply(parse_date)
df.head(3)
df.info()
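The `apply(parse_date)` call above parses dates one row at a time through `dateutil`; `pd.to_datetime` parses the whole column at once and is the usual vectorized alternative. A sketch on a hypothetical two-row sample (the real CSV isn't available here; the column name and date format match the 311 data used above):

```python
import pandas as pd

# Hypothetical mini-sample standing in for the 311 CSV.
sample = pd.DataFrame({"Created Date": ["04/01/2015 10:30:00 AM",
                                        "05/02/2015 01:00:00 PM"]})

# Parse the whole column in one call instead of one dateutil call per row.
sample["created_dt"] = pd.to_datetime(sample["Created Date"],
                                      format="%m/%d/%Y %I:%M:%S %p")
```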
df["Complaint Type"].value_counts().head(1)
df["Complaint Type"].value_counts().head(5).sort_values().plot(kind='barh')
df["Borough"].value_counts()
people_bronx= 1438159
people_queens= 2321580
people_manhattan=1636268
people_brooklyn= 2621793
people_staten_island= 473279
complaints_per_capita_bronx= 29610/people_bronx
complaints_per_capita_bronx
complaints_per_capita_queens=46824/people_queens
complaints_per_capita_queens
complaints_per_capita_manhattan=42050/people_manhattan
complaints_per_capita_manhattan
# NOTE: the numerators below should be each borough's complaint count from
# df["Borough"].value_counts(), not the borough's population.
complaints_per_capita_staten_island=473279/people_staten_island
complaints_per_capita_staten_island
complaints_per_capita_brooklyn=2621793/people_brooklyn
complaints_per_capita_brooklyn
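The borough-by-borough arithmetic above can be done in one shot with aligned `Series` division. A sketch using only the three complaint counts quoted above (in the notebook the counts would come from `df["Borough"].value_counts()`; the remaining boroughs' counts are omitted rather than guessed):

```python
import pandas as pd

population = pd.Series({
    "BRONX": 1438159, "QUEENS": 2321580, "MANHATTAN": 1636268,
    "BROOKLYN": 2621793, "STATEN ISLAND": 473279,
})
# Stand-in for df["Borough"].value_counts(); only the counts shown above.
complaints = pd.Series({"BRONX": 29610, "QUEENS": 46824, "MANHATTAN": 42050})

# Division aligns on the borough labels; boroughs missing a count become NaN.
per_capita = (complaints / population).dropna().sort_values(ascending=False)
```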
df.index = df['created_dt']
#del df['Created Date']
df.head()
print("There were", len(df['2015-03']), "cases filed in March")
print("There were", len(df['2015-05']), "cases filed in May")
df['2015-04-01']
df['2015-04-01']['Complaint Type'].value_counts().head(3)
df.info()
df.resample('M').count()
df.resample('M').count().index[0]
import numpy as np
np.__version__
df.resample('M').count().plot(y="Unique Key")
ax= df.groupby(df.index.month).count().plot(y='Unique Key', legend=False)
ax.set_xticks([1,2,3,4,5,6,7,8,9,10,11, 12])
ax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
ax.set_ylabel("Number of Complaints")
ax.set_title("311 complains in 2015")
#df.resample('W').count().head(5)
df.resample('W').count().plot(y="Unique Key", color= "purple")
df[df['Complaint Type'].str.contains("Noise")].head()
noise_df= df[df['Complaint Type'].str.contains("Noise")]
noise_graph= noise_df.groupby(noise_df.index.month).count().plot(y='Unique Key', legend=False)
noise_graph.set_xticks([1,2,3,4,5,6,7,8,9,10,11, 12])
noise_graph.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
noise_graph.set_ylabel("Number of Noise Complaints")
noise_graph.set_title("311 noise complains in 2015")
noise_df.groupby(by=noise_df.index.hour)['Unique Key'].count().plot()
noise_graph= noise_df.groupby(noise_df.index.dayofweek).count().plot(y='Unique Key', legend=False)
noise_graph.set_xticks([0,1,2,3,4,5,6])  # dayofweek runs 0 (Mon) through 6 (Sun)
noise_graph.set_xticklabels(['Mon', 'Tues', 'Wed', 'Thur', 'Fri', 'Sat', 'Sun'])
noise_graph.set_ylabel("Number of Noise Complaints")
noise_graph.set_title("311 noise complains in 2015")
daily_count= df['Unique Key'].resample('D').count().sort_values(ascending=False)
top_5_days= daily_count.head(5)
top_5_days
ax = top_5_days.plot(kind='bar')
ax.set_xticklabels(top_5_days.index.strftime('%Y-%m-%d'))  # date-only tick labels
ax.set_title("Top 5 days")
ax.set_xlabel("Day")
ax.set_ylabel("Complaints")
hour_graph= df.groupby(df.index.hour).count().plot(y='Unique Key', legend=False)
hour_graph.set_xticks([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23])
hour_graph.set_title("A day of complaints")
hour_graph.set_xlabel("Hours")
hour_graph.set_ylabel("Complaints")
twelve_am_complaints= df[df.index.hour <1]
twelve_am_complaints.head()
twelve_am_complaints['Complaint Type'].value_counts().head(5)
one_am_complaints= df[df.index.hour == 1]
one_am_complaints['Complaint Type'].value_counts().head(5)
eleven_pm_complaints= df[df.index.hour == 23]
eleven_pm_complaints['Complaint Type'].value_counts().head(5)
twelve_am_complaints.groupby(twelve_am_complaints.index.minute).count()
df['Agency'].value_counts().head(5)
df_NYPD = df[df['Agency'] == 'NYPD']
df_HPD = df[df['Agency'] == 'HPD']
df_DOT = df[df['Agency'] == 'DOT']
df_DPR= df[df['Agency'] == 'DPR']
df_DOHMH= df[df['Agency'] == 'DOHMH']
all_graph = df_NYPD.groupby(by= df_NYPD.index.hour).count().plot(y='Unique Key', label='NYPD complaints')
all_graph.set_xticks([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23])
all_graph.set_title("A day of complaints by the top 5 agencies")
all_graph.set_xlabel("Hours")
all_graph.set_ylabel("Complaints")
df_HPD.groupby(by= df_HPD.index.hour).count().plot(y='Unique Key', ax=all_graph , label='HPD complaints')
df_DOT.groupby(by= df_DOT.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DOT complaints')
df_DPR.groupby(by= df_DPR.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DPR complaints')
df_DOHMH.groupby(by= df_DOHMH.index.hour).count().plot(y='Unique Key', ax=all_graph , label='DOHMH complaints')
all_graph = df_NYPD.groupby(by= df_NYPD.index.weekofyear).count().plot(y='Unique Key', label='NYPD complaints')
#all_graph.set_xticks([1,50])
all_graph.set_title("A year of complaints by the top 5 agencies")
all_graph.set_xlabel("Weeks")
all_graph.legend(loc='center left', bbox_to_anchor=(1, 0.5))
df_HPD.groupby(by= df_HPD.index.week).count().plot(y='Unique Key', ax=all_graph , label='HPD complaints')
df_DOT.groupby(by= df_DOT.index.week).count().plot(y='Unique Key', ax=all_graph , label='DOT complaints')
df_DPR.groupby(by= df_DPR.index.week).count().plot(y='Unique Key', ax=all_graph , label='DPR complaints')
df_DOHMH.groupby(by= df_DOHMH.index.week).count().plot(y='Unique Key', ax=all_graph , label='DOHMH complaints')
plt.legend(bbox_to_anchor=(0, 1), loc='best', ncol=1)
print("May and June are the months with the most complaints, followed by October, November and December. "
      "In May the NYPD and HPD have an odd number of complaints.")
August_July = df["2015-07":"2015-08"]
August_July_complaints = August_July['Complaint Type'].value_counts().head(5)
August_July_complaints
May = df['2015-05']
May_complaints= May['Complaint Type'].value_counts().head(5)
May_complaints
# August_July_vs_May= August_July_complaints.plot(y='Unique Key', label='August - July complaints')
# August_July_vs_May.set_ylabel("Number of Complaints")
# August_July_vs_May.set_title("August-July vs May Complaints")
# May['Complaint Type'].value_counts().head(5).plot(y='Unique Key', ax=August_July_vs_May, label='May complaints')
# August_July_vs_May.set_xticks([1,2,3,4,5])
# August_July_vs_May.set_xticklabels(['Illegal Parking', 'Blocked Driveway', 'Noise - Street/Sidewalk', 'Street Condition', 'Noise - Commercial'])
#Most popular complaints of the HPD
df_HPD['Complaint Type'].value_counts().head(5)
summer_complaints= df_HPD["2015-06":"2015-08"]['Complaint Type'].value_counts().head(5)
summer_complaints
winter_complaints= df_HPD["2015-01":"2015-02"]['Complaint Type'].value_counts().head(5)
winter_complaints
winter_complaints_dec= df_HPD["2015-12"]['Complaint Type'].value_counts().head(5)
winter_complaints_dec
# Use Series.add with fill_value=0 so complaint types missing from one period keep their counts instead of becoming NaN
winter_results = df_HPD["2015-12"]['Complaint Type'].value_counts().add(
    df_HPD["2015-01":"2015-02"]['Complaint Type'].value_counts(), fill_value=0)
winter_results
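When combining `value_counts()` Series from different periods, plain `+` yields `NaN` for any label present in only one of them, so `Series.add(..., fill_value=0)` is the safer pattern. A minimal sketch with hypothetical complaint counts:

```python
import pandas as pd

# Hypothetical December and January/February complaint counts.
dec = pd.Series({"HEAT": 5, "PLUMBING": 2})
jan = pd.Series({"HEAT": 3, "PAINT": 1})

naive = dec + jan                     # labels missing from one side become NaN
robust = dec.add(jan, fill_value=0)   # missing labels are treated as 0

print(naive["PLUMBING"])              # nan
print(robust["PLUMBING"])             # 2.0
```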
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What was the most popular type of complaint, and how many times was it filed?
Step2: Make a horizontal bar graph of the top 5 most frequent complaint types.
Step3: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
Step4: According to your selection of data, how many cases were filed in March? How about May?
Step5: I'd like to see all of the 311 complaints called in on April 1st.
Step6: What was the most popular type of complaint on April 1st?
Step7: What month has the most reports filed? How many? Graph it.
Step8: What week of the year has the most reports filed? How many? Graph the weekly complaints.
Step9: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make a chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
Step10: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
Step11: What hour of the day are the most complaints? Graph a day of complaints.
Step12: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
Step13: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
Step14: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
Step16: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
Step17: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
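The `.str.contains` selection asked for in Step 9 can be sketched as follows, with hypothetical complaint data standing in for the real 311 frame:

```python
import pandas as pd

# Hypothetical complaint data for illustrating the Step 9 selection.
df = pd.DataFrame({"Complaint Type": ["Noise - Street/Sidewalk",
                                      "Illegal Parking",
                                      "Noise - Commercial"]})

# na=False keeps rows with missing complaint types out of the boolean mask.
noise = df[df["Complaint Type"].str.contains("Noise", na=False)]
print(len(noise))  # 2
```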
|
2,999
|
<ASSISTANT_TASK:>
Python Code:
REQUIRED_MINIMUM_PANDAS_VERSION = '0.17.1'
try:
    import pandas as pd
    # Note: plain string comparison of version numbers is fragile,
    # but it suffices for modern pandas releases.
    assert pd.__version__ >= REQUIRED_MINIMUM_PANDAS_VERSION
except (ImportError, AssertionError):
    raise Exception("Version %s or above of Pandas is required to run this notebook" % REQUIRED_MINIMUM_PANDAS_VERSION)
import sys
try:
    import docplex.mp
except ImportError:
    raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
try:
    import cplex
except ImportError:
    raise Exception('Please install CPLEX. See https://pypi.org/project/cplex/')
# If needed, install the module pandas prior to executing this cell
import pandas as pd
from pandas import DataFrame, Series
df_mines = DataFrame({"royalties": [ 5 , 4, 4, 5 ],
"ore_quality": [ 1.0, 0.7, 1.5, 0.5],
"max_extract": [ 2 , 2.5, 1.3, 3 ]})
nb_mines = len(df_mines)
df_mines.index.name='range_mines'
df_mines
blend_qualities = Series([0.9, 0.8, 1.2, 0.6, 1.0])
nb_years = len(blend_qualities)
print("* Planning mining operations for: {} years".format(nb_years))
blend_qualities.describe()
# global data
blend_price = 10
max_worked_mines = 3 # work no more than 3 mines each year
discount_rate = 0.10 # 10% interest rate each year
from docplex.mp.model import Model
mm = Model("mining_pandas")
# auxiliary data: ranges
range_mines = range(nb_mines)
range_years = range(nb_years)
# binary decisions: work the mine or not
work_vars = mm.binary_var_matrix(keys1=range_mines, keys2=range_years, name='work')
# open the mine or not
open_vars = mm.binary_var_matrix(range_mines, range_years, name='open')
# quantity to extract
ore_vars = mm.continuous_var_matrix(range_mines, range_years, name='ore')
mm.print_information()
# Organize all decision variables in a DataFrame indexed by 'range_mines' and 'range_years'
df_decision_vars = DataFrame({'work': work_vars, 'open': open_vars, 'ore': ore_vars})
# Set index names
df_decision_vars.index.names=['range_mines', 'range_years']
# Display rows of 'df_decision_vars' DataFrame for first mine
df_decision_vars[:nb_years]
mm.add_constraints(t.work <= t.open for t in df_decision_vars.itertuples())
mm.print_information()
# Once closed, a mine stays closed
def postOpenCloseConstraint(open_vars):
mm.add_constraints(open_next <= open_curr
for (open_next, open_curr) in zip(open_vars[1:], open_vars))
# Optionally: return a string to display information regarding the aggregate operation in the Output cell
return "posted {0} open/close constraints".format(len(open_vars) - 1)
# Constraints on sequences of decision variables are posted for each mine,
# using pandas' "groupby" operation.
df_decision_vars.open.groupby(level='range_mines').aggregate(postOpenCloseConstraint)
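The "once closed, stays closed" constraints pair each year's open/close decision with the previous year's via `zip(seq[1:], seq)`. The same pattern in plain Python, with illustrative 0/1 states rather than model variables:

```python
# Hypothetical open/closed states for one mine over 5 years.
opens = [1, 1, 0, 0, 0]

# zip(seq[1:], seq) yields (next, current) pairs of consecutive years.
pairs = list(zip(opens[1:], opens))
print(pairs)  # [(1, 1), (0, 1), (0, 0), (0, 0)]

# open[t+1] <= open[t] holds everywhere, so this closure pattern is feasible:
print(all(nxt <= cur for nxt, cur in pairs))  # True
```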
# Maximum number of worked mines each year
# Note that Model.sum() accepts a pandas Series of variables.
df_decision_vars.work.groupby(level='range_years').aggregate(
lambda works: mm.add_constraint(mm.sum(works) <= max_worked_mines))
# Display rows of 'df_decision_vars' joined with 'df_mines.max_extract' Series for first two mines
df_decision_vars.join(df_mines.max_extract)[:(nb_years * 2)]
# quantity extracted is limited
mm.add_constraints(t.ore <= t.max_extract * t.work
for t in df_decision_vars.join(df_mines.max_extract).itertuples())
mm.print_information()
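The `join` + `itertuples` idiom above aligns the two frames on their shared index and exposes each row's columns as named-tuple attributes. A small sketch with hypothetical two-mine data:

```python
import pandas as pd

# Hypothetical frames sharing the 'range_mines' index.
left = pd.DataFrame({"work": [0, 1]},
                    index=pd.Index([0, 1], name="range_mines"))
right = pd.DataFrame({"max_extract": [2.0, 2.5]},
                     index=pd.Index([0, 1], name="range_mines"))

# join aligns on the shared index; itertuples exposes columns as attributes,
# mirroring the t.max_extract * t.work expression in the constraint above.
caps = [t.max_extract * t.work for t in left.join(right).itertuples()]
print(caps)  # [0.0, 2.5]
```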
# blend variables
blend_vars = mm.continuous_var_list(nb_years, name='blend')
# define blend variables as sum of extracted quantities
mm.add_constraints(mm.sum(ores.values) == blend_vars[year]
for year, ores in df_decision_vars.ore.groupby(level='range_years'))
mm.print_information()
# Quality requirement on blended ore
mm.add_constraints(mm.sum(ores.values * df_mines.ore_quality) >= blend_qualities[year] * blend_vars[year]
for year, ores in df_decision_vars.ore.groupby(level='range_years'))
mm.print_information()
actualization = 1.0 - discount_rate
assert actualization > 0
assert actualization <= 1
#
s_discounts = Series((actualization ** y for y in range_years), index=range_years, name='discounts')
s_discounts.index.name='range_years'
# e.g. [1, 0.9, 0.81, ... 0.9**y...]
print(s_discounts)
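The discount factors are simply powers of (1 - rate), and the actualized objective is a dot product of cash flows with those factors. A plain-Python sketch with a hypothetical constant revenue stream:

```python
# Discount factors are powers of (1 - rate); hypothetical 5-year horizon.
discount_rate = 0.10
actualization = 1.0 - discount_rate

discounts = [actualization ** y for y in range(5)]
print([round(d, 4) for d in discounts])  # [1.0, 0.9, 0.81, 0.729, 0.6561]

# Net present value of a hypothetical constant revenue of 10 per year.
revenues = [10] * 5
npv = sum(r * d for r, d in zip(revenues, discounts))
print(round(npv, 4))                     # 40.951
```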
expected_revenue = blend_price * mm.dot(blend_vars, s_discounts)
mm.add_kpi(expected_revenue, "Total Actualized Revenue");
df_royalties_data = df_decision_vars.join(df_mines.royalties).join(s_discounts)
# add a new column to compute discounted royalties using pandas multiplication on columns
df_royalties_data['disc_royalties'] = df_royalties_data['royalties'] * df_royalties_data['discounts']
df_royalties_data[:nb_years]
total_royalties = mm.dot(df_royalties_data.open, df_royalties_data.disc_royalties)
mm.add_kpi(total_royalties, "Total Actualized Royalties");
mm.maximize(expected_revenue - total_royalties)
mm.print_information()
# turn this flag on to see the solve log
print_cplex_log = False
# start the solve
s1 = mm.solve(log_output=print_cplex_log)
assert s1, "!!! Solve of the model fails"
mm.report()
mine_labels = [("mine%d" % (m+1)) for m in range_mines]
ylabels = [("y%d" % (y+1)) for y in range_years]
# Add a column to DataFrame containing 'ore' decision variables value
# Note that we extract the solution values of ore variables in one operation with get_values().
df_decision_vars['ore_values'] = s1.get_values(df_decision_vars.ore)
# Create a pivot table by (years, mines), using pandas' "unstack" method to transform the 'range_mines' row index
# into columns
df_res = df_decision_vars.ore_values.unstack(level='range_mines')
# Set user-friendly labels for column and row indices
df_res.columns = mine_labels
df_res.index = ylabels
df_res
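`unstack` pivots one level of a MultiIndex into columns, which is how the (mine, year) Series above becomes a years-by-mines table. A small illustration with hypothetical values:

```python
import pandas as pd

# Hypothetical (mine, year) values mimicking the ore_values Series above.
idx = pd.MultiIndex.from_product([range(2), range(3)],
                                 names=["range_mines", "range_years"])
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], index=idx, name="ore_values")

pivot = s.unstack(level="range_mines")  # rows: years, columns: mines
print(pivot.shape)       # (3, 2)
print(pivot.loc[0, 1])   # 4.0
```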
# import matplotlib library for visualization
import matplotlib.pyplot as plt
# matplotlib graphics are printed -inside- the notebook
%matplotlib inline
df_res.plot(kind="bar", figsize=(10,4.5))
plt.xlabel("year")
plt.ylabel("ore")
plt.title('ore values per year');
# a list of (mine, year) tuples on which work is not possible.
forced_stops = [(1, 2), (0, 1), (1, 0), (3, 2), (2, 3), (3, 4)]
mm.add_constraints(work_vars[stop_m, stop_y] == 0
for stop_m, stop_y in forced_stops)
mm.print_information()
# build a new, empty solution
full_mining = mm.new_solution()
# define the worked
for m in range_mines:
for y in range_years:
if (m,y) not in forced_stops:
full_mining.add_var_value(work_vars[m,y], 1)
#full_mining.display()
mm.add_mip_start(full_mining)
s2 = mm.solve(log_output=True) # turns on CPLEX logging
assert s2, "solve failed"
mm.report()
# Add a column to DataFrame containing 'ore' decision variables value and create a pivot table by (years, mines)
df_decision_vars['ore_values2'] = s2.get_values(df_decision_vars.ore)
df_res2 = df_decision_vars.ore_values2.unstack(level='range_mines')
df_res2.columns = mine_labels
df_res2.index = ylabels
df_res2.plot(kind="bar", figsize=(10,4.5))
plt.xlabel("year")
plt.ylabel("ore")
plt.title('ore values per year - what-if scenario');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use decision optimization
Step 1
Step2: If CPLEX is not installed, you can install CPLEX Community edition.
Step3: Step 2
Step4: Blend quality data
Step5: Additional (global) data
Step6: Step 3
Step7: What are the decisions we need to make?
Step8: Express the business constraints
Step9: Now, let's iterate over rows of the DataFrame "df_decision_vars" and enforce the desired constraints.
Step10: Constraint 2
Step11: Constraint 3
Step12: Constraint 4
Step13: Now, the constraint to limit quantity extracted is easily created by iterating over all rows of the joined DataFrames
Step14: Blend constraints
Step15: Minimum average blend quality constraint
Step16: KPIs and objective
Step17: Total actualized revenue
Step18: Total actualized royalty cost
Step19: The total royalty is now calculated by multiplying the columns "open", "royalties" and "discounts", and to sum over all rows.<br>
Step20: Express the objective
Step21: Solve with Decision Optimization
Step22: Step 5
Step23: Visualize results
Step24: Adding operational constraints.
Step25: The previous solution does not satisfy these constraints; for example (0, 1) means mine 1 should not be worked on year 2, but it was in fact worked in the above solution.
Step26: Then we pass this solution to the model as a MIP start solution and re-solve,
Step27: You can see in the CPLEX log above, that our MIP start solution provided a good start for CPLEX, defining an initial solution with objective 157.9355
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.