| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Adding Delays
In stochastic simulations, bioscrape also supports delays. In a delayed reaction, delay inputs/outputs are consumed/produced after some amount of delay. Reactions may have a mix of delayed and non-delayed inputs and outputs. Bioscrape natively supports a number of delay types:
fixed: a constant delay with parameter "delay".
gaussian: a Gaussian-distributed delay with parameters "mean" and "std".
gamma: a gamma-distributed delay with shape parameter "k" and scale parameter "theta".
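For intuition about these delay distributions, they can be sampled directly with NumPy (illustrative parameters only; this sketch is independent of bioscrape's internal sampling):

```python
import numpy as np

rng = np.random.default_rng(0)
fixed_delay = 10.0                               # "fixed": constant delay of 10
gaussian_delays = rng.normal(10.0, 1.0, 10000)   # "gaussian": mean=10, std=1
gamma_delays = rng.gamma(10.0, 3.0, 10000)       # "gamma": shape k=10, scale theta=3 (mean = k*theta = 30)
```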
In the following example, delays are added to the transcription and translation reactions described above, and the model is then simulated stochastically. Note that delays and delay inputs/outputs are ignored if a model with delays is simulated deterministically.
|
import numpy as np
import matplotlib.pyplot as plt
from bioscrape.simulator import py_simulate_model
from bioscrape.types import Model
#Reaction tuples with delays require additional elements. They are of the form:
#(Inputs[string list], Outputs[string list], propensity_type[string], propensity_dict {propensity_param:model_param},
# delay_type[string], DelayInputs [string list], DelayOutputs [string list], delay_param_dictionary {delay_param:model_param}).
rxn1d = (["G"], ["G"], "proportionalhillpositive", {"d":"G", "s1":"I", "k":"ktx", "K":"KI", "n":"n"},
"gaussian", [], ["T"], {"mean":10.0, "std":1.0})
rxn2d = (["T"], ["T"], "hillpositive", {"s1":"T", "k":"ktl", "K":"KR", "n":1},
"gamma", [], ["X"], {"k":10.0, "theta":3.0})
#Reactions 3 and 4 remain unchanged
rxns_delay = [rxn1d, rxn2d, rxn3, rxn4]
#Instantiate the Model object; species, params, and x0 remain unchanged from the previous example
M_delay = Model(species = species, parameters = params, reactions = rxns_delay, initial_condition_dict = x0)
#Simulate the Model with delay
results_delay = py_simulate_model(timepoints, Model = M_delay, stochastic = True, delay = True)
#Plot the results
plt.figure(figsize = (12, 4))
plt.subplot(121)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic (no delay)")
plt.plot(timepoints, results_stoch["T"], label = "stochastic (no delay)")
plt.plot(timepoints, results_delay["T"], label = "stochastic (with delay)")
plt.legend()
plt.xlabel("Time")
plt.subplot(122)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic (no delay)")
plt.plot(timepoints, results_stoch["X"], label = "stochastic (no delay)")
plt.plot(timepoints, results_delay["X"], label = "stochastic (with delay)")
plt.legend()
plt.xlabel("Time")
|
examples/Basic Examples - START HERE.ipynb
|
ananswam/bioscrape
|
mit
|
Adding Rules
In deterministic and stochastic simulations, bioscrape also supports rules, which can be used to set species or parameter values during the simulation. Rules are updated at every simulation timepoint, and therefore the model may be sensitive to the timepoint spacing.
In the following example, two rules are added to the above model (without delays):
$I = I_0 H(t-T)$ where $H$ is the step function. This represents the addition of the inducer I at concentration $I_0$ at some time $T$; prior to $t=T$, I is not present.
$S = M \frac{X}{1+aX}$ represents a saturating signal detected from the species X via some sort of sensor.
Rules can also be used for quasi-steady-state or quasi-equilibrium approximations, computing parameters during the simulation, and much more!
There are two main types of rules:
1. "additive": used for calculating the total of many species. The rule 'equation' must be of the form $s_0 = s_1 + s_2 + \cdots$ where each $s_i$ represents a species name.
2. "assignment": a general rule type with an 'equation' of the form $v = f(s, p)$, where $v$ can be either a species or a parameter and is assigned the value $f(s, p)$; here $s$ denotes all the species and $p$ all the parameters in the model, and $f$ is written as a string.
|
#Add new species "S" and "I" to the model. Note: by making S a species, its output will be returned as a time-course.
M = Model(species = species + ["S", "I"], parameters = params, reactions = rxns, initial_condition_dict = x0)
#Create new parameters for rule 1. Model is now being modified
M.create_parameter("I0", 10) #Inducer concentration
M.create_parameter("T_I0", 25) #Initial time inducer is added
#Create rule 1:
#NOTE Rules can also be passed into the Model constructor as a list of tuples [("rule_type", {"equation":"eq string"})]
M.create_rule("assignment", {"equation":"I = _I0*Heaviside(t-_T_I0)"}) #"_" must be placed before param names, but not species.
#Rule 2 will use constants in equations instead of new parameters.
M.create_rule("assignment", {"equation":"S = 50*X/(1+.2*X)"})
#reset the initial concentration of the inducer to 0
M.set_species({"I":0})
print(M.get_species_list())
print(M.get_params())
#Simulate the Model deterministically
timepoints = np.arange(0, 150, 1.0)
results_det = py_simulate_model(timepoints, Model = M) #Returns a Pandas DataFrame
#Simulate the Model Stochastically
results_stoch = py_simulate_model(timepoints, Model = M, stochastic = True)
#Plot the results
plt.figure(figsize = (12, 8))
plt.subplot(223)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic")
plt.plot(timepoints, results_stoch["T"], label = "stochastic")
plt.legend()
plt.subplot(224)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic")
plt.plot(timepoints, results_stoch["X"], label = "stochastic")
plt.legend()
plt.subplot(221)
plt.title("Inducer I")
plt.plot(timepoints, results_det["I"], label = "deterministic")
plt.plot(timepoints, results_stoch["I"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
plt.subplot(222)
plt.title("Signal S")
plt.plot(timepoints, results_det["S"], label = "deterministic")
plt.plot(timepoints, results_stoch["S"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
M.write_bioscrape_xml('models/txtl_bioscrape1.xml')
f = open('models/txtl_bioscrape1.xml')
print("Bioscrape Model:\n", f.read())
|
examples/Basic Examples - START HERE.ipynb
|
ananswam/bioscrape
|
mit
|
Saving and Loading Bioscrape Models via Bioscrape XML
Models can be saved and loaded as Bioscrape XML. Here we will save and load the transcription translation model and display the bioscrape XML underneath. Once a model has been loaded, it can be accessed and modified via the API.
|
M.write_bioscrape_xml('models/txtl_model.xml')
# f = open('models/txtl_model.xml')
# print("Bioscrape Model XML:\n", f.read())
M_loaded = Model('models/txtl_model.xml')
print(M_loaded.get_species_list())
print(M_loaded.get_params())
#Change the induction time
#NOTE That changing a model loaded from xml will not change the underlying XML.
M_loaded.set_parameter("T_I0", 50)
M_loaded.write_bioscrape_xml('models/txtl_model_bioscrape.xml')
# f = open('models/txtl_model_bioscrape.xml')
# print("Bioscrape Model XML:\n", f.read())
#Simulate the Model deterministically
timepoints = np.arange(0, 150, 1.0)
results_det = py_simulate_model(timepoints, Model = M_loaded) #Returns a Pandas DataFrame
#Simulate the Model Stochastically
results_stoch = py_simulate_model(timepoints, Model = M_loaded, stochastic = True)
#Plot the results
plt.figure(figsize = (12, 8))
plt.subplot(223)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic")
plt.plot(timepoints, results_stoch["T"], label = "stochastic")
plt.legend()
plt.subplot(224)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic")
plt.plot(timepoints, results_stoch["X"], label = "stochastic")
plt.legend()
plt.subplot(221)
plt.title("Inducer I")
plt.plot(timepoints, results_det["I"], label = "deterministic")
plt.plot(timepoints, results_stoch["I"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
plt.subplot(222)
plt.title("Signal S")
plt.plot(timepoints, results_det["S"], label = "deterministic")
plt.plot(timepoints, results_stoch["S"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
|
examples/Basic Examples - START HERE.ipynb
|
ananswam/bioscrape
|
mit
|
SBML Support : Saving and Loading Bioscrape Models via SBML
Models can be saved and loaded as SBML. Here we will save and load the transcription-translation model to an SBML file. Delays, compartments, function definitions, and some other SBML features are not supported.
Once a model has been loaded, it can be accessed and modified via the API.
|
M.write_sbml_model('models/txtl_model_sbml.xml')
# Print out the SBML model
f = open('models/txtl_model_sbml.xml')
print("Bioscrape Model converted to SBML:\n", f.read())
from bioscrape.sbmlutil import import_sbml
M_loaded_sbml = import_sbml('models/txtl_model_sbml.xml')
#Simulate the Model deterministically
timepoints = np.arange(0, 150, 1.0)
results_det = py_simulate_model(timepoints, Model = M_loaded_sbml) #Returns a Pandas DataFrame
#Simulate the Model Stochastically
results_stoch = py_simulate_model(timepoints, Model = M_loaded_sbml, stochastic = True)
#Plot the results
plt.figure(figsize = (12, 8))
plt.subplot(223)
plt.title("Transcript T")
plt.plot(timepoints, results_det["T"], label = "deterministic")
plt.plot(timepoints, results_stoch["T"], label = "stochastic")
plt.legend()
plt.subplot(224)
plt.title("Protein X")
plt.plot(timepoints, results_det["X"], label = "deterministic")
plt.plot(timepoints, results_stoch["X"], label = "stochastic")
plt.legend()
plt.subplot(221)
plt.title("Inducer I")
plt.plot(timepoints, results_det["I"], label = "deterministic")
plt.plot(timepoints, results_stoch["I"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
plt.subplot(222)
plt.title("Signal S")
plt.plot(timepoints, results_det["S"], label = "deterministic")
plt.plot(timepoints, results_stoch["S"], label = "stochastic")
plt.legend()
plt.xlabel("Time")
|
examples/Basic Examples - START HERE.ipynb
|
ananswam/bioscrape
|
mit
|
More on SBML Compatibility
The next cell imports a model from an SBML file and then simulates it using a deterministic simulation. There are limitations to SBML compatibility.
Bioscrape cannot support delays or events when reading in SBML files. Events will be ignored and a warning will be printed.
SBML reaction rates must be in a format such that, when the reaction rates are converted to a string formula, sympy is able to parse the formula. This works for ordinary arithmetic rate expressions but will fail for complex function definitions and the like.
Species will be initialized to their initialAmount field when it is nonzero. If the initialAmount is zero, then the initialConcentration will be used instead.
Multiple compartments, or anything related to compartments, are not supported, and no warnings are given for this.
Assignment rules are supported, but any other type of rule will be ignored with an associated warning.
Parameter names must start with a letter and be alphanumeric; the same holds for species names. Furthermore, log, exp, abs, heaviside, and other keywords associated with functions are not allowed as variable names. When in doubt, just pick something else :)
Below, we plot the simulation results for an SBML model in which a species X0 is converted to a final species X1 through an enzymatic process.
|
from bioscrape.sbmlutil import import_sbml
M_sbml = import_sbml('models/sbml_test.xml')
timepoints = np.linspace(0,100,1000)
result = py_simulate_model(timepoints, Model = M_sbml)
plt.figure()
for s in M_sbml.get_species_list():
plt.plot(timepoints, result[s], label = s)
plt.legend()
|
examples/Basic Examples - START HERE.ipynb
|
ananswam/bioscrape
|
mit
|
Deterministic and Stochastic Simulation of the Repressilator
We simulate the repressilator model found <a href="http://www.ebi.ac.uk/biomodels-main/BIOMD0000000012">here</a>. This model generates oscillations, as expected. Highlighting the utility of this package, we then switch to a stochastic simulation with a single line of code and note that the amplitudes of the oscillations become noisy.
|
# Repressilator deterministic example
from bioscrape.sbmlutil import import_sbml
M_represillator = import_sbml('models/repressilator_sbml.xml')
#Simulate Deterministically and Stochastically
timepoints = np.linspace(0,700,10000)
result_det = py_simulate_model(timepoints, Model = M_represillator)
result_stoch = py_simulate_model(timepoints, Model = M_represillator, stochastic = True)
#Plot Results
plt.figure(figsize = (12, 8))
color_list = plt.rcParams['axes.prop_cycle'].by_key()['color'] #color_list was defined earlier in the original notebook
for i in range(len(M_represillator.get_species_list())):
    s = M_represillator.get_species_list()[i]
    plt.plot(timepoints, result_det[s], color = color_list[i], label = "Deterministic "+s)
    plt.plot(timepoints, result_stoch[s], ":", color = color_list[i], label = "Stochastic "+s)
plt.title('Repressilator Model')
plt.xlabel('Time')
plt.ylabel('Amount')
plt.legend();
|
examples/Basic Examples - START HERE.ipynb
|
ananswam/bioscrape
|
mit
|
DNC: Begin Part 1
Part 1: Data Visualization
1-1: Plotting x-y data
Use the HCEPDB file to create a single 2x2 composite plot of four panels (not 4 separate figures). The plots should contain the following data
Upper-left: PCE vs VOC
Upper-right: PCE vs JSC
Lower-left: E_HOMO vs VOC
Lower-right: E_LUMO vs PCE
You should make the plots the highest quality possible and, in your judgement, ready for inclusion in a formal report or publication.
After you are finished making the plot, add a markdown cell with the following information
There are five terms above from the HCEPDB that relate to photovoltaic materials - define them as they pertain to molecules that could be used for energy conversion applications
Briefly explain the changes you made from the default plot and why you made them
1-2: Contour plotting
Use the ALA2fes.dat file to create a contour plot of the alanine dipeptide $\Phi$ vs $\Psi$ free-energy surface. Guidelines and information:
The energy scale in the data input file is in kJ/mol and the free-energy surface (FES) was collected at a temperature of 300 K:
You should create a contour plot that draws contour lines spaced every kT in energy and stops drawing contours once all of the features can be clearly seen.
This is a slightly different visualization from what we drew in class, which used shaded coloring to draw the contours.
Annotate the cell so I can follow all the steps you are doing. The final energy plot need not be in kJ/mol (you can convert it to another energy unit or use units of kT if you prefer).
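As a quick side calculation (not part of the original assignment), the value of kT at 300 K in kJ/mol can be checked from physical constants:

```python
# kT at 300 K, converted to kJ/mol: k_B * T * N_A / 1000
k_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro's number, 1/mol
T = 300.0             # temperature, K
kT_kJ_per_mol = k_B * T * N_A / 1000
print(kT_kJ_per_mol)  # ~2.494 kJ/mol
```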
1-1 Plot
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('HCEPD_100K.csv')
data.head()
#create a single 2x2 composite plot
#ref: https://plot.ly/matplotlib/subplots/
fig = plt.figure()
fig.set_figheight(10)
fig.set_figwidth(10)
ax1 = fig.add_subplot(221)
ax1.plot(data['voc'],data['pce'],',')
ax1.set_xlabel('VOC')
ax1.set_ylabel('PCE')
ax1.set_title('PCE vs VOC')
ax1.set_ylim([-0.5,12])
ax1.grid()
ax2 = fig.add_subplot(222)
ax2.plot(data['jsc'],data['pce'],',')
ax2.set_xlabel('JSC')
ax2.set_ylabel('PCE')
ax2.set_title('PCE vs JSC')
ax2.set_ylim([-0.5,12])
ax2.grid()
ax3 = fig.add_subplot(223)
ax3.plot(data['voc'],data['e_homo_alpha'],',')
ax3.set_xlabel('VOC')
ax3.set_ylabel('$E_{HOMO}$')
ax3.set_title('$E_{HOMO}$ vs VOC')
ax3.set_xlim([-0.1,1.8])
ax3.grid()
ax4 = fig.add_subplot(224)
ax4.plot(data['pce'],data['e_lumo_alpha'],',')
ax4.set_xlabel('PCE')
ax4.set_ylabel('$E_{LUMO}$')
ax4.set_title('$E_{LUMO}$ vs PCE')
ax4.set_xlim([-0.5,12])
ax4.grid()
|
DSMCER_Hw/dsmcer-hw-2-danielfather7/HW2 Tai-Yu Pan.ipynb
|
danielfather7/teach_Python
|
gpl-3.0
|
1-1 Information
Five Terms:
PCE: Power conversion efficiency, the fraction of incident sunlight that is converted to electricity.
VOC: Open-circuit voltage, the output voltage of a photovoltaic (PV) cell under no load.
JSC: Short-circuit current, the current through the solar cell when the voltage across the solar cell is zero.
E_HOMO: Energy of the highest occupied molecular orbital.
E_LUMO: Energy of the lowest unoccupied molecular orbital.
(HOMO–LUMO gap is the energy difference between the HOMO and LUMO.)
Changes I made:
Change figure size to clearly display all the labels and titles.
Change the subplot() to make a 2x2 figure.
Use the "," (pixel) marker to display each point.
Give all figures labels and titles.
Change the x limit and y limit to clearly show the points at 0.
Add grid lines.
1-2 Contour plot
|
#Read the file first.
data2 = pd.read_csv('ALA2fes.dat', delim_whitespace=True, comment='#', names=['phi','psi','file.free','der_phi','der_psi'])
#Take a look at the data.
data2.head()
#We should know how many rows there are before doing the contour plot.
data2.shape
#Because it has 2500 rows, reshape the data into a 50x50 matrix.
N = 50
M = 50
X = np.reshape(data2.psi,[N,M])
Y = np.reshape(data2.phi,[N,M])
Z = np.reshape(data2['file.free']-data2['file.free'].min(),[N,M])
#Draw contours every kT: at 300 K, kT is approximately 2.5 kJ/mol, so use 2.5 as the contour spacing.
#Levels should span the full range of the FES, so use lines=42.
spacer = 2.5
lines = 42
levels = np.linspace(0,lines*spacer,num=(lines+1),endpoint=True)
fig2 = plt.figure(figsize=(5,5))
axes = fig2.add_subplot(111)
plt.contour(X,Y,Z,levels)
#Give plot title and labels.
plt.title('$\Phi$ vs $\Psi$ on free energy surface')
plt.xlabel('$\Psi$')
plt.ylabel('$\Phi$')
plt.colorbar().ax.set_ylabel('FES (kJ/mol)')
|
DSMCER_Hw/dsmcer-hw-2-danielfather7/HW2 Tai-Yu Pan.ipynb
|
danielfather7/teach_Python
|
gpl-3.0
|
Create Date And Time Data
|
import pandas as pd

# Create data frame
df = pd.DataFrame()
# Create 150 weekly dates
df['date'] = pd.date_range('1/1/2001', periods=150, freq='W')
|
machine-learning/break_up_dates_and_times_into_multiple_features.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
Break Up Dates And Times Into Individual Features
|
# Create features for year, month, day, hour, and minute
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['day'] = df['date'].dt.day
df['hour'] = df['date'].dt.hour
df['minute'] = df['date'].dt.minute
# Show three rows
df.head(3)
|
machine-learning/break_up_dates_and_times_into_multiple_features.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
Walker detection with openCV
Open video and get video info
|
import time
import cv2

video_capture = cv2.VideoCapture('resources/TestWalker.mp4')
# From https://www.learnopencv.com/how-to-find-frame-rate-or-frames-per-second-fps-in-opencv-python-cpp/
# Find OpenCV version
(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
print major_ver, minor_ver, subminor_ver
# With a webcam, get(CV_CAP_PROP_FPS) does not work.
# Let's see for ourselves.
if int(major_ver) < 3:
    fps = video_capture.get(cv2.cv.CV_CAP_PROP_FPS)
    print "Frames per second using video.get(cv2.cv.CV_CAP_PROP_FPS): {0}".format(fps)
else:
    fps = video_capture.get(cv2.CAP_PROP_FPS)
    print "Frames per second using video.get(cv2.CAP_PROP_FPS) : {0}".format(fps)
# Number of frames to capture
num_frames = 120
print "Capturing {0} frames".format(num_frames)
# Start time
start = time.time()
# Grab a few frames
for i in xrange(0, num_frames):
    ret, frame = video_capture.read()
# End time
end = time.time()
# Time elapsed
seconds = end - start
print "Time taken : {0} seconds".format(seconds)
# Calculate frames per second
fps = num_frames / seconds
print "Estimated frames per second : {0}".format(fps)
# cProfile.runctx('video_capture.read()', globals(), locals(), 'profile.prof')
# use snakeviz to read the output of the profiling
|
testWalkerDetection.ipynb
|
davidruffner/cv-people-detector
|
mit
|
Track walker using difference between frames
Following http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms
|
def getSmallGrayFrame(video):
    # Grab a frame, downsample by 4x in each dimension, and convert to grayscale
    ret, frame = video.read()
    if not ret:
        return ret, frame
    frameSmall = frame[::4, ::4]
    gray = cv2.cvtColor(frameSmall, cv2.COLOR_BGR2GRAY)
    return ret, gray

#cv2.startWindowThread()
count = 0
for x in range(200):
    count = count + 1
    print count
    ret1, gray1 = getSmallGrayFrame(video_capture)
    ret2, gray2 = getSmallGrayFrame(video_capture)
    if not ret1 or not ret2:
        break
    diff = cv2.absdiff(gray1, gray2)
    print np.amax(diff), np.amin(diff)
    print
    # Threshold the difference image, then clean it up with erosion and dilation
    diffThresh = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3,3), np.uint8)
    erosion = cv2.erode(diffThresh[1], kernel, iterations = 1)
    dilation = cv2.dilate(erosion, kernel, iterations = 1)
    # Overlay the tinted motion mask on the grayscale frame
    color1 = cv2.cvtColor(gray1, cv2.COLOR_GRAY2RGB)
    colorDil = cv2.cvtColor(dilation, cv2.COLOR_GRAY2RGB)
    colorDil[:,:,1:2] = colorDil[:,:,1:2]*0
    total = cv2.add(color1, colorDil)
    cv2.imshow('Video', total)
    cv2.imwrite('resources/frame{}.png'.format(x), total)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # Need the cv2.waitKey to update plot
        break
# To close the windows: http://stackoverflow.com/questions/6116564/destroywindow-does-not-close-window-on-mac-using-python-and-opencv#15058451
cv2.waitKey(1000)
cv2.waitKey(1)
cv2.destroyAllWindows()
cv2.waitKey(1)
|
testWalkerDetection.ipynb
|
davidruffner/cv-people-detector
|
mit
|
Typical SOLT Procedure
A two-port calibration is accomplished in an identical way to a one-port calibration, except all the standards are two-port networks. This is even true of reflective standards.
So if you measure reflective standards, you must measure two of them simultaneously and store the result as a two-port network (with S21 = S12 = 0). For example, connect a first short to port 1 and a second short to port 2, and save the two-port measurement as short,short.s2p or similar.
<img src="VNA_2_1port.svg" width="30%"/>
If you don't have two identical reflective standards, no worries! You can forge a two-port Network from two one-port Networks using the function skrf.network.two_port_reflect:
short = rf.Network('ideals/short.s1p') # a 1-port Network
shorts = rf.two_port_reflect(short, short) # a 2-port Network
The function skrf.network.two_port_reflect does this:
<img src="2_1port_to_1_2port.svg" width="50%"/>
The typical workflow for a SOLT calibration is:
# a list of Network types, holding 'ideal' responses
my_ideals = [
rf.Network('ideal/short, short.s2p'),
rf.Network('ideal/open, open.s2p'),
rf.Network('ideal/load, load.s2p'),
rf.Network('ideal/thru.s2p'),
]
# a list of Network types, holding 'measured' responses
my_measured = [
rf.Network('measured/short, short.s2p'),
rf.Network('measured/open, open.s2p'),
rf.Network('measured/load, load.s2p'),
rf.Network('measured/thru.s2p'),
]
## create a SOLT instance
cal = SOLT(
ideals = my_ideals,
measured = my_measured,
)
## run, and apply calibration to a DUT
# run calibration algorithm
cal.run()
# apply it to a dut
dut = rf.Network('my_dut.s2p')
dut_caled = cal.apply_cal(dut)
# plot results
dut_caled.plot_s_db()
# save results
dut_caled.write_touchstone()
Example
The following example illustrates a common situation: a DUT is connected to a VNA using two cables of different lengths. The purpose of the calibration is to move the reference planes to the DUT, that is, to remove the effect of the cables from the measurement.
<img src="line1_dut_line2.svg" width="60%"/>
In the example below, the DUT is already known, just to be able to confirm that the calibration method is working at the end. Of course, in reality, the DUT is generally not known...
|
dut = rf.data.ring_slot
dut.plot_s_db(lw=2) # this is what we should find after the calibration
|
doc/source/examples/metrology/SOLT.ipynb
|
jhillairet/scikit-rf
|
bsd-3-clause
|
The ideal component Networks are obtained from your calibration kit manufacturers or from modelling.
In this example, we simulate ideal components from transmission line theory. We create a lossy and noisy transmission line (for the sake of the example).
|
media = rf.DefinedGammaZ0(frequency=dut.frequency, gamma=0.5 + 1j)
|
doc/source/examples/metrology/SOLT.ipynb
|
jhillairet/scikit-rf
|
bsd-3-clause
|
Then we create the ideal components: Short, Open, Load, and Thru. By default, the methods media.short(), media.open(), and media.match() return a one-port network, while the SOLT class expects a list of two-port Networks, so two_port_reflect() is needed to forge a two-port network from two one-port networks (media.thru() already returns a two-port network, so no adjustment is needed).
Alternatively, the argument nports=2 can be used as a shorthand for this task.
|
# ideal 1-port Networks
short_ideal = media.short()
open_ideal = media.open()
load_ideal = media.match() # could also be: media.load(Gamma0=0)
thru_ideal = media.thru()
# forge a two-port network from two one-port networks
short_ideal_2p = rf.two_port_reflect(short_ideal, short_ideal)
open_ideal_2p = rf.two_port_reflect(open_ideal, open_ideal)
load_ideal_2p = rf.two_port_reflect(load_ideal, load_ideal)
# alternatively, the "nports=2" argument can be used as a shorthand
# short_ideal_2p = media.short(nports=2)
# open_ideal_2p = media.open(nports=2)
# load_ideal_2p = media.match(nports=2)
|
doc/source/examples/metrology/SOLT.ipynb
|
jhillairet/scikit-rf
|
bsd-3-clause
|
Now that we have our ideal elements, let's fake the measurements.
Note that the transmission lines are not symmetric in the example below, to make the setup as generic as possible. In such a case, it is necessary to call the flipped() method to connect the ideal elements on the correct side of the line2 object.
|
# left and right piece of transmission lines
line1 = media.line(d=20, unit='cm')**media.impedance_mismatch(1,2)
line2 = media.line(d=30, unit='cm')**media.impedance_mismatch(1,3)
# add some noise to make it more realistic
line1.add_noise_polar(.01, .1)
line2.add_noise_polar(.01, .1)
# fake the measured setup
measured = line1 ** dut ** line2
# fake the calibration measurements
# Note the use of flipped() on line2
open_measured = rf.two_port_reflect(line1 ** media.open(), line2.flipped() ** media.open())
short_measured = rf.two_port_reflect(line1 ** media.short(), line2.flipped() ** media.short())
load_measured = rf.two_port_reflect(line1 ** media.load(Gamma0=0), line2.flipped() ** media.load(Gamma0=0))
thru_measured = line1 ** line2
|
doc/source/examples/metrology/SOLT.ipynb
|
jhillairet/scikit-rf
|
bsd-3-clause
|
We can now create the lists of Network that the SOLT class expects:
|
# a list of Network types, holding 'ideal' responses
my_ideals = [
short_ideal_2p,
open_ideal_2p,
load_ideal_2p,
thru_ideal, # Thru should be the last
]
# a list of Network types, holding 'measured' responses
my_measured = [
short_measured,
open_measured,
load_measured,
thru_measured, # Thru should be the last
]
## create a SOLT instance
cal = rf.calibration.SOLT(
ideals = my_ideals,
measured = my_measured,
)
|
doc/source/examples/metrology/SOLT.ipynb
|
jhillairet/scikit-rf
|
bsd-3-clause
|
And finally apply the calibration:
|
# run calibration algorithm
cal.run()
# apply it to a dut
measured_caled = cal.apply_cal(measured)
|
doc/source/examples/metrology/SOLT.ipynb
|
jhillairet/scikit-rf
|
bsd-3-clause
|
Let's see the results for S11 and S21:
|
measured.plot_s_db(m=0, n=0, lw=2, label='measured')
measured_caled.plot_s_db(m=0, n=0, lw=2, label='caled')
dut.plot_s_db(m=0, n=0, ls='--', lw=2, label='expected')
measured.plot_s_db(m=1, n=0, lw=2, label='measured')
measured_caled.plot_s_db(m=1, n=0, lw=2, label='caled')
dut.plot_s_db(m=1, n=0, ls='--', lw=2, label='expected')
|
doc/source/examples/metrology/SOLT.ipynb
|
jhillairet/scikit-rf
|
bsd-3-clause
|
The caled Network is (mostly) equal to the DUT, as expected:
|
dut == measured        # False: the raw measurement still includes the cables
dut == measured_caled  # True, within 1e-4 absolute tolerance
|
doc/source/examples/metrology/SOLT.ipynb
|
jhillairet/scikit-rf
|
bsd-3-clause
|
Class 7: Deterministic Time Series Models
Time series models are at the foundation of dynamic macroeconomic theory. A time series model is an equation or system of equations that describes how the variables in the model change with time. Here, we examine some theory about deterministic, i.e., non-random, time series models and we explore methods for simulating them. Later, we'll examine the properties of stochastic time series models by introducing random variables to the discrete time models covered below.
Discrete Versus Continuous Time
To begin, suppose that we are interested in a variable $y$ that takes on the value $y_t$ at date $t$. The date index $t$ is a real number. We'll say that $y_t$ is a discrete time variable if $t$ takes on values from a countable sequence; e.g. $t = 1, 2, 3 \ldots$ and so on. Otherwise, if $t$ takes on values from an uncountable sequence; e.g. $t\in[0,\infty)$, then we'll say that $y_t$ is a continuous time variable. Discrete and continuous time models both have important places in macroeconomic theory, but we're going to focus on understanding discrete time models.
First-Order Difference Equations
Now, suppose that the variable $y_t$ is determined by a linear function of $y_{t-1}$ and some other exogenously given variable $w_t$
\begin{align}
y_{t} & = (1- \rho) \mu + \rho y_{t-1} + w_t, \tag{1}
\end{align}
where $\rho$ and $\mu$ are constants. Equation (1) is an example of a linear first-order difference equation. As a difference equation, it specifies how $y_t$ is related to past values of $y$. The equation is a first-order difference equation because it specifies that $y_t$ depends only on $y_{t-1}$ and not $y_{t-2}$ or $y_{t-3}$.
Example: Compounding Interest
Suppose that you have an initial balance of $b_0$ dollars in a savings account that pays an interest rate $i$ per compounding period. Then, after the first compounding, your account will have $b_1 = (1+i)b_0$ dollars in it. Assuming that you never withdraw funds from the account, then your account balance in any subsequent period $t$ is given by the following difference equation:
\begin{align}
b_{t} & = \left(1+i\right) b_{t-1}. \tag{2}
\end{align}
Equation (2) is a linear first-order difference equation of the same form as Equation (1). You can see this by setting $y_t = b_t$, $\rho=1+i$, $\mu=0$, and $w_t=0$ in Equation (1).
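To make Equation (2) concrete, here is a short iteration with illustrative values (an initial balance of 100 and a 5% rate per period; these numbers are made up for the sketch):

```python
# Iterate b_t = (1 + i) * b_{t-1} starting from b_0
i = 0.05       # interest rate per compounding period
b = 100.0      # initial balance b_0
balances = [b]
for t in range(3):
    b = (1 + i) * b
    balances.append(b)
print(balances)  # the balance grows by 5% each period
```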
Example: Capital Accumulation
Let $K_t$ denote the amount of physical capital in a country at date $t$, let $\delta$ denote the rate at which the capital stock depreciates each period, and let $I_t$ denote the country's investment in new capital at date $t$. Then the law of motion for the stock of physical capital is:
\begin{align}
K_{t+1} & = I_t + (1-\delta)K_t. \tag{3}
\end{align}
This standard expression for the law of motion for the capital stock is a linear first-order difference equation. To reconcile Equation (3) with Equation (1), set $y_t = K_{t+1}$, $\rho=1-\delta$, $\mu=0$, and $w_t=I_t$.
Note: There is a potentially confusing way in which we identified the $t+1$-dated variable $K_{t+1}$ with the $t$-dated variable $y_t$ in this example. We can do this because the value of $K_{t+1}$ truly is determined at date $t$ even though the capital isn't used for production until the next period.
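A quick numerical illustration of Equation (3), with made-up values for the depreciation rate and investment:

```python
# Iterate K_{t+1} = I_t + (1 - delta) * K_t
delta = 0.1                        # depreciation rate
K = 50.0                           # initial capital stock K_0
investment = [10.0, 10.0, 10.0]    # I_t for three periods
for I in investment:
    K = I + (1 - delta) * K
print(K)
```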
Computation
From Equation (1), it's easy to compute the value of $y_t$ as long as you know the values of the constants $\rho$ and $\mu$ and the variables $y_{t-1}$ and $w_t$. To begin, let's suppose that the values of the constants are $\mu=0$, $\rho=0.5$. Then Equation (1) in our example looks like this:
\begin{align}
y_{t} & = 0.5 y_{t-1} + w_t. \tag{4}
\end{align}
Now, suppose that the initial value of $y$ is $y_0=0$ and that $w$ is equal to 1 in the first period and equal to zero in subsequent periods. That is: $w_1=1$ and $w_2=w_3=\cdots =0$. Now, with what we have, we can compute $y_1$. Here's how:
|
# Initialize variables: y0, rho, w1
y0, rho, w1 = 0, 0.5, 1
# Compute the period 1 value of y
y1 = rho*y0 + w1
# Print the result
print(y1)
|
winter2017/econ129/python/Econ129_Class_07.ipynb
|
letsgoexploring/teaching
|
mit
|
The variable y1 in the preceding example stores the computed value for $y_1$. We can continue to iterate on Equation (4) to compute $y_2$, $y_3$, and so on. For example:
|
# Initialize w2
w2 = 0
# Compute the period 2 value of y
y2 = rho*y1 + w2
# Print the result
print(y2)
|
winter2017/econ129/python/Econ129_Class_07.ipynb
|
letsgoexploring/teaching
|
mit
|
We can do this as many times as necessary to reach the desired value of $t$. Note that iteration is necesary. Even though $y_t$ is apparently a function of $t$, we could not, for example, compute $y_{20}$ directly. Rather we'd have to compute $y_1, y_2, y_3, \ldots, y_{19}$ first. The linear first-order difference equation is an example of a recursive model and iteration is necessary for computing recursive models in general.
Of course, there is a better way. Let's define a function called diff1_example() that takes as arguments $\rho$, an array of values for $w$, and $y_0$.
|
# Initialize the variables T and w
# Define a function that returns an array of y-values given rho, y0, and an array of w values.
|
winter2017/econ129/python/Econ129_Class_07.ipynb
|
letsgoexploring/teaching
|
mit
|
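One plausible implementation of diff1_example() (a sketch; the notebook's own version may index w differently):

```python
import numpy as np

def diff1_example(rho, w, y0):
    """Iterate y_t = rho*y_{t-1} + w_t and return the array [y_0, ..., y_T]."""
    w = np.asarray(w)
    y = np.zeros(len(w) + 1)
    y[0] = y0
    for t in range(1, len(w) + 1):
        y[t] = rho * y[t - 1] + w[t - 1]  # w[t-1] holds the value of w_t
    return y

# Example inputs mirroring the text: w_1 = 1 and w_2 = w_3 = ... = 0
T = 10
w = np.zeros(T)
w[0] = 1
y = diff1_example(0.5, w, 0)
print(y[:4])  # first values: 0, 1, 0.5, 0.25
```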
Exercise:
Use the function diff1_example() to make a $2\times2$ grid of plots just like the previous exercise but with $\rho = 0.5$, $-0.5$, $1$, and $1.25$. For each, set $T = 10$, $y_0 = 1$, $w_0 = 1$, and $w_1 = w_2 = \cdots = 0$.
|
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(2,2,1)
y = diff1_example(0.5,w,0)
ax1.plot(y,'-',lw=5,alpha = 0.75)
ax1.set_title('$\\rho=0.5$')
ax1.set_ylabel('y')
ax1.set_xlabel('t')
ax1.grid()
|
winter2017/econ129/python/Econ129_Class_07.ipynb
|
letsgoexploring/teaching
|
mit
|
Exercise 1: Visualize this data set. What representation is most appropriate, do you think?
Exercise 2: Let's now do some machine learning. In this exercise, you are going to use a random forest classifier to classify this data set. Here are the steps you'll need to perform:
* Split the column with the classes (stars and galaxies) from the rest of the data
* Cast the features and the classes to numpy arrays
* Split the data into a test set and a training set. The training set will be used to train the classifier; the test set we'll reserve for the very end to test the final performance of the model (more on this on Friday). You can use the scikit-learn function test_train_split for this task
* Define a RandomForest object from the sklearn.ensemble module. Note that the RandomForest class has three parameters:
- n_estimators: The number of decision trees in the random forest
- max_features: The maximum number of features to use for the decision trees
- min_samples_leaf: The minimum number of samples that need to end up in a terminal leaf (this effectively limits the number of branchings each tree can make)
* We'll want to use cross-validation to decide between parameters. You can do this with the scikit-learn class GridSearchCV. This class takes a classifier as an input, along with a dictionary of the parameter values to search over.
In the earlier lecture, you learned about four different types of cross-validation:
* hold-out cross validation, where you take a single validation set to compare your algorithm's performance to
* k-fold cross validation, where you split your training set into k subsets, each of which holds out a different portion of the data
* leave-one-out cross validation, where you have N different subsets, each of which leaves just one sample as a validation set
* random subset cross validation, where you pick a random subset of your data points k times as your validation set.
Exercise 2a: Which of the four algorithms is most appropriate here? And why?
Answer: In this case, k-fold CV or random subset CV seem to be the most appropriate algorithms to use.
* Using hold-out cross validation leads to a percentage of the data not being used for training at all.
* Given that the data set is not too huge, using k-fold CV probably won't slow down the ML procedure too much.
* LOO CV is particularly useful for small data sets, where even training on a subset of the training data is difficult (for example because there are only very few examples of a certain class).
* Random subset CV could also yield good results, since there's no real ordering to the training data. Do not use this algorithm when the ordering matters (for example in Hidden Markov Models)
Important: One important thing to remember is that cross-validation crucially depends on your samples being independent of each other. Be sure that this is the case before using it. For example, say you want to classify images of galaxies, but your data set is small, and you're not sure whether your algorithm is rotation independent. So you might choose to use the same images multiple times in your training data set, but rotated by a random degree. In this case, you have to make sure all versions of the same image are included in the same data set (either the training, the validation or the test set), and not split across data sets! If you don't, your algorithm will be unreasonably confident in its accuracy (because you are training and validating essentially on the same data points).
Note that scikit-learn can actually deal with that! The class GroupKFold allows k-fold cross validation using an array of indices for your training data. Validation sets will only be split among samples with different indices.
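A small sketch of the GroupKFold idea (the data here is made up; each group index stands for rotated copies of one underlying image):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(16, dtype=float).reshape(8, 2)
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])
groups = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # samples 0 and 1 are versions of the same image, etc.

gkf = GroupKFold(n_splits=4)
for train_idx, test_idx in gkf.split(X, y, groups=groups):
    # all copies of an image land on the same side of the split
    print(sorted(set(groups[test_idx])))
```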
But this was just an aside. Last time, you used a random forest with k-fold cross validation to effectively do model selection over the different parameters of the random forest classifier.
Exercise 2b: Now follow the instructions above and implement your random forest classifier.
|
# note: the old sklearn.cross_validation and sklearn.grid_search modules were
# removed; both train_test_split and GridSearchCV now live in sklearn.model_selection
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
# set the random state
rs = 23
# extract feature names, remove class
# cast astropy table to pandas and then to a numpy array, remove classes
# our classes are the outcomes to classify on
# let's do a split in training and test set:
# we'll leave the test set for later.
# instantiate the random forest classifier:
# do a grid search over the free random forest parameters:
pars =
grid_results =
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
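For reference, the stubbed cell above might be completed along these lines (the feature matrix here is synthetic, standing in for the star/galaxy data, and the parameter grid values are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier

rs = 23  # random state, as in the notebook

# synthetic stand-in for the features (X) and classes (y)
rng = np.random.RandomState(rs)
X = rng.randn(200, 4)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)

# split off a test set, reserved for the very end
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=rs)

# grid search over the free random forest parameters
rfc = RandomForestClassifier(random_state=rs)
pars = {"n_estimators": [10, 50], "min_samples_leaf": [1, 5]}
grid_results = GridSearchCV(rfc, pars, cv=3).fit(X_train, y_train)
print(grid_results.best_params_)
```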
Exercise 2c: Take a look at the different validation scores for the different parameter combinations. Are they very different or are they similar?
It looks like the scores are very similar, with very small variance between the different cross-validation instances. Looking at the results this way can be useful, for example, to see whether there is large variance among the cross-validation folds.
Cross-validating Multiple Model Components
In most machine learning applications, your machine learning algorithm might not be the only component having free parameters. You might not even be sure which machine learning algorithm to use!
For demonstration purposes, imagine you have many features, but many of them might be correlated. A standard dimensionality reduction technique to use is Principal Component Analysis.
Exercise 4: The number of features in our present data set is pretty small, but let's nevertheless attempt to reduce dimensionality with PCA. Run a PCA decomposition in 2 dimensions and plot the results. Colour-code stars versus galaxies. How well do they separate along the principal components?
Hint: Think about whether you can run PCA on training and test set separately, or whether you need to run it on both together before doing the train-test split?
|
from sklearn.decomposition import PCA
# instantiate the PCA object
pca =
# fit and transform the samples:
X_pca =
# make a plot of the PCA components colour-coded by stars and galaxies
fig, ax = plt.subplots(1, 1, figsize=(12,8))
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Exercise 5: Re-do the classification on the PCA components instead of the original features. Does it work better or worse than the classification on the original features?
|
# Train PCA on training data set
# apply to test set
# instantiate the random forest classifier:
# do a grid search over the free random forest parameters:
pars =
grid_results =
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
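On the hint above: fit the PCA on the training set only, then apply the fitted transformation to the test set, so no test information leaks into the transformation. A sketch with made-up data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_train = rng.randn(80, 4)  # synthetic stand-ins for the real features
X_test = rng.randn(20, 4)

pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train)  # fit on training data only
X_test_pca = pca.transform(X_test)        # reuse the fitted components
print(X_train_pca.shape, X_test_pca.shape)  # (80, 2) (20, 2)
```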
Note: In general, you should (cross-)validate both your data transformations and your classifiers!
But how do we know whether two was really the right number of components to choose? Perhaps it should have been three? Or four? Ideally, we would like to include the feature engineering in our cross-validation procedure. In principle, you can do this by running a complicated for-loop. In practice, this is what scikit-learn's Pipeline is for! A Pipeline object takes a list of ("string", ScikitLearnObject) tuples as input and strings them together (your feature vector X will be put through the first object, then the second object, and so on sequentially).
Note: scikit-learn distinguishes between transformers (i.e. classes that transform the features into something else, like PCA, t-SNE, StandardScaler, ...) and predictors (i.e. classes that produce predictions, such as random forests, logistic regression, ...). In a pipeline, all but the last objects must be transformers; the last object can be either.
Exercise 6: Make a pipeline including (1) a PCA object and (2) a random forest classifier. Cross-validate both the PCA components and the parameters of the random forest classifier. What is the best number of PCA components to use?
Hint: You can also use the convenience function make_pipeline to create your pipeline.
Hint: Check the documentation for the precise notation to use for cross-validating parameters.
|
from sklearn.pipeline import Pipeline
# make a list of name-estimator tuples
estimators =
# instantiate the pipeline
pipe =
# make a dictionary of parameters
params =
# perform the grid search
grid_search =
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
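On the notation hint: parameters of pipeline steps are cross-validated using `<step_name>__<parameter_name>` keys. A sketch with synthetic data (step names and grid values are my own):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(23)
X = rng.randn(150, 4)            # synthetic stand-in features
y = (X[:, 0] > 0).astype(int)

estimators = [("pca", PCA()), ("clf", RandomForestClassifier(random_state=23))]
pipe = Pipeline(estimators)

# step parameters are addressed as <step>__<param> in the grid
params = {"pca__n_components": [2, 3],
          "clf__n_estimators": [10, 30]}
grid_search = GridSearchCV(pipe, params, cv=3).fit(X, y)
print(grid_search.best_params_)
```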
Comparing Algorithms
So far, we've just picked PCA because it's common. But what if there's a better algorithm for dimensionality reduction out there for our problem? Or what if you'd want to compare random forests to other classifiers?
In this case, your best option is to split off a separate validation set, perform cross-validation for each algorithm separately, and then compare the results using hold-out cross validation and your validation set (Note: Do not use your test set for this! Your test set is only used for your final error estimate!)
Doing CV across algorithms is tricky, since the grid-search object needs to know which parameters belong to which algorithm.
Exercise 7: Pick an algorithm from the manifold learning library in scikit-learn, cross-validate a random forest for both, and compare the performance of both.
Important: Do not choose t-SNE. The reason is that t-SNE does not generalize to new samples! This means while it's useful for data visualization, you cannot train a t-SNE transformation (in the scikit-learn implementation) on one part of your data and apply it to another!
|
# First, let's redo the train-test split to split the training data
# into training and hold-out validation set
# make a list of name-estimator tuples
estimators =
# instantiate the pipeline
pipe =
# make a dictionary of parameters
params =
# perform the grid search
grid_search =
# complete the print functions
print("Best score: ")
print("Best parameter set: " )
print("Validation score for model with PCA: ")
# Now repeat the same procedure with the second algorithm you've picked.
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Challenge Problem: Interpreting Results
Earlier today, we talked about interpreting machine learning models. Let's see how you would go about this in practice.
Repeat your classification with a logistic regression model.
Is the logistic regression model easier or harder to interpret? Why?
Assume you're interested in which features are the most relevant to your classification (because they might have some bearing on the underlying physics). Would you do your classification on the original features or the PCA transformation? Why?
Change the subset of parameters used in the logistic regression models. Look at the weights. Do they change? How? Does that affect your interpretability?
|
from sklearn.linear_model import LogisticRegressionCV
lr =
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Even More Challenging Challenge Problem: Implementing Your Own Estimator
Sometimes, you might want to use algorithms, for example for feature engineering, that are not implemented in scikit-learn. But perhaps these transformations still have free parameters to estimate. What to do?
scikit-learn classes inherit from certain base classes that make it easy to implement your own objects. Below is an example I wrote for a machine learning model on time series, where I wanted to re-bin the time series in different ways and optimize the rebinning factor with respect to the classification afterwards.
|
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class RebinTimeseries(BaseEstimator, TransformerMixin):

    def __init__(self, n=4, method="average"):
        """
        Initialize hyperparameters

        :param n: number of samples to bin
        :param method: "average" or "sum" the samples within a bin?
        :return:
        """
        self.n = n  ## save number of samples to bin together
        self.method = method
        return

    def fit(self, X):
        """
        I don't really need a fit method!
        """
        return self

    def transform(self, X):
        ## number of light curves (L) and samples per light curve (K)
        self.L, self.K = X.shape
        ## set the number of binned samples per light curve
        K_binned = int(self.K / self.n)
        ## if the number of samples in the original light curve
        ## is not divisible by n, then chop off the last few samples of
        ## the light curve to make it divisible
        if K_binned * self.n < self.K:
            X = X[:, :self.n * K_binned]
        ## the array for the new, binned light curves
        X_binned = np.zeros((self.L, K_binned))
        if self.method in ["average", "mean"]:
            method = np.mean
        elif self.method == "sum":
            method = np.sum
        else:
            raise Exception("Method not recognized!")
        for i in range(self.L):
            t_reshape = X[i, :].reshape((K_binned, self.n))
            X_binned[i, :] = method(t_reshape, axis=1)
        return X_binned

    def predict(self, X):
        pass

    def score(self, X):
        pass

    def fit_transform(self, X, y=None):
        self.fit(X)
        X_binned = self.transform(X)
        return X_binned
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Here are the important things about writing transformer objects for use in scikit-learn:
* The class must have the following methods:
- fit: fit your training data
- transform: transform your training data into the new representation
- predict: predict new examples
- score: score predictions
- fit_transform is optional (TransformerMixin supplies it automatically)
* The __init__ method only sets up parameters. Don't put any relevant code in there (this is convention more than anything else, but it's a good one to follow!)
* The fit method is always called in a Pipeline object (either on its own or as part of fit_transform). It usually modifies the internal state of the object, so returning self (i.e. the object itself) is usually fine.
* For transformer objects, which don't need scoring and prediction methods, you can simply put pass in the method body, as above.
Exercise 8: Last time, you learned that the SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:
$$\mathtt{psfMag} - \mathtt{cmodelMag} \gt 0.145,$$
sources that satisfy this criterion are considered galaxies.
Implement an object that takes $\mathtt{psfMag}$ and $\mathtt{cmodelMag}$ as inputs and has a free parameter p that sets the value above which a source is considered a galaxy.
Implement a transform method that returns a single binary feature that is one if $$\mathtt{psfMag} - \mathtt{cmodelMag} \gt p$$ and zero otherwise.
Add this feature to your optimized set of features consisting of either the PCA or your alternative representation, and run a random forest classifier on both. Run a CV on all components involved.
Hint: $\mathtt{psfMag}$ and $\mathtt{cmodelMag}$ are the first and the last column in your feature vector, respectively.
Hint: You can use FeatureUnion to combine the outputs of two transformers in a single data set. (Note that using pipeline with all three will chain them, rather than compute the feature union, followed by a classifier). You can input your FeatureUnion object into Pipeline.
|
class PSFMagThreshold(BaseEstimator, TransformerMixin):

    def __init__(self, p=0.145):  # default taken from the SDSS cut above
        pass

    def fit(self, X):
        pass

    def transform(self, X):
        pass

    def predict(self, X):
        pass

    def score(self, X):
        pass

    def fit_transform(self, X, y=None):
        pass
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Now let's make a feature set that combines this feature with the PCA features:
|
from sklearn.pipeline import FeatureUnion
transformers =
feat_union =
X_transformed =
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
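A sketch of the FeatureUnion step (here a FunctionTransformer stands in for the PSFMagThreshold object you implement above, and the data is synthetic):

```python
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.decomposition import PCA
from sklearn.preprocessing import FunctionTransformer

rng = np.random.RandomState(23)
X = rng.randn(100, 5)  # psfMag-like column first, cmodelMag-like column last

# binary flag: 1 if first-minus-last column exceeds the 0.145 cut
threshold = FunctionTransformer(
    lambda A: (A[:, 0] - A[:, -1] > 0.145).astype(float).reshape(-1, 1))

transformers = [("pca", PCA(n_components=2)), ("thresh", threshold)]
feat_union = FeatureUnion(transformers)
X_transformed = feat_union.fit_transform(X)
print(X_transformed.shape)  # (100, 3): two PCA components plus the flag
```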
Now we can build the pipeline:
|
# combine the transformers
transformers =
# make the feature union
feat_union =
# combine estimators for the pipeline
estimators =
# define the pipeline object
pipe_c =
# make the parameter set
params =
# perform the grid search
grid_search_c =
# complete the print statements:
print("Best score: ")
print("Best parameter set: ")
print("Validation score: ")
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Exercise 10: Run a logistic regression classifier on this data, for a very low regularization (0.0001) and a
very large regularization (10000) parameter. Print the accuracy and a confusion matrix of the results for each run. How many mis-classified samples are in each? Where do the mis-classifications end up? If you were to run a cross validation on this, could you be sure to get a good model? Why (not)?
As part of this exercise, you should plot a confusion matrix. A confusion matrix takes the true labels and the predicted labels and counts, for each combination of true and predicted class, how many samples fall into that cell. You can use the scikit-learn function confusion_matrix to create one. pyplot.matshow is useful for plotting it, but just printing it on the screen works pretty well, too (at least for the two classes considered here).
|
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score
X_train2, X_test2, y_train2, y_test2 = train_test_split(X_new, y_new,
test_size = 0.3,
random_state = 20)
C_all =
for C in C_all:
lr =
# ... insert code here ...
# make predictions for the validation set
y_pred =
# print accuracy score for this regularization:
# make and print a confusion matrix
cm =
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
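A sketch of the regularization comparison on synthetic two-class data (the C values are from the exercise; everything else here is made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.RandomState(20)
X = rng.randn(300, 3)
y = (X[:, 0] + 0.3 * rng.randn(300) > 0).astype(int)

C_all = [0.0001, 10000]  # very strong, then very weak regularization
for C in C_all:
    lr = LogisticRegression(C=C).fit(X, y)
    y_pred = lr.predict(X)
    print("C = %g, accuracy = %.3f" % (C, accuracy_score(y, y_pred)))
    cm = confusion_matrix(y, y_pred)  # rows: true class, columns: predicted
    print(cm)
```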
Exercise 11: Take a look at the metrics implemented for model evaluation in scikit-learn, in particular the different versions of the F1 score. Is there a metric that may be more suited to the task above? Which one?
Hint: Our imbalanced class, the one we're interested in, is the STAR class. Make sure you set the keyword pos_label in the f1_score function correctly.
|
for C in C_all:
lr =
# ... insert code here ...
# predict the validation set
y_pred = lr.predict(X_test2)
# print both accuracy and F1 score for comparison:
# create and plot a confusion matrix:
cm =
|
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
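The effect the hint describes can be seen on a toy example (labels made up; STAR is the rare class of interest):

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = ["GALAXY"] * 90 + ["STAR"] * 10
y_pred = ["GALAXY"] * 100  # a classifier that never predicts STAR

print(accuracy_score(y_true, y_pred))              # 0.9 -- looks deceptively good
print(f1_score(y_true, y_pred, pos_label="STAR"))  # 0.0 -- exposes the failure
```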
<H2>Data standardization</H2>
<P>Scale each feature so that its mean is zero and its standard deviation is one.</P>
|
from sklearn.preprocessing import StandardScaler
# extract features
features = ['sepal_length','sepal_width','petal_length','petal_width']
x = df.loc[:, features].values
y = df.loc[:,['target']].values
# Standardize features
stdx = StandardScaler().fit_transform(x)
stdDf = pd.DataFrame(data = stdx, columns = features)
pd.concat([stdDf, df['target']], axis=1).head()
|
MachineLearning/PCA.ipynb
|
JoseGuzman/myIPythonNotebooks
|
gpl-2.0
|
<H2>Principal component analysis into two dimensions </H2>
|
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
principalComponents = pca.fit_transform(stdx)
pcDf = pd.DataFrame(data = principalComponents, columns =['PC1', 'PC2'])
finalDf = pd.concat([pcDf, df['target']], axis=1)
finalDf.head()
var1, var2 = pca.explained_variance_ratio_  # fractions, so multiply by 100 for percentages
print('The first component contains %2.4f %% of the variance'%(var1*100))
print('The second component contains %2.4f %% of the variance'%(var2*100))
print('Total variance explained %2.4f %% '%((var1+var2)*100))
|
MachineLearning/PCA.ipynb
|
JoseGuzman/myIPythonNotebooks
|
gpl-2.0
|
<H2>Plot everything</H2>
|
fig = plt.figure(figsize = (4,4))
ax = fig.add_subplot(111)
xlabel = 'Component 1 (%2.2f %% $\sigma^2$)'%(var1*100)
ylabel = 'Component 2 (%2.2f %% $\sigma^2$)'%(var2*100)
ax.set_xlabel(xlabel, fontsize = 12)
ax.set_ylabel(ylabel, fontsize = 12)
ax.set_title('Two component analysis', fontsize = 15)
mytargets = np.unique(df['target'].values).tolist()
colors = ['r', 'g', 'b']
for target, color in zip(mytargets,colors):
mytarget = finalDf['target'] == target
ax.scatter(finalDf.loc[mytarget, 'PC1']
, finalDf.loc[mytarget, 'PC2']
, c = color
, s = 10)
ax.legend(mytargets)
#ax.grid()
|
MachineLearning/PCA.ipynb
|
JoseGuzman/myIPythonNotebooks
|
gpl-2.0
|
A float.
Note that Python automatically converts the result of division to a float, to be mathematically correct.
Automatic data type changes like this were a source of problems in the old days, which is why older languages insist on returning the same data type the user provided.
These days the focus has shifted to doing the math correctly and letting the language handle the overhead of the implicit type change.
|
1 / 10 + 2.0 # all fine here as well
4 / 2 # even though an integer result is mathematically possible, Python returns a float here as well.
4 // 2 # but if you need an integer to be returned, force it with floor division //
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
The reason this automatic type conversion is even possible is that Python is a so-called "dynamically typed" programming language, as opposed to "statically typed" ones like C(++) and Java.
This means that in Python the following is possible:
|
a = 5
a
a = 'astring'
a
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
I just changed the datatype of a without deleting it first. It was just changed to whatever I need it to be.
But remember:
|
from IPython.display import YouTubeVideo
YouTubeVideo('b23wrRfy7SM')
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
(read here, if you are interested in all the multi-media display capabilities of the Jupyter notebook.)
A note about names and values
|
x = 10
y = 2 * x
x = 25
y
# What is the value of y? If you are surprised, please discuss it.
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Nice (lengthy / thorough) discussion of this:
http://nedbatchelder.com/text/names.html
We haven't yet covered some of the concepts that appear in this blog post so don't panic if something looks unfamiliar.
Today: More practice with IPython & a simple formula
Recall that to start a Jupyter notebook, simply type (in your Linux shell):
$> jupyter notebook
or to open a specific file and keep the terminal session free:
$> jupyter notebook filename.ipynb &
Note: Discuss cell types Code vs Markdown vs raw NB convert briefly
Law of gravitation equation
$F(r) = G \frac{m_1 m_2}{r^2}$
$G = 6.67 \times 10^{-11} \frac{\text{m}^3}{\text{kg} \cdot \text{s}^2}$ (the gravitational constant)
$m_1$ is the mass of the first body in kilograms (kg)
$m_2$ is the mass of the second body in kilograms (kg)
$r$ is the distance between the centers of the two bodies in meters (m)
Example 1 - Find the force of a person standing on earth
For a person of mass 70 kg standing on the surface of the Earth (mass $5.97 \times 10^{24}$ kg, radius 6370 km (Earth fact sheet)) the force will be (in units of Newtons, 1 N = 0.225 lbs):
$$F(6.37 \times 10^{6}) = 6.67 \times 10^{-11} \cdot \frac{5.97 \times 10^{24} \cdot 70}{(6.37 \times 10^{6})^2}$$
|
6.67e-11 * 5.97e24 * 70 / (6.37e6)**2
# remember: the return of the last line in any cell will be automatically printed
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Notice that I put spaces on either side of each mathematical operator. This isn't required, but enhances clarity. Consider the alternative:
|
6.67e-11*5.97e24*70/(6.37e6)**2
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Example 2 - Find the acceleration due to Earth's gravity (the g in F = mg)
Using the gravitation equation above, set $m_2 = 1$ kg
$$F(6.37 \times 10^{6}) = 6.67 \times 10^{-11} \cdot \frac{5.97 \times 10^{24} \cdot 1}{(6.37 \times 10^{6})^2}$$
|
6.67e-11 * 5.97e24 * 1 / (6.37e6)**2
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Q. Why would the above $F(r)$ implementation be inconvenient if we had to do this computation many times, say for different masses?
Q. How could we improve this?
|
G = 55
G = 6.67e-11
m1 = 5.97e24
m2 = 70
r = 6.37e6
F = G * m1 * m2 / r**2 # white-space for clarity!
F # remember: no print needed for the last item of a cell.
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Q. What do the "x = y" statements do?
|
G = 6.67e-11
mass_earth = 5.97e24
mass_object = 70
radius = 6.37e6
force = G * mass_earth * mass_object / radius**2
force
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Q. Can you imagine a downside to descriptive variable names?
Dealing with long lines of code
Split long lines with a backslash (with no space after it, just carriage return):
|
force2 = G * mass_earth * \
         mass_object / radius**2
force2
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Reserved Words
Using "reserved words" will lead to an error:
|
lambda = 5000 # Some wavelength in Angstroms
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
See p.10 of the textbook for a list of Python's reserved words. Some really common ones are:
and, break, class, continue, def,
del, if, elif, else, except, False,
for, from, import, in, is, lambda, None,
not, or, pass, return, True, try, while
Comments
|
# Comments are specified with the pound symbol #
# Everything after a # in a line is ignored by Python
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Q. What will the line below do?
|
print('this') # but not 'that'
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
As a rough guideline, it's good practice to comment about 50% of your code!
But one can reduce that considerably by choosing intelligible variable names.
There is another way to specify "block comments": using two sets of 3 quotation marks ''' '''.
|
# Comments without ''' ''' or # create an error:
This is a comment
that takes
several lines.
# However, in this form it does not, even for multiple lines:
#
'''
This is a really, super, super, super, super, super, super, super,
super, super, super, super, super, super, super, super, super,
long comment (not really).
'''
#
# We will use block comments to document modules later!
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Notice that the comment was actually printed. That's because it's not technically a comment that is ignored, but a multi-line string object.
It is used in source code to document your code. Why does that work? Because the multi-line string is not assigned to a variable, the Python interpreter simply discards it. But it's very useful for creating documented code!
Formatting text and numbers
|
from math import pi # more in today's tutorial
# With old style formatting
"pi = %.6f" % pi
# With new style formatting.
# It's longer in this example, but is much more powerful in general.
# You decide, which one you want to use.
"pi = {:.6f}".format(pi)
myPi = 3.92834234
print("The Earth's mass is %.0f kilograms." % myPi) # note the rounding that happens!
print("This is myPi: {} is awesome".format(str(int(myPi))))
# converting to int cuts off decimals
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Hard to read!! (And, note the junk at the end.)
Consider %x.yz
% inside the quotes
- means a "format statement" follows
x is the number of characters in the resulting string
- Not required
y is the number of digits after the decimal point
- Not required
z is the format (e.g. f (float), e (scientific), s (string))
- Required
% outside and to the right of the quotes
- Separates text from variables -- more on this later
- Uses parentheses if there is more than one variable
There is a list of print format specifications on p. 12 in the textbook
%s string (of ascii characters)
%d integer
%0xd integer padded with x leading zeros
%f decimal notation with six decimals
%e or %E compact scientific notation
%g or %G compact decimal or scientific notation
%xz format z right-justified in a field of width x
%-xz same, left-justified
%.yz format z with y decimals
%x.yz format z with y decimals in a field of width x
%% percentage sign
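A few of the specifiers above applied to concrete values (my own examples, not from the textbook):

```python
print("%5d" % 42)            # '   42': right-justified in a field of width 5
print("%05d" % 42)           # '00042': padded with leading zeros
print("%.3f" % 3.14159)      # '3.142': three decimals
print("%10.3e" % 12345.678)  # ' 1.235e+04': scientific notation, width 10
print("%g" % 0.000123)       # '0.000123': compact notation
print("%d%%" % 100)          # '100%': literal percent sign
```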
The power of the new formatting
If you don't care about length of the print: The type is being chosen correctly for you.
Some more examples
|
print(radius, force) # still alive from far above!
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Q. What will the next statement print?
|
# If we use triple quotes we don't have to
# use \ for multiple lines
print('''At the Earth's radius of %.2e meters,
the force is %6.0f Newtons.''' % (radius, force))
# Justification
print("At the Earth's radius of %.2e meters, \
the force is %-20f Newtons." % (radius, force))
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Note that when triple-quoted strings are used, the text appears on 2 lines, whereas with the \ the text appears all on 1 line.
|
print("At the Earth's radius of %.2e meters, the force is %.0f Newtons." % (radius, force))
print("At the Earth's radius of %.2e meters, the force is %i Newtons." % (radius, force))
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
Note the difference between %.0f (float) and %i (integer) (rounding vs. truncating)
Also note that the new formatting system actually raises an error when you do something that would lose precision:
|
print("At the Earth's radius of {:.2e} meters, the force is {:.0f} Newtons.".format(radius, force))
print("At the Earth's radius of {:.2e} meters, the force is {:d} Newtons.".format(radius, force))  # raises a ValueError: 'd' cannot format a float
# Line breaks can also be implemented with \n
print('At the Earth radius of %.2e meters,\nthe force is\n%0.0f Newtons.' % (radius, force))
|
lecture_02_basics.ipynb
|
CUBoulder-ASTR2600/lectures
|
isc
|
<a id=want></a>
The want operator
We need to know what we're trying to do -- what we want the data to look like. To borrow a phrase from our friend Tom Sargent, we say that we apply the want operator.
Some problems we've run across that ask to be solved:
Numerical data is contaminated by commas (marking thousands) or dollar signs.
Row and column labels are contaminated.
Missing values are marked erratically.
We have too much data, would prefer to choose a subset.
Variables run across rows rather than down columns.
What we want in each case is the opposite of what we have: we want nicely formatted numbers, clean row and column labels, and so on.
We'll solve the first four problems here, the last one in the next notebook.
Example: Chipotle data
This data comes from a New York Times story about the number of calories in a typical order at Chipotle. The topic doesn't particularly excite us, but the data raises a number of issues that come up repeatedly. We adapt some code written by Daniel Forsyth.
Note: The file is a tsv (Tab Separated Values) file, so we need to set the separator accordingly when we call pandas' read_csv method. Remember that the default value of sep is sep=',' (see the docstring). We can change it to tab-separated by writing sep='\t'.
|
url = 'https://raw.githubusercontent.com/TheUpshot/chipotle/master/orders.tsv'
chipotle = pd.read_csv(url, sep='\t') # tab (\t) separated values
print('Variable dtypes:\n', chipotle.dtypes, sep='')
chipotle.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. Note that the variable item_price has dtype object. The reason is evidently the dollar sign. We want to have it as a number, specifically a float.
Example: Data Bootcamp entry poll
This is the poll we did at the start of the course. Responses were collected in a Google spreadsheet, which we converted to a csv and uploaded to our website.
|
url1 = "https://raw.githubusercontent.com/NYUDataBootcamp/"
url2 = "Materials/master/Data/entry_poll_spring17.csv"
url = url1 + url2
entry_poll = pd.read_csv(url)
entry_poll.head()
print('Dimensions:', entry_poll.shape)
print('Data types:\n\n', entry_poll.dtypes, sep='')
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comments. This is mostly text data, which means it's assigned the dtype object. There are two things that would make the data easier to work with:
First: The column names are excessively verbose. This one's easy: we replace them with single words, which we do below.
|
# (1) create list of strings with the new varnames
newnames = ['time', 'why', 'program', 'programming', 'prob_stats', 'major', 'career', 'data', 'topics']
newnames
# (2) Use the str.title() string method to make the varnames prettier
newnames = [name.title() for name in newnames]
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
str.title() returns a copy of the string in which first characters of all the words are capitalized.
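A quick standalone check of that behavior (note that underscores also count as word boundaries):

```python
# str.title() capitalizes the first letter of every word
name = "prob_stats and data"
print(name.title())  # underscores count as word boundaries too
```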
|
newnames
# (3) assign newnames to the variables
entry_poll.columns = newnames
entry_poll.head(1)
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Second: This one is harder. The question about special topics of interest says "mark all that apply." In the spreadsheet, we have a list of every choice the person checked. Our want is to count the number of each type of response. For example, we might want a bar chart that gives us the number of each response. The question is how we get there.
|
# check multi-response question to see what we're dealing with
entry_poll['Topics'].head(20)
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. Note the commas separating answers with more than one choice. We want to unpack them somehow.
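One way to unpack them (a sketch on made-up responses, anticipating the string methods covered later) is str.get_dummies, which splits each entry on a separator and builds one 0/1 indicator column per distinct answer:

```python
import pandas as pd

# hypothetical stand-in for the multi-response Topics column
topics = pd.Series(['Web scraping, Machine Learning',
                    'Machine Learning',
                    'regression, Web scraping'])

dummies = topics.str.get_dummies(sep=', ')  # one indicator column per answer
counts = dummies.sum()                      # how often each answer was checked
print(counts)
```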
Example: OECD healthcare statistics
The OECD collects healthcare data on lots of (mostly rich) countries, which is helpful in producing comparisons. Here we use a spreadsheet that can be found under Frequently Requested Data.
|
url1 = 'http://www.oecd.org/health/health-systems/'
url2 = 'OECD-Health-Statistics-2016-Frequently-Requested-Data.xls'
oecd = pd.read_excel(url1 + url2)
oecd.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
This looks bad. But we can always check pd.read_excel?. Let's look into the Excel file.
* multiple sheets (want: Physicians)
|
oecd = pd.read_excel(url1 + url2, sheetname='Physicians')
oecd.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
The first three lines are empty. Skip those
|
oecd = pd.read_excel(url1 + url2, sheetname='Physicians', skiprows=3)
oecd.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Would be nice to have the countries as indices
|
oecd = pd.read_excel(url1 + url2,
sheetname='Physicians',
skiprows=3,
index_col=0)
oecd.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
The last two columns contain junk
|
oecd.shape # drop 57th and 58th columns
# There is no skipcols argument, so let's google "read_excel skip columns" -> usecols
oecd = pd.read_excel(url1 + url2,
sheetname='Physicians',
skiprows=3,
index_col=0,
usecols=range(57))
oecd.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
What about the bottom of the table?
|
oecd.tail() # we are downloading the footnotes too
?pd.read_excel # -> skip_footer
# How many rows to skip??
oecd.tail(25)
oecd = pd.read_excel(url1 + url2,
sheetname='Physicians',
skiprows=3,
index_col=0,
usecols=range(57),
skip_footer=20)
oecd.tail()
oecd.dtypes[:5]
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
We still have a couple issues.
The index includes a space and a number: Australia 1, Chile 3, etc. We care about this because when we plot the data across countries, the country labels are going to be country names, so we want them in a better form than this.
The ..'s in the sheet lead us to label any column that includes them as dtype object. Here we want to label them as missing values.
If we want to plot each country against time, then we'll need to switch the rows and columns somehow, so that the x axis in the plot (the year) is the index and not the column label.
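All three fixes can be previewed on a toy frame (made-up numbers; the real cleanup uses the same tools):

```python
import numpy as np
import pandas as pd

# toy frame with the same three problems
toy = pd.DataFrame({'2012': ['3.3', '..'], '2013': ['3.4', '1.9']},
                   index=['Australia 1', 'Chile 3'])

toy.index = toy.index.str.rsplit(n=1).str.get(0)  # drop the trailing footnote numbers
toy = toy.replace('..', np.nan).astype(float)     # mark '..' as missing, convert to float
toy = toy.T                                       # years down the rows, countries across
print(toy)
```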
Example: World Economic Outlook
The IMF's World Economic Outlook database contains a broad range of macroeconomic data for a large number of countries. It's updated twice a year and is a go-to source for things like current account balances (roughly, the trade balance) and government debt and deficits. It also has a few quirks, as we'll see.
Example. Run the following code as is, and with the thousands and na_values parameters commented out. How do the dtypes differ?
|
url = 'http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/WEOOct2016all.xls'
# Try
weo = pd.read_excel(url) # NOT an excel file!
# try to open the file with a plain text editor (it is a TSV)
weo = pd.read_csv(url, sep = '\t')
weo.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Useful columns:
- 1, 2, 3, 4, 6 (indices)
- years, say from 1980 to 2016
Need a list that specifies these
|
names = list(weo.columns)
names[:8]
# for var details
details_list = names[1:5] + [names[6]]
# for years
years_list = names[9:-6]
details_list
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list)
weo.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Look at the bottom
|
weo.tail(3)
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python') # read_csv requires 'python' engine (otherwise warning)
weo.tail()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Missing values
|
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python',
na_values='n/a')
weo.head()
weo.dtypes[:10] # still not ok
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Notice the , for thousands. As we saw before, there is an easy fix
|
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python',
na_values='n/a',
thousands =',')
weo.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Pandas string methods. We can do the same thing to all the observations of a variable with so-called string methods. We append .str to a variable in a dataframe and then apply the string method of our choice. If this is part of converting a number-like entry that has mistakenly been given dtype object, we then convert its dtype with the astype method.
Example. Let's use a string method to fix the item_price variable in the Chipotle dataframe. This has three parts:
Use the method str to identify this as a string method.
Apply the string method of our choice (here replace) to fix the string.
Use the astype method to convert the fixed-up string to a float.
We start by making a copy of the chipotle dataframe that we can experiment with.
|
chipotle.head()
# create a copy of the df to play with
chipotle_num = chipotle.copy()
print('Original dtype:', chipotle_num['item_price'].dtype)
# delete dollar signs (dtype does not change!)
chipotle_num['item_price'].str.replace('$', '').head()
# delete dollar signs, convert to float, AND assign back to chipotle_num in one line
chipotle_num['item_price'] = chipotle_num['item_price'].str.replace('$', '').astype(float)
print('New dtype:', chipotle_num['item_price'].dtype)
# assign back to chipotle for future use
chipotle = chipotle_num
print('Variable dtypes:\n', chipotle.dtypes, sep='')
chipotle.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. We did everything here in one line: replace the dollar sign with a string method, then converted to float using astype. If you think this is too dense, you might break it into two steps.
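Broken into two steps on a tiny stand-in series (a sketch; regex=False makes the replacement literal in recent pandas versions):

```python
import pandas as pd

prices = pd.Series(['$2.39', '$10.98'])               # same shape as item_price
stripped = prices.str.replace('$', '', regex=False)   # step 1: drop the dollar sign
as_float = stripped.astype(float)                     # step 2: convert to float
print(as_float.dtype)
```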
Example. Here we use the astype method again to convert the dtypes of weo into float
|
weo.head(1)
weo.head(1).dtypes
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Want to convert the year variables into float
|
weo['1980'].astype(float)
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
This error indicates that somewhere in weo['1980'] there is the string value '--'. We want to convert that into NaN. Later we will see how we can do that directly. For now use read_csv() again
|
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python',
na_values=['n/a', '--'],
thousands =',')
weo.head(1)
# With that out of our way, we can do the conversion for one variable
weo['1980'].astype(float)
# or for all numeric variables
years = [str(year) for year in range(1980, 2017)]
weo[years] = weo[years].astype(float)
weo.dtypes
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Example. Here we strip off the numbers at the end of the indexes in the OECD docs dataframe. This involves some experimentation:
Play with the rsplit method to see how it works.
Apply rsplit to the example country = 'United States 1'.
Use a string method to do this to all the entries of the variable Country.
|
# try this with an example first
country = 'United States 1'
# get documentation for the rsplit method
country.rsplit?
# an example
country.rsplit()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. Not quite, we only want to split once.
|
# what about this?
country.rsplit(maxsplit=1)
# one more step, we want the first component of the list
country.rsplit(maxsplit=1)[0]
oecd.index
oecd.index.str.rsplit(maxsplit=1)[0]  # hmm, this fails: the pandas version wants n=, not maxsplit=
#try
oecd.index.str.rsplit?
# Note the TWO str's
oecd.index.str.rsplit(n=1).str[0]
#or use the str.get() method
oecd.index.str.rsplit(n=1).str.get(0)
oecd.index = oecd.index.str.rsplit(n=1).str.get(0)
oecd.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comments.
Note that we need two str's here: one to do the split, the other to extract the first element.
For reasons that mystify us, we ran into problems when we used maxsplit=1, but it works with n=1.
This is probably more than you want to know, but file away the possibilities in case you need them.
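The left/right distinction is the whole point; compare plain split on the same string:

```python
s = 'United States 1'
print(s.split(maxsplit=1))    # splits from the left:  ['United', 'States 1']
print(s.rsplit(maxsplit=1))   # splits from the right: ['United States', '1']
```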
<a id='missing'></a>
Missing values
It's important to label missing values, so that Pandas doesn't interpret entries as strings. Pandas is also smart enough to ignore things labeled missing when it does calculations or graphs. If we compute, for example, the mean of a variable, the default is to ignore missing values.
We've seen that we can label certain entries as missing values in read statements: read_csv, read_excel, and so on. Here we do it directly, mostly to remind ourselves what's involved.
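A standalone check of that default (skipna=True is the default for most pandas reductions):

```python
import numpy as np
import pandas as pd

var = pd.Series([1.0, np.nan, 3.0])
print(var.mean())               # NaN is skipped: (1 + 3) / 2 = 2.0
print(var.mean(skipna=False))   # forcing NaN to propagate gives NaN
```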
Marking missing values
Example. The oecd dataframe contains a number of instances of .. (double period). How can we mark them as missing values?
|
docs = oecd
docs.head()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. Replace automatically updates the dtypes. Here the double dots led us to label the variables as objects. After the replace, they're now floats, as they should be.
|
docsna = docs.replace(to_replace=['..'], value=[None])
docsna.dtypes
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Working with missing values
|
# grab a variable to play with
var = docsna[2013].head(10)
var
# why not '2013'? check the type
docsna.columns
# which ones are missing ("null")?
var.isnull()
# which ones are not missing ("not null")?
var.notnull()
# drop the missing
var.dropna()
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. We usually don't have to worry about this; Pandas takes care of missing values automatically.
Comment. Let's try a picture to give us a feeling of accomplishment. What else would you say we need? How would we get it?
|
docsna[2013].plot.barh(figsize=(4, 12))
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
<a id='selection'></a>
Selecting variables and observations
The word selection refers to choosing a subset of variables or observations using their labels or index. Similar methods are sometimes referred to as slicing, subsetting, indexing, querying, or filtering. We'll treat the terms as synonymous.
There are lots of ways to do this. Mostly we do "Boolean" selection, which we address in the next section. We review more direct options here, mostly at high speed because they're not things we use much.
In the outline below:
- df is a dataframe, var and varn are variable names, and n1 and n2 are integers,
- vlist = ['var1', 'var2'] is a list of variable names,
- nlist = [0, 3, 4] is a list of numerical variable or observation indexes, and
- bools is a list or pandas Series of booleans (True and False).
Some of the basic selection/indexing/slicing methods have the form:
df[var] extracts a variable -- a series, in other words.
df[vlist] extracts a new dataframe consisting of the variables in vlist.
df[nlist] does the same thing.
df[bools]: extracts each row where the corresponding element in bools is true. len(bools) must be equal to df.shape[0]
df[n1:n2] extracts observations n1 to n2-1, the traditional slicing syntax.
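These forms can be tried on a toy dataframe first (made-up values):

```python
import pandas as pd

df = pd.DataFrame({'var1': [1, 2, 3], 'var2': ['a', 'b', 'c']})

col = df['var1']                    # a single variable -- a Series
sub = df[['var1', 'var2']]          # a DataFrame with the listed variables
rows = df[0:2]                      # observations 0 and 1 (slicing)
picked = df[[True, False, True]]    # Boolean selection: rows 0 and 2
print(picked)
```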
|
# we create a small dataframe to experiment with
small = weo.head()
small
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Example. Let's try each of these in a different cell and see what they do:
small[['Country', 'Units']]
small[[0, 4]]
small['2011']
small[1:3]
Can you explain the results?
|
small[['Country', 'Units']]
small[[0, 4]]
small['2011']
small[1:3]
small[[False, True, True, False, False]]
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
<a id='boolean'></a>
Boolean selection
We choose observations that satisfy one or more conditions. Boolean selection consists of two steps that we typically combine in one statement:
Use a comparison to construct a Boolean variable consisting of True and False.
Compute df[comparison], where df is a dataframe and comparison is a comparison. This will select the observations (rows) for which comparison is true and throw away the others.
We work through this one step at a time:
Example: apply the want operator
Comparisons for dataframes
Boolean selection: select observations for which the comparison is True
The isin method
This is easier to describe with an example.
Example: Apply the want operator to WEO
Our want here is to take the weo dataframe and extract government debt and deficits for a given set of countries. Putting this to work involves several steps.
Here's the head of the dataframe to remind us what we're dealing with.
|
weo.head(2)
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Find variable and country codes. Which ones do we want? Let's start by seeing what's available. Here we create special dataframes that include all the variables and their definitions and all the countries.
Note the use of the drop_duplicates method, which does what it sounds like: remove duplicate rows (!)
|
variable_list = weo[['WEO Subject Code', 'Subject Descriptor', 'Units']].drop_duplicates()
print('Number of variables: ', variable_list.shape[0])
variable_list.head()
country_list = weo['Country'].drop_duplicates()
print('Number of countries: ', country_list.shape[0])
country_list
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Exercise.
Construct a list of countries with countries = weo['Country']; that is, without applying the drop_duplicates method. How large is it? How many duplicates have we dropped?
<!-- cn = sorted(list(set(weo.index))) -->
<!--
* What are the country codes (`ISO`) for Argentina and the United States?
* What are the variable codes (`WEO Subject Code`) for government debt (gross debt, percent of GDP) and net lending/borrowing (also percent of GDP)?
-->
Comment. Now that we have the country and variable codes, we can be more explicit about what we want. We want observations with those country and variable codes.
We work up to the solution one step at a time.
Comparisons for series
We can construct comparisons for series (dataframe columns) much as we did with simple variables. The difference is that we get a complete column of True/False responses, not just one.
Multiple comparisons have a different syntax than we saw earlier: and is replaced by &, and or is replaced by |. And when we have more than one comparison, we need to enclose them in parentheses.
Examples. Consider the comparisons:
small['Units'] == 'National currency'
small['2011'] >= 100
(small['Units'] == 'National currency') & (small['2011'] >= 100)
(small['Units'] == 'National currency') | (small['2011'] >= 100)
Remind yourself what the & and | do.
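If the reminder helps, here is the difference in a standalone sketch: & works elementwise on a Series, while Python's plain `and` does not.

```python
import pandas as pd

s = pd.Series([50, 150, 250])

mask = (s >= 100) & (s <= 200)   # elementwise "and"
print(mask)                      # False, True, False

# By contrast, (s >= 100) and (s <= 200) raises a ValueError:
# plain `and` asks the whole Series for a single True/False.
```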
|
small
small['Units'] == 'National currency'
small['2011'] >= 100
(small['Units'] == 'National currency') & (small['2011'] >= 100)
(small['Units'] == 'National currency') | (small['2011'] >= 100)
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Exercise. Construct dataframes for which
small['Units'] does not equal 'National currency'.
small['Units'] equals 'National currency' and small['2011'] is greater than 100.
<a id='isin'></a>
The isin method
Pay attention now, this is really useful. Suppose we want to extract the data for which weo['Country'] == 'Argentina' or weo['Country'] == 'Greece'. We could do that by combining the comparisons:
python
(weo['Country'] == 'Argentina') | (weo['Country'] == 'Greece')
Remind yourself that | stands for "or." (What do we use for "and"?)
A simpler approach is to apply the isin method to a variable. This sets the comparison to True if the value of weo['Country'] equals any element of a list. We could do the same thing with multiple comparisons, but this is a lot easier.
Let's see how this works.
Example. Let's apply the same logic to variable codes. If we want to extract the observations with codes
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
we would use
|
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo['WEO Subject Code'].isin(vlist)
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. We're choosing 2 variables from 45, so there are lots of Falses.
|
weo.tail(4)
# this time let's use the result of isin for selection
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo[weo['WEO Subject Code'].isin(vlist)].head(6)
# we've combined several things in one line
comparison = weo['WEO Subject Code'].isin(vlist)
selection = weo[comparison]
selection.head(6)
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. We can do the same thing with countries. If we want to choose two variables and three countries, the code looks like:
|
variables = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
countries = ['Argentina', 'Greece']
weo_sub = weo[weo['WEO Subject Code'].isin(variables) & weo['Country'].isin(countries)]
weo_sub
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comments.
We've now done what we described when we applied the want operator.
This is a go-to method. Circle it for later reference.
Exercise. Use the isin method to extract Gross domestic product in US dollars for China, India, and the United States. Assign the result to the dataframe gdp. Hint: You can adapt the code we just ran. The variable code is NGDPD. The country codes are CHN, IND, and USA.
|
countries = ['China', 'India', 'United States']
gdp = weo[(weo['WEO Subject Code']=='NGDPD') & weo['Country'].isin(countries)]
gdp
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Exercise (challenging). Plot the variable gdp['2015'] as a bar chart. What would you say it needs?
|
gdp['2015'].plot(kind='bar')
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
<a id='contains'></a>
The contains method
Another useful one. The contains string method for series identifies observations that contain a specific string. If yes, the observation is labelled True, if no, False. A little trick converts the True/False outcomes to ones and zeros.
We apply it to the Topics variable of the Entry Poll dataframe entry_poll. You may recall that this variable could have more than one response. We tease them apart with the contains method. Our want is to have a yes/no variable for each response.
|
# recall
entry_poll['Topics'].head(10)
# the contains method
entry_poll['Topics'].str.contains('Machine Learning')
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. That's pretty good: we now know which students mentioned Machine Learning and which did not. It's more useful, though, to convert this to zeros (False) and ones (True), which we do with this trick: we multiply by 1.
|
entry_poll['Topics'].str.contains('Machine Learning').head(10)*1
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. Now let's do the same for some of the other entries and save them in new variables.
|
topics = ['Web scraping', 'Machine Learning', 'regression']
old_ep = entry_poll.copy()
vnames = []
for x in topics:
newname = 'Topics' + '_' + x
vnames.append(newname)
entry_poll[newname] = entry_poll['Topics'].str.contains(x)*1
vnames
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. You might want to think about this a minute. Or two.
|
# create new df of just these variables
student_topics = entry_poll[vnames]
student_topics
# count them with the sum method
topics_counts = student_topics.sum()
topics_counts
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
Comment. Just for fun, here's a bar graph of the result.
|
topics_counts.plot(kind='barh')
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|
and a pie chart
|
topics_counts.plot(kind='pie')
|
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
|
NYUDataBootcamp/Materials
|
mit
|