Dataset schema: markdown (string, 0 to 37k chars), code (string, 1 to 33.3k chars), path (string, 8 to 215 chars), repo_name (string, 6 to 77 chars), license (categorical, 15 classes)
Adding Delays In stochastic simulations, bioscrape also supports delays. In a delay reaction, delayed inputs/outputs are consumed/produced after some amount of delay. Reactions may have a mix of delayed and non-delayed inputs and outputs. Bioscrape natively supports a number of delay types: fixed: constant delay with paramet...
from bioscrape.simulator import py_simulate_model from bioscrape.types import Model #create reaction tuples with delays require additional elements. They are of the form: #(Inputs[string list], Outputs[string list], propensity_type[string], propensity_dict {propensity_param:model_param}, # delay_type[string], DelayInp...
examples/Basic Examples - START HERE.ipynb
ananswam/bioscrape
mit
Adding Rules In deterministic and stochastic simulations, bioscrape also supports rules which can be used to set species or parameter values during the simulation. Rules are updated at every simulation timepoint, so the model may be sensitive to the timepoint spacing. In the following example, two rules will b...
#Add a new species "S" and "I" to the model. Note: by making S a species, its output will be returned as a time-course. M = Model(species = species + ["S", "I"], parameters = params, reactions = rxns, initial_condition_dict = x0) #Create new parameters for rule 1. Model is now being modified M.create_parameter("I0", ...
examples/Basic Examples - START HERE.ipynb
ananswam/bioscrape
mit
Saving and Loading Bioscrape Models via Bioscrape XML Models can be saved and loaded as Bioscrape XML. Here we will save and load the transcription translation model and display the bioscrape XML underneath. Once a model has been loaded, it can be accessed and modified via the API.
M.write_bioscrape_xml('models/txtl_model.xml') # f = open('models/txtl_model.xml') # print("Bioscrape Model XML:\n", f.read()) M_loaded = Model('models/txtl_model.xml') print(M_loaded.get_species_list()) print(M_loaded.get_params()) #Change the induction time #NOTE That changing a model loaded from xml will not chan...
examples/Basic Examples - START HERE.ipynb
ananswam/bioscrape
mit
SBML Support : Saving and Loading Bioscrape Models via SBML Models can be saved and loaded as SBML. Here we will save and load the transcription translation model to an SBML file. Delays, compartments, function definitions, and other advanced SBML features are not supported. Once a model has been loaded, it can be accessed a...
M.write_sbml_model('models/txtl_model_sbml.xml') # Print out the SBML model f = open('models/txtl_model_sbml.xml') print("Bioscrape Model converted to SBML:\n", f.read()) from bioscrape.sbmlutil import import_sbml M_loaded_sbml = import_sbml('models/txtl_model_sbml.xml') #Simulate the Model deterministically timepoi...
examples/Basic Examples - START HERE.ipynb
ananswam/bioscrape
mit
More on SBML Compatibility The next cell imports a model from an SBML file and then simulates it using a deterministic simulation. There are limitations to SBML compatibility: delays and events are not supported when reading in SBML files. Events will be ignored and a warning will be printed out. SBML reaction rates must ...
from bioscrape.sbmlutil import import_sbml M_sbml = import_sbml('models/sbml_test.xml') timepoints = np.linspace(0,100,1000) result = py_simulate_model(timepoints, Model = M_sbml) plt.figure() for s in M_sbml.get_species_list(): plt.plot(timepoints, result[s], label = s) plt.legend()
examples/Basic Examples - START HERE.ipynb
ananswam/bioscrape
mit
Deterministic and Stochastic Simulation of the Repressilator We plot the repressilator model found <a href="http://www.ebi.ac.uk/biomodels-main/BIOMD0000000012">here</a>. This model generates oscillations as expected. Highlighting the utility of this package, we then switch, with a single line of code, to a stochast...
# Repressilator deterministic example from bioscrape.sbmlutil import import_sbml M_represillator = import_sbml('models/repressilator_sbml.xml') #Simulate Deterministically and Stochastically timepoints = np.linspace(0,700,10000) result_det = py_simulate_model(timepoints, Model = M_represillator) result_stoch = py_sim...
examples/Basic Examples - START HERE.ipynb
ananswam/bioscrape
mit
DNC: Begin Part 1 Part 1: Data Visualization 1-1: Plotting x-y data Use the HCEPDB file to create a single 2x2 composite plot (not 4 separate figures). The plots should contain the following data: upper-left: PCE vs VOC; upper-right: PCE vs JSC; lower-left: E_HOMO vs VOC; lower-right: E_LUMO vs PCE. You should make the plots the highest qual...
data = pd.read_csv('HCEPD_100K.csv') data.head() #create a single 2x2 composite plot #ref: https://plot.ly/matplotlib/subplots/ fig = plt.figure() fig.set_figheight(10) fig.set_figwidth(10) ax1 = fig.add_subplot(221) ax1.plot(data['voc'],data['pce'],',') ax1.set_xlabel('VOC') ax1.set_ylabel('PCE') ax1.set_title('PCE...
DSMCER_Hw/dsmcer-hw-2-danielfather7/HW2 Tai-Yu Pan.ipynb
danielfather7/teach_Python
gpl-3.0
1-1 Information Five Terms: PCE: Power conversion efficiency, how much sunlight can be converted to electricity. VOC: Open-circuit voltage, the output voltage of a photovoltaic (PV) cell under no load. JSC: Short-circuit current, the current through the solar cell when the voltage across the solar cell is zero. E_Ho...
#Read the file first. data2 = pd.read_csv('ALA2fes.dat', delim_whitespace=True, comment='#', names=['phi','psi','file.free','der_phi','der_psi']) #Take a look at the data. data2.head() #We should know the shape of the data before doing a contour plot. data2.shape #Because it has 2500 rows, shape the data into ...
DSMCER_Hw/dsmcer-hw-2-danielfather7/HW2 Tai-Yu Pan.ipynb
danielfather7/teach_Python
gpl-3.0
Create Date And Time Data
# Create data frame df = pd.DataFrame() # Create five dates df['date'] = pd.date_range('1/1/2001', periods=150, freq='W')
machine-learning/break_up_dates_and_times_into_multiple_features.ipynb
tpin3694/tpin3694.github.io
mit
Break Up Dates And Times Into Individual Features
# Create features for year, month, day, hour, and minute df['year'] = df['date'].dt.year df['month'] = df['date'].dt.month df['day'] = df['date'].dt.day df['hour'] = df['date'].dt.hour df['minute'] = df['date'].dt.minute # Show three rows df.head(3)
machine-learning/break_up_dates_and_times_into_multiple_features.ipynb
tpin3694/tpin3694.github.io
mit
Walker detection with OpenCV Open the video and get video info
video_capture = cv2.VideoCapture('resources/TestWalker.mp4') # From https://www.learnopencv.com/how-to-find-frame-rate-or-frames-per-second-fps-in-opencv-python-cpp/ # Find OpenCV version (major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.') print(major_ver, minor_ver, subminor_ver) # With webcam get(CV_...
testWalkerDetection.ipynb
davidruffner/cv-people-detector
mit
Track walker using difference between frames Following http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms
def getSmallGrayFrame(video):     ret, frame = video.read()     if not ret:         return ret, frame     frameSmall = frame[::4, ::-4]     gray = cv2.cvtColor(frameSmall, cv2.COLOR_BGR2GRAY)     return ret, gray #cv2.startWindowThread() count = 0 for x in range(200):     count = count + 1     print(count)     ret1, g...
testWalkerDetection.ipynb
davidruffner/cv-people-detector
mit
Typical SOLT Procedure A two-port calibration is accomplished in an identical way to one-port, except all the standards are two-port networks. This is even true of reflective standards. So if you measure reflective standards you must measure two of them simultaneously, and store information in a two-port (S21=S12=0)....
dut = rf.data.ring_slot dut.plot_s_db(lw=2) # this is what we should find after the calibration
doc/source/examples/metrology/SOLT.ipynb
jhillairet/scikit-rf
bsd-3-clause
The ideal component Networks are obtained from your calibration kit manufacturers or from modelling. In this example, we simulate ideal components from transmission line theory. We create a lossy and noisy transmission line (for the sake of the example).
media = rf.DefinedGammaZ0(frequency=dut.frequency, gamma=0.5 + 1j)
doc/source/examples/metrology/SOLT.ipynb
jhillairet/scikit-rf
bsd-3-clause
Then we create the ideal components: Short, Open, Load, and Thru. By default, the methods media.short(), media.open(), and media.match() return one-port networks, but the SOLT class expects a list of two-port Networks, so two_port_reflect() is needed to forge a two-port network from two one-port networks (media.thru(...
# ideal 1-port Networks short_ideal = media.short() open_ideal = media.open() load_ideal = media.match() # could also be: media.load(Gamma0=0) thru_ideal = media.thru() # forge a two-port network from two one-port networks short_ideal_2p = rf.two_port_reflect(short_ideal, short_ideal) open_ideal_2p = rf.two_port_refl...
doc/source/examples/metrology/SOLT.ipynb
jhillairet/scikit-rf
bsd-3-clause
Now that we have our ideal elements, let's fake the measurements. Note that the transmission lines are not symmetric in the example below, to make it as generic as possible. In such case, it is necessary to call the flipped() method to connect the ideal elements on the correct side of the line2 object.
# left and right piece of transmission lines line1 = media.line(d=20, unit='cm')**media.impedance_mismatch(1,2) line2 = media.line(d=30, unit='cm')**media.impedance_mismatch(1,3) # add some noise to make it more realistic line1.add_noise_polar(.01, .1) line2.add_noise_polar(.01, .1) # fake the measured setup measure...
doc/source/examples/metrology/SOLT.ipynb
jhillairet/scikit-rf
bsd-3-clause
We can now create the lists of Network that the SOLT class expects:
# a list of Network types, holding 'ideal' responses my_ideals = [ short_ideal_2p, open_ideal_2p, load_ideal_2p, thru_ideal, # Thru should be the last ] # a list of Network types, holding 'measured' responses my_measured = [ short_measured, open_measured, load_measured, thru_measu...
doc/source/examples/metrology/SOLT.ipynb
jhillairet/scikit-rf
bsd-3-clause
And finally apply the calibration:
# run calibration algorithm cal.run() # apply it to a dut measured_caled = cal.apply_cal(measured)
doc/source/examples/metrology/SOLT.ipynb
jhillairet/scikit-rf
bsd-3-clause
Let's see the results for S11 and S21:
measured.plot_s_db(m=0, n=0, lw=2, label='measured') measured_caled.plot_s_db(m=0, n=0, lw=2, label='caled') dut.plot_s_db(m=0, n=0, ls='--', lw=2, label='expected') measured.plot_s_db(m=1, n=0, lw=2, label='measured') measured_caled.plot_s_db(m=1, n=0, lw=2, label='caled') dut.plot_s_db(m=1, n=0, ls='--', lw=2, label...
doc/source/examples/metrology/SOLT.ipynb
jhillairet/scikit-rf
bsd-3-clause
The caled Network is (mostly) equal to the DUT, as expected:
dut == measured dut == measured_caled # within 1e-4 absolute tolerance
doc/source/examples/metrology/SOLT.ipynb
jhillairet/scikit-rf
bsd-3-clause
Class 7: Deterministic Time Series Models Time series models are at the foundation of dynamic macroeconomic theory. A time series model is an equation or system of equations that describes how the variables in the model change with time. Here, we examine some theory about deterministic, i.e., non-random, time series ...
# Initialize variables: y0, rho, w1 (example values for illustration)
y0 = 1
rho = 0.5
w1 = 1
# Compute the period 1 value of y
y1 = rho * y0 + w1
# Print the result
print(y1)
winter2017/econ129/python/Econ129_Class_07.ipynb
letsgoexploring/teaching
mit
The variable y1 in the preceding example stores the computed value for $y_1$. We can continue to iterate on Equation (4) to compute $y_2$, $y_3$, and so on. For example:
# Initialize w2 (example value for illustration)
w2 = 0
# Compute the period 2 value of y
y2 = rho * y1 + w2
# Print the result
print(y2)
winter2017/econ129/python/Econ129_Class_07.ipynb
letsgoexploring/teaching
mit
We can do this as many times as necessary to reach the desired value of $t$. Note that iteration is necessary. Even though $y_t$ is apparently a function of $t$, we could not, for example, compute $y_{20}$ directly. Rather we'd have to compute $y_1, y_2, y_3, \ldots, y_{19}$ first. The linear first-order difference equa...
# Initialize the variables T and w (example values for illustration)
T = 10
w = np.zeros(T + 1)
w[0] = 1
# Define a function that returns an array of y-values given rho, an array of w values, and y0.
def diff1_example(rho, w, y0):
    y = np.zeros(len(w))
    y[0] = y0
    for t in range(1, len(w)):
        y[t] = rho * y[t - 1] + w[t]
    return y
winter2017/econ129/python/Econ129_Class_07.ipynb
letsgoexploring/teaching
mit
Exercise: Use the function diff1_example() to make a $2\times2$ grid of plots just like the previous exercise but with $\rho = 0.5$, $-0.5$, $1$, and $1.25$. For each, set $T = 10$, $y_0 = 1$, $w_0 = 1$, and $w_1 = w_2 = \cdots = 0$.
fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(2,2,1) y = diff1_example(0.5,w,0) ax1.plot(y,'-',lw=5,alpha = 0.75) ax1.set_title('$\\rho=0.5$') ax1.set_ylabel('y') ax1.set_xlabel('t') ax1.grid()
winter2017/econ129/python/Econ129_Class_07.ipynb
letsgoexploring/teaching
mit
Exercise 1: Visualize this data set. What representation is most appropriate, do you think? Exercise 2: Let's now do some machine learning. In this exercise, you are going to use a random forest classifier to classify this data set. Here are the steps you'll need to perform: * Split the column with the classes (stars a...
from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.ensemble import RandomForestClassifier # set the random state rs = 23 # extract feature names, remove class # cast astropy table to pandas and then to a numpy array, remove classes # our classes are th...
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
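One way the scaffold above could be completed, with a toy feature matrix standing in for the stars/galaxies table (make_classification, the array names, and the split fraction are illustrative assumptions, not the exercise's data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# toy stand-in for the stars/galaxies feature matrix (illustrative only)
X, y = make_classification(n_samples=200, n_features=6, random_state=23)

# set the random state
rs = 23

# split off a held-out test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=rs)

# fit a random forest classifier and score it on the held-out data
rf = RandomForestClassifier(random_state=rs)
rf.fit(X_train, y_train)
score = rf.score(X_test, y_test)
print(score)
```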
Exercise 2c: Take a look at the different validation scores for the different parameter combinations. Are they very different or are they similar? It looks like the scores are very similar, and have very small variance between the different cross validation instances. It can be useful to do this kind of representation...
from sklearn.decomposition import PCA # instantiate the PCA object pca = # fit and transform the samples: X_pca = # make a plot of the PCA components colour-coded by stars and galaxies fig, ax = plt.subplots(1, 1, figsize=(12,8))
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
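A possible completion of the PCA scaffold above, with a random toy matrix standing in for the real training features (the name X_train and the choice of two components are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

# toy stand-in for the training feature matrix (illustrative only)
rng = np.random.default_rng(23)
X_train = rng.normal(size=(100, 5))

# instantiate the PCA object
pca = PCA(n_components=2)

# fit and transform the samples
X_pca = pca.fit_transform(X_train)
print(X_pca.shape)
```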
Exercise 5: Re-do the classification on the PCA components instead of the original features. Does it work better or worse than the classification on the original features?
# Train PCA on training data set # apply to test set # instantiate the random forest classifier: # do a grid search over the free random forest parameters: pars = grid_results =
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Note: In general, you should (cross-)validate both your data transformations and your classifiers! But how do we know whether two components really was the right number to choose? Perhaps it should have been three? Or four? Ideally, we would like to include the feature engineering in our cross validation procedure. In ...
from sklearn.pipeline import Pipeline # make a list of name-estimator tuples estimators = # instantiate the pipeline pipe = # make a dictionary of parameters params = # perform the grid search grid_search =
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
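The Pipeline scaffold might be filled in along these lines; the toy data, step names, and parameter grid are illustrative assumptions, but the step__parameter naming is how GridSearchCV addresses pipeline steps:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=200, n_features=8, random_state=23)

# make a list of name-estimator tuples
estimators = [('pca', PCA()), ('clf', RandomForestClassifier(random_state=23))]

# instantiate the pipeline
pipe = Pipeline(estimators)

# make a dictionary of parameters, keyed as <step name>__<parameter>
params = {'pca__n_components': [2, 3, 4], 'clf__n_estimators': [10, 50]}

# perform the grid search: transform and classifier are cross-validated together
grid_search = GridSearchCV(pipe, params, cv=3)
grid_search.fit(X, y)
print(grid_search.best_params_)
```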
Comparing Algorithms So far, we've just picked PCA because it's common. But what if there's a better algorithm for dimensionality reduction out there for our problem? Or what if you'd want to compare random forests to other classifiers? In this case, your best option is to split off a separate validation set, perform ...
# First, let's redo the train-test split to split the training data # into training and hold-out validation set # make a list of name-estimator tuples estimators = # instantiate the pipeline pipe = # make a dictionary of parameters params = # perform the grid search grid_search = # complete the print functio...
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Challenge Problem: Interpreting Results Earlier today, we talked about interpreting machine learning models. Let's see how you would go about this in practice. Repeat your classification with a logistic regression model. Is the logistic regression model easier or harder to interpret? Why? Assume you're interested in w...
from sklearn.linear_model import LogisticRegressionCV lr =
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
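A minimal sketch of the logistic-regression variant on toy data (the data and settings are assumptions); the fitted coef_ array, one weight per feature, is what makes this model comparatively easy to interpret:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

# toy two-class data standing in for the stars/galaxies features
X, y = make_classification(n_samples=200, n_features=5, random_state=23)

# cross-validated choice of the regularization strength C
lr = LogisticRegressionCV(Cs=5, cv=3)
lr.fit(X, y)

# one coefficient per feature: the interpretable part
print(lr.coef_.shape)
```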
Even More Challenging Challenge Problem: Implementing Your Own Estimator Sometimes, you might want to use algorithms, for example for feature engineering, that are not implemented in scikit-learn. But perhaps these transformations still have free parameters to estimate. What to do? scikit-learn classes inherit from ce...
from sklearn.base import BaseEstimator, TransformerMixin class RebinTimeseries(BaseEstimator, TransformerMixin): def __init__(self, n=4, method="average"): """ Initialize hyperparameters :param n: number of samples to bin :param method: "average" or "sum" the samples within a bin...
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Here are the important things about writing transformer objects for use in scikit-learn: * The class must have the following methods: - fit: fit your training data - transform: transform your training data into the new representation - predict: predict new examples - score: score predictions - fit_t...
class PSFMagThreshold(BaseEstimator, TransformerMixin): def __init__(self, p=1.45,): def fit(self,X): def transform(self, X): def predict(self, X): def score(self, X): def fit_transform(self, X, y=None):
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
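As a sketch of the pattern the exercise asks for (the class name, threshold semantics, and appended-column behaviour here are invented for illustration; the exercise's real PSFMagThreshold may differ):

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class ThresholdFlag(BaseEstimator, TransformerMixin):
    """Append a binary column flagging rows whose first feature exceeds p."""

    def __init__(self, p=1.45):
        self.p = p  # hyperparameter, settable by GridSearchCV

    def fit(self, X, y=None):
        return self  # a fixed threshold has nothing to learn

    def transform(self, X):
        flag = (X[:, 0] > self.p).astype(float).reshape(-1, 1)
        return np.hstack([X, flag])
    # fit_transform comes for free from TransformerMixin

X = np.array([[1.0], [2.0]])
out = ThresholdFlag(p=1.45).transform(X)
print(out)
```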
Now let's make a feature set that combines this feature with the PCA features:
from sklearn.pipeline import FeatureUnion transformers = feat_union = X_transformed =
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
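A small sketch of FeatureUnion on toy data (the choice of transformers is an assumption): it runs the transformers side by side and concatenates their outputs column-wise:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))

# two transformers applied in parallel (illustrative choices)
transformers = [('pca', PCA(n_components=2)), ('scaled', StandardScaler())]
feat_union = FeatureUnion(transformers)

# 2 PCA components + 6 scaled originals = 8 columns
X_transformed = feat_union.fit_transform(X)
print(X_transformed.shape)
```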
Now we can build the pipeline:
# combine the transformers transformers = # make the feature union feat_union = # combine estimators for the pipeline estimators = # define the pipeline object pipe_c = # make the parameter set params = # perform the grid search grid_search_c = # complete the print statements: print("Best score: ") print("Be...
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Exercise 10: Run a logistic regression classifier on this data, for a very low regularization (0.0001) and a very large regularization (10000) parameter. Print the accuracy and a confusion matrix of the results for each run. How many mis-classified samples are in each? Where do the mis-classifications end up? If you we...
from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix, accuracy_score X_train2, X_test2, y_train2, y_test2 = train_test_split(X_new, y_new, test_size = 0.3, rando...
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Exercise 11: Take a look at the metrics implemented for model evaluation in scikit-learn, in particular the different versions of the F1 score. Is there a metric that may be more suited to the task above? Which one? Hint: Our imbalanced class, the one we're interested in, is the STAR class. Make sure you set the keywo...
for C in C_all:     lr = # ... insert code here ...     # predict the validation set     y_pred = lr.predict(X_test2)     # print both accuracy and F1 score for comparison:     # create and plot a confusion matrix:     cm =
Sessions/Session02/Day2/ModelSelection_Exercise.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
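With pos_label pointed at the minority STAR class, f1_score penalizes the missed stars that plain accuracy glosses over; a toy illustration (the labels below are made up):

```python
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# toy imbalanced labels: STAR is the minority class of interest
y_true = ['GALAXY'] * 8 + ['STAR'] * 2
y_pred = ['GALAXY'] * 9 + ['STAR'] * 1  # one star missed

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, pos_label='STAR')
cm = confusion_matrix(y_true, y_pred, labels=['GALAXY', 'STAR'])

print(acc)  # looks great despite the missed star
print(f1)   # noticeably lower
print(cm)
```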
<H2>Data standardization</H2> <P>The mean of every feature must be zero, and the standard deviation 1</P>
from sklearn.preprocessing import StandardScaler # extract features features = ['sepal_length','sepal_width','petal_length','petal_width'] x = df.loc[:, features].values y = df.loc[:,['target']].values # Standardize features stdx = StandardScaler().fit_transform(x) stdDf = pd.DataFrame(data = stdx, columns = features...
MachineLearning/PCA.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
<H2>Principal component analysis into two dimensions </H2>
from sklearn.decomposition import PCA pca = PCA(n_components = 2) principalComponents = pca.fit_transform(stdx) pcDf = pd.DataFrame(data = principalComponents, columns =['PC1', 'PC2']) finalDf = pd.concat([pcDf, df['target']], axis=1) finalDf.head() var1, var2 = pca.explained_variance_ratio_ print('The first componen...
MachineLearning/PCA.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
<H2>Plot everything</H2>
fig = plt.figure(figsize = (4,4)) ax = fig.add_subplot(111) xlabel = 'Component 1 (%2.2f %% $\sigma^2$)'%var1 ylabel = 'Component 2 (%2.2f %% $\sigma^2$)'%var2 ax.set_xlabel(xlabel, fontsize = 12) ax.set_ylabel(ylabel, fontsize = 12) ax.set_title('Two component analysis', fontsize = 15) mytargets = np.unique(df['targe...
MachineLearning/PCA.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
A float. Note that Python just automatically converts the result of division to a float, to be more correct. That kind of automatic data type change was a problem in the old times, which is why older systems would rather insist on returning the same kind of data type as the user provided. These days, the focus has ...
1 / 10 + 2.0 # all fine here as well 4 / 2 # even though mathematically not required, Python returns a float here as well. 4 // 2 # But if you need an integer to be returned, force it with //
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
The reason why this automatic type conversion is even possible within Python is because it is a so-called "dynamically typed" programming language, as opposed to "statically typed" ones like C(++) and Java. Meaning, in Python this is possible:
a = 5 a a = 'astring' a
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
I just changed the datatype of a without deleting it first. It was just changed to whatever I need it to be. But remember:
from IPython.display import YouTubeVideo YouTubeVideo('b23wrRfy7SM')
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
(read here, if you are interested in all the multi-media display capabilities of the Jupyter notebook.) A note about names and values
x = 10 y = 2 * x x = 25 y # What is the value of y? If you are surprised, please discuss it.
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Nice (lengthy / thorough) discussion of this: http://nedbatchelder.com/text/names.html We haven't yet covered some of the concepts that appear in this blog post so don't panic if something looks unfamiliar. Today: More practice with IPython & a simple formula Recall that to start a Jupyter notebook, simply type (in yo...
6.67e-11 * 5.97e24 * 70 / (6.37e6)**2 # remember: the return of the last line in any cell will be automatically printed
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Notice that I put spaces on either side of each mathematical operator. This isn't required, but enhances clarity. Consider the alternative:
6.67e-11*5.97e24*70/(6.37e6)**2
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Example 2 - Find the acceleration due to Earth's gravity (the g in F = mg) Using the gravitation equation above, set $m_2 = 1$ kg $$F(6.37 \times 10^{6}) = 6.67 \times 10^{-11} \cdot \frac{5.97 \times 10^{24} \cdot 1}{(6.37 \times 10^{6})^2}$$
6.67e-11 * 5.97e24 * 1 / (6.37e6)**2
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
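As a quick sanity check, the expression above reproduces the familiar value of g:

```python
# acceleration due to Earth's gravity at the surface, from F = G*m1*m2/r**2 with m2 = 1 kg
g = 6.67e-11 * 5.97e24 * 1 / (6.37e6)**2
print(round(g, 2))  # about 9.81 m/s^2
```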
Q. Why would the above $F(r)$ implementation be inconvenient if we had to do this computation many times, say for different masses? Q. How could we improve this?
G = 55 G = 6.67e-11 m1 = 5.97e24 m2 = 70 r = 6.37e6 F = G * m1 * m2 / r**2 # white-space for clarity! F # remember: no print needed for the last item of a cell.
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. What do the "x = y" statements do?
G = 6.67e-11 mass_earth = 5.97e24 mass_object = 70 radius = 6.37e6 force = G * mass_earth * mass_object / radius**2 force
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. Can you imagine a downside to descriptive variable names? Dealing with long lines of code Split long lines with a backslash (with no space after it, just carriage return):
force2 = G * mass_earth * \ mass_object / radius**2 force2
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
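Another option, which many style guides prefer over the backslash: wrap the expression in parentheses and break the line inside them (the variable values are repeated here so the sketch is self-contained):

```python
G = 6.67e-11
mass_earth = 5.97e24
mass_object = 70
radius = 6.37e6

# implicit line continuation inside parentheses, no backslash needed
force2 = (G * mass_earth *
          mass_object / radius**2)
print(round(force2))  # Newtons
```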
Reserved Words Using "reserved words" will lead to an error:
lambda = 5000 # Some wavelength in Angstroms
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
See p.10 of the textbook for a list of Python's reserved words. Some really common ones are: and, break, class, continue, def, del, if, elif, else, except, False, for, from, import, in, is, lambda, None, not, or, pass, return, True, try, while Comments
# Comments are specified with the pound symbol # # Everything after a # in a line is ignored by Python
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. What will the line below do?
print('this') # but not 'that'
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
As a rough guideline, it's good practice to comment about 50% of your code! But one can reduce that considerably by choosing intelligible variable names. There is another way to specify "block comments": using two sets of 3 quotation marks ''' '''.
# Comments without ''' ''' or # create an error: This is a comment that takes several lines. # However, in this form it does not, even for multiple lines: # ''' This is a really, super, super, super, super, super, super, super, super, super, super, super, super, super, super, super, super, long comment (not really). ...
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Notice that that comment was actually printed. That's because it's not technically a comment that is totally ignored, but just a multi-line string object, which is used in source code to document your code. Why does that work? Because that long multi-line string is not being assigned to a variable, so the Python...
from math import pi # more in today's tutorial # With old style formatting "pi = %.6f" % pi # With new style formatting. # It's longer in this example, but is much more powerful in general. # You decide, which one you want to use. "pi = {:.6f}".format(pi) myPi = 3.92834234 print("The Earth's mass ...
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Hard to read!! (And note the junk at the end.) Consider %x.yz: the % inside the quotes means a "format statement" follows; x is the number of characters in the resulting string (not required); y is the number of digits after the decimal point (not required); z is the format (e.g. f (float), e (scientific),...
print(radius, force) # still alive from far above!
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Q. What will the next statement print?
# If we use triple quotes we don't have to # use \ for multiple lines print('''At the Earth's radius of %.2e meters, the force is %6.0f Newtons.''' % (radius, force)) # Justification print("At the Earth's radius of %.2e meters, \ the force is %-20f Newtons." % (radius, force))
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Note that when block comments are used, the text appears on two lines, whereas when using the \, the text appears all on one line.
print("At the Earth's radius of %.2e meters, the force is %.0f Newtons." % (radius, force)) print("At the Earth's radius of %.2e meters, the force is %i Newtons." % (radius, force))
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
Note the difference between %.0f (float) and %i (integer) (rounding vs. truncating). Also note that the new formatting system actually warns you when you do something that would lose precision:
print("At the Earth's radius of {:.2e} meters, the force is {:.0f} Newtons.".format(radius, force)) print("At the Earth's radius of {:.2e} meters, the force is {:i} Newtons.".format(radius, force)) # Line breaks can also be implemented with \n print('At the Earth radius of %.2e meters,\nthe force is\n%0.0f Newtons.' ...
lecture_02_basics.ipynb
CUBoulder-ASTR2600/lectures
isc
<a id=want></a> The want operator We need to know what we're trying to do -- what we want the data to look like. To borrow a phrase from our friend Tom Sargent, we say that we apply the want operator. Some problems we've run across that need to be solved: Numerical data is contaminated by commas (marking thousands) o...
url = 'https://raw.githubusercontent.com/TheUpshot/chipotle/master/orders.tsv' chipotle = pd.read_csv(url, sep='\t') # tab (\t) separated values print('Variable dtypes:\n', chipotle.dtypes, sep='') chipotle.head()
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
NYUDataBootcamp/Materials
mit
Comment. Note that the variable item_price has dtype object. The reason is evidently the dollar sign. We want to have it as a number, specifically a float. Example: Data Bootcamp entry poll This is the poll we did at the start of the course. Responses were collected in a Google spreadsheet, which we converted to a...
url1 = "https://raw.githubusercontent.com/NYUDataBootcamp/" url2 = "Materials/master/Data/entry_poll_spring17.csv" url = url1 + url2 entry_poll = pd.read_csv(url) entry_poll.head() print('Dimensions:', entry_poll.shape) print('Data types:\n\n', entry_poll.dtypes, sep='')
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
NYUDataBootcamp/Materials
mit
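The item_price cleanup described above, stripping the dollar sign and casting to float, can be sketched on a toy frame (the sample prices are made up):

```python
import pandas as pd

# toy stand-in for chipotle's item_price column (illustrative values)
df = pd.DataFrame({'item_price': ['$2.39 ', '$10.98 ']})

# drop the dollar sign, then cast the strings to floats
df['item_price'] = df['item_price'].str.replace('$', '', regex=False).astype(float)
print(df['item_price'].dtype)
```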
Comments. This is mostly text data, which means it's assigned the dtype object. There are two things that would make the data easier to work with: First: The column names are excessively verbose. This one's easy: We replace them with single words. Which we do below.
# (1) create list of strings with the new varnames newnames = ['time', 'why', 'program', 'programming', 'prob_stats', 'major', 'career', 'data', 'topics'] newnames # (2) Use the str.title() string method to make the varnames prettier newnames = [name.title() for name in newnames]
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
NYUDataBootcamp/Materials
mit
str.title() returns a copy of the string in which first characters of all the words are capitalized.
newnames # (3) assign newnames to the variables entry_poll.columns = newnames entry_poll.head(1)
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
NYUDataBootcamp/Materials
mit
Second: The second one is harder. The question about special topics of interest says "mark all that apply." In the spreadsheet, we have a list of every choice the person checked. Our want is to count the number of each type of response. For example, we might want a bar chart that gives us the number of each respons...
# check multi-response question to see what we're dealing with entry_poll['Topics'].head(20)
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
NYUDataBootcamp/Materials
mit
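One way to unpack and count the comma-separated answers: split each response into a list, explode the lists into one row per choice, and tally (the toy responses below are invented):

```python
import pandas as pd

# toy multi-response answers, comma-separated as in the poll
topics = pd.Series(['Web scraping, Machine learning',
                    'Machine learning',
                    'Web scraping'])

# split -> one list per respondent; explode -> one row per choice; then count
counts = topics.str.split(', ').explode().value_counts()
print(counts)
```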
Comment. Note the commas separating answers with more than one choice. We want to unpack them somehow. Example: OECD healthcare statistics The OECD collects healthcare data on lots of (mostly rich) countries, which is helpful in producing comparisons. Here we use a spreadsheet that can be found under Frequently R...
url1 = 'http://www.oecd.org/health/health-systems/' url2 = 'OECD-Health-Statistics-2016-Frequently-Requested-Data.xls' oecd = pd.read_excel(url1 + url2) oecd.head()
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
NYUDataBootcamp/Materials
mit
This looks bad. But we can always use pd.read_excel?. Let's look into the excel file. * multiple sheets (want: Physicians)
oecd = pd.read_excel(url1 + url2, sheet_name='Physicians') oecd.head()
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
NYUDataBootcamp/Materials
mit
The first three lines are empty. Skip those
oecd = pd.read_excel(url1 + url2, sheet_name='Physicians', skiprows=3) oecd.head()
Code/notebooks/bootcamp_pandas_adv1-clean.ipynb
NYUDataBootcamp/Materials
mit
It would be nice to have the countries as indices
oecd = pd.read_excel(url1 + url2, sheet_name='Physicians', skiprows=3, index_col=0) oecd.head()
The last two columns contain junk
oecd.shape

# drop 57th and 58th columns
# there is no skipcols argument, so let's google "read_excel skip columns" -> usecols
oecd = pd.read_excel(url1 + url2, sheetname='Physicians', skiprows=3,
                     index_col=0, usecols=range(57))
o...
What about the bottom of the table?
oecd.tail()
# we are downloading the footnotes too
?pd.read_excel
# -> skip_footer
# How many rows to skip??
oecd.tail(25)

oecd = pd.read_excel(url1 + url2, sheetname='Physicians', skiprows=3,
                     index_col=0, usecols=range(57), ...
We still have a couple issues. The index includes a space and a number: Australia 1, Chile 3, etc. We care about this because when we plot the data across countries, the country labels are going to be country names, so we want them in a better form than this. The ..'s in the sheet lead us to label any column tha...
url = 'http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/WEOOct2016all.xls'
# Try weo = pd.read_excel(url)  # NOT an excel file!
# try to open the file with a plain text editor (it is a TSV)
weo = pd.read_csv(url, sep='\t')
weo.head()
Useful columns: - 1, 2, 3, 4, 6 (indices) - years, say from 1980 to 2016 Need a list that specifies these
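Comment. The slicing pattern we are about to use can be seen on a toy list of made-up column names (these are not the actual WEO columns):

```python
# toy illustration of building column lists by slicing
names = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
details = names[1:5] + [names[6]]   # positions 1-4 plus position 6
print(details)  # → ['b', 'c', 'd', 'e', 'g']
```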
names = list(weo.columns)
names[:8]

# for var details
details_list = names[1:5] + [names[6]]
# for years
years_list = names[9:-6]
details_list

weo = pd.read_csv(url, sep='\t', index_col='ISO', usecols=details_list + years_list)
weo.head()
Look at the bottom
weo.tail(3)

weo = pd.read_csv(url, sep='\t', index_col='ISO',
                  usecols=details_list + years_list,
                  skipfooter=1, engine='python')
# read_csv requires the 'python' engine for skipfooter (otherwise a warning)
weo.tail()
Missing values
weo = pd.read_csv(url, sep='\t', index_col='ISO',
                  usecols=details_list + years_list,
                  skipfooter=1, engine='python', na_values='n/a')
weo.head()
weo.dtypes[:10]   # still not ok
Notice the commas used as thousands separators. As we saw before, there is an easy fix
weo = pd.read_csv(url, sep='\t', index_col='ISO',
                  usecols=details_list + years_list,
                  skipfooter=1, engine='python',
                  na_values='n/a', thousands=',')
weo.head()
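Comment. Here is the thousands option on a tiny made-up TSV snippet (so it runs without downloading anything):

```python
import pandas as pd
from io import StringIO

# hypothetical TSV snippet with comma thousands separators
data = "country\tgdp\nUSA\t17,947\nChina\t11,008\n"
df = pd.read_csv(StringIO(data), sep='\t', thousands=',')
print(df['gdp'].tolist())  # → [17947, 11008] — the commas are parsed away
```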
Pandas string methods. We can do the same thing to all the observations of a variable with so-called string methods. We append .str to a variable in a dataframe and then apply the string method of our choice. If this is part of converting a number-like entry that has mistakenly been given dtype object, we then conver...
chipotle.head()

# create a copy of the df to play with
chipotle_num = chipotle.copy()
print('Original dtype:', chipotle_num['item_price'].dtype)

# delete dollar signs (dtype does not change!)
chipotle_num['item_price'].str.replace('$', '').head()

# delete dollar signs, convert to float, AND assign back to chipotl...
Comment. We did everything here in one line: we replaced the dollar sign with a string method, then converted to float using astype. If you think this is too dense, you might break it into two steps. Example. Here we use the astype method again to convert the dtypes of weo into float
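Comment. The two-step version looks like this on a hypothetical toy series (the prices are made up, not the chipotle data):

```python
import pandas as pd

prices = pd.Series(['$2.39', '$10.98', '$1.09'])
no_dollar = prices.str.replace('$', '', regex=False)  # step 1: strip the dollar sign
as_float = no_dollar.astype(float)                    # step 2: convert the dtype
print(as_float.sum())  # → 14.46
```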
weo.head(1)
weo.head(1).dtypes
Want to convert the year variables into float
weo['1980'].astype(float)
This error indicates that somewhere in weo['1980'] there is a string value --. We want to convert that into NaN. Later we will see how we can do that directly. For now use read_csv() again
weo = pd.read_csv(url, sep='\t', index_col='ISO',
                  usecols=details_list + years_list,
                  skipfooter=1, engine='python',
                  na_values=['n/a', '--'], thousands=',')
weo.head(1)
# With that out of our way, we can do t...
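Comment. The na_values option can be seen on a tiny made-up snippet (not the WEO file), so you can check the mechanics without the download:

```python
import pandas as pd
from io import StringIO

# hypothetical snippet with '--' standing in for missing values
data = "a\tb\n1\t--\n2\t3\n"
df = pd.read_csv(StringIO(data), sep='\t', na_values=['--'])
print(df['b'].isna().sum())  # → 1: the '--' entry became NaN
```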
Example. Here we strip off the numbers at the end of the indexes in the OECD docs dataframe. This involves some experimentation: Play with the rsplit method to see how it works. Apply rsplit to the example country = 'United States 1'. Use a string method to do this to all the entries of the variable Country.
# try this with an example first
country = 'United States 1'

# get documentation for the rsplit method
country.rsplit?

# an example
country.rsplit()
Comment. Not quite, we only want to split once.
# what about this?
country.rsplit(maxsplit=1)

# one more step, we want the first component of the list
country.rsplit(maxsplit=1)[0]

oecd.index
oecd.index.str.rsplit(maxsplit=1)[0]   # try
oecd.index.str.rsplit?

# Note the TWO str's
oecd.index.str.rsplit(n=1).str[0]
# or use the str.get() method
oecd.index.str.rsplit...
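The same idea on a small made-up index (mimicking the OECD country labels, but self-contained):

```python
import pandas as pd

# hypothetical index in the style of the OECD sheet
idx = pd.Index(['Australia 1', 'Chile 3', 'United States 1'])
cleaned = idx.str.rsplit(n=1).str[0]   # split once from the right, keep the name
print(list(cleaned))  # → ['Australia', 'Chile', 'United States']
```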
Comments. Note that we need two str's here: one to do the split, the other to extract the first element. Also note that the string accessor's rsplit takes the keyword n rather than Python's maxsplit, which is why maxsplit=1 fails on the index but n=1 works. This is probably more than you want to know, but file away the possibilities in case you need them. <a id='m...
docs = oecd
docs.head()
Comment. Replace automatically updates the dtypes. Here the double dots led us to label the variables as objects. After the replace, they're now floats, as they should be.
docsna = docs.replace(to_replace=['..'], value=[None])
docsna.dtypes
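Comment. A toy version of the same move, on a made-up column that mixes parsed floats with '..' placeholders the way the OECD sheet does:

```python
import pandas as pd

# hypothetical column: floats with '..' strings mixed in
df = pd.DataFrame({'docs': [1.5, '..', 3.0]})
print(df['docs'].dtype)   # object: the '..' string forces it

clean = df.replace(to_replace=['..'], value=[None])
print(clean['docs'].isna().sum())  # → 1: the '..' entry became missing
```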
Working with missing values
# grab a variable to play with
var = docsna[2013].head(10)
var

# why not '2013'? check the type
docsna.columns

# which ones are missing ("null")?
var.isnull()

# which ones are not missing ("not null")?
var.notnull()

# drop the missing
var.dropna()
Comment. We usually don't have to worry about this; Pandas takes care of missing values automatically. Comment. Let's try a picture to give us a feeling of accomplishment. What else would you say we need? How would we get it?
docsna[2013].plot.barh(figsize=(4, 12))
<a id='selection'></a> Selecting variables and observations The word selection refers to choosing a subset of variables or observations using their labels or index. Similar methods are sometimes referred to as slicing, subsetting, indexing, querying, or filtering. We'll treat the terms as synonymous. There are lots...
# we create a small dataframe to experiment with
small = weo.head()
small
Example. Let's try each of these in a different cell and see what they do: small[['Country', 'Units']] small[[0, 4]] small['2011'] small[1:3] Can you explain the results?
small[['Country', 'Units']]
small[[0, 4]]
small['2011']
small[1:3]
small[[False, True, True, False, False]]
<a id='boolean'></a> Boolean selection We choose observations that satisfy one or more conditions. Boolean selection consists of two steps that we typically combine in one statement: Use a comparison to construct a Boolean variable consisting of True and False. Compute df[comparison], where df is a dataframe and co...
weo.head(2)
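Comment. Before applying this to weo, the two steps can be seen on a toy frame (the data here is made up):

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 5, 10], 'y': ['a', 'b', 'c']})
mask = df['x'] >= 5    # step 1: a Boolean Series of True/False
print(df[mask])        # step 2: keep only the rows where mask is True
```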
Find variable and country codes. Which ones do we want? Let's start by seeing what's available. Here we create special dataframes that include all the variables and their definitions and all the countries. Note the use of the drop_duplicates method, which does what it sounds like: remove duplicate rows (!)
variable_list = weo[['Country', 'Subject Descriptor', 'Units']].drop_duplicates()
print('Number of variables: ', variable_list.shape[0])
variable_list.head()

country_list = weo['Country'].drop_duplicates()
print('Number of countries: ', country_list.shape[0])
country_list
Exercise. Construct a list of countries with countries = weo['Country']; that is, without applying the drop_duplicates method. How large is it? How many duplicates have we dropped? <!-- cn = sorted(list(set(weo.index))) --> <!-- * What are the country codes (`ISO`) for Argentina and the United States? * What ...
small
small['Units'] == 'National currency'
small['2011'] >= 200
(small['Units'] == 'National currency') & (small['2011'] >= 100)
(small['Units'] == 'National currency') | (small['2011'] >= 100)
Exercise. Construct dataframes for which small['Units'] does not equal 'National currency'. small['Units'] equals 'National currency' and small['2011'] is greater than 100. <a id='isin'></a> The isin method Pay attention now, this is really useful. Suppose we want to extract the data for which weo['Country'] == '...
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo['WEO Subject Code'].isin(vlist)
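Comment. The same mechanics on a toy series (the values are made up, not the WEO codes):

```python
import pandas as pd

s = pd.Series(['AUS', 'USA', 'CHL', 'USA'])
mask = s.isin(['USA', 'CHL'])   # True where the value is in the list
print(mask.sum())  # → 3 matching observations
```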
Comment. We're choosing 2 variables from 45, so there are lots of Falses.
weo.tail(4)

# this time let's use the result of isin for selection
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo[weo['WEO Subject Code'].isin(vlist)].head(6)

# we've combined several things in one line
comparison = weo['WEO Subject Code'].isin(vlist)
selection = weo[comparison]
selection.head(6)
Comment. We can do the same thing with countries. If we want to choose two variables and three countries, the code looks like:
variables = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
countries = ['Argentina', 'Greece']
weo_sub = weo[weo['WEO Subject Code'].isin(variables) & weo['Country'].isin(countries)]
weo_sub
Comments. We've now done what we described when we applied the want operator. This is a go-to method. Circle it for later reference. Exercise. Use the isin method to extract Gross domestic product in US dollars for China, India, and the United States. Ass...
countries = ['China', 'India', 'United States']
gdp = weo[(weo['WEO Subject Code'] == 'NGDPD') & weo['Country'].isin(countries)]
gdp
Exercise (challenging). Plot the variable gdp['2015'] as a bar chart. What would you say it needs?
gdp['2015'].plot(kind='bar')
<a id='contains'></a> The contains method Another useful one. The contains string method for series identifies observations that contain a specific string. If yes, the observation is labelled True; if no, False. A little trick converts the True/False outcomes to ones and zeros. We apply it to the Topics variable o...
# recall
entry_poll['Topics'].head(10)

# the contains method
entry_poll['Topics'].str.contains('Machine Learning')
Comment. That's pretty good, we now know which students mentioned Machine Learning and which did not. It's more useful, though, to convert this to zeros (False) and ones (True), which we do with this trick: we multiply by 1.
entry_poll['Topics'].str.contains('Machine Learning').head(10)*1
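Comment. On a toy series (made-up responses, not the poll data), contains followed by the *1 trick looks like:

```python
import pandas as pd

s = pd.Series(['Machine Learning, Web scraping', 'regression'])
flags = s.str.contains('Machine Learning') * 1   # True/False → 1/0
print(flags.tolist())  # → [1, 0]
```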
Comment. Now let's do the same for some of the other entries and save them in new variables.
topics = ['Web scraping', 'Machine Learning', 'regression']
old_ep = entry_poll.copy()

vnames = []
for x in topics:
    newname = 'Topics' + '_' + x
    vnames.append(newname)
    entry_poll[newname] = entry_poll['Topics'].str.contains(x)*1

vnames
Comment. You might want to think about this a minute. Or two.
# create new df of just these variables
student_topics = entry_poll[vnames]
student_topics

# count them with the sum method
topics_counts = student_topics.sum()
topics_counts
Comment. Just for fun, here's a bar graph of the result.
topics_counts.plot(kind='barh')
and a pie chart
topics_counts.plot(kind='pie')