Note: To date, the Noddy call from Python invokes Noddy through the subprocess module. In a future implementation, this call could be substituted with a full Python wrapper for the C functions. Therefore, using the member function compute_model is not only easier, but also the more "future-proof" way to...
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
import numpy as np

N1 = pynoddy.NoddyOutput(output_name)
AM = pynoddy.NoddyTopology(output_name)
am_name = root_name + '_uam.bin'
print am_name
print AM.maxlitho
image = np.empty((int(AM.maxlitho), int(AM.maxlitho)), np.uint8)
image.data[:] = open(am_...
docs/notebooks/9-Topology.ipynb
flohorovicic/pynoddy
gpl-2.0
Create a simple package with a few simple modules that we will update.
directory = "../examplepackage/"
if not os.path.exists(directory):
    os.makedirs(directory)

%%writefile ../examplepackage/neato.py
def torpedo():
    print('First module modification 0!')

%%writefile ../examplepackage/neato2.py
def torpedo2():
    print('Second module modification 0!')

%%writefile ../examplepack...
notebooks/autoreload-example.ipynb
jbwhit/WSP-312-Tips-and-Tricks
mit
%autoreload 1

The docs say:

```
%autoreload 1

Reload all modules imported with %aimport every time before executing
the Python code typed.
```
import examplepackage.neato
import examplepackage.neato2
import examplepackage.neato3

%autoreload 1
%aimport examplepackage
You might think that importing examplepackage would result in that package being auto-reloaded if you updated code inside of it. You'd be wrong. Follow along!
examplepackage.neato.torpedo()
examplepackage.neato2.torpedo2()
examplepackage.neato3.torpedo3()

%%writefile ../examplepackage/neato.py
def torpedo():
    print('First module modification 1')

%%writefile ../examplepackage/neato2.py
def torpedo2():
    print('Second module modification 1')

%%writefile ../examplep...
Nothing is updated. You have to import the module explicitly like:
%autoreload 1
%aimport examplepackage.neato

examplepackage.neato.torpedo()
examplepackage.neato2.torpedo2()
examplepackage.neato3.torpedo3()
%autoreload 2

The docs say:

```
%autoreload 2

Reload all modules (except those excluded by %aimport) every time
before executing the Python code typed.
```

I read this as "if you set %autoreload 2, then it will reload all modules except whatever you %aimport examplepackage.module". This is not how it works. When using...
%autoreload 2
%aimport examplepackage.neato
%aimport -examplepackage.neato2

examplepackage.neato.torpedo()
examplepackage.neato2.torpedo2()
examplepackage.neato3.torpedo3()

%%writefile ../examplepackage/neato.py
def torpedo():
    print('First module modification 2!')

%%writefile ../examplepackage/neato2.py
def ...
Logistic Regression Classification Using Linear Regression

Load your data.
from helpers import sample_data, load_data, standardize

# load data.
height, weight, gender = load_data()

# build sampled x and y.
seed = 1
y = np.expand_dims(gender, axis=1)
X = np.c_[height.reshape(-1), weight.reshape(-1)]
y, X = sample_data(y, X, seed, size_samples=200)
x, mean_x, std_x = standardize(X)
labs/ex05/template/ex05.ipynb
kcyu1993/ML_course_kyu
mit
Use least_squares to compute w, and visualize the results.
from least_squares import least_squares
from plots import visualization

def least_square_classification_demo(y, x):
    # ***************************************************
    # INSERT YOUR CODE HERE
    # classify the data by linear regression: TODO
    # ***************************************************
    tx =...
Logistic Regression

Compute your cost by negative log likelihood.
def sigmoid(t):
    """apply sigmoid function on t."""
    # ***************************************************
    # INSERT YOUR CODE HERE
    # Sigmoid or Logistic function
    # ***************************************************
    return 1/(1+np.exp(-t))

def calculate_loss(y, tx, w):
    """compute the cost ...
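A complete version of these two template functions might look like the following. This is a sketch, not the course's official solution; it assumes y is a column vector of 0/1 labels and that tx already contains a bias column:

```python
import numpy as np

def sigmoid(t):
    """Apply the sigmoid (logistic) function elementwise."""
    return 1 / (1 + np.exp(-t))

def calculate_loss(y, tx, w):
    """Compute the cost by negative log likelihood.

    Uses the identity  sum_n log(1 + exp(x_n^T w)) - y_n * x_n^T w.
    """
    pred = tx.dot(w)
    return np.sum(np.log(1 + np.exp(pred)) - y * pred)
```

With w = 0 every prediction is 0.5, so the loss reduces to N * log(2).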
Using Gradient Descent Implement your function to calculate the gradient for logistic regression.
def learning_by_gradient_descent(y, tx, w, alpha):
    """
    Do one step of gradient descent using logistic regression.
    Return the loss and the updated w.
    """
    # ***************************************************
    # INSERT YOUR CODE HERE
    # compute the cost: TODO
    # *******************************...
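A filled-in sketch of this step (under the same assumptions as above: 0/1 column-vector labels, bias column in tx). The gradient of the negative log likelihood is tx.T (sigma(tx w) - y):

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def calculate_gradient(y, tx, w):
    """Gradient of the negative log likelihood: tx^T (sigma(tx w) - y)."""
    return tx.T.dot(sigmoid(tx.dot(w)) - y)

def learning_by_gradient_descent(y, tx, w, alpha):
    """One gradient descent step; returns (loss before the step, updated w)."""
    pred = tx.dot(w)
    loss = np.sum(np.log(1 + np.exp(pred)) - y * pred)
    w = w - alpha * calculate_gradient(y, tx, w)
    return loss, w
```

Repeated calls should drive the loss down on any separable toy dataset.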
Demo!
from helpers import de_standardize

def logistic_regression_gradient_descent_demo(y, x):
    # init parameters
    max_iter = 10000
    threshold = 1e-8
    alpha = 0.001
    losses = []

    # build tx
    tx = np.c_[np.ones((y.shape[0], 1)), x]
    w = np.zeros((tx.shape[1], 1))

    # start the logistic regression ...
Calculate your Hessian below.
def calculate_hessian(y, tx, w):
    """return the hessian of the loss function."""
    # ***************************************************
    # INSERT YOUR CODE HERE
    # calculate hessian: TODO
    # ***************************************************
    S = np.zeros((len(tx), len(tx)))
    for i in range(len(tx)...
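A vectorised alternative to the loop in the template above: the Hessian of the negative log likelihood is tx^T S tx, where S is diagonal with entries sigma(x_n^T w)(1 - sigma(x_n^T w)). This is a sketch, keeping the template's signature (y is unused here):

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def calculate_hessian(y, tx, w):
    """Return the Hessian tx^T S tx of the negative log likelihood."""
    pred = sigmoid(tx.dot(w)).ravel()
    S = np.diag(pred * (1 - pred))  # N x N diagonal matrix
    return tx.T.dot(S).dot(tx)
```

With w = 0 the diagonal entries are all 0.25, so for tx = I the Hessian is 0.25 I.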
Write a function below to return loss, gradient, and hessian.
def logistic_regression(y, tx, w):
    """return the loss, gradient, and hessian."""
    # ***************************************************
    # INSERT YOUR CODE HERE
    # return loss, gradient, and hessian: TODO
    # ***************************************************
    return calculate_loss(y, tx, w), calcula...
Using Newton's method

Use Newton's method for logistic regression.
def learning_by_newton_method(y, tx, w, alpha):
    """
    Do one step of Newton's method.
    Return the loss and updated w.
    """
    # ***************************************************
    # INSERT YOUR CODE HERE
    # return loss, gradient and hessian: TODO
    # ***********************************************...
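A self-contained sketch of the Newton step, under the same assumptions as the earlier blocks. The update is w <- w - alpha * H^{-1} grad, with H and grad as derived above; solving the linear system is preferred over forming H^{-1} explicitly:

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def learning_by_newton_method(y, tx, w, alpha):
    """One Newton step: w <- w - alpha * H^{-1} grad. Returns (loss, w)."""
    pred = tx.dot(w)
    loss = np.sum(np.log(1 + np.exp(pred)) - y * pred)
    grad = tx.T.dot(sigmoid(pred) - y)
    p = sigmoid(pred).ravel()
    hess = tx.T.dot(np.diag(p * (1 - p))).dot(tx)
    # solve H d = grad rather than inverting H
    w = w - alpha * np.linalg.solve(hess, grad)
    return loss, w
```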
demo
def logistic_regression_newton_method_demo(y, x):
    # init parameters
    max_iter = 10000
    alpha = 0.01
    threshold = 1e-8
    lambda_ = 0.1
    losses = []

    # build tx
    tx = np.c_[np.ones((y.shape[0], 1)), x]
    w = np.zeros((tx.shape[1], 1))

    # start the logistic regression
    for iter in range(m...
Using penalized logistic regression Fill in the function below.
def penalized_logistic_regression(y, tx, w, lambda_):
    """return the loss, gradient, and hessian."""
    # ***************************************************
    # INSERT YOUR CODE HERE
    # return loss, gradient: TODO
    # ***************************************************
    loss = calculate_loss(y, tx, w) # or...
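A filled-in sketch with an L2 penalty. Conventions vary (some courses use lambda/2 * ||w||^2); this version adds lambda * ||w||^2 to the loss, 2*lambda*w to the gradient, and 2*lambda*I to the Hessian:

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def penalized_logistic_regression(y, tx, w, lambda_):
    """Return loss, gradient and Hessian with an L2 penalty lambda * ||w||^2."""
    pred = tx.dot(w)
    loss = np.sum(np.log(1 + np.exp(pred)) - y * pred) + lambda_ * np.sum(w ** 2)
    grad = tx.T.dot(sigmoid(pred) - y) + 2 * lambda_ * w
    p = sigmoid(pred).ravel()
    hess = tx.T.dot(np.diag(p * (1 - p))).dot(tx) + 2 * lambda_ * np.eye(tx.shape[1])
    return loss, grad, hess
```

Setting lambda_ = 0 recovers the unpenalized quantities exactly.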
Note that there are two nested MeterGroups: one for the electric oven, and one for the washer dryer (both of which are 240 volt appliances and have two meters per appliance):
elec.nested_metergroups()
docs/manual/user_guide/elecmeter_and_metergroup.ipynb
nilmtk/nilmtk
apache-2.0
Putting these meters into a MeterGroup allows us to easily sum together the power demand recorded by both meters to get the total power demand for the entire appliance (but it's also very easy to see the individual meter power demand too). We can easily get a MeterGroup of either the submeters or the mains:
elec.mains()
We can easily get the power data for both mains meters summed together:
elec.mains().power_series_all_data().head()
elec.submeters()
Stats for MeterGroups

Proportion of energy submetered

Let's work out the proportion of energy submetered in REDD building 1:
elec.proportion_of_energy_submetered()
Note that NILMTK has raised a warning that Mains uses a different type of power measurement than all the submeters, so it's not an entirely accurate comparison. Which raises the question: which type of power measurements are used for the mains and submeters? Let's find out... Active, apparent and reactive power
mains = elec.mains()
mains.available_ac_types('power')
elec.submeters().available_ac_types('power')
next(elec.load())
Total Energy
elec.mains().total_energy() # returns kWh
Energy per submeter
energy_per_meter = elec.submeters().energy_per_meter()  # kWh, again
energy_per_meter
The column headings are the ElecMeter instance numbers. The function fraction_per_meter does the same thing as energy_per_meter but returns the fraction of energy per meter.

Select meters on the basis of their energy consumption

Let's make a new MeterGroup which only contains the ElecMeters which used more than 20 kWh:
# energy_per_meter is a DataFrame where each row is a
# power type ('active', 'reactive' or 'apparent').
# All appliance meters in REDD record 'active' so just select
# the 'active' row:
energy_per_meter = energy_per_meter.loc['active']
more_than_20 = energy_per_meter[energy_per_meter > 20]
more_than_20
instances...
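The selection in the cell above is plain pandas boolean indexing. A self-contained sketch of the same pattern, with hypothetical energy values standing in for the REDD data:

```python
import pandas as pd

# hypothetical energy-per-meter values (kWh), indexed by meter instance
energy_per_meter = pd.Series({3: 45.2, 4: 12.8, 5: 38.1, 6: 7.4})

# keep only the meters that used more than 20 kWh
more_than_20 = energy_per_meter[energy_per_meter > 20]
instances = list(more_than_20.index)  # -> the surviving meter instances
```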
Plot fraction of energy consumption of each appliance
fraction = elec.submeters().fraction_per_meter().dropna()

# Create convenient labels
labels = elec.get_labels(fraction.index)

plt.figure(figsize=(10, 30))
fraction.plot(kind='pie', labels=labels);
Draw wiring diagram We can get the wiring diagram for the MeterGroup:
elec.draw_wiring_graph()
It's not very pretty but it shows that meters (1,2) (the site meters) are upstream of all other meters. Buildings in REDD have only two levels in their meter hierarchy (mains and submeters). If there were more than two levels then it might be useful to get only the meters immediately downstream of mains:
elec.meters_directly_downstream_of_mains()
Plot appliances when they are in use
# sns.set_palette("Set3", n_colors=12)

# Set a threshold to remove residual power noise when devices are off
elec.plot_when_on(on_power_threshold=40)
Stats and info for individual meters The ElecMeter class represents a single electricity meter. Each ElecMeter has a list of associated Appliance objects. ElecMeter has many of the same stats methods as MeterGroup such as total_energy and available_power_ac_types and power_series and power_series_all_data. We will n...
fridge_meter = elec['fridge']
Get upstream meter
fridge_meter.upstream_meter() # happens to be the mains meter group!
Metadata about the class of meter
fridge_meter.device
Dominant appliance If the metadata specifies that a meter has multiple meters connected to it then one of those can be specified as the 'dominant' appliance, and this appliance can be retrieved with this method:
fridge_meter.dominant_appliance()
Total energy
fridge_meter.total_energy() # kWh
Get good sections If we plot the raw power data then we see there is one large gap where, supposedly, the metering system was not working. (if we were to zoom in then we'd see lots of smaller gaps too):
fridge_meter.plot()
We can automatically identify the 'good sections' (i.e. the sections where every pair of consecutive samples is less than max_sample_period specified in the dataset metadata):
good_sections = fridge_meter.good_sections(full_results=True)
# specifying full_results=False would give us a simple list of
# TimeFrames. But we want the full GoodSectionsResults object so we can
# plot the good sections...
good_sections.plot()
The blue chunks show where the data is good. The white gap is the large gap seen in the raw power data. There are lots of smaller gaps that we cannot see at this zoom level. We can also see the exact sections identified:
good_sections.combined()
Dropout rate As well as large gaps appearing because the entire system is down, we also get frequent small gaps from wireless sensors dropping data. This is sometimes called 'dropout'. The dropout rate is a number between 0 and 1 which specifies the proportion of missing samples. A dropout rate of 0 means no samples...
fridge_meter.dropout_rate()
Note that the dropout rate has gone down (which is good!) now that we are ignoring the gaps. This value is probably more representative of the performance of the wireless system.

Select subgroups of meters

We use MeterGroup.select_using_appliances() to select a new MeterGroup using a metadata field. For example, to g...
import nilmtk

nilmtk.global_meter_group.select_using_appliances(type='washer dryer')
Or select multiple appliance types:
elec.select_using_appliances(type=['fridge', 'microwave'])
Or all appliances in the 'heating' category:
nilmtk.global_meter_group.select_using_appliances(category='heating')
Or all appliances in building 1 with a single-phase induction motor(!):
nilmtk.global_meter_group.select_using_appliances(building=1, category='single-phase induction motor')
(NILMTK imports the 'common metadata' from the NILM Metadata project, which includes a wide range of different category taxonomies)
nilmtk.global_meter_group.select_using_appliances(building=2, category='laundry appliances')
Select a group of meters from properties of the meters (not the appliances)
elec.select(device_model='REDD_whole_house')
elec.select(sample_period=3)
Select a single meter from a MeterGroup

We use [] to retrieve a single ElecMeter from a MeterGroup.

Search for a meter using appliances connected to each meter
elec['fridge']
Appliances are uniquely identified within a building by a type (fridge, kettle, television, etc.) and an instance number. If we do not specify an instance number then ElecMeter retrieves instance 1 (instance numbering starts from 1). If you want a different instance then just do this:
elec.select_using_appliances(type='fridge')
elec['light', 2]
To uniquely identify an appliance in nilmtk.global_meter_group then we must specify the dataset name, building instance number, appliance type and appliance instance in a dict:
import nilmtk

nilmtk.global_meter_group[{'dataset': 'REDD', 'building': 1, 'type': 'fridge', 'instance': 1}]
Search for a meter using details of the ElecMeter

Get the ElecMeter with instance = 1:
elec[1]
Instance numbering ElecMeter and Appliance instance numbers uniquely identify the meter or appliance type within the building, not globally. To uniquely identify a meter globally, we need three keys:
from nilmtk.elecmeter import ElecMeterID

# ElecMeterID is a namedtuple for uniquely identifying each ElecMeter
nilmtk.global_meter_group[ElecMeterID(instance=8, building=1, dataset='REDD')]
Select nested MeterGroup We can also select a single, existing nested MeterGroup. There are two ways to specify a nested MeterGroup:
elec[[ElecMeterID(instance=3, building=1, dataset='REDD'),
      ElecMeterID(instance=4, building=1, dataset='REDD')]]

elec[ElecMeterID(instance=(3, 4), building=1, dataset='REDD')]
We can also specify the mains by asking for meter instance 0:
elec[ElecMeterID(instance=0, building=1, dataset='REDD')]
which is equivalent to elec.mains():
elec.mains() == elec[ElecMeterID(instance=0, building=1, dataset='REDD')]
Plot sub-metered data for a single day
redd.set_window(start='2011-04-21', end='2011-04-22')
elec.plot();
plt.xlabel("Time");
Autocorrelation Plot
from pandas.plotting import autocorrelation_plot

elec.mains().plot_autocorrelation();
Daily energy consumption across fridges in the dataset
fridges_restricted = nilmtk.global_meter_group.select_using_appliances(type='fridge')
daily_energy = pd.Series([meter.average_energy_per_period(offset_alias='D')
                          for meter in fridges_restricted.meters])

# daily_energy.plot(kind='hist');
# plt.title('Histogram of daily fridge energy');
# plt....
Correlation dataframe of the appliances
correlation_df = elec.pairwise_correlation()
correlation_df
First, we need to set up our test data. We'll use two relaxation modes that are themselves log-normally distributed.
def H(tau):
    h1 = 1; tau1 = 0.03; sd1 = 0.5
    h2 = 7; tau2 = 10; sd2 = 0.5
    term1 = h1/np.sqrt(2*sd1**2*np.pi) * np.exp(-(np.log10(tau/tau1)**2)/(2*sd1**2))
    term2 = h2/np.sqrt(2*sd2**2*np.pi) * np.exp(-(np.log10(tau/tau2)**2)/(2*sd2**2))
    return term1 + term2

Nfreq = 50
Nmodes = 30
w = np.logspace(-4,...
Double_Maxwell_Lognormal_prior.ipynb
sgrindy/Bayesian-estimation-of-relaxation-spectra
mit
Now, let's calculate the moduli. We'll have both a true version and a noisy version with some random noise added to simulate experimental variance.
wt = tau*w
Kp = wt**2/(1+wt**2)
Kpp = wt/(1+wt**2)

noise_level = 0.02
Gp_true = np.dot(g_true, Kp)
Gp_noise = Gp_true + Gp_true*noise_level*np.random.randn(Nfreq)
Gpp_true = np.dot(g_true, Kpp)
Gpp_noise = Gpp_true + Gpp_true*noise_level*np.random.randn(Nfreq)

plt.loglog(w.ravel(), Gp_true.ravel(), label="True G'")
plt.plo...
Now, we can build the model with PyMC3. I'll make 2: one with noise, and one without.
noisyModel = pm.Model()
with noisyModel:
    g = pm.Lognormal('g', mu=0, tau=0.1, shape=g_true.shape)
    sd1 = pm.HalfNormal('sd1', tau=1)
    sd2 = pm.HalfNormal('sd2', tau=1)
    # we'll log-weight the moduli as in other fitting methods
    logGp = pm.Normal('logGp', mu=np.log(tt.dot(g, Kp)), sd=...
Now we can sample the models to get our parameter distributions:
Nsamples = 2000

trueMapEstimate = pm.find_MAP(model=trueModel)
with trueModel:
    trueTrace = pm.sample(Nsamples, start=trueMapEstimate)
pm.backends.text.dump('./Double_Maxwell_v3_true', trueTrace)

noisyMapEstimate = pm.find_MAP(model=noisyModel)
with noisyModel:
    noisyTrace = pm.sample(Nsamples, start=noisyMapEst...
Plotting the quantiles gives us a sense of the uncertainty in our estimation of $g_i$:
def plot_quantiles(Q, ax):
    ax.fill_between(tau.ravel(), y1=Q['g'][2.5], y2=Q['g'][97.5], color='c', alpha=0.25)
    ax.fill_between(tau.ravel(), y1=Q['g'][25], y2=Q['g'][75], color='c', alpha=0.5)
    ax.plot(tau.ravel(), Q['g'][50], 'b-')
    # sampling localization lines:
    ax.axvli...
Generate new Row Maps

The FERC 1 Row Maps function similarly to the xlsx_maps that we use to track which columns contain what data across years in the EIA spreadsheets. In many FERC 1 tables, a particular piece of reported data is associated not only with a named column in the database, but also with what "row" the data sh...
def get_row_literals(table_name, report_year, ferc1_engine):
    row_literals = (
        pd.read_sql("f1_row_lit_tbl", ferc1_engine)
        .query(f"sched_table_name=='{table_name}'")
        .query(f"report_year=={report_year}")
        .sort_values("row_number")
    )
    return row_literals

def compare_row_litera...
devtools/ferc1/ferc1-new-year.ipynb
catalyst-cooperative/pudl
mit
Identify Missing Respondents Some FERC 1 respondents appear in the data tables, but not in the f1_respondent_id table. During the database cloning process we create dummy entries for these respondents to ensure database integrity. Some of these missing respondents can be identified based on the data they report. For i...
def get_util_from_plants(pudl_out, patterns, display=False):
    """
    Find any utilities associated with a list of patterns for matching plant names.

    Args:
        pudl_out (pudl.output.pudltabl.PudlTable): A PUDL Output Object.
        patterns (iterable of str): Collection of patterns with which to match ...
Missing Respondents

This will show all the as-yet-unidentified respondents. You can then use these respondent IDs to search through other tables for identifying information.
f1_respondent_id = pd.read_sql("f1_respondent_id", ferc1_engine)
missing_respondent_ids = f1_respondent_id[
    f1_respondent_id.respondent_name.str.contains("Missing Respondent")
].respondent_id.unique()
missing_respondent_ids
Utility identification example using plants

Let's use respondent_id==529, which was identified as Tri-State Generation & Transmission in 2019. Searching for that respondent_id in all of the plant-related tables, we find the following plants:
( pudl.glue.ferc1_eia.get_db_plants_ferc1(pudl_settings, years=pc.DATA_YEARS["ferc1"]) .query("utility_id_ferc1==529") )
Create a list of patterns based on plant names

- Pretend this respondent hadn't already been identified.
- Generate a list of plant name patterns based on what we see here.
- Use the above function get_util_from_plants to identify candidate utilities involved with those plants in the EIA data.

Note that the list of patterns...
pudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine=pudl_engine)

get_util_from_plants(
    pudl_out,
    patterns=[
        ".*laramie.*",
        ".*craig.*",
        ".*escalante.*",
    ])
Another example with respondent_id==519
(
    pudl.glue.ferc1_eia.get_db_plants_ferc1(pudl_settings, years=pc.DATA_YEARS["ferc1"])
    .query("utility_id_ferc1==519")
)

get_util_from_plants(
    pudl_out,
    patterns=[
        ".*kuester.*",
        ".*mihm.*",
    ])
And again with respondent_id==531
(
    pudl.glue.ferc1_eia.get_db_plants_ferc1(pudl_settings, years=pc.DATA_YEARS["ferc1"])
    .query("utility_id_ferc1==531")
)

get_util_from_plants(
    pudl_out,
    patterns=[
        ".*leland.*",
        ".*antelope.*",
        ".*dry fork.*",
        ".*laramie.*",
    ])
What about missing respondents in the Plant in Service table? There are a couple of years' worth of plant in service data associated with unidentified respondents. Unfortunately, the plant in service table doesn't have a lot of identifying information. The same is true of the f1_dacs_epda depreciation table.
f1_plant_in_srvce = pd.read_sql_table("f1_plant_in_srvce", ferc1_engine)
f1_plant_in_srvce[f1_plant_in_srvce.respondent_id.isin(missing_respondent_ids)]
Identify new strings for cleaning

Several FERC 1 fields contain freeform strings that should have a controlled vocabulary imposed on them. This function helps identify new, unrecognized strings in those fields each year. Use regular expressions to identify collections of new, related strings, and add them to the approp...
clean_me = [
    {"table": "f1_fuel", "field": "fuel", "strdict": pudl.transform.ferc1.FUEL_STRINGS},
    {"table": "f1_fuel", "field": "fuel_unit", "strdict": pudl.transform.ferc1.FUEL_UNIT_STRINGS},
    {"table": "f1_steam", "field": "plant_kind", "strdict": pudl.transform.ferc1.PLANT_KIND_STRINGS},
    {"ta...
Often, when you are dealing with quantities that represent the sum of a metric over each day or each working day, it makes sense to divide the whole series by the number of days in the period before starting to forecast. For example, if you divide the series of milk production per cow by the number of days in each month, you get...
milk['daily'] = milk.milk.values.flatten() / milk.index.days_in_month
_ = plt.plot(milk.index, milk.daily)
milk.daily.values.sum()
5 Data analysis applications/Homework/1 test autocorrelation and stationarity/Test Autocorrelation and stationarity.ipynb
maxis42/ML-DA-Coursera-Yandex-MIPT
mit
For the series of average daily milk per cow from the previous question, let's use the Dickey-Fuller test to find the order of differencing at which the series becomes stationary. Differencing can be done like this: milk.daily_diff1 = milk.daily - milk.daily.shift(1). To do seasonal differenc...
milk.daily_diff1 = milk.daily - milk.daily.shift(1)
_ = plt.plot(milk.index, milk.daily_diff1)
sm.tsa.stattools.adfuller(milk.daily_diff1.dropna())

milk.daily_diff12 = milk.daily - milk.daily.shift(12)
_ = plt.plot(milk.index, milk.daily_diff12)
sm.tsa.stattools.adfuller(milk.daily_diff12.dropna())

milk.daily_diff1...
For the stationary series from the previous question, plot the autocorrelation function.
sm.graphics.tsa.plot_acf(milk.daily_diff12_1.dropna().values.squeeze(), lags=50)
sm.graphics.tsa.plot_pacf(milk.daily_diff12_1.dropna().values.squeeze(), lags=50);
This is how we define classes:
class Class(object):
    """A simple meaningless Class"""

    def __init__(self, attribute):
        """
        This method gets called during object initialisation.

        :param attribute: attribute of the class to save
        """
        self.attribute = attribute

    def tell(self):
        ...
parallel.ipynb
superbock/parallel2015
mit
Since all (our) functions and classes are well documented, we can always ask how to use them:
function? Class?
Python also has lists which can be accessed by index (indices always start at 0):
x = [1, 2, 4.5, 'bla']
x[1]
x[3]
x.index('bla')
One of the fancier things we can do in Python is list comprehensions. Here's a simple example of how to calculate the squares of some numbers:
[x ** 2 for x in range(10)]
Ok, that's it for now, let's speed things up :) Threads From the Python threading documentation (https://docs.python.org/2/library/threading.html): In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this lim...
import multiprocessing as mp
A simple CPU bound example: We consider a function which sums all primes below a given number. Source: http://www.parallelpython.com/content/view/17/31/#SUM_PRIMES
import math

def isprime(n):
    """Returns True if n is prime and False otherwise"""
    if not isinstance(n, int):
        raise TypeError("argument passed to is_prime is not of 'int' type")
    if n < 2:
        return False
    if n == 2:
        return True
    max = int(math.ceil(math.sqrt(n)))
    i = 2
    whil...
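The cell above is truncated mid-loop; a complete sketch following the cited parallelpython source (the trial-division loop and sum_primes are reconstructed here, so details may differ from the original; range is used so the sketch runs on Python 2 and 3):

```python
import math

def isprime(n):
    """Return True if n is prime and False otherwise."""
    if not isinstance(n, int):
        raise TypeError("argument passed to isprime is not of 'int' type")
    if n < 2:
        return False
    if n == 2:
        return True
    # trial division up to ceil(sqrt(n))
    for i in range(2, int(math.ceil(math.sqrt(n))) + 1):
        if n % i == 0:
            return False
    return True

def sum_primes(n):
    """Calculate the sum of all primes below n."""
    return sum(x for x in range(2, n) if isprime(x))
```

For example, sum_primes(10) sums 2 + 3 + 5 + 7.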
If we simply use the list comprehension without the sum, we get a list of primes smaller than the given one:
[x for x in xrange(10) if isprime(x)]
sum_primes(10)
%timeit sum_primes(10)
Not bad, why should we speed up something like this? Let's see what happens if we ask for the sum of primes below a larger number:
%timeit sum_primes(10000)
What if we do this for a bunch of numbers, e.g.:
%timeit [sum_primes(n) for n in xrange(1000)]
Ok, this is definitely too slow, but we should be able to run this in parallel easily, since we are asking for the same thing (summing all prime numbers below the given number) for a lot of numbers. We do this by calling this very same function a thousand times (inside the list comprehension). Python has a map funct...
map(sum_primes, xrange(10))
This is basically the same as what we had before with our list comprehension.
%timeit map(sum_primes, xrange(1000))
multiprocessing offers the same map function in its Pool class. Let's create a Pool with 2 simultaneous workers (i.e. processes).
pool = mp.Pool(2)
Now we can use the pool's map function to do the same as before but in parallel.
pool.map(sum_primes, xrange(10))
%timeit pool.map(sum_primes, xrange(1000))
This is a speed-up of almost 2x, which is exactly what we expected (using 2 processes minus some overhead). Vectorisation We can solve a lot of stuff in almost no time if we avoid loops and vectorise the expressions instead. Numpy does an excellent job here and automatically does some things in parallel (e.g. matrix mu...
import numpy as np
Some Numpy basics: We can define arrays as simply as that:
x = np.array([1, 2, 5, 15])
x
As long as it can be cast to an array, we can use almost everything as input for an array:
np.array(map(sum_primes, xrange(10)))
Or we define arrays with some special functions:
np.zeros(10)
np.arange(10.)
Numpy supports indexing and slicing:
x = np.arange(10)
Get a single item of the array:
x[3]
Get a slice of the array:
x[1:5]
Get everything starting from index 4 to the end:
x[4:]
Negative indices are counted backwards from the end. Get everything before the last element:
x[:-1]
Let's define another problem: comb filters. In signal processing, a comb filter adds a delayed version of a signal to itself, causing constructive and destructive interference [Wikipedia]. These filters can be either feed forward or backward, depending on whether the signal itself or the output of the filter is delaye...
def feed_forward_comb_filter_loop(signal, tau, alpha):
    """
    Filter the signal with a feed forward comb filter.

    :param signal: signal
    :param tau:    delay length
    :param alpha:  scaling factor
    :return:       comb filtered signal
    """
    # y[n] = x[n] + α * x[n - τ]
    if tau <= 0:
        ra...
Let's vectorise this by removing the loop (the for i in range(len(signal)) stuff):
def feed_forward_comb_filter(signal, tau, alpha):
    """
    Filter the signal with a feed forward comb filter.

    :param signal: signal
    :param tau:    delay length
    :param alpha:  scaling factor
    :return:       comb filtered signal
    """
    # y[n] = x[n] + α * x[n - τ]
    if tau <= 0:
        raise V...
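The cell above is truncated; a complete vectorised sketch of the same idea (a reconstruction, assuming a 1-D numpy signal) shifts the whole signal once instead of looping:

```python
import numpy as np

def feed_forward_comb_filter(signal, tau, alpha):
    """Feed forward comb filter: y[n] = x[n] + alpha * x[n - tau]."""
    if tau <= 0:
        raise ValueError('tau must be greater than 0')
    y = signal.astype(float)
    # add the delayed, scaled copy; the first tau samples have no
    # delayed counterpart and stay unchanged
    y[tau:] += alpha * signal[:-tau]
    return y
```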
This is a nice ~67x speed-up (523 µs vs. 35.1 ms). Continue with the feed backward example... The feed backward variant comb filter function containing a loop:
def feed_backward_comb_filter_loop(signal, tau, alpha):
    """
    Filter the signal with a feed backward comb filter.

    :param signal: signal
    :param tau:    delay length
    :param alpha:  scaling factor
    :return:       comb filtered signal
    """
    # y[n] = x[n] + α * y[n - τ]
    if tau <= 0:
        ...
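This cell is also truncated; a complete sketch of the recursive variant (a reconstruction, assuming a 1-D numpy signal) shows why the loop cannot be vectorised away: each output sample depends on an earlier output sample.

```python
import numpy as np

def feed_backward_comb_filter_loop(signal, tau, alpha):
    """Feed backward comb filter: y[n] = x[n] + alpha * y[n - tau]."""
    if tau <= 0:
        raise ValueError('tau must be greater than 0')
    y = signal.astype(float)
    for n in range(tau, len(signal)):
        # depends on a *previous output* value, hence the loop
        y[n] += alpha * y[n - tau]
    return y
```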
The backward variant has basically the same runtime as the forward (loop-)version, but unfortunately, we cannot speed this up further with a vectorised expression, since the output depends on the output of a previous step. And this is where Cython comes in. Cython Cython can be used to write C-extensions in Python. The...
%load_ext Cython
To be able to use Cython code within IPython, we need to add the magic %%cython handler as the first line of a cell. Then we can start writing normal Python code.
%%cython
# magic cython handler for IPython (must be first line of a cell)
def sum_two_numbers(a, b):
    return a + b
Cython then compiles and loads everything transparently.
sum_two_numbers(10, 5)