Next, let's import the function "kappa_in" located in the file kappa_calculation.py. This function calculates the photon loss of a CPW resonator which is capacitively coupled to an input transmission line.
# Import the function "kappa_in" from the file kappa_calculation.py
from qiskit_metal.analyses.em.kappa_calculation import kappa_in
Apache-2.0
tutorials/6 Analysis/CPW_kappa_calculation_demo.ipynb
mtreinish/qiskit-metal
The function "kappa_in" takes either three or six arguments, depending on how the lowest resonant frequency of the resonator is handled. In the first case, the resonant frequency of the CPW resonator is calculated numerically (using HFSS, for example) and passed as a floating-point input, along with the frequency of interest and the capacitance between the resonator and the transmission line. In the second case, the lowest resonant frequency of the CPW resonator is estimated by assuming an ideal resonator, which requires some additional inputs (1/2 or 1/4 depending on the type of resonator, the resonator length, the width of the resonator trace, and the width of the resonator gap). Here's a quick sanity check to verify that we only get numerical output from this function in the cases of N=3 or N=6 arguments:
# SANITY CHECK #1
# Let's check that output is only given for three and six arguments
print("Output for N=1 Arguments:", kappa_in(1.0))
print("Output for N=2 Arguments:", kappa_in(1.0, 1.0))
print("Output for N=3 Arguments:", kappa_in(1.0, 1.0, 1.0))
print("Output for N=4 Arguments:", kappa_in(1.0, 1.0, 1.0, 1.0))
print("Output for N=5 Arguments:", kappa_in(1.0, 1.0, 1.0, 1.0, 1.0))
print("Output for N=6 Arguments:", kappa_in(1.0, 1.0, 1.0, 1.0, 1.0, 1.0))
print("Output for N=7 Arguments:", kappa_in(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0))
Output for N=1 Arguments: None
Output for N=2 Arguments: None
Output for N=3 Arguments: 1591.5494309189535
Output for N=4 Arguments: None
Output for N=5 Arguments: None
Output for N=6 Arguments: 3234.721973158391
Output for N=7 Arguments: None
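For intuition about the three-argument path, its outputs can be reproduced by a simple closed-form expression. The sketch below is *not* the Qiskit Metal source code; it is a hypothetical re-implementation chosen to match the two three-argument sanity-check results in this notebook, and the effective impedance factor `Z0 = 100` ohms is an assumption made for that purpose:

```python
import math

def kappa_in_sketch(omega, C_in, omega_n, Z0=100.0):
    """Hypothetical stand-in for the 3-argument form of kappa_in.

    omega:   frequency of interest (rad/s, treated as a plain float here)
    C_in:    coupling capacitance to the input line (F)
    omega_n: lowest resonant frequency of the CPW resonator (rad/s)
    Z0:      assumed effective impedance factor (ohms), chosen so the
             outputs match the sanity checks in this notebook.
    """
    return omega**2 * C_in**2 * Z0**2 * omega_n / (2 * math.pi)

print(kappa_in_sketch(1.0, 1.0, 1.0))                  # → 1591.5494309189535
print(kappa_in_sketch(5.0e9, 30.0e-15, 4.5e9) / 1e6)   # → 0.16114437988054403 (MHz)
```

With all arguments set to 1.0 this reduces to Z0²/(2π), which is where the 1591.55 in the sanity check above comes from.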
Now, let's actually calculate the photon loss for a representative CPW resonator with realistic values of the input parameters. Here we'll assume a qubit frequency of 5 GHz, a capacitive coupling of 30 fF and a CPW resonant frequency of 4.5 GHz. The calculated value of kappa is in the range of 0-1 MHz, as expected.
# SANITY CHECK #2
# Let's check that the magnitude of the output is what we would expect for 3 arguments:
# Input #1: omega   = 5 GHz   = 5E9 Hertz
# Input #2: C_in    = 30 fF   = 30E-15 Farads
# Input #3: omega_n = 4.5 GHz = 4.5E9 Hertz
print("Calculated kappa (in Hz):", kappa_in(5.0E9, 30.0E-15, 4.5E9), "Hz")
print("Calculated kappa (in MHz):", kappa_in(5.0E9, 30.0E-15, 4.5E9)/1.0E6, "MHz")
Calculated kappa (in Hz): 161144.37988054403 Hz
Calculated kappa (in MHz): 0.16114437988054403 MHz
Comparing various online solvers

An example showing how different online solvers perform on the hand-written digits dataset.
# Author: Rob Zinkov <rob at zinkov dot com>
# License: BSD 3 clause

import numpy as np
import matplotlib.pyplot as plt

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier, Perceptron
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.linear_model import LogisticRegression

heldout = [0.95, 0.90, 0.75, 0.50, 0.01]
rounds = 20
X, y = datasets.load_digits(return_X_y=True)

classifiers = [
    ("SGD", SGDClassifier(max_iter=100)),
    ("ASGD", SGDClassifier(average=True)),
    ("Perceptron", Perceptron()),
    ("Passive-Aggressive I", PassiveAggressiveClassifier(loss='hinge', C=1.0, tol=1e-4)),
    ("Passive-Aggressive II", PassiveAggressiveClassifier(loss='squared_hinge', C=1.0, tol=1e-4)),
    ("SAG", LogisticRegression(solver='sag', tol=1e-1, C=1.e4 / X.shape[0])),
]

xx = 1. - np.array(heldout)

for name, clf in classifiers:
    print("training %s" % name)
    rng = np.random.RandomState(42)
    yy = []
    for i in heldout:
        yy_ = []
        for r in range(rounds):
            X_train, X_test, y_train, y_test = \
                train_test_split(X, y, test_size=i, random_state=rng)
            clf.fit(X_train, y_train)
            y_pred = clf.predict(X_test)
            yy_.append(1 - np.mean(y_pred == y_test))
        yy.append(np.mean(yy_))
    plt.plot(xx, yy, label=name)

plt.legend(loc="upper right")
plt.xlabel("Proportion train")
plt.ylabel("Test Error Rate")
plt.show()
MIT
sklearn/sklearn learning/demonstration/auto_examples_jupyter/linear_model/plot_sgd_comparison.ipynb
wangyendt/deeplearning_models
Try different base models.
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, GridSearchCV

logreg = LogisticRegression()
logreg_cv = LogisticRegressionCV()
rf = RandomForestClassifier()
gboost = GradientBoostingClassifier()

models = [logreg, logreg_cv, rf, gboost]

for model in models:
    print('Cross-validation of : {0}'.format(model.__class__))
    score = compute_score(clf=model, X=train_reduced, y=targets, scoring='accuracy')
    print('CV score = {0}'.format(score))
    print('****')

# Tuning
# turn run_gs to True if you want to run the gridsearch again.
run_gs = False

if run_gs:
    parameter_grid = {
        'max_depth': [4, 6, 8],
        'n_estimators': [50, 10],
        'max_features': ['sqrt', 'auto', 'log2'],
        'min_samples_split': [2, 3, 10],
        'min_samples_leaf': [1, 3, 10],
        'bootstrap': [True, False],
    }
    forest = RandomForestClassifier()
    cross_validation = StratifiedKFold(n_splits=5)
    grid_search = GridSearchCV(forest,
                               scoring='accuracy',
                               param_grid=parameter_grid,
                               cv=cross_validation,
                               verbose=1)
    grid_search.fit(train, targets)
    model = grid_search
    parameters = grid_search.best_params_
    print('Best score: {}'.format(grid_search.best_score_))
    print('Best parameters: {}'.format(grid_search.best_params_))
else:
    parameters = {'bootstrap': False, 'min_samples_leaf': 3, 'n_estimators': 50,
                  'min_samples_split': 10, 'max_features': 'sqrt', 'max_depth': 6}
    model = RandomForestClassifier(**parameters)
    model.fit(train, targets)

# output = model.predict(test).astype(int)
# df_output = pd.DataFrame()
# aux = pd.read_csv('datasets/test.csv')
# df_output['PassengerId'] = aux['PassengerId']
# df_output['Survived'] = output
# df_output[['PassengerId','Survived']].to_csv('submission_2.csv', index=False)
MIT
titanic/titanic_2.ipynb
utshabkg/ML_Web_Apps
Save and Load Model
import pickle
import joblib

file = 'titanic.pkl'
joblib.dump(model, file)

load = joblib.load('titanic.pkl')
y_pred = load.predict(test).astype(int)
y_pred

val = pd.DataFrame(y_pred, columns=['Survived'])
val = val.replace({1: 'Alive', 0: 'Died'})
val
Notebook Goal & Approach

Goal

For each FERC 714 respondent that reports hourly demand as an electricity planning area, create a geometry representing the geographic area in which that electricity demand originated. Create a separate geometry for each year in which data is available.

Approach
* Use the `eia_code` found in the `respondent_id_ferc714` table to link FERC 714 respondents to their corresponding EIA utilities or balancing areas.
* Use the `balancing_authority_eia861` and `sales_eia861` tables to figure out which respondents correspond to what utility or utilities (if a BA), and which states of operation.
* Use the `service_territory_eia861` table to link those combinations of years, utilities, and states of operation to collections of counties.
* Given the FIPS codes of the counties associated with each utility or balancing area in a given year, use geospatial data from the US Census to compile an annual demand area geometry.
* Merge those geometries back in with the `respondent_id_ferc714` table, along with additional EIA balancing area and utility IDs / Codes on a per-year basis.

Imports & Config
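The chain of joins described above can be sketched in miniature. All of the IDs, FIPS codes, and "geometries" below are made up for illustration, and the real tables are pandas DataFrames rather than dicts, but the shape of the lookup is the same: respondent → `eia_code` → member utilities → county FIPS codes → census geometries.

```python
# Hypothetical toy data standing in for the real tables (illustration only):
respondents = {101: {"eia_code": 5723}}          # respondent_id_ferc714 -> eia_code
ba_utilities = {5723: [111, 222]}                # BA id -> member utility ids
service_territory = {                            # utility_id -> county FIPS codes served
    111: ["08001", "08005"],
    222: ["08005", "08013"],
}
county_geoms = {"08001": "geomA", "08005": "geomB", "08013": "geomC"}

def demand_area_counties(respondent_id):
    """Collect the distinct county FIPS codes behind one FERC 714 respondent."""
    eia_code = respondents[respondent_id]["eia_code"]
    fips = set()
    for util in ba_utilities.get(eia_code, []):
        fips.update(service_territory.get(util, []))
    return sorted(fips)

counties = demand_area_counties(101)
geometry = [county_geoms[f] for f in counties]  # the union of these is the demand area
print(counties)  # → ['08001', '08005', '08013']
```

In the real notebook each step is a pandas merge keyed on `report_date` as well, which is what makes the geometries annual.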
%load_ext autoreload
%autoreload 2

# Standard Libraries:
import dateutil
import logging
import pathlib
import pickle
import re
import sys
import zipfile

# 3rd Party Libraries:
import contextily as ctx
import geopandas
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sqlalchemy as sa

# Local Packages:
import pudl
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Configure Output Formatting
sns.set()
%matplotlib inline
mpl.rcParams['figure.figsize'] = (20, 8)
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 100
pd.options.display.max_rows = 100
Logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
log_format = '%(asctime)s [%(levelname)8s] %(name)s:%(lineno)s %(message)s'
formatter = logging.Formatter(log_format)
handler.setFormatter(formatter)
logger.handlers = [handler]
PUDL Setup
pudl_settings = pudl.workspace.setup.get_defaults()
ferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
pudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine)
pudl_settings
Parameters
MAP_CRS = "EPSG:3857"
CALC_CRS = "ESRI:102003"
Function Definitions

Dummy EIA 861 ETL
def test_etl_eia(eia_inputs, pudl_settings):
    """
    This is a dummy function that runs the first part of the EIA ETL
    process -- everything up until the entity harvesting begins.

    For use in this notebook only.
    """
    eia860_tables = eia_inputs["eia860_tables"]
    eia860_years = eia_inputs["eia860_years"]
    eia861_tables = eia_inputs["eia861_tables"]
    eia861_years = eia_inputs["eia861_years"]
    eia923_tables = eia_inputs["eia923_tables"]
    eia923_years = eia_inputs["eia923_years"]

    # generate CSVs for the static EIA tables, return the list of tables
    #static_tables = _load_static_tables_eia(datapkg_dir)

    # Extract EIA forms 923, 860
    eia860_raw_dfs = pudl.extract.eia860.Extractor().extract(eia860_years, testing=True)
    eia861_raw_dfs = pudl.extract.eia861.Extractor().extract(eia861_years, testing=True)
    eia923_raw_dfs = pudl.extract.eia923.Extractor().extract(eia923_years, testing=True)

    # Transform EIA forms 860, 861, 923
    eia860_transformed_dfs = pudl.transform.eia860.transform(eia860_raw_dfs, eia860_tables=eia860_tables)
    eia861_transformed_dfs = pudl.transform.eia861.transform(eia861_raw_dfs, eia861_tables=eia861_tables)
    eia923_transformed_dfs = pudl.transform.eia923.transform(eia923_raw_dfs, eia923_tables=eia923_tables)

    # create an eia transformed dfs dictionary
    eia_transformed_dfs = eia860_transformed_dfs.copy()
    eia_transformed_dfs.update(eia861_transformed_dfs.copy())
    eia_transformed_dfs.update(eia923_transformed_dfs.copy())

    # convert types..
    eia_transformed_dfs = pudl.helpers.convert_dfs_dict_dtypes(eia_transformed_dfs, 'eia')

    return eia_transformed_dfs
Dummy EIA 861 Harvesting
* Used to separately test the EIA entity harvesting process with EIA 861
* Doesn't yet work b/c 861 is structured differently than 860/923.
def test_harvest_eia(eia_transformed_dfs, eia860_years, eia861_years, eia923_years):
    entities_dfs, eia_transformed_dfs = pudl.transform.eia.transform(
        eia_transformed_dfs,
        eia860_years=eia860_years,
        eia861_years=eia861_years,
        eia923_years=eia923_years,
    )
    # convert types..
    entities_dfs = pudl.helpers.convert_dfs_dict_dtypes(entities_dfs, 'eia')

    # Compile transformed dfs for loading...
    return entities_dfs, eia_transformed_dfs
Compare Annual Demand vs. Sales
def annual_demand_vs_sales(dhpa_ferc714, sales_eia861, ba_eia861):
    """
    Categorize EIA Codes in FERC 714 as BA or Utility IDs.

    Most FERC 714 respondent IDs are associated with an `eia_code` which
    refers to either a `balancing_authority_id_eia` or a `utility_id_eia`
    but no indication is given as to which type of ID each one is. This
    is further complicated by the fact that EIA uses the same numerical
    ID to refer to the same entity in most but not all cases, when that
    entity acts as both a utility and as a balancing authority.

    In order to identify which type of ID each `eia_code` is, this
    function compares the annual demand reported in association with
    each code in the FERC 714 hourly planning area time series, and in
    the EIA 861 sales table -- using the ID both as a utility and as a
    balancing authority ID. The correlation between the FERC 714 demand
    and the EIA 861 sales should be much higher for one type of ID than
    the other, indicating which type of ID is represented in the FERC
    714 data.

    Args:
        dhpa_ferc714 (pandas.DataFrame): The FERC 714 hourly demand time series.
        sales_eia861 (pandas.DataFrame): The EIA 861 Sales table.
        ba_eia861 (pandas.DataFrame): The EIA 861 Balancing Authority
            table, which contains the mapping between EIA Balancing
            Authority Codes (3-4 letters) and EIA Balancing Authority
            IDs (integers). The codes are present in the Sales table,
            but the IDs are what the eia_code refers to.

    Returns:
        pandas.DataFrame: A table containing FERC 714 respondent IDs,
        EIA codes, and a column indicating whether that code was found
        to be more consistent with Balancing Authority or Utility
        electricity demand / sales.
    """
    # Sum up FERC 714 demand by report_year and eia_code:
    dhpa_ferc714_by_eia_code = (
        dhpa_ferc714
        .groupby(["eia_code", "report_year"])["demand_mwh"]
        .sum()
        .reset_index()
    )
    # Sum up the EIA 861 sales by Utility ID:
    sales_eia861_by_util = (
        sales_eia861.groupby(["utility_id_eia", "report_date"])["sales_mwh"]
        .sum()
        .reset_index()
        .assign(report_year=lambda x: x.report_date.dt.year)
        .drop("report_date", axis="columns")
        .rename(columns={"sales_mwh": "sales_utility_mwh"})
    )
    # Need to translate the BA Code to BA ID for comparison w/ eia_code
    ba_codes_and_ids = (
        ba_eia861[["balancing_authority_code_eia", "balancing_authority_id_eia", "report_date"]]
        .drop_duplicates()
        .assign(report_year=lambda x: x.report_date.dt.year)
        .drop("report_date", axis="columns")
        .dropna()
    )
    # Sum up the EIA 861 sales by Balancing Authority Code:
    sales_eia861_by_ba = (
        sales_eia861
        .groupby(["balancing_authority_code_eia", "report_date"], observed=True)["sales_mwh"]
        .sum()
        .reset_index()
        .assign(report_year=lambda x: x.report_date.dt.year)
        .drop("report_date", axis="columns")
        .rename(columns={"sales_mwh": "sales_ba_mwh"})
        .query("balancing_authority_code_eia!='UNK'")
        .merge(ba_codes_and_ids)
    )
    # Combine the demand and sales data with all the IDs
    demand_and_sales = (
        dhpa_ferc714_by_eia_code
        .merge(
            sales_eia861_by_util,
            left_on=["eia_code", "report_year"],
            right_on=["utility_id_eia", "report_year"],
            how="left"
        )
        .merge(
            sales_eia861_by_ba,
            left_on=["eia_code", "report_year"],
            right_on=["balancing_authority_id_eia", "report_year"],
            how="left"
        )
        .astype({
            "eia_code": pd.Int64Dtype(),
            "utility_id_eia": pd.Int64Dtype(),
            "balancing_authority_id_eia": pd.Int64Dtype(),
        })
        .assign(
            ba_ratio=lambda x: x.sales_ba_mwh / x.demand_mwh,
            utility_ratio=lambda x: x.sales_utility_mwh / x.demand_mwh,
        )
    )
    return demand_and_sales
EIA Code Categorization
def categorize_eia_code(rids_ferc714, utils_eia860, ba_eia861):
    """
    Categorize EIA Codes in FERC 714 as BA or Utility IDs.

    Most FERC 714 respondent IDs are associated with an `eia_code` which
    refers to either a `balancing_authority_id_eia` or a `utility_id_eia`
    but no indication is given as to which type of ID each one is. This
    is further complicated by the fact that EIA uses the same numerical
    ID to refer to the same entity in most but not all cases, when that
    entity acts as both a utility and as a balancing authority.

    Given the nature of the FERC 714 hourly demand dataset, this function
    assumes that if the `eia_code` appears in the EIA 861 Balancing
    Authority table, then it should be labeled `balancing_authority`. If
    the `eia_code` appears only in the EIA 860 Utility table, then it is
    labeled `utility`. These labels are put in a new column named
    `respondent_type`. If the planning area's `eia_code` does not appear
    in either of those tables, then `respondent_type` is set to NA.

    Args:
        rids_ferc714 (pandas.DataFrame): The FERC 714 `respondent_id` table.
        utils_eia860 (pandas.DataFrame): The EIA 860 Utilities output table.
        ba_eia861 (pandas.DataFrame): The EIA 861 Balancing Authority table.

    Returns:
        pandas.DataFrame: A table containing all of the columns present
        in the FERC 714 `respondent_id` table, plus a new one named
        `respondent_type` which can take on the values
        `balancing_authority`, `utility`, or the special value pandas.NA.
    """
    ba_ids = set(ba_eia861.balancing_authority_id_eia.dropna())
    util_not_ba_ids = set(utils_eia860.utility_id_eia.dropna()).difference(ba_ids)
    new_rids = rids_ferc714.copy()
    new_rids["respondent_type"] = pd.NA
    new_rids.loc[new_rids.eia_code.isin(ba_ids), "respondent_type"] = "balancing_authority"
    new_rids.loc[new_rids.eia_code.isin(util_not_ba_ids), "respondent_type"] = "utility"
    ba_rids = new_rids[new_rids.respondent_type == "balancing_authority"]
    util_rids = new_rids[new_rids.respondent_type == "utility"]
    na_rids = new_rids[new_rids.respondent_type.isnull()]
    ba_rids = (
        ba_rids.merge(
            ba_eia861
            .filter(like="balancing_")
            .drop_duplicates(subset=["balancing_authority_id_eia", "balancing_authority_code_eia"]),
            how="left",
            left_on="eia_code",
            right_on="balancing_authority_id_eia"
        )
    )
    util_rids = (
        util_rids.merge(
            utils_eia860[["utility_id_eia", "utility_name_eia"]]
            .drop_duplicates("utility_id_eia"),
            how="left",
            left_on="eia_code",
            right_on="utility_id_eia"
        )
    )
    new_rids = (
        pd.concat([ba_rids, util_rids, na_rids])
        .astype({
            "respondent_type": pd.StringDtype(),
            "balancing_authority_code_eia": pd.StringDtype(),
            "balancing_authority_id_eia": pd.Int64Dtype(),
            "balancing_authority_name_eia": pd.StringDtype(),
            "utility_id_eia": pd.Int64Dtype(),
            "utility_name_eia": pd.StringDtype(),
        })
    )
    return new_rids
Georeference Balancing Authorities
def georef_bas(ba_eia861, st_eia861, sales_eia861, census_gdf):
    """
    Create a GeoDataFrame mapping BAs to Utils to county geometries by year.

    This GDF includes the following columns:

        balancing_authority_id_eia (ba_eia861)
        balancing_authority_name_eia (ba_eia861)
        balancing_authority_code_eia (ba_eia861)
        utility_id_eia (sales_eia861)
        utility_name_eia (sales_eia861)
        county_id_fips (st_eia861)
        county (st_eia861)
        state_id_fips (st_eia861)
        state (st_eia861)
        geometry (census_gdf)
        county_name_census (census_gdf)

    It includes information both about which counties are associated with
    utilities that are part of balancing authorities, and utilities that
    are not part of balancing authorities, so it should be possible to
    use it to generate geometries for all of the respondents in FERC 714,
    both BAs and Utils.
    """
    # Make sure that there aren't any more BA IDs we can recover from later years:
    ba_ids_missing_codes = (
        ba_eia861.loc[ba_eia861.balancing_authority_code_eia.isnull(), "balancing_authority_id_eia"]
        .drop_duplicates()
        .dropna()
    )
    assert len(ba_eia861[
        (ba_eia861.balancing_authority_id_eia.isin(ba_ids_missing_codes)) &
        (ba_eia861.balancing_authority_code_eia.notnull())
    ]) == 0

    # Which utilities were part of what balancing areas in 2010-2012?
    early_ba_by_util = (
        ba_eia861
        .query("report_date <= '2012-12-31'")
        .loc[:, [
            "report_date",
            "balancing_authority_id_eia",
            "balancing_authority_code_eia",
            "utility_id_eia",
            "balancing_authority_name_eia",
        ]]
        .drop_duplicates(subset=["report_date", "balancing_authority_id_eia", "utility_id_eia"])
    )

    # Create a dataframe that associates utilities and balancing authorities.
    # This information is directly available in the early_ba_by_util dataframe
    # but has to be compiled for 2013 and later years based on the utility
    # BA associations that show up in the Sales table.
    # Create an annual, normalized version of the BA table:
    ba_normed = (
        ba_eia861
        .loc[:, [
            "report_date",
            "state",
            "balancing_authority_code_eia",
            "balancing_authority_id_eia",
            "balancing_authority_name_eia",
        ]]
        .drop_duplicates(subset=[
            "report_date",
            "state",
            "balancing_authority_code_eia",
            "balancing_authority_id_eia",
        ])
    )
    ba_by_util = (
        pd.merge(
            ba_normed,
            sales_eia861
            .loc[:, [
                "report_date",
                "state",
                "utility_id_eia",
                "balancing_authority_code_eia"
            ]].drop_duplicates()
        )
        .loc[:, ["report_date", "state", "utility_id_eia", "balancing_authority_id_eia"]]
        .append(early_ba_by_util[["report_date", "utility_id_eia", "balancing_authority_id_eia"]])
        .drop_duplicates()
        .merge(ba_normed)
        .dropna(subset=["report_date", "utility_id_eia", "balancing_authority_id_eia"])
        .sort_values(["report_date", "balancing_authority_id_eia", "utility_id_eia", "state"])
    )

    # Merge in county FIPS IDs for each county served by the utility from
    # the service territory dataframe. We do an outer merge here so that we
    # retain any utilities that are not part of a balancing authority. This
    # lets us generate both BA and Util maps from the same GeoDataFrame.
    # We have to do this separately for the data up to 2012 (which doesn't
    # include state) and the 2013 and onward data (which we need to have
    # state for)
    early_ba_util_county = (
        ba_by_util.drop("state", axis="columns")
        .merge(st_eia861, on=["report_date", "utility_id_eia"], how="outer")
        .query("report_date <= '2012-12-31'")
    )
    late_ba_util_county = (
        ba_by_util
        .merge(st_eia861, on=["report_date", "utility_id_eia", "state"], how="outer")
        .query("report_date >= '2013-01-01'")
    )
    ba_util_county = pd.concat([early_ba_util_county, late_ba_util_county])

    # Bring in county geometry information based on FIPS ID from Census
    ba_util_county_gdf = (
        census_gdf[["GEOID10", "NAMELSAD10", "geometry"]]
        .to_crs(MAP_CRS)
        .rename(
            columns={
                "GEOID10": "county_id_fips",
                "NAMELSAD10": "county_name_census",
            }
        )
        .merge(ba_util_county)
    )

    return ba_util_county_gdf
Map Balancing Authorities
def map_ba(ba_ids, year, ba_util_county_gdf, save=False):
    """
    Create a map of a balancing authority for a historical year.

    Args:
        ba_ids (iterable): A collection of Balancing Authority IDs.
        year (int): The year for which to create a map.
        ba_util_county_gdf (geopandas.GeoDataFrame): A dataframe associating
            report_date, balancing_authority_id_eia, and county_id_fips.
        save (bool): If True, save the figure to disk.

    Returns:
        None
    """
    map_gdf = (
        ba_util_county_gdf[
            (ba_util_county_gdf.report_date.dt.year == year) &
            (ba_util_county_gdf.balancing_authority_id_eia.isin(ba_ids)) &
            (~ba_util_county_gdf.county_id_fips.str.match("^02")) &  # Avoid Alaska
            (~ba_util_county_gdf.county_id_fips.str.match("^15")) &  # Avoid Hawaii
            (~ba_util_county_gdf.county_id_fips.str.match("^72"))    # Avoid Puerto Rico
        ]
        .drop_duplicates(subset=["balancing_authority_id_eia", "county_id_fips"])
    )
    ax = map_gdf.plot(figsize=(20, 20), color="black", alpha=0.25, linewidth=0.25)
    plt.title(f"Balancing Areas ({year=})")
    ctx.add_basemap(ax)
    if save is True:
        plt.savefig(f"BA_Overlap_{year}.jpg")


def compare_hifld_eia_ba(ba_code, hifld_gdf, eia_gdf):
    """
    Compare historical EIA BAs vs. HIFLD geometries.
    """
    fig, (hifld_ax, eia_ax) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True)
    hifld_ax.set_title(f"{ba_code} (HIFLD)")
    hifld_gdf[hifld_gdf.ABBRV == ba_code].to_crs(MAP_CRS).plot(ax=hifld_ax, linewidth=0)
    eia_ax.set_title(f"{ba_code} (EIA)")
    eia_gdf[
        (eia_gdf.balancing_authority_code_eia == ba_code) &
        (eia_gdf.report_date.dt.year == 2017)
    ].plot(ax=eia_ax, linewidth=0.1)
    plt.show()
Read Data

EIA 860 via PUDL Outputs
plants_eia860 = pudl_out.plants_eia860()
utils_eia860 = pudl_out.utils_eia860()
EIA 861 (2010-2018)
* Not yet fully integrated into PUDL
* Post-transform harvesting process isn't compatible w/ EIA 861 structure
* Only getting the `sales_eia861`, `balancing_authority_eia861`, and `service_territory_eia861` tables
%%time
logger.setLevel("WARN")
eia_years = list(range(2010, 2019))
eia_inputs = {
    "eia860_years": [],
    "eia860_tables": pudl.constants.pudl_tables["eia860"],
    "eia861_years": eia_years,
    "eia861_tables": pudl.constants.pudl_tables["eia861"],
    "eia923_years": [],
    "eia923_tables": pudl.constants.pudl_tables["eia923"],
}
eia_transformed_dfs = test_etl_eia(eia_inputs=eia_inputs, pudl_settings=pudl_settings)
logger.setLevel("INFO")
ba_eia861 = eia_transformed_dfs["balancing_authority_eia861"].copy()
st_eia861 = eia_transformed_dfs["service_territory_eia861"].copy()
sales_eia861 = eia_transformed_dfs["sales_eia861"].copy()
raw_eia861_dfs = pudl.extract.eia861.Extractor().extract(years=range(2010, 2019), testing=True)
FERC 714 (2006-2018)
%%time
logger.setLevel("WARN")
raw_ferc714 = pudl.extract.ferc714.extract(pudl_settings=pudl_settings)
tfr_ferc714 = pudl.transform.ferc714.transform(raw_ferc714)
logger.setLevel("INFO")
HIFLD Electricity Planning Areas (2018)
* Electricity Planning Area geometries from HIFLD.
* Indexed by `ID` which corresponds to EIA utility or balancing area IDs.
* Only valid for 2017-2018.
hifld_pa_gdf = (
    pudl.analysis.demand_mapping.get_hifld_planning_areas_gdf(pudl_settings)
    .to_crs(MAP_CRS)
)
US Census DP1 (2010)
* This GeoDataFrame contains county-level geometries and demographic data.
%%time
census_gdf = (
    pudl.analysis.demand_mapping.get_census2010_gdf(pudl_settings, layer="county")
    .to_crs(MAP_CRS)
)
Combine Data

Categorize FERC 714 Respondent IDs
rids_ferc714 = (
    tfr_ferc714["respondent_id_ferc714"]
    .pipe(categorize_eia_code, utils_eia860, ba_eia861)
)
Add FERC 714 IDs to HIFLD
hifld_pa_gdf = (
    hifld_pa_gdf
    .merge(rids_ferc714, left_on="ID", right_on="eia_code", how="left")
)
Add Respondent info to FERC 714 Demand
dhpa_ferc714 = pd.merge(
    tfr_ferc714["demand_hourly_pa_ferc714"],
    tfr_ferc714["respondent_id_ferc714"],
    on="respondent_id_ferc714",
    how="left",  # There are respondents with no demand
)
Utilities vs. Balancing Authorities

Exploration of the Balancing Authority EIA 861 table for cleanup.

Which columns are available in which years?

| Year | BA ID | BA Name | BA Code | Util ID | Util Name | State | N |
|------|-------|---------|---------|---------|-----------|-------|----|
| 2010 | XXXXX | XXXXXXX | | XXXXXXX | | |3193|
| 2011 | XXXXX | XXXXXXX | | XXXXXXX | | |3126|
| 2012 | XXXXX | XXXXXXX | | XXXXXXX | XXXXXXXXX | |3146|
| 2013 | XXXXX | XXXXXXX | XXXXXXX | | | XXXXX | 239|
| 2014 | XXXXX | XXXXXXX | XXXXXXX | | | XXXXX | 208|
| 2015 | XXXXX | XXXXXXX | XXXXXXX | | | XXXXX | 203|
| 2016 | XXXXX | XXXXXXX | XXXXXXX | | | XXXXX | 203|
| 2017 | XXXXX | XXXXXXX | XXXXXXX | | | XXXXX | 203|
| 2018 | XXXXX | XXXXXXX | XXXXXXX | | | XXXXX | 204|

What does this table mean?
* In 2010-2012, the table says which utilities (by ID) are included in which balancing authorities.
* In 2013-2018, the table indicates which *states* a BA is operating in, and also provides a BA Code.

Questions:
* Where does the `balancing_authority_code` show up elsewhere in the EIA 860/861 data?
  * `plants_eia860` (nowhere else that I know of)
* Are the BA to Utility mappings likely to remain valid throughout the entire time period? Can we propagate them forward?
  * No, there's some variation year to year in which utilities are associated with which BAs
* Are the BA Code/Name to BA ID mappings permanent?
  * No they aren't -- when a BA changes owners and names, the code changes, but the ID stays the same.
Untangling HIFLD, FERC 714, & EIA IDs
* There are unspecified "EIA codes" associated with FERC 714 respondents.
* These IDs correspond to a mix of `utility_id_eia` and `balancing_authority_id_eia` values.
* Similarly, the ID field of the HIFLD geometries is a mix of BA and Utility IDs from EIA.
* This is extra confusing, because EIA *usually* uses the *same* ID for BAs and Utils.
* However, the EIA BA and Util IDs appear to be distinct namespaces:
  * Not all IDs which appear in both tables identify the same entity in both tables.
  * In a few cases different IDs are used to identify the same entity when it shows up in both tables.
* It could be that whoever entered the IDs in the FERC 714 / HIFLD datasets didn't realize these were different sets of IDs.

BA / Utility ID Overlap
* Example of an ID that shows up in both, but refers to different entities: see `59504`.
  * `balancing_authority_id_eia == 59504` is the Southwest Power Pool (SWPP).
  * `utility_id_eia == 59504` is Kirkwood Community College, in MO.
* Example of an entity that exists in both datasets, but shows up with different IDs: see PacifiCorp.
  * Has two BA IDs (East and West): `[14379, 14378]`
  * Has one Utility ID: `14354`
* Example of an entity that shows up with the same ID in both tables:
  * ID `15466` is Public Service Co of Colorado -- both a BA (PSCO) and a Utility.
# BA ID comes from EIA 861 BA Table
ba_ids = set(ba_eia861.balancing_authority_id_eia)
print(f"Total # of BA IDs: {len(ba_ids)}")

# Util ID comes from EIA 860 Utilities Entity table.
util_ids = set(pudl_out.utils_eia860().utility_id_eia)
print(f"Total # of Util IDs: {len(util_ids)}")

ba_not_util_ids = ba_ids.difference(util_ids)
print(f"BA IDs that are not Util IDs: {len(ba_not_util_ids)}")
util_not_ba_ids = util_ids.difference(ba_ids)
print(f"Util IDs that are not BA IDs: {len(util_not_ba_ids)}")
ba_and_util_ids = ba_ids.intersection(util_ids)
print(f"BA IDs that are also Util IDs: {len(ba_and_util_ids)}")

ba_and_util = (
    ba_eia861
    .loc[:, ["balancing_authority_id_eia", "balancing_authority_name_eia"]]
    .dropna(subset=["balancing_authority_id_eia"])
    .merge(
        pudl_out.utils_eia860(),
        left_on="balancing_authority_id_eia",
        right_on="utility_id_eia",
        how="inner"
    )
    .loc[:, [
        "utility_id_eia",
        "balancing_authority_name_eia",
        "utility_name_eia",
    ]]
    .rename(columns={"utility_id_eia": "util_ba_id"})
    .drop_duplicates()
    .reset_index(drop=True)
)
ba_not_util = (
    ba_eia861.loc[ba_eia861.balancing_authority_id_eia.isin(ba_not_util_ids)]
    .loc[:, ["balancing_authority_id_eia", "balancing_authority_code_eia", "balancing_authority_name_eia"]]
    .drop_duplicates(subset=["balancing_authority_id_eia", "balancing_authority_code_eia"])
    .sort_values("balancing_authority_id_eia")
)
Missing IDs
* There are `eia_code` values that don't show up in the list of balancing authority IDs (2010-2018).
* There are also `eia_code` values that don't show up in the list of utility IDs (2009-2018).
* There are a few `eia_code` values that don't show up in either!
* Mostly this is an artifact of the different time period covered by FERC 714 (2006-2018).
* If we look only at the respondents that reported non-zero demand for 2010-2018, we find that all of the `eia_code` values *do* appear in either the `balancing_authority_eia861` or `utilities_eia860` tables.
rids_ferc714[ (~rids_ferc714.eia_code.isin(ba_eia861.balancing_authority_id_eia.unique())) & (~rids_ferc714.eia_code.isin(utils_eia860.utility_id_eia.unique())) ] rids_recent = ( dhpa_ferc714 .groupby(["respondent_id_ferc714", "report_year"]) .agg({"demand_mwh": sum}) .reset_index() .query("report_year >= 2010") .query("demand_mwh >= 0.0") .merge(rids_ferc714[["eia_code", "respondent_id_ferc714", "respondent_name_ferc714"]], how="left") .drop(["report_year", "demand_mwh"], axis="columns") .drop_duplicates() ) assert len(rids_recent[ (~rids_recent.eia_code.isin(ba_eia861.balancing_authority_id_eia.unique())) & (~rids_recent.eia_code.isin(utils_eia860.utility_id_eia.unique())) ]) == 0
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
BA to Utility Mappings are Many to Many* Unsurprisingly, BAs often contain many utilities.* However, it's also common for utilities to participate in more than one BA.* About 1/3 of all utilities show up in association with more than one BA
ba_to_util_mapping = ( ba_eia861[["balancing_authority_id_eia", "utility_id_eia"]] .dropna(subset=["balancing_authority_id_eia", "utility_id_eia"]) .drop_duplicates(subset=["balancing_authority_id_eia", "utility_id_eia"]) .groupby(["balancing_authority_id_eia"]) .agg({ "utility_id_eia": "count" }) ) plt.hist(ba_to_util_mapping.utility_id_eia, bins=99, range=(1,100)) plt.xlabel("# of Utils / BA") plt.ylabel("# of BAs") plt.title("Number of Utilities per Balancing Area"); util_to_ba_mapping = ( ba_eia861[["balancing_authority_id_eia", "utility_id_eia"]] .dropna(subset=["balancing_authority_id_eia", "utility_id_eia"]) .drop_duplicates(subset=["balancing_authority_id_eia", "utility_id_eia"]) .groupby(["utility_id_eia"]) .agg({ "balancing_authority_id_eia": "count" }) ) plt.hist(util_to_ba_mapping.balancing_authority_id_eia, bins=4, range=(1,5)) plt.title("Number of Balancing Authorities per Utility");
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Georeferenced Demand Fraction* With their original EIA codes the HIFLD Electricity Planning Areas only georeference some of the FERC 714 demand.* It's about 86% in 2018. In 2013 and earlier years, the fraction starts to drop off more quickly, to 76% in 2010, and 58% in 2006.* After manually identifying and fixing some bad and missing EIA codes in the FERC 714, the mapped fraction is much higher: 98% or more in 2014-2018, dropping to 87% in 2010, and 68% in 2006.* **However**, because the geometries have also evolved over time, the fact that the demand time series is linked to **some** HIFLD geometry doesn't mean that it's the **right** geometry.
annual_demand_ferc714 = ( dhpa_ferc714 .groupby(["report_year"]).demand_mwh.sum() .reset_index() ) annual_demand_mapped = ( dhpa_ferc714[dhpa_ferc714.eia_code.isin(hifld_pa_gdf.eia_code)] .groupby(["report_year"]).demand_mwh.sum() .reset_index() .merge(annual_demand_ferc714, on="report_year", suffixes=("_map", "_tot")) .assign( fraction_mapped=lambda x: x.demand_mwh_map / x.demand_mwh_tot ) ) plt.plot("report_year", "fraction_mapped", data=annual_demand_mapped, lw=5) plt.ylabel("Fraction of demand which is mapped") plt.title("Completeness of HIFLD demand mapping by year") plt.ylim(0.6, 1.05);
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Historical Planning Area GeometriesCompile a GeoDataFrame that relates balancing authorities, their constituent utilities, and the collections of counties which are served by those utilities, across all the years for which we have EIA 861 data (2010-2018)
ba_util_county_gdf = georef_bas(ba_eia861, st_eia861, sales_eia861, census_gdf) ba_util_county_gdf.info() for year in (2010, 2014, 2018): map_ba(ba_util_county_gdf.balancing_authority_id_eia.unique(), year, ba_util_county_gdf, save=True)
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Output Simplified Annual BA Geometries* This takes half an hour so it's commented out.* Resulting shapefile is ~250MB compressed. Seems too big.* Need to figure out how to add an explicit projection.* Need to figure out how to make each year's BA geometries its own layer.
#%%time #ba_fips_simplified = ( # ba_util_county_gdf # .assign(report_year=lambda x: x.report_date.dt.year) # .drop([ # "report_date", # "state", # "state_id_fips", # "county", # "county_name_census", # "utility_id_eia", # "utility_name_eia" # ], axis="columns") # .drop_duplicates(subset=["report_year", "balancing_authority_id_eia", "county_id_fips"]) # .dropna(subset=["report_year", "balancing_authority_id_eia", "county_id_fips"]) # .loc[:,["report_year", "balancing_authority_id_eia", "balancing_authority_code_eia", "balancing_authority_name_eia", "county_id_fips", "geometry"]] #) #ba_annual_gdf = ( # ba_fips_simplified # .dissolve(by=["report_year", "balancing_authority_id_eia"]) # .reset_index() # .drop("county_id_fips", axis="columns") #) #ba_output_gdf = ( # ba_annual_gdf # .astype({ # "report_year": int, # "balancing_authority_id_eia": float, # "balancing_authority_code_eia": str, # "balancing_authority_name_eia": str, # }) # .rename(columns={ # "report_year": "year", # "balancing_authority_id_eia": "ba_id", # "balancing_authority_code_eia": "ba_code", # "balancing_authority_name_eia": "ba_name", # }) #) #ba_output_gdf.to_file("ba_annual.shp")
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Compare HIFLD and EIA BA maps for 2018
for ba_code in hifld_pa_gdf.ABBRV.unique(): if ba_code in ba_util_county_gdf.balancing_authority_code_eia.unique(): compare_hifld_eia_ba(ba_code, hifld_pa_gdf, ba_util_county_gdf)
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Time Evolution of BA GeometriesFor each BA we now have a collection of annual geometries. How have they changed over time?
for ba_code in ba_util_county_gdf.balancing_authority_code_eia.unique(): fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(20,20), sharex=True, sharey=True, facecolor="white") for year, ax in zip(range(2010, 2019), axes.flat): ax.set_title(f"{ba_code} ({year})") ax.set_xticks([]) ax.set_yticks([]) plot_gdf = ( ba_util_county_gdf .assign(report_year=lambda x: x.report_date.dt.year) .query(f"balancing_authority_code_eia=='{ba_code}'") .query(f"report_year=='{year}'") .drop_duplicates(subset="county_id_fips") ) plot_gdf.plot(ax=ax, linewidth=0.1) plt.show()
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Merge Geometries with FERC 714Now that we have a draft of what the BA and Utility level territories look like, we can merge those with the FERC 714 Respondent ID table, see how many leftovers there are, and check whether the BA and Utility geometries play well together.Before dissolving the boundaries between counties the output dataframe needs to have:* `report_date`* `respondent_id_ferc714`* `eia_code`* `respondent_type`* `balancing_authority_id_eia`* `utility_id_eia`* `county_id_fips`* `geometry`* `balancing_authority_code_eia`* `balancing_authority_name_eia`* `respondent_name_ferc714`* `utility_name_eia`* `county_name_census`* `state`* `state_id_fips`
utils_ferc714 = ( rids_ferc714.loc[ rids_ferc714.respondent_type == "utility", ["respondent_id_ferc714", "respondent_name_ferc714", "utility_id_eia", "respondent_type"] ] ) bas_ferc714 = ( rids_ferc714.loc[ rids_ferc714.respondent_type == "balancing_authority", ["respondent_id_ferc714", "respondent_name_ferc714", "balancing_authority_id_eia", "respondent_type"] ] ) null_ferc714 = ( rids_ferc714.loc[ rids_ferc714.respondent_type.isnull(), ["respondent_id_ferc714", "respondent_name_ferc714", "respondent_type"] ] ) bas_ferc714_gdf = ( ba_util_county_gdf .drop(["county"], axis="columns") .merge(bas_ferc714, how="right") ) utils_ferc714_gdf = ( ba_util_county_gdf .drop(["balancing_authority_id_eia", "balancing_authority_code_eia", "balancing_authority_name_eia", "county"], axis="columns") .drop_duplicates() .merge(utils_ferc714, how="right") ) rids_ferc714_gdf = ( pd.concat([bas_ferc714_gdf, utils_ferc714_gdf, null_ferc714]) .astype({ "county_id_fips": pd.StringDtype(), "county_name_census": pd.StringDtype(), "respondent_type": pd.StringDtype(), "utility_id_eia": pd.Int64Dtype(), "balancing_authority_id_eia": pd.Int64Dtype(), "balancing_authority_code_eia": pd.StringDtype(), "balancing_authority_name_eia": pd.StringDtype(), "state": pd.StringDtype(), "utility_name_eia": pd.StringDtype(), }) ) display(rids_ferc714_gdf.info()) rids_ferc714_gdf.sample(10)
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Check Geometries for Completeness* How many balancing authorities do we have geometries for?* How many utilities do we have geometries for?* Do those geometries cover all of the entities that report in FERC 714?* Do we have a geometry for every entity in every year in which it reports demand? Count BA & Util Geometries
n_bas = len(rids_ferc714_gdf.balancing_authority_id_eia.unique()) logger.info(f"Found territories for {n_bas} unique Balancing Areas") n_utils = len(rids_ferc714_gdf.loc[ (rids_ferc714_gdf.balancing_authority_id_eia.isnull()) & (~rids_ferc714_gdf.utility_id_eia.isnull()) ].utility_id_eia.unique()) logger.info(f"Found territories for {n_utils} Utilities outside of the BAs")
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Identify Missing Geometries* Within each year of historical data from 2010-2018, are there any entities (either BA or Utility) which **do** have hourly demand reported in the FERC 714, for which we do not have a historical geometry?* How many of them are there?* Why are they missing?* Do we have the geometries in adjacent years and can we re-use them?* Is it possible that the FERC 714 IDs correspond to a precursor entity, or one that was discontinued? E.g. if SWPP is missing in 2010, is that because the BA was reported in EIA as SPS in that year?* How important are the missing geometries? Do the associated entities have a lot of demand associated with them in FERC 714?* Can we use `ffill` or `backfill` on the `geometry` column in a GeoDataFrame?
problem_ids = pd.DataFrame() for year in range(2010, 2019): this_year_gdf = ( rids_ferc714_gdf .loc[(rids_ferc714_gdf.report_date.dt.year==year) & (~rids_ferc714_gdf.geometry.isnull())] ) # All BA IDs which show up in FERC 714: ba_ids_ferc714 = ( rids_ferc714 .loc[rids_ferc714.respondent_type=="balancing_authority", "balancing_authority_id_eia"] .unique() ) # BA IDs which have a geometry in this year ba_geom_ids = ( this_year_gdf .balancing_authority_id_eia .dropna().unique() ) # BA IDs which have reported demand in this year ba_demand_ids = ( dhpa_ferc714 .query("report_year==@year") .query("demand_mwh>0.0") .loc[dhpa_ferc714.eia_code.isin(ba_ids_ferc714)] .eia_code.unique() ) # Need to make the demand IDs clearly either utility of BA IDs. Whoops! missing_ba_geom_ids = [x for x in ba_demand_ids if x not in ba_geom_ids] logger.info(f"{len(missing_ba_geom_ids)} BA respondents w/o geometries in {year}") problem_ids = problem_ids.append( rids_ferc714 .loc[rids_ferc714.balancing_authority_id_eia.isin(missing_ba_geom_ids)] .assign(year=year) ) # All EIA Utility IDs which show up in FERC 714: util_ids_ferc714 = ( rids_ferc714 .loc[rids_ferc714.respondent_type=="utility", "utility_id_eia"] .unique() ) # EIA Utility IDs which have geometry information for this year util_geom_ids = ( this_year_gdf .utility_id_eia .dropna().unique() ) util_demand_ids = ( dhpa_ferc714 .query("report_year==@year") .query("demand_mwh>0.0") .loc[dhpa_ferc714.eia_code.isin(util_ids_ferc714)] .eia_code.unique() ) missing_util_geom_ids = [x for x in util_demand_ids if x not in util_geom_ids] logger.info(f"{len(missing_util_geom_ids)} Utility respondents w/o geometries in {year}") problem_ids = problem_ids.append( rids_ferc714 .loc[rids_ferc714.utility_id_eia.isin(missing_util_geom_ids)] .assign(year=year) ) problem_ids.query("year==2010").query("respondent_type=='balancing_authority'")
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
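One of the open questions above — whether `ffill`/`backfill` can patch per-respondent gaps in the geometry history — can be sketched with plain pandas, since a GeoDataFrame's columns behave like ordinary Series for filling. The frame below is a hypothetical stand-in, not the real `rids_ferc714_gdf`.

```python
import pandas as pd

# Toy stand-in for the per-respondent annual geometry table:
# respondent 1 has a geometry in 2010 but not 2011; respondent 2 has none at all.
df = pd.DataFrame({
    "respondent_id": [1, 1, 2],
    "report_year":   [2010, 2011, 2010],
    "geometry":      ["POLYGON A", None, None],
})

# Forward-fill within each respondent, so a missing year reuses the most
# recent prior geometry, but fills never leak across respondents.
df["geometry"] = (
    df.sort_values("report_year")
    .groupby("respondent_id")["geometry"]
    .ffill()
)

print(df)
```

Respondent 1's 2011 row inherits "POLYGON A", while respondent 2 stays empty — which is the behavior we'd want before deciding whether backfilling is also defensible.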
Dissolve to BA or Util* At this point we still have geometries at the county level.* This is 150,000+ records.* Really we just want a single geometry per respondent per year.* Dissolve based on year and respondent_id_ferc714.* Merge the annual per-respondent geometry with rids_ferc714, which has more information.* Note that this takes about half an hour to run...
%%time dissolved_rids_ferc714_gdf = ( rids_ferc714_gdf.drop_duplicates(subset=["report_date", "county_id_fips", "respondent_id_ferc714"]) .dissolve(by=["report_date", "respondent_id_ferc714"]) .reset_index() .loc[:, ["report_date", "respondent_id_ferc714", "geometry"]] .merge(rids_ferc714, on="respondent_id_ferc714", how="outer") ) #dissolved_rids_ferc714_gdf.to_file("planning_areas_ferc714.gpkg", driver="GPKG")
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Select based on respondent type
dissolved_utils = dissolved_rids_ferc714_gdf.query("respondent_type=='utility'") dissolved_bas = dissolved_rids_ferc714_gdf.query("respondent_type=='balancing_authority'")
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Nationwide BA / Util Maps* Still want to add the US state boundaries / coastlines to this for context.
unwanted_ba_ids = ( 112, # Alaska 133, # Alaska 178, # Hawaii 301, # PJM Dupe 302, # PJM Dupe 303, # PJM Dupe 304, # PJM Dupe 305, # PJM Dupe 306, # PJM Dupe ) for report_date in pd.date_range(start="2010-01-01", end="2018-01-01", freq="AS"): ba_ax = ( dissolved_bas .query("report_date==@report_date") .query("respondent_id_ferc714 not in @unwanted_ba_ids") .plot(figsize=(20, 20), color="blue", alpha=0.25, linewidth=1) ) plt.title(f"FERC 714 Balancing Authority Respondents {report_date}") ctx.add_basemap(ba_ax) util_ax = ( dissolved_utils .query("report_date==@report_date") .plot(figsize=(20, 20), color="red", alpha=0.25, linewidth=1) ) plt.title(f"FERC 714 Utility Respondents {report_date}") ctx.add_basemap(util_ax) plt.show();
_____no_output_____
MIT
notebooks/historical_planning_areas.ipynb
catalyst-cooperative/electricity-demand-mapping
Python Crash CourseMaster in Data Science - Sapienza UniversityHomework 2: Python ChallengesA.A. 2017/18Tutor: Francesco Fabbri![time_to_code.jpg](attachment:time_to_code.jpg) InstructionsSo guys, here we are! **Finally** you're facing your first **REAL** homework. Are you ready to fight?We're going to apply all the Pythonic stuff seen before AND EVEN MORE... Simple rules:1. Don't touch the instructions, you **just have to fill the blank rows**.2. This is supposed to be an exercise for improving your Pythonic Skills in a spirit of collaboration so...of course you can help your classmates and obviously get a really huge help as well from all the others (as the proverb says: "I get help from you and then you help me", right?!...)3. **RULE OF THUMB** for you during the homework: - *1st Step:* try to solve the problem alone - *2nd Step:* google around for the answer - *3rd Step:* ask your colleagues - *4th Step:* scream and complain about life - *5th Step:* ask the Tutors And the Prize? The Beer? The glory?!Guys, life is hard...in this Master it's even worse...Soooo, since you seem so smart I want to test you before the start of all the courses....But not now.You have to come prepared to the challenge, so right now solve these first 6 exercises, then it will be time for **FIGHTING** and (for one of you) **DRINKING**.![bevehomer.PNG](attachment:bevehomer.PNG) Warm-up... 1. 12! is equal to...
def fatt(n): if(n == 0): return 1 else: return n*fatt(n-1) fatt(12)
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
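As a quick cross-check on the recursive solution above, the standard library computes the same value directly:

```python
import math

# 12! via the standard library, for comparison with the recursive fatt(12)
print(math.factorial(12))  # → 479001600
```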
2. More math...Write a program which will find all such numbers which are divisible by 7 but are not a multiple of 5, between 0 and 1000 (both included). The numbers obtained should be printed in a comma-separated sequence on a single line. (range and CFS)
ex_2=[str(x) for x in range (1001) if x%7 ==0 and x%5 !=0] ','.join(ex_2)
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
2. Count capital lettersIn this exercise you're going to deal with YOUR DATA. Indeed, in the list below are stored your Favorite Tv Series. But, as you can see, there is something weird. There are too many CaPITal LeTTErs. Your task is to count the capital letters in all the strings and then print the total number of capital letters in the whole list.
tv_series = ['Game of THRroneS', 'big bang tHeOrY', 'MR robot', 'WesTWorlD', 'fIRefLy', "i haven't", 'HOW I MET your mothER', 'friENds', 'bRon broen', 'gossip girl', 'prISon break', 'breaking BAD'] count = 0 for string in tv_series: for letter in string: # Count only the uppercase characters if letter.isupper(): count += 1 count
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
3. A remarkUsing the list above, create a dictionary where the keys are Unique IDs and values the TV Series.You have to do the exercise keeping in mind these 2 constraints: 1. The order of the IDs has to be **dependent on the alphabetical order of the titles**, i.e. 0: first_title_in_alphabetical_order and so on...2. **Solve the mess** of the capital letter: we want them only at the start of the words ("prISon break" should be "Prison Break")
# write here your code # Capitalize only the first letter of each word # (str.title() would wrongly give "Haven'T" for "i haven't") newlst = [] for x in tv_series: newlst.append(' '.join(word.capitalize() for word in x.split())) # Sort the titles alphabetically, then assign sequential IDs as keys b = sorted(newlst) a = range(len(b)) dict1 = dict(zip(a, b)) dict1
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
4. Dictionary to its maximumInvert the keys with the values in the dictionary built before.
# write here your code inv= {v: k for k, v in dict1.items()} inv
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
Have you done in **one line of code**? If not, try now!
# write here your code already done :D
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
4. Other boring mathLet's talk about our beloved exams. Starting from the exams and CFU below, are you able to compute the weighted mean of them?Let's do it and print the result.Description of the data:exams[1] = $(title_1, grade_1)$cfu[1] = $CFU_1$
exams = [('BIOINFORMATICS', 29), ('DATA MANAGEMENT FOR DATA SCIENCE', 30), ('DIGITAL EPIDEMIOLOGY', 26), ('NETWORKING FOR BIG DATA AND LABORATORY', 28), ('QUANTITATIVE MODELS FOR ECONOMIC ANALYSIS AND MANAGEMENT', '30 e lode'), ('DATA MINING TECHNOLOGY FOR BUSINESS AND SOCIETY', 30), ('STATISTICAL LEARNING', 30), ('ALGORITHMIC METHODS OF DATA MINING AND LABORATORY', 30), ('FUNDAMENTALS OF DATA SCIENCE AND LABORATORY', 29)] # CFU for each exam, in the same order as the list above cfu = [6, 6, 6, 9, 6, 6, 6, 9, 9] # Extract the grades; '30 e lode' (30 cum laude) counts as 30 grades = [g if isinstance(g, int) else 30 for _, g in exams] # Weighted mean: sum(grade_i * CFU_i) / total CFU weighted_mean = sum(g * c for g, c in zip(grades, cfu)) / sum(cfu) weighted_mean
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
5. Palindromic numbersWrite a script which finds all the Palindromic numbers, in the range [0,**N**] (bounds included). The numbers obtained should be printed in a comma-separated sequence on a single line.What is **N**?Looking at the exercise before:**N** = (Total number of CFU) x (Sum of all the grades)(details: https://en.wikipedia.org/wiki/Palindromic_number)
def pali(n): # A number is palindromic if its decimal digits read the same backwards return str(n) == str(n)[::-1] # N = (total number of CFU) x (sum of all the grades), from the exercise above N = 15876 a = list(filter(pali, range(N + 1))) # Print as a comma-separated sequence on a single line, as requested print(','.join(str(n) for n in a))
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
6. StackOverflow Let's start using your new best friend. Now I'm going to give other task, slightly more difficult BUT this time, just googling, you will find easily the answer on the www.stackoverflow.com. You can use the code there for solving the exercise BUT you have to understand the solution there **COMMENTING** the code, showing me you understood the thinking process behind the code. 6. AShow me an example of how to use **PROPERLY** the *Try - Except* statements
# write here your code
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
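Since this answer cell was left blank, here is one possible solution: a minimal sketch of proper `try`/`except` usage — keep the `try` body small and catch only the specific exception you anticipate, never a bare `except:`.

```python
def safe_divide(a, b):
    """Return a / b, or None if b is zero."""
    try:
        return a / b
    except ZeroDivisionError:
        # Catch only the error we expect; anything else should still propagate
        return None

print(safe_divide(10, 2))  # → 5.0
print(safe_divide(10, 0))  # → None
```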
6. BGiven the list of words below, after copying it into a variable, explain and provide the code for obtaining a **Bag of Words** from them.(Hint: use dictionaries and loops) ['theory', 'of', 'bron', 'firefly', 'thrones', 'break', 'bad', 'mother', 'firefly', "haven't", 'prison', 'big', 'friends', 'girl', 'westworld', 'bad', "haven't", 'gossip', 'thrones', 'your', 'big', 'how', 'friends', 'theory', 'your', 'bron', 'bad', 'bad', 'breaking', 'met', 'breaking', 'breaking', 'game', 'bron', 'your', 'breaking', 'met', 'bang', 'how', 'mother', 'bad', 'theory', 'how', 'i', 'friends', "haven't", 'of', 'of', 'gossip', 'i', 'robot', 'of', 'prison', 'bad', 'friends', 'friends', 'i', 'robot', 'bang', 'mother', 'bang', 'i', 'of', 'bad', 'friends', 'theory', 'i', 'friends', 'thrones', 'prison', 'theory', 'theory', 'big', 'of', 'bang', 'how', 'thrones', 'bang', 'theory', 'friends', 'game', 'bang', 'mother', 'broen', 'bad', 'game', 'break', 'break', 'bang', 'big', 'gossip', 'robot', 'met', 'i', 'game', 'your', 'met', 'bad', 'firefly', 'your']
# write here your code
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
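This cell was also left blank, so here is one way to build the Bag of Words with a plain dictionary and a loop, as the hint suggests (shown on a short word list for brevity — the full list from the exercise works the same way):

```python
words = ['theory', 'of', 'bron', 'firefly', 'thrones', 'of', 'firefly', 'of']

# A Bag of Words maps each distinct word to the number of times it occurs
bag = {}
for w in words:
    # .get returns the current count, or 0 the first time we see the word
    bag[w] = bag.get(w, 0) + 1

print(bag)
```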
6. CAnd now, write code which computes the first 10 Fibonacci numbers.(details: https://en.wikipedia.org/wiki/Fibonacci_number)
# write here your code
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-moka1992
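Since this answer cell is blank too, a straightforward iterative version (starting the sequence at 0, following the convention in the linked article):

```python
def fibonacci(n):
    """Return a list of the first n Fibonacci numbers, starting from 0."""
    fibs = []
    a, b = 0, 1
    for _ in range(n):
        fibs.append(a)
        a, b = b, a + b  # advance the pair (F_k, F_{k+1})
    return fibs

print(fibonacci(10))  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```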
Lineplot
# 200 values from the interval <0,100>, equidistantly divided x = np.linspace(0,100,200) y = np.sin(x) # a line plot plt.plot(x,y,'red') plt.show()
_____no_output_____
MIT
w3/w3-day_1/.ipynb_checkpoints/Matplotlib_tutorial-checkpoint.ipynb
bmskarate/lighthouseMain
Scatterplot
# 200 random values from the interval <0,10> x = 10*np.random.rand(200,1) # 200 random values from the interval <0,15> y = 15*np.random.rand(200,1) # a scatter plot plt.scatter(x,y) plt.show()
_____no_output_____
MIT
w3/w3-day_1/.ipynb_checkpoints/Matplotlib_tutorial-checkpoint.ipynb
bmskarate/lighthouseMain
Histogram
# 200 random values from the interval <0,15> y = 15*np.random.rand(200,1) # a histogram with 20 bins plt.hist(y,bins=20) plt.show()
_____no_output_____
MIT
w3/w3-day_1/.ipynb_checkpoints/Matplotlib_tutorial-checkpoint.ipynb
bmskarate/lighthouseMain
Graphs on common axes
# 200 values from the interval <0,100>, equidistantly divided x = np.linspace(0,100,200) # sin(x) values y1 = np.sin(x) # sin(x)*cos(x) values y2 =(np.sin(x))*(np.cos(x)) # a line plot of sin(x), red line plt.plot(x,y1,'red') # a line plot of sin(x)*cos(x), blue line plt.plot(x,y2,'blue') plt.show()
_____no_output_____
MIT
w3/w3-day_1/.ipynb_checkpoints/Matplotlib_tutorial-checkpoint.ipynb
bmskarate/lighthouseMain
Subplots
# the first figure plt.subplot(2,1,1) plt.plot(x,y1,'red') plt.title('sin(x)') # the second figure plt.subplot(2,1,2) plt.plot(x,y2,'blue') plt.title('sin(x)*(cos(x))') # automatically adjust the subplot parameters to give a specified padding plt.tight_layout() plt.show()
_____no_output_____
MIT
w3/w3-day_1/.ipynb_checkpoints/Matplotlib_tutorial-checkpoint.ipynb
bmskarate/lighthouseMain
Legends
# import pandas import pandas as pd # import sklearn datasets from sklearn import datasets # load iris dataset iris = datasets.load_iris() # create dataframe iris_df = pd.DataFrame(iris.data, columns=iris.feature_names) # create target iris_df['target'] = iris.target # map the target values to the target names iris_df['target_name'] =iris_df.target.map( {0: 'setosa', 1: 'versicolor', 2: 'virginica'} ) iris_df.head() # Iris setosa setosa = iris_df[iris_df.target_name == 'setosa'] # Iris versicolor versicolor = iris_df[iris_df.target_name == 'versicolor'] # Iris virginica virginica = iris_df[iris_df.target_name == 'virginica'] # plot setosa plt.scatter(setosa['sepal length (cm)'], setosa['sepal width (cm)'], marker ='o', color = 'red', label = 'setosa') # plot versicolor plt.scatter(versicolor['sepal length (cm)'], versicolor['sepal width (cm)'], marker ='o', color = 'green', label = 'versicolor') # plot virginica plt.scatter(virginica['sepal length (cm)'], virginica['sepal width (cm)'], marker ='o', color = 'blue', label = 'virginica') # legend location plt.legend(loc='upper right') # plot title plt.title('Iris flower') # x-axis title plt.xlabel('sepal length (cm)') # y-axis title plt.ylabel('sepal width (cm)') plt.show()
_____no_output_____
MIT
w3/w3-day_1/.ipynb_checkpoints/Matplotlib_tutorial-checkpoint.ipynb
bmskarate/lighthouseMain
Annotations
# the same code as before plt.scatter(setosa['sepal length (cm)'],setosa['sepal width (cm)'], marker ='o', color = 'red', label = 'setosa') plt.scatter(versicolor['sepal length (cm)'],versicolor['sepal width (cm)'], marker ='o', color = 'green', label = 'versicolor') plt.scatter(virginica['sepal length (cm)'],virginica['sepal width (cm)'], marker ='o', color = 'blue', label = 'virginica') # new lines of code # it can be tricky to find the right coordinates for the first time ###################### plt.annotate('setosa', xy =(5.0,3.5), xytext = (4.25,4.0), arrowprops={'color':'red'}) plt.annotate('versicolor', xy =(7.2,3.6), xytext = (6.5,4.0), arrowprops={'color':'red'}) plt.annotate('virginica', xy =(5.05,1.95), xytext = (5.5,1.75), arrowprops={'color':'red'}) ###################### # the same code as before plt.legend(loc='upper right') plt.title('Iris flower') plt.xlabel('sepal length (cm)') plt.ylabel('sepal width (cm)') plt.ylim(1.5,4.7) plt.show()
_____no_output_____
MIT
w3/w3-day_1/.ipynb_checkpoints/Matplotlib_tutorial-checkpoint.ipynb
bmskarate/lighthouseMain
Part 0: Mining the webPerhaps the richest source of openly available data today is [the Web](http://www.computerhistory.org/revolution/networking/19/314)! In this lab, you'll explore some of the basic programming tools you need to scrape web data.> **Note.** The Vocareum platform runs in a cloud-based environment that limits what websites a program can connect to directly. Therefore, some (or possibly all) of the code below will **not** work. Therefore, we are making this notebook **optional** and are providing solutions inline.>> Even if you are using a home or local installation of Jupyter, you may encounter problems if you attempt to access a site too many times or too rapidly. That can happen if your internet service provider (ISP) or the target website detect your accesses as "unusual" and reject them. It's easy to imagine accidentally writing an infinite loop that tries to access a page and being seen from the other side as a malicious program. :) The Requests modulePython's [Requests module](http://requests.readthedocs.io/en/latest/user/quickstart/) to download a web page.For instance, here is a code fragment to download the [Georgia Tech](http://www.gatech.edu) home page and print the first 250 characters. You might also want to [view the source](http://www.computerhope.com/issues/ch000746.htm) of Georgia Tech's home page to get a nicely formatted view, and compare its output to what you see above.
import requests response = requests.get('https://www.gatech.edu/') webpage = response.text # or response.content for raw bytes print(webpage[0:250]) # Prints the first hundred characters only
<!DOCTYPE html> <html lang="en" dir="ltr" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/terms/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000
MIT
EdX/GTx: CSE6040x: FA18 - Computing for Data Analysis/Module 1: Representing, transforming and visualizing data/Topic 6 + Notebook 6 (OPTIONAL): Mining the web/part6.ipynb
helpthx/Path_through_Data_Science_2019
**Exercise 1.** Given the contents of the GT home page as above, write a function that returns a list of links (URLs) of the "top stories" on the page.For instance, on Friday, September 9, 2016, here was the front page:![www.gatech.edu as of Fri Sep 9, 2016](./www.gatech.edu--2016-09-09--annotated-medium.png)The top stories cycle through in the large image placeholder shown above. We want your function to return the list of URLs behind each of the "Full Story" links, highlighted in red. If no URLs can be found, the function should return an empty list.
import re # Maybe you want to use a regular expression? def get_gt_top_stories(webpage_text): """Given the HTML text for the GT front page, returns a list of the URLs of the top stories or an empty list if none are found. """ pattern = '''<a class="slide-link" href="(?P<url>[^"]+)"''' return re.findall(pattern, webpage_text) top_stories = get_gt_top_stories(webpage) print("Links to GT's top stories:", top_stories)
Links to GT's top stories: ['https://www.news.gatech.edu/features/age-empowerment-meet-casey-aultman', 'https://news.gatech.edu/features/first-tech-governors-inauguration-mccamish', 'http://www.news.gatech.edu/features/you-should-come-georgia-tech']
MIT
EdX/GTx: CSE6040x: FA18 - Computing for Data Analysis/Module 1: Representing, transforming and visualizing data/Topic 6 + Notebook 6 (OPTIONAL): Mining the web/part6.ipynb
helpthx/Path_through_Data_Science_2019
A more complex exampleGo to [Yelp!](http://www.yelp.com) and look up `ramen` in `Atlanta, GA`. Take note of the URL:![Yelp! search for ramen in ATL](./yelp-search-example.png) This URL encodes what is known as an _HTTP "get"_ method (or request). It basically means a URL with two parts: a _command_ followed by one or more _arguments_. In this case, the command is everything up to and including the word `search`; the arguments are the rest, where individual arguments are separated by the `&` character.> "HTTP" stands for "HyperText Transport Protocol," which is a standardized set of communication protocols that allow _web clients_, like your web browser or your Python program, to communicate with _web servers_.In this next example, let's see how to build a "get request" with the `requests` module. It's pretty easy!
url_command = 'https://yelp.com/search' url_args = {'find_desc': "ramen", 'find_loc': "atlanta, ga"} response = requests.get (url_command, params=url_args, timeout=60) print ("==> Downloading from: '%s'" % response.url) # confirm URL print ("\n==> Excerpt from this URL:\n\n%s\n" % response.text[0:100])
==> Downloading from: 'https://www.yelp.com/search?find_desc=ramen&find_loc=atlanta%2C+ga' ==> Excerpt from this URL: <!DOCTYPE HTML> <!--[if lt IE 7 ]> <html xmlns:fb="http://www.facebook.com/2008/fbml" class="ie6 ie
MIT
EdX/GTx: CSE6040x: FA18 - Computing for Data Analysis/Module 1: Representing, transforming and visualizing data/Topic 6 + Notebook 6 (OPTIONAL): Mining the web/part6.ipynb
helpthx/Path_through_Data_Science_2019
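The argument encoding that `requests` performed above (spaces become `+`, the comma becomes `%2C`) can also be reproduced with the standard library alone, which makes it clear there's nothing magic in the "get request" URL:

```python
from urllib.parse import urlencode

url_command = 'https://yelp.com/search'
url_args = {'find_desc': 'ramen', 'find_loc': 'atlanta, ga'}

# Build the same "get request" URL by hand: command + '?' + encoded arguments
full_url = url_command + '?' + urlencode(url_args)
print(full_url)  # → https://yelp.com/search?find_desc=ramen&find_loc=atlanta%2C+ga
```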
**Exercise 2.** Given a search topic, location, and a rank $k$, return the name of the $k$-th item of a Yelp! search. If there is no $k$-th item, return `None`.> The demo query above only gives you a website with the top 10 items, meaning you could only use it for $k \leq 10$. Figure out how to modify it to solve the problem when $k > 10$.
def find_yelp_item (topic, location, k): """Returns the k-th suggested item from Yelp! in Atlanta for the given topic.""" import re if k < 1: return None # Download page url_command = 'http://yelp.com/search' url_args = {'find_desc': topic, 'find_loc': location, 'start': k-1 } response = requests.get (url_command, params=url_args) if not response: return None # Split page into lines lines = response.text.split ('\n') # Look for the line containing the name of the k-th item item_pattern = re.compile ('<span class="indexed-biz-name">{}\..*<span >(?P<item_name>.*)</span></a>'.format (k)) for l in lines: item_match = item_pattern.search (l) if item_match: return item_match.group ('item_name') # No matches, evidently return None assert find_yelp_item('fried chicken', 'Atlanta, GA', -1) is None # Tests an invalid value for 'k'
_____no_output_____
MIT
EdX/GTx: CSE6040x: FA18 - Computing for Data Analysis/Module 1: Representing, transforming and visualizing data/Topic 6 + Notebook 6 (OPTIONAL): Mining the web/part6.ipynb
helpthx/Path_through_Data_Science_2019
> Search queries on Yelp! don't always return the same answers, since the site is always changing! Also, your results might not match a query you do via your web browser (_why not?_). As such, you should manually check your answers.
item = find_yelp_item('fried chicken', 'Atlanta, GA', 1)
print(item)

item = find_yelp_item('fried chicken', 'Atlanta, GA', 5)
print(item)
# The most likely answer on September 11, 2018:
#assert item == 'Buttermilk Kitchen'

item = find_yelp_item('fried chicken', 'Atlanta, GA', 10)
print(item)
# Most likely correct answer as of September 11, 2018:
#assert item == 'Colonnade Restaurant'
None
MIT
EdX/GTx: CSE6040x: FA18 - Computing for Data Analysis/Module 1: Representing, transforming and visualizing data/Topic 6 + Notebook 6 (OPTIONAL): Mining the web/part6.ipynb
helpthx/Path_through_Data_Science_2019
Main notebook for battery state estimation
import numpy as np
import pandas as pd
import scipy.io
import math
import os
import ntpath
import sys
import logging
import time
from importlib import reload

import plotly.graph_objects as go

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam
from keras.utils import np_utils
from keras.layers import LSTM, Embedding, RepeatVector, TimeDistributed, Masking
from keras.callbacks import EarlyStopping, ModelCheckpoint, LambdaCallback

IS_COLAB = False

if IS_COLAB:
    from google.colab import drive
    drive.mount('/content/drive')
    data_path = "/content/drive/My Drive/battery-state-estimation/battery-state-estimation/"
else:
    data_path = "../../"

sys.path.append(data_path)
from data_processing.lg_dataset import LgData
_____no_output_____
Apache-2.0
battery-state-estimation/experiments/lg/lstm_soc_percentage_lg_negative_temp_500_steps_drive_cycle_test.ipynb
KeiLongW/battery-state-estimation
Config logging
reload(logging)
logging.basicConfig(format='%(asctime)s [%(levelname)s]: %(message)s',
                    level=logging.DEBUG,
                    datefmt='%Y/%m/%d %H:%M:%S')
_____no_output_____
Apache-2.0
battery-state-estimation/experiments/lg/lstm_soc_percentage_lg_negative_temp_500_steps_drive_cycle_test.ipynb
KeiLongW/battery-state-estimation
Load Data
train_names = [
    'n10degC/601_Mixed1', 'n10degC/601_Mixed2', 'n10degC/604_Mixed3', 'n10degC/602_Mixed4',
    'n10degC/602_Mixed5', 'n10degC/604_Mixed6', 'n10degC/604_Mixed7', 'n10degC/604_Mixed8',
    'n20degC/610_Mixed1', 'n20degC/610_Mixed2', 'n20degC/611_Mixed3', 'n20degC/611_Mixed4',
    'n20degC/611_Mixed5', 'n20degC/611_Mixed6', 'n20degC/611_Mixed7', 'n20degC/611_Mixed8'
]
test_names = [
    'n10degC/596_UDDS', 'n10degC/601_US06', 'n10degC/596_LA92',
    'n20degC/610_UDDS', 'n20degC/610_US06', 'n20degC/610_LA92',
]

steps = 500

lg_data = LgData(data_path)
cycles = lg_data.get_discharge_whole_cycle(train_names, test_names, output_capacity=False, scale_test=True)
train_x, train_y, test_x, test_y = lg_data.get_discharge_multiple_step(cycles, steps)

train_y = lg_data.keep_only_y_end(train_y, steps)
test_y = lg_data.keep_only_y_end(test_y, steps)
_____no_output_____
Apache-2.0
battery-state-estimation/experiments/lg/lstm_soc_percentage_lg_negative_temp_500_steps_drive_cycle_test.ipynb
KeiLongW/battery-state-estimation
Model training
EXPERIMENT = "lstm_soc_percentage_lg_negative_temp_500_steps_drive_cycle_test"

experiment_name = time.strftime("%Y-%m-%d-%H-%M-%S") + '_' + EXPERIMENT
print(experiment_name)

os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Model definition
opt = tf.keras.optimizers.Adam(lr=0.00001)

model = Sequential()
model.add(LSTM(256, activation='selu',
               return_sequences=True,
               input_shape=(train_x.shape[1], train_x.shape[2])))
model.add(LSTM(256, activation='selu', return_sequences=False))
model.add(Dense(256, activation='selu'))
model.add(Dense(128, activation='selu'))
model.add(Dense(1, activation='linear'))
model.summary()

model.compile(optimizer=opt, loss='huber',
              metrics=['mse', 'mae', 'mape', tf.keras.metrics.RootMeanSquaredError(name='rmse')])

es = EarlyStopping(monitor='val_loss', patience=50)
mc = ModelCheckpoint(data_path + 'results/trained_model/%s_best.h5' % experiment_name,
                     save_best_only=True,
                     monitor='val_loss')

history = model.fit(train_x, train_y,
                    epochs=1000,
                    batch_size=32,
                    verbose=2,
                    validation_split=0.2,
                    callbacks=[es, mc]
                    )

model.save(data_path + 'results/trained_model/%s.h5' % experiment_name)

hist_df = pd.DataFrame(history.history)
hist_csv_file = data_path + 'results/trained_model/%s_history.csv' % experiment_name
with open(hist_csv_file, mode='w') as f:
    hist_df.to_csv(f)
_____no_output_____
Apache-2.0
battery-state-estimation/experiments/lg/lstm_soc_percentage_lg_negative_temp_500_steps_drive_cycle_test.ipynb
KeiLongW/battery-state-estimation
Testing
results = model.evaluate(test_x, test_y)
print(results)
_____no_output_____
Apache-2.0
battery-state-estimation/experiments/lg/lstm_soc_percentage_lg_negative_temp_500_steps_drive_cycle_test.ipynb
KeiLongW/battery-state-estimation
Data Visualization
# fig = go.Figure() # fig.add_trace(go.Scatter(y=history.history['loss'], # mode='lines', name='train')) # fig.add_trace(go.Scatter(y=history.history['val_loss'], # mode='lines', name='validation')) # fig.update_layout(title='Loss trend', # xaxis_title='epoch', # yaxis_title='loss') # fig.show() # train_predictions = model.predict(train_x) # cycle_num = 0 # steps_num = 8000 # step_index = np.arange(cycle_num*steps_num, (cycle_num+1)*steps_num) # fig = go.Figure() # fig.add_trace(go.Scatter(x=step_index, y=train_predictions.flatten()[cycle_num*steps_num:(cycle_num+1)*steps_num], # mode='lines', name='SoC predicted')) # fig.add_trace(go.Scatter(x=step_index, y=train_y.flatten()[cycle_num*steps_num:(cycle_num+1)*steps_num], # mode='lines', name='SoC actual')) # fig.update_layout(title='Results on training', # xaxis_title='Step', # yaxis_title='SoC percentage') # fig.show() # test_predictions = model.predict(test_x) # cycle_num = 0 # steps_num = 8000 # step_index = np.arange(cycle_num*steps_num, (cycle_num+1)*steps_num) # fig = go.Figure() # fig.add_trace(go.Scatter(x=step_index, y=test_predictions.flatten()[cycle_num*steps_num:(cycle_num+1)*steps_num], # mode='lines', name='SoC predicted')) # fig.add_trace(go.Scatter(x=step_index, y=test_y.flatten()[cycle_num*steps_num:(cycle_num+1)*steps_num], # mode='lines', name='SoC actual')) # fig.update_layout(title='Results on testing', # xaxis_title='Step', # yaxis_title='SoC percentage') # fig.show()
_____no_output_____
Apache-2.0
battery-state-estimation/experiments/lg/lstm_soc_percentage_lg_negative_temp_500_steps_drive_cycle_test.ipynb
KeiLongW/battery-state-estimation
Regression Analysis: Seasonal Effects with Sklearn Linear Regression

In this notebook, you will build a SKLearn linear regression model to predict Yen futures ("settle") returns with *lagged* Yen futures returns.
# Futures contract on the Yen-dollar exchange rate:
# This is the continuous chain of the futures contracts that are 1 month to expiration
yen_futures = pd.read_csv(
    Path("yen.csv"), index_col="Date", infer_datetime_format=True, parse_dates=True
)
yen_futures.head()

# Trim the dataset to begin on January 1st, 1990
yen_futures = yen_futures.loc["1990-01-01":, :]
yen_futures.head()
_____no_output_____
ADSL
Starter_Code/regression_analysis.ipynb
focraniv/A-Yen-for-the-Future
Data Preparation

Returns
# Create a series using "Settle" price percentage returns, drop any NaNs, and check the results:
# (Make sure to multiply the pct_change() results by 100)
# In this case, you may have to replace inf/-inf values with np.nan
yen_futures['Return'] = yen_futures['Settle'].pct_change() * 100
returns = yen_futures.replace([np.inf, -np.inf], np.nan).dropna()
returns.tail()
_____no_output_____
ADSL
Starter_Code/regression_analysis.ipynb
focraniv/A-Yen-for-the-Future
Lagged Returns
# Create a lagged return using the shift function
yen_futures['Lagged_Return'] = yen_futures['Return'].shift()
yen_futures = yen_futures.dropna()
yen_futures.tail()
_____no_output_____
ADSL
Starter_Code/regression_analysis.ipynb
focraniv/A-Yen-for-the-Future
Train Test Split
# Create a train/test split for the data using 2018-2019 for testing and the rest for training
train = yen_futures[:'2017']
test = yen_futures['2018':]

# Create four dataframes:
# X_train (training set using just the independent variables), X_test (test set of just the independent variables)
# Y_train (training set using just the "y" variable, i.e., "Futures Return"), Y_test (test set of just the "y" variable):
X_train = train["Lagged_Return"].to_frame()
X_test = test["Lagged_Return"].to_frame()
y_train = train["Return"]
y_test = test["Return"]

X_train
_____no_output_____
ADSL
Starter_Code/regression_analysis.ipynb
focraniv/A-Yen-for-the-Future
Linear Regression Model
# Create a Linear Regression model and fit it to the training data
from sklearn.linear_model import LinearRegression

# Fit a SKLearn linear regression using just the training set (X_train, Y_train):
model = LinearRegression()
model.fit(X_train, y_train)
_____no_output_____
ADSL
Starter_Code/regression_analysis.ipynb
focraniv/A-Yen-for-the-Future
Make predictions using the Testing Data

Note: We want to evaluate the model using data that it has never seen before, in this case: X_test.
# Make a prediction of "y" values using just the test dataset
predictions = model.predict(X_test)

# Assemble actual y data (Y_test) with predicted y data (from just above) into two columns in a dataframe:
Results = y_test.to_frame()
Results["Predicted Return"] = predictions

# Plot the first 20 predictions vs the true values
prediction_plot = Results[:20].plot(subplots=True)
_____no_output_____
ADSL
Starter_Code/regression_analysis.ipynb
focraniv/A-Yen-for-the-Future
Out-of-Sample Performance

Evaluate the model using "out-of-sample" data (X_test and y_test).
from sklearn.metrics import mean_squared_error

# Calculate the mean_squared_error (MSE) on actual versus predicted test "y"
mse = mean_squared_error(Results["Return"], Results["Predicted Return"])

# Using that mean squared error, calculate the root-mean-squared error (RMSE):
rmse = np.sqrt(mse)
print(f"Out-of-Sample Root Mean Squared Error (RMSE): {rmse}")
Out-of-Sample Root Mean Squared Error (RMSE): 0.41545437184712763
ADSL
Starter_Code/regression_analysis.ipynb
focraniv/A-Yen-for-the-Future
In-Sample Performance

Evaluate the model using in-sample data (X_train and y_train).
# Construct a dataframe using just the "y" training data:
in_sample_results = y_train.to_frame()

# Add a column of "in-sample" predictions to that dataframe:
in_sample_results["In-sample Predictions"] = model.predict(X_train)

# Calculate in-sample mean_squared_error (for comparison to out-of-sample)
in_sample_mse = mean_squared_error(in_sample_results["Return"], in_sample_results["In-sample Predictions"])

# Calculate in-sample root mean_squared_error (for comparison to out-of-sample)
in_sample_rmse = np.sqrt(in_sample_mse)
print(f"In-sample Root Mean Squared Error (RMSE): {in_sample_rmse}")
In-sample Root Mean Squared Error (RMSE): 0.5962037920929946
ADSL
Starter_Code/regression_analysis.ipynb
focraniv/A-Yen-for-the-Future
KUWO Music

- Download the radio-station "album" audio files from the KUWO music platform.

Load packages
import re
import os
import time
import requests
from bs4 import BeautifulSoup
_____no_output_____
MIT
KUWO_ALBUM.ipynb
TLYu0419/KUWO
่จญๅฎš็ˆฌ่Ÿฒๅƒๆ•ธ
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36 Edg/88.0.705.63',
           'Cookie': '_ga=GA1.2.841063431.1603504850; Hm_lvt_cdb524f42f0ce19b169a8071123a4797=1611556993; _gid=GA1.2.997533225.1613194048; Hm_lpvt_cdb524f42f0ce19b169a8071123a4797=1613194071; kw_token=UCMBLA99FF',
           'referer': 'http://www.kuwo.cn/',
           'If-Range': '6023dfa9-8d7179',
           'Sec-Fetch-Dest': 'video',
           'Sec-Fetch-Mode': 'no-cors',
           'Sec-Fetch-Site': 'cross-site',
           'csrf': 'UCMBLA99FF'}
_____no_output_____
MIT
KUWO_ALBUM.ipynb
TLYu0419/KUWO
็ˆฌๅ–่ณ‡ๆ–™ ่’้›†้€ฃ็ตๆธ…ๅ–ฎ
page = 1
links = []
while True:
    # Build the URL
    url = 'http://www.kuwo.cn/api/www/album/albumInfo?albumId=547562&pn={}&rn=30'.format(page)

    # Request the data
    resp = requests.get(url, headers=headers)
    time.sleep(0.5)
    try:
        musicList = [i['rid'] for i in resp.json()['data']['musicList']]

        # Save the results
        links += musicList

        # Print the scraping progress
        print(page, ': ', len(list(set(links))))

        # Decide whether to exit the loop
        page += 1
        if len(musicList) < 30:
            links = list(set(links))
            print('There are totally {} links!'.format(len(links)))
            break
    except:
        print('status_code: ', resp.status_code, ', Retry')
1 : 30 2 : 60 status_code: 504 , Retry status_code: 504 , Retry status_code: 504 , Retry 3 : 90 4 : 120 status_code: 504 , Retry 5 : 150 6 : 180 7 : 210 8 : 240 9 : 270 10 : 300 11 : 330 12 : 360 13 : 390 14 : 420 status_code: 504 , Retry 15 : 450 16 : 480 17 : 510 18 : 540 19 : 570 status_code: 504 , Retry 20 : 600 21 : 630 22 : 660 23 : 690 24 : 720 status_code: 504 , Retry 25 : 750 26 : 780 27 : 810 28 : 840 status_code: 504 , Retry 29 : 870 30 : 900 31 : 915 There are totally 915 links!
MIT
KUWO_ALBUM.ipynb
TLYu0419/KUWO
ไธ‹่ผ‰้€ฃ็ต้Ÿณๆช”
# os.mkdir('./musics')

# List of already-downloaded tracks
download_list = [int(i.split('_', -1)[0]) for i in os.listdir('./musics')]
len(download_list)

# Skip tracks that have already been downloaded
links = [link for link in links if link not in download_list]
len(links)

for link in links:
    # Fetch the track summary
    url = 'http://www.kuwo.cn/play_detail/{}'.format(link)
    resp = requests.get(url, headers=headers)
    soup = BeautifulSoup(resp.text)
    time.sleep(3)
    music_name = soup.find('title').text
    # Strip characters that are illegal in filenames
    music_name = re.sub(r'[\\/:*?"<>|]', '', music_name)
    music_uploadtime = soup.find('span', {'class': 'time'}).text

    # Fetch the audio link
    music_link = 'http://www.kuwo.cn/url?format=mp3&rid={}&response=url&type=convert_url3&br=128kmp3'.format(link)
    try:
        music_link = requests.get(music_link).json()['url']
    except:
        time.sleep(1)
        music_link = requests.get(music_link).json()['url']

    # Download the audio file
    music_content = requests.get(url=music_link).content
    with open('./musics/{}.mp3'.format(str(link) + '_' + music_name), 'wb') as f:
        f.write(music_content)
    print('Succed: ', link, music_name)
Succed: 7090145 ่ฏทไฝ ไธ่ฆๅ‘Š่ฏ‰ๆˆ‘๏ผŒไฝ ๆœ€็ˆฑ็š„ไบบไธๆ˜ฏๆˆ‘(ไธ€ไธชไบบๅฌ)_่•ŠๅธŒErin_ๅ•ๆ›ฒๅœจ็บฟ่ฏ•ๅฌ_้…ทๆˆ‘้Ÿณไน Succed: 7090146 ไธบไป€ไนˆ๏ผŒไฝ ๅฐฑไธๅ–œๆฌขๆˆ‘ไบ†(ไธ€ไธชไบบๅฌ)_่•ŠๅธŒErin_ๅ•ๆ›ฒๅœจ็บฟ่ฏ•ๅฌ_้…ทๆˆ‘้Ÿณไน Succed: 76240867 โ€ๆˆ‘ไธๆ˜ฏๅ‚ป๏ผŒๅชๆ˜ฏๆ‡’ๅพ—่ฎก่พƒโ€_่•ŠๅธŒErin_ๅ•ๆ›ฒๅœจ็บฟ่ฏ•ๅฌ_้…ทๆˆ‘้Ÿณไน Succed: 83795941 ไฝ ๆœ‰ๅคšไน…ๆฒกๅฏนๅฆไธ€ๅŠ่ฏดๆˆ‘็ˆฑไฝ ไบ†๏ผŸ_่•ŠๅธŒErin_ๅ•ๆ›ฒๅœจ็บฟ่ฏ•ๅฌ_้…ทๆˆ‘้Ÿณไน Succed: 57690089 ไผšๆœ‰ไบบๆ‹ฟ็€ๆˆ’ๆŒ‡ๅฏนไฝ ็ฌ‘๏ผŒ่ฏดไฝ™็”Ÿ่ฏทๅคšๆŒ‡ๆ•™ใ€‚ (่Š‚็›ฎ)_่•ŠๅธŒErin_ๅ•ๆ›ฒๅœจ็บฟ่ฏ•ๅฌ_้…ทๆˆ‘้Ÿณไน Succed: 57151470 โ€ ๆˆ‘ๆœˆ่–ชไธ‰ไธ‡๏ผŒๆ‹’็ป็ป™ๅฅณๅ‹ไนฐไธคไธ‡็š„ๅŒ…ใ€‚ โ€_่•ŠๅธŒ__ๅ•ๆ›ฒๅœจ็บฟ่ฏ•ๅฌ_้…ทๆˆ‘้Ÿณไน Succed: 148090871 ๆ€ปๆœ‰ไบบๅœจๅทๅท็พกๆ…•ไฝ _่•ŠๅธŒErin_ๅ•ๆ›ฒๅœจ็บฟ่ฏ•ๅฌ_้…ทๆˆ‘้Ÿณไน Succed: 41351166 โ€œๆˆ‘ไปฌๅ†ไนŸไธไผšไบ’้“ๆ™šๅฎ‰โ€_่•ŠๅธŒErin_ๅ•ๆ›ฒๅœจ็บฟ่ฏ•ๅฌ_้…ทๆˆ‘้Ÿณไน
MIT
KUWO_ALBUM.ipynb
TLYu0419/KUWO
็•ฐๅธธ็‹€ๆณๆŽ’้™ค- ้€šๅธธๆ˜ฏๆช”ๆกˆๅ็จฑๆœ‰้žๆณ•ๅญ—ๅ…ƒ๏ผŒๆˆ–่€…request็š„้€Ÿๅบฆ้Žๅฟซ่ขซๆ“‹~
soup.find('title').text

requests.get(music_link).json()['url']

music_name = '66823804_่•ŠๅธŒไธ“่ฎฟ ้™ˆไน”ๆฉ๏ผš็ˆฑ่‡ชๅทฑ๏ผŒๆ˜ฏ็ปˆ่บซๆตชๆผซ็š„ๅผ€ๅง‹_่•ŠๅธŒErin_ๅ•ๆ›ฒๅœจ็บฟ่ฏ•ๅฌ_้…ทๆˆ‘้Ÿณไน.mp3'
with open('./musics/{}.mp3'.format(str(link) + '_' + music_name), 'wb') as f:
    f.write(music_content)
_____no_output_____
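Both failure modes can be guarded against with small helpers: one that strips characters that are illegal in filenames, and one that retries a transiently failing request with an increasing delay. This is a sketch; the helper names `safe_filename` and `get_with_retry` are my own, not part of the notebook:

```python
import re
import time

def safe_filename(name):
    # Drop characters that are illegal in Windows/Unix filenames
    return re.sub(r'[\\/:*?"<>|]', '', name)

def get_with_retry(fetch, retries=3, delay=1.0):
    # Call `fetch` until it succeeds, sleeping longer after each failure
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            time.sleep(delay * (attempt + 1))
    return None

print(safe_filename('a/b:c*d?.mp3'))  # abcd.mp3
```

In the download loop, `get_with_retry(lambda: requests.get(music_link).json()['url'])` would replace the bare try/except.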
MIT
KUWO_ALBUM.ipynb
TLYu0419/KUWO
Make cluster plot
_ = pylab.hist(data['nunique'][data['nunique']!= 0], 50) def setup_clusters(width=1500, xoffset=0, yoffset=0, **params): kwargs = dict(xticks=False, yticks=False, grid=False, tickmarks=False) kwargs.update(params) ax = setup(xr=[-width+xoffset,width+xoffset], yr=[-width+yoffset,width+yoffset], **kwargs) pylab.xticks([]) pylab.yticks([]) return ax def plot_cluster(data, isgood=None, vmin=18, vmax=32, cmap=None, maxsize=50, sizescale=1.0, **kwargs): if isgood is None: isgood = (np.ones(data.shape) == 1) if cmap is None: cmap=make_cmap() agenorm = matplotlib.colors.Normalize(vmin, vmax, clip=True) index = np.where(isgood & (data['x'] != 0) & (data['y'] != 0))[0] s = np.clip(np.sqrt(data['nunique'][index]), 3, maxsize)*2*sizescale sca = pylab.scatter(data['x'][index], data['y'][index], label='Age', s=s, c=data['age'][index], vmin=vmin, vmax=vmax, cmap=cmap, lw=0, **kwargs) return sca def label_clusters(data, isgood=None, vmin=18, vmax=32, cmap=None, ax=None, sizescale=1.0): if isgood is None: isgood = (np.ones(data.shape) == 1) if cmap is None: cmap=make_cmap() if ax is None: ax = pylab.gca() agenorm = matplotlib.colors.Normalize(vmin, vmax, clip=True) xr,yr = pylab.xlim(), pylab.ylim() index = np.where(isgood & (data['x'] > xr[0]) & (data['x'] < xr[1]) & (data['y'] > yr[0]) & (data['y'] < yr[1]) & (data['x'] != 0) & (data['y'] != 0))[0] ii = np.argsort(data['nunique'][index]) for x,y,label,age,s in data[index][['x','y','subreddit', 'age', 'nunique']][ii]: if len(label) == 0: continue color=cmap(agenorm(age)) # s = np.clip(s, 4,12)*sizescale fs = np.clip(12*(s/200.0), 3, 12)*sizescale tmp = text(x,y,label, color=color, ha='left', va='bottom', fontsize=fs, clip_on=True, clip_path=ax.patch, outline=True) tmp.set_clip_path(ax.patch) sub_width = 400 sub_xoffset = 70 setup_clusters(sub_width, sub_xoffset, figsize=(12,6), subplt=(1,2,1)) sca = plot_cluster(data, cmap=pylab.cm.rainbow, vmin=18, vmax=32) icolorbar(sca, loc=2, borderpad=0.75, tickfmt='{:.0f}') 
setup_clusters(sub_width, sub_xoffset, subplt=(1,2,2)) sca = plot_cluster(data, cmap=make_cmap(), vmin=18, vmax=32) icolorbar(sca, loc=2, borderpad=0.75, tickfmt='{:.0f}') main_width = 1500 sub_width = 400 sub_xoffset = 70 setup_clusters(main_width, figsize=(12,4), subplt=(1,3,1)) box([-sub_width+sub_xoffset,sub_width+sub_xoffset], [-sub_width,sub_width], lw=0, alpha=0.1) plot_cluster(data) setup_clusters(sub_width, sub_xoffset, subplt=(1,3,2)) sca = plot_cluster(data) icolorbar(sca, loc=2, borderpad=0.75, tickfmt='{:.0f}') setup_clusters(sub_width, sub_xoffset, subplt=(1,3,3)) plot_cluster(data) label_clusters(data, (data['nunique'] > 500)) pylab.tight_layout() # pylab.savefig('/Users/ajmendez/Desktop/subreddits.png', dpi=200) sub_width = 600 sub_xoffset = 20 sub_yoffset = -50 setup_clusters(sub_width, sub_xoffset, sub_yoffset, figsize=(12,12), subplt=(2,2,1), title='Age < 21') plot_cluster(data, cmap=make_cmap(), alpha=0.1, maxsize=20) isage = (data['age'] < 21) & (data['nunique'] > 10) sca = plot_cluster(data, isage, sizescale=2.0) label_clusters(data, isage, sizescale=2.0) setup_clusters(sub_width, sub_xoffset, sub_yoffset, subplt=(2,2,2), title='Age > 30') plot_cluster(data, cmap=make_cmap(), alpha=0.1, maxsize=20) isage = (data['age'] > 30) & (data['nunique'] > 10) sca = plot_cluster(data, isage, sizescale=2.0) label_clusters(data, isage, sizescale=2.0) sub_width = 60 sub_xoffset = 430 sub_yoffset = -330 setup_clusters(sub_width, sub_xoffset, sub_yoffset, subplt=(2,2,3), title='Sports Cluster') plot_cluster(data, cmap=pylab.cm.Greys, alpha=0.1, maxsize=20) isage = (data['nunique'] > 10) sca = plot_cluster(data, isage, sizescale=2.0) label_clusters(data, isage, sizescale=2.0) sub_width = 70 sub_xoffset = 1000 sub_yoffset = 150 setup_clusters(sub_width, sub_xoffset, sub_yoffset, subplt=(2,2,4)) plot_cluster(data, cmap=pylab.cm.Greys, alpha=0.1, maxsize=20) isage = (data['nunique'] > 5) & (data['age'] > 0) sca = plot_cluster(data, isage, sizescale=2.0) 
label_clusters(data, isage, sizescale=2.0) icolorbar(sca, loc=1) sub_width = 1450 sub_xoffset = 380 sub_yoffset = 100 setup_clusters(sub_width, sub_xoffset, sub_yoffset, figsize=(12,12), title='Programming Subreddits') plot_cluster(data, alpha=0.1, maxsize=20) isage = (data['nunique'] > 10) & (data['age'] > 0) & (data['isprogramming'] ==1) sca = plot_cluster(data, isage, sizescale=2.0) icolorbar(sca) label_clusters(data, isage, sizescale=2.0) ii = np.argsort(data[isage]['age']) for subreddit, age in data[isage][ii][['subreddit', 'age']]: print '{:12s} {:5.1f}'.format(subreddit, age) sub_width = 450 sub_xoffset = -180 sub_yoffset = 100 setup_clusters(sub_width, sub_xoffset, sub_yoffset, figsize=(12,12), title='Cities and Countries') plot_cluster(data, alpha=0.1, maxsize=20) isage = (data['nunique'] > 10) & (data['age'] > 0) & (data['iscity'] ==1) sca = plot_cluster(data, isage, sizescale=2.0) icolorbar(sca) label_clusters(data, isage, sizescale=2.0) tmp = data[np.argsort(-data['age'])] iscity = (tmp['nunique'] > 20) & (tmp['age'] > 10) & (tmp['iscity'] > 0) ncity = len(np.where(iscity)[0]) cmap = make_cmap() ax = setup(figsize=(16,4), grid=False, title='Cities and Countries', ylabel='Age', yr=[0, 32], xr=[-0.2, ncity+0.2], xtickv=np.arange(ncity)+0.5, xticknames=['' for x in tmp['subreddit'][iscity]], xtickrotate=90) for i, subreddit in enumerate(tmp['subreddit'][iscity]): pylab.text(i+0.6, 1, '/r/'+subreddit, color='w', fontsize=14, fontweight='bold', ha='center', va='bottom', rotation=90) # ax.set_xticklabels(tmp['subreddit'][iscity], rotation=90, ha='center') pylab.bar(left=np.arange(ncity)+0.1, width=0.8, height=tmp['age'][iscity], lw=0, alpha=0.8, color=cmap(agenorm(tmp['age'][iscity])))
_____no_output_____
MIT
reddit/visualize.ipynb
ajmendez/explore
Build data.json
vizit = json.load(open('/Users/ajmendez/data/reddit/vizit_data.json', 'r')) ii = np.where(data['age'] > 0) ageit = dict(nodes=[], edges=[]) node_ids = [] for node in vizit['nodes']: subreddit = node['label'] i = np.where( (data['subreddit'] == subreddit) & (data['age'] > 0) )[0] if len(i) != 0: newnode = copy.copy(node) newnode['color'] = 'rgb({:0.0f}, {:0.0f}, {:0.0f})'.format(*data['rgba'][i][0][:-1]*256) newnode['size'] = 4.0*float(newnode['size']) newnode['age'] = float(data['age'][i]) else: newnode = copy.copy(node) newnode['color'] = 'rgb({:0.0f}, {:0.0f}, {:0.0f})'.format(0,0,0) newnode['age'] = 0 newnode['size'] = 0.5*float(newnode['size']) ageit['nodes'].append(newnode) node_ids.append(newnode['id']) for edge in vizit['edges']: if (edge['source'] in node_ids) and (edge['target'] in node_ids): ageit['edges'].append(copy.copy(edge)) print 'Nodes: {:,d} Edges: {:,d}'.format(len(ageit['nodes']), len(ageit['edges'])) data['age'][1] pprint(vizit['nodes'][-2]) pprint(vizit['edges'][1]) json.dump(ageit, open('/Users/ajmendez/data/reddit/ageit_data.json', 'w'), indent=2)
_____no_output_____
MIT
reddit/visualize.ipynb
ajmendez/explore
Class Attributes

In practice a dog has a color, breed, age, and other attributes, and it can do things like eat, run, sleep, bark, etc.
class Dog:
    # Attributes
    age = 0
    name = 'noname'
    breed = 'nobreed'
    color = 'nocolor'

my_dog = Dog()
print('{} is a {}-year old {} {}.'.format(my_dog.name, my_dog.age, my_dog.color, my_dog.breed))

my_dog = Dog()
my_dog.age = 2
my_dog.name = 'Fido'
my_dog.color = 'brown'
my_dog.breed = 'Labradoodle'
print('{} is a {}-year old {} {}.'.format(my_dog.name, my_dog.age, my_dog.color, my_dog.breed))
Fido is a 2-year old brown Labradoodle.
MIT
Object Oriented Programming.ipynb
Ryan-Bui/DATA3401
Object Constructor
class Dog:
    def __init__(self, age, name, breed, color):
        self.age = age
        self.name = name
        self.breed = breed
        self.color = color

my_dog = Dog('4', 'Coco', 'Corgie', 'Brown')
print('{} is a {}-year old {} {}.'.format(my_dog.name, my_dog.age, my_dog.color, my_dog.breed))

class Dog:
    def __init__(self, age, name, breed, color):
        self.age = age
        self.name = name
        self.breed = breed
        self.color = color

    def info(self):
        print('{} is a {}-year old {} {}.'.format(self.name, self.age, self.color, self.breed))

my_dog = Dog('4', 'Coco', 'Corgie', 'Brown')
my_dog.info()

class Dog:
    def __init__(self, age=0, name='noname', breed='nobreed', color='nocolor'):
        self.age = age
        self.name = name
        self.breed = breed
        self.color = color

    def info(self):
        print('{} is a {}-year old {} {}.'.format(self.name, self.age, self.color, self.breed))

my_dog = Dog()
my_dog.info()

class Dog:
    # Class-level ("global") attributes shared by all instances
    species = 'mammal'

    def __init__(self, age=0, name='noname', breed='nobreed', color='nocolor'):
        self.age = age
        self.name = name
        self.breed = breed
        self.color = color

    def info(self):
        print('{} is a {}-year old {} {}.'.format(self.name, self.age, self.color, self.breed))

my_dog = Dog(name='Ralph', age=7, color='gray', breed='Chihuahua')
my_dog.info()
print(my_dog.species)
Ralph is a 7-year old gray Chihuahua. mammal
MIT
Object Oriented Programming.ipynb
Ryan-Bui/DATA3401
A physics example
class Projectile():
    gravityConstant = 9.81  # m/s^2

    def __init__(self, initVelocity):
        self.initVelocity = initVelocity
        #self.time = time

    def getHeight(self, time):
        return self.initVelocity*time - .5*self.gravityConstant*time**2

ball = Projectile(initVelocity=10)
height = ball.getHeight(.1)
print(height)
print(ball.initVelocity)
_____no_output_____
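As a sanity check on `getHeight`, basic kinematics gives the apex time t = v0/g and apex height v0^2/(2g); the class's height function should agree with the closed form at that point. A self-contained sketch repeating the class:

```python
class Projectile:
    gravityConstant = 9.81  # m/s^2

    def __init__(self, initVelocity):
        self.initVelocity = initVelocity

    def getHeight(self, time):
        return self.initVelocity * time - 0.5 * self.gravityConstant * time ** 2

ball = Projectile(initVelocity=10)
t_apex = ball.initVelocity / Projectile.gravityConstant           # time of peak height
h_apex = ball.initVelocity ** 2 / (2 * Projectile.gravityConstant)  # peak height

# The height function agrees with the closed form at the apex
assert abs(ball.getHeight(t_apex) - h_apex) < 1e-9
print(round(t_apex, 3), round(h_apex, 3))  # 1.019 5.097
```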
MIT
Object Oriented Programming.ipynb
Ryan-Bui/DATA3401
Inheritance
# Template for defining a subclass:
# class ChildName(ParentName):
#     ## list of all new, sub-class specific attributes and methods,
#     ## including the sub-class constructor

class Animal:
    # Animal constructor
    def __init__(self, age=0, weight=0, animal_is_alive=True):
        self.age = age
        self.weight = weight
        self.animal_is_alive = animal_is_alive

    # eat food
    def eat(self, food=None):
        if food == None:
            print("There is nothing to eat :-(")
        else:
            print('Eating {}...yum yum....'.format(food))

    # sleeping
    def sleep(self):
        print('Sleeping...zzzzzzzz....')

Coco = Animal(3, 10, True)
Coco.sleep()
Coco.eat(food='bananas')

class Dog(Animal):
    # Dog constructor
    def __init__(self, age=0, weight=0, animal_is_alive=True,
                 breed='nobreed', color='nocolor', name='noname', bark_sound='ruff'):
        self.breed = breed
        self.color = color
        self.bark_sound = bark_sound
        self.name = name
        Animal.__init__(self, age, weight, animal_is_alive)

    # barking method
    def bark(self, num_barks=3):
        for i in range(num_barks):
            print('{}'.format(self.bark_sound), end=' ')

    def info(self):
        print('{} is a {}-year old {} {}.'.format(self.name, self.age, self.color, self.breed))

Fido = Dog(age=1, weight=15, animal_is_alive=True, breed='Husky', color='gray', name='Fido')
Fido.info()
Fido.bark(3)
<bound method Dog.bark of <__main__.Dog object at 0x7f3e86e1ba30>> <bound method Dog.bark of <__main__.Dog object at 0x7f3e86e1ba30>> <bound method Dog.bark of <__main__.Dog object at 0x7f3e86e1ba30>>
MIT
Object Oriented Programming.ipynb
Ryan-Bui/DATA3401
Overloading and Multiple Inheritance
class MotherDog(Animal):
    def __init__(self, age=0, weight=0, animal_is_alive=True,
                 breed='nobreed', color='nocolor', name='noname'):
        self.breed = breed
        self.color = color
        self.name = name
        Animal.__init__(self, age, weight, animal_is_alive)

    def bark(self, num_barks=3):
        for i in range(num_barks):
            print('arf', end=' ')

class FatherDog(Animal):
    def __init__(self, age=0, weight=0, animal_is_alive=True,
                 breed='nobreed', color='nocolor', name='noname'):
        self.breed = breed
        self.color = color
        self.name = name
        Animal.__init__(self, age, weight, animal_is_alive)

    def bark(self, num_barks=3):
        for i in range(num_barks):
            print('woof', end=' ')  # bark sound chosen for illustration
_____no_output_____
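A minimal self-contained sketch of multiple inheritance and Python's method resolution order (MRO), using simplified stand-in classes rather than the notebook's definitions: when two parents define the same method, the child uses the one from the parent listed first.

```python
class Animal:
    def speak(self):
        return '...'

class MotherDog(Animal):
    def speak(self):
        return 'arf'

class FatherDog(Animal):
    def speak(self):
        return 'woof'

class Puppy(MotherDog, FatherDog):
    pass

# Python resolves speak() via the MRO: Puppy -> MotherDog -> FatherDog -> Animal
print([c.__name__ for c in Puppy.__mro__])
print(Puppy().speak())  # arf  (MotherDog is listed first)
```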
MIT
Object Oriented Programming.ipynb
Ryan-Bui/DATA3401
Polymorphism
Tito = FatherDog(age=12, breed='Doberman')
Tito.bark()
_____no_output_____
MIT
Object Oriented Programming.ipynb
Ryan-Bui/DATA3401
Overloading Operations and Functions
class Vector:
    def __init__(self, x_comp, y_comp):
        self.x_comp = x_comp
        self.y_comp = y_comp

    def __abs__(self):
        return (self.x_comp**2 + self.y_comp**2)**(0.5)

x = Vector(1, 2)
print(x)

class Vector:
    def __init__(self, x_comp, y_comp):
        self.x_comp = x_comp
        self.y_comp = y_comp

    def __abs__(self):
        return (self.x_comp**2 + self.y_comp**2)**(0.5)

    def __len__(self):
        return 2

    def __add__(self, other):
        return Vector(self.x_comp + other.x_comp, self.y_comp + other.y_comp)

x = Vector(1, 2)
y = Vector(3, 7)
z = x + y
print(z.x_comp)
print(z.y_comp)

class Vector:
    def __init__(self, x_comp, y_comp):
        self.x_comp = x_comp
        self.y_comp = y_comp

    def __abs__(self):
        return (self.x_comp**2 + self.y_comp**2)**(0.5)

    def __len__(self):
        return 2

    def __add__(self, other):
        return Vector(self.x_comp + other.x_comp, self.y_comp + other.y_comp)

    def __mul__(self, other):
        return Vector(self.x_comp*other, self.y_comp*other)

    #def __rmul__(self, other):
    #    return Vector(other*self.x_comp, other*self.y_comp)

x = Vector(1, 2)
y = 2
z = x*y
print(z.x_comp)
print(z.y_comp)
z2 = y*x  # raises TypeError until __rmul__ is defined

class Vector:
    def __init__(self, x_comp, y_comp):
        self.x_comp = x_comp
        self.y_comp = y_comp

    def __abs__(self):
        return (self.x_comp**2 + self.y_comp**2)**(0.5)

    def __len__(self):
        return 2

    def __add__(self, other):
        return Vector(self.x_comp + other.x_comp, self.y_comp + other.y_comp)

    def __mul__(self, other):
        return Vector(self.x_comp*other, self.y_comp*other)

    def __rmul__(self, other):
        return Vector(self.x_comp*other, self.y_comp*other)
_____no_output_____
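Pulling the dunder methods together, here is a condensed version of the class showing how each hook is triggered by a built-in or an operator; aliasing `__rmul__ = __mul__` is one idiomatic way to get scalar multiplication working on both sides.

```python
class Vector:
    def __init__(self, x_comp, y_comp):
        self.x_comp = x_comp
        self.y_comp = y_comp

    def __abs__(self):
        # Called by the built-in abs()
        return (self.x_comp ** 2 + self.y_comp ** 2) ** 0.5

    def __add__(self, other):
        # Called by the + operator
        return Vector(self.x_comp + other.x_comp, self.y_comp + other.y_comp)

    def __mul__(self, other):
        # Called for vector * scalar
        return Vector(self.x_comp * other, self.y_comp * other)

    # scalar * vector delegates to the same logic
    __rmul__ = __mul__

v = Vector(3, 4)
print(abs(v))              # 5.0
w = 2 * v                  # works only because __rmul__ is defined
print(w.x_comp, w.y_comp)  # 6 8
```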
MIT
Object Oriented Programming.ipynb
Ryan-Bui/DATA3401
Copyright 2020 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
_____no_output_____
Apache-2.0
site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb
ilyaspiridonov/docs-l10n
TensorFlow Addons Optimizers: ConditionalGradient

View on TensorFlow.org | Run in Google Colab | View source on GitHub | Download notebook

Overview

This notebook will demonstrate how to use the Conditional Gradient optimizer from the Addons package.

ConditionalGradient

> Constraining the parameters of a neural network has been shown to be beneficial in training because of the underlying regularization effects. Often, parameters are constrained via a soft penalty (which never guarantees constraint satisfaction) or via a projection operation (which is computationally expensive). The conditional gradient (CG) optimizer, on the other hand, enforces the constraints strictly without the need for an expensive projection step. It works by minimizing a linear approximation of the objective within the constraint set. In this notebook, we demonstrate the application of a Frobenius norm constraint via the CG optimizer on the MNIST dataset. CG is now available as a TensorFlow API. More details of the optimizer are available at https://arxiv.org/pdf/1803.06453.pdf

Setup
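The linear-approximation idea in the quote can be illustrated in a few lines of NumPy for an L2-norm-ball constraint. This is an illustrative Frank-Wolfe toy, not the TFA implementation, and the function name `frank_wolfe_step` is mine:

```python
import numpy as np

def frank_wolfe_step(w, grad, radius, step):
    # Linear minimization oracle for the L2 ball {v : ||v|| <= radius}:
    # the minimizer of <grad, v> over the ball is -radius * grad / ||grad||
    s = -radius * grad / np.linalg.norm(grad)
    # Convex combination keeps the iterate inside the ball (no projection needed)
    return (1 - step) * w + step * s

w = np.zeros(2)
for t in range(1, 200):
    grad = 2 * (w - np.array([3.0, 0.0]))  # gradient of ||w - (3, 0)||^2
    w = frank_wolfe_step(w, grad, radius=1.0, step=2.0 / (t + 2))

print(np.round(w, 3))  # [1. 0.]: the closest point to (3, 0) inside the unit ball
```

Every iterate stays feasible by construction, which is exactly the property that lets CG skip the projection step.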
import tensorflow as tf
import tensorflow_addons as tfa
from matplotlib import pyplot as plt

# Hyperparameters
batch_size = 64
epochs = 10
_____no_output_____
Apache-2.0
site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb
ilyaspiridonov/docs-l10n
Build the Model
model_1 = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
    tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
    tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
_____no_output_____
Apache-2.0
site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb
ilyaspiridonov/docs-l10n
Prep the Data
# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step
Apache-2.0
site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb
ilyaspiridonov/docs-l10n
Define a Custom Callback Function
def frobenius_norm(m):
    """Calculate the Frobenius norm over a list of weight tensors.

    Args:
        m: a list of weight tensors, one per layer.
    """
    total_reduce_sum = 0
    for i in range(len(m)):
        total_reduce_sum = total_reduce_sum + tf.math.reduce_sum(m[i]**2)
    norm = total_reduce_sum**0.5
    return norm


# Record the Frobenius norm of the weights at the end of every epoch
CG_frobenius_norm_of_weight = []
CG_get_weight_norm = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda batch, logs: CG_frobenius_norm_of_weight.append(
        frobenius_norm(model_1.trainable_weights).numpy()))
_____no_output_____
Apache-2.0
site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb
ilyaspiridonov/docs-l10n
Train and Evaluate: Using CG as Optimizer Simply replace typical Keras optimizers with the new TFA optimizer.
# Compile the model
model_1.compile(
    optimizer=tfa.optimizers.ConditionalGradient(
        learning_rate=0.99949, lambda_=203),  # Utilize TFA optimizer
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy'])

history_cg = model_1.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    validation_data=(x_test, y_test),
    epochs=epochs,
    callbacks=[CG_get_weight_norm])
Train on 60000 samples, validate on 10000 samples Epoch 1/10 60000/60000 [==============================] - 5s 85us/sample - loss: 0.3745 - accuracy: 0.8894 - val_loss: 0.2323 - val_accuracy: 0.9275 Epoch 2/10 60000/60000 [==============================] - 3s 50us/sample - loss: 0.1908 - accuracy: 0.9430 - val_loss: 0.1538 - val_accuracy: 0.9547 Epoch 3/10 60000/60000 [==============================] - 3s 49us/sample - loss: 0.1497 - accuracy: 0.9548 - val_loss: 0.1473 - val_accuracy: 0.9560 Epoch 4/10 60000/60000 [==============================] - 3s 49us/sample - loss: 0.1306 - accuracy: 0.9612 - val_loss: 0.1215 - val_accuracy: 0.9609 Epoch 5/10 60000/60000 [==============================] - 3s 49us/sample - loss: 0.1211 - accuracy: 0.9636 - val_loss: 0.1114 - val_accuracy: 0.9660 Epoch 6/10 60000/60000 [==============================] - 3s 48us/sample - loss: 0.1125 - accuracy: 0.9663 - val_loss: 0.1260 - val_accuracy: 0.9640 Epoch 7/10 60000/60000 [==============================] - 3s 50us/sample - loss: 0.1108 - accuracy: 0.9665 - val_loss: 0.1009 - val_accuracy: 0.9697 Epoch 8/10 60000/60000 [==============================] - 3s 51us/sample - loss: 0.1081 - accuracy: 0.9676 - val_loss: 0.1129 - val_accuracy: 0.9647 Epoch 9/10 60000/60000 [==============================] - 3s 50us/sample - loss: 0.1065 - accuracy: 0.9675 - val_loss: 0.1058 - val_accuracy: 0.9683 Epoch 10/10 60000/60000 [==============================] - 3s 51us/sample - loss: 0.1039 - accuracy: 0.9683 - val_loss: 0.1126 - val_accuracy: 0.9646
Apache-2.0
site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb
ilyaspiridonov/docs-l10n
Train and Evaluate: Using SGD as Optimizer
model_2 = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
    tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
    tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])

SGD_frobenius_norm_of_weight = []
SGD_get_weight_norm = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda batch, logs: SGD_frobenius_norm_of_weight.append(
        frobenius_norm(model_2.trainable_weights).numpy()))

# Compile the model
model_2.compile(
    optimizer=tf.keras.optimizers.SGD(0.01),  # Utilize SGD optimizer
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy'])

history_sgd = model_2.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    validation_data=(x_test, y_test),
    epochs=epochs,
    callbacks=[SGD_get_weight_norm])
Train on 60000 samples, validate on 10000 samples Epoch 1/10 60000/60000 [==============================] - 3s 46us/sample - loss: 0.9498 - accuracy: 0.7523 - val_loss: 0.4306 - val_accuracy: 0.8844 Epoch 2/10 60000/60000 [==============================] - 2s 41us/sample - loss: 0.3851 - accuracy: 0.8916 - val_loss: 0.3298 - val_accuracy: 0.9068 Epoch 3/10 60000/60000 [==============================] - 3s 42us/sample - loss: 0.3230 - accuracy: 0.9064 - val_loss: 0.2917 - val_accuracy: 0.9150 Epoch 4/10 60000/60000 [==============================] - 2s 41us/sample - loss: 0.2897 - accuracy: 0.9169 - val_loss: 0.2676 - val_accuracy: 0.9241 Epoch 5/10 60000/60000 [==============================] - 3s 43us/sample - loss: 0.2658 - accuracy: 0.9237 - val_loss: 0.2485 - val_accuracy: 0.9288 Epoch 6/10 60000/60000 [==============================] - 2s 41us/sample - loss: 0.2467 - accuracy: 0.9301 - val_loss: 0.2374 - val_accuracy: 0.9285 Epoch 7/10 60000/60000 [==============================] - 3s 42us/sample - loss: 0.2308 - accuracy: 0.9343 - val_loss: 0.2201 - val_accuracy: 0.9358 Epoch 8/10 60000/60000 [==============================] - 2s 41us/sample - loss: 0.2169 - accuracy: 0.9388 - val_loss: 0.2096 - val_accuracy: 0.9388 Epoch 9/10 60000/60000 [==============================] - 2s 42us/sample - loss: 0.2046 - accuracy: 0.9421 - val_loss: 0.2009 - val_accuracy: 0.9404 Epoch 10/10 60000/60000 [==============================] - 2s 41us/sample - loss: 0.1939 - accuracy: 0.9448 - val_loss: 0.1900 - val_accuracy: 0.9442
Apache-2.0
site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb
ilyaspiridonov/docs-l10n
Frobenius Norm of Weights: CG vs SGD The current implementation of the CG optimizer is based on the Frobenius norm, treating the Frobenius norm as a regularizer in the target function. We therefore compare CG's regularization effect with that of the SGD optimizer, which imposes no Frobenius norm regularizer.
plt.plot(
    CG_frobenius_norm_of_weight,
    color='r',
    label='CG_frobenius_norm_of_weights')
plt.plot(
    SGD_frobenius_norm_of_weight,
    color='b',
    label='SGD_frobenius_norm_of_weights')
plt.xlabel('Epoch')
plt.ylabel('Frobenius norm of weights')
plt.legend(loc=1)
_____no_output_____
Apache-2.0
site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb
ilyaspiridonov/docs-l10n
Train and Validation Accuracy: CG vs SGD
plt.plot(history_cg.history['accuracy'], color='r', label='CG_train')
plt.plot(history_cg.history['val_accuracy'], color='g', label='CG_test')
plt.plot(history_sgd.history['accuracy'], color='pink', label='SGD_train')
plt.plot(history_sgd.history['val_accuracy'], color='b', label='SGD_test')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc=4)
_____no_output_____
Apache-2.0
site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb
ilyaspiridonov/docs-l10n
**Amazon Lookout for Equipment** - Demo on an anonymized expander dataset *Part 5: Scheduling regular inference calls*
BUCKET = '<YOUR_BUCKET_NAME_HERE>'
PREFIX = 'data/scheduled_inference'
_____no_output_____
MIT-0
notebooks/5_inference_scheduling.ipynb
youngmki/lookout-for-equipment-demo