Define which columns need to be summed, and which columns don't need summing but still need to be kept. Note: if we don't care about the monthly data we can delete the second block of code.
#Define the columns that are necessary but are not summable
allCols = eiaDict[fileNameMap.values()[0]].columns
nonSumCols = ["PLANT_ID", "PLANT_NAME", "YEAR"]
#Define the columns that contain the year's totals (Used to calc fuel type %)
yearCols = ["TOTAL_FUEL_CONSUMPTION_QUANTITY", "ELEC_FUEL_CONSUMPTION_QUANTITY", ...
Raw Data/Merging EPA and EIA.ipynb
gschivley/ERCOT_power
mit
Get a list of all the different fuel type codes. If we don't care about all of them, then just hardcode the list.
fuelTypes = []
fuelTypes.extend([fuelType for df in eiaDict.values() for fuelType in df["REPORTED_FUEL_TYPE_CODE"].tolist()])
fuelTypes = set(fuelTypes)
fuelTypes
Three parts: aggregate by facility, then calculate the percentage of each fuel type. This will take a few minutes to run. The end result is aggEIADict.
#Actually calculate the % type for each facility grouping
def calcPerc(group, aggGroup, fuelType, col):
    #Check to see if the facility has a record for the fuel type, and if the total column > 0
    if len(group[group["REPORTED_FUEL_TYPE_CODE"] == fuelType]) > 0 and aggGroup[col] > 0:
        #summing fuel type beca...
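The helper above computes each fuel type's share of a facility's totals. As a minimal sketch of the same idea (assuming pandas and the notebook's column names; the data values are invented for illustration), a grouped transform does the per-facility division:

```python
import pandas as pd

# Toy frame standing in for one year of EIA 923 data; column names follow
# the notebook, the numbers are made up
df = pd.DataFrame({
    "PLANT_ID": [1, 1, 2],
    "REPORTED_FUEL_TYPE_CODE": ["NG", "SUB", "NG"],
    "TOTAL_FUEL_CONSUMPTION_QUANTITY": [75.0, 25.0, 50.0],
})

# Sum consumption per facility, then divide each row by its facility total
totals = df.groupby("PLANT_ID")["TOTAL_FUEL_CONSUMPTION_QUANTITY"].transform("sum")
df["FUEL_PCT"] = df["TOTAL_FUEL_CONSUMPTION_QUANTITY"] / totals
```

Plant 1 ends up 75% NG / 25% SUB, plant 2 is 100% NG.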
Column order doesn't match in all years
aggEIADict[2007].head()
aggEIADict[2015].head()
Export the EIA 923 data as pickle Just sending the dictionary to a pickle file for now; this will save several minutes of loading and processing time in the future.
filename = 'EIA 923.pkl'
path = '../Clean Data'
fullpath = os.path.join(path, filename)
pickle.dump(aggEIADict, open(fullpath, 'wb'))
Combine all df's from the dict into one df Concat all dataframes, reset the index, determine the primary fuel type for each facility, filter to only include fossil power plants, and export as a csv
all923 = pd.concat(aggEIADict)
all923.head()
all923.reset_index(drop=True, inplace=True)
# Check column numbers to use in the function below
all923.iloc[1,1:27]
def top_fuel(row):
    #Fraction of largest fuel for electric heat input
    try:
        fuel = row.iloc[1:27].idxmax()[29:]
    except:
        return N...
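top_fuel picks the fuel whose percentage column is largest. A self-contained sketch of that idea (the column names here are hypothetical; the notebook slices a fixed prefix length with `[29:]`, while this sketch splits on the `'% '` separator instead, which is equivalent as long as the columns share that prefix):

```python
import pandas as pd

# Hypothetical row of fuel-share columns for one facility
row = pd.Series({"ELEC_FUEL_CONSUMPTION_MMBTU % NG": 0.8,
                 "ELEC_FUEL_CONSUMPTION_MMBTU % SUB": 0.2})

def top_fuel(row):
    """Return the fuel code whose share of electric heat input is largest."""
    try:
        # idxmax gives the column name; splitting drops the common prefix
        return row.idxmax().split("% ")[-1]
    except (ValueError, TypeError):
        return None

print(top_fuel(row))  # NG
```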
Export the EIA 923 dataframe as csv Export the dataframe with primary fuel, filtered to only include fossil plants.
filename = 'Fossil EIA 923.csv'
path = '../Clean Data'
fullpath = os.path.join(path, filename)
fossil923.to_csv(fullpath)
Loading the EPA Data, the path may need to be updated...
#Read the EPA files into a dataframe
path2 = os.path.join('EPA air markets')
epaNames = os.listdir(path2)
filePaths = {dn:os.path.join(path2, dn, "*.txt") for dn in epaNames}
filePaths = {dn:glob.glob(val) for dn, val in filePaths.iteritems()}
epaDict = {key:pd.read_csv(fp, index_col = False) for key, val in filePaths....
First rename the column names so we can merge on that column, then change the datatype of DATE to a datetime object.
#Rename the column names to remove the leading space.
for key, df in epaDict.iteritems():
    colNames = [name.upper().strip() for name in df.columns]
    colNames[colNames.index("FACILITY ID (ORISPL)")] = "PLANT_ID"
    epaDict[key].columns = colNames
#Convert DATE to datetime object
#Add new column DATETIME with...
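The cell above is cut off right at the datetime step. A minimal, self-contained sketch of the rename-then-convert idea, assuming EPA-style columns (the sample values are invented):

```python
import pandas as pd

# Toy frame mimicking the EPA file layout, including the leading space
df = pd.DataFrame({" Facility ID (ORISPL)": [3, 3],
                   "DATE": ["01-01-2015", "01-01-2015"],
                   "HOUR": [0, 1]})

# Upper-case and strip the names, then rename the merge key
cols = [c.upper().strip() for c in df.columns]
cols[cols.index("FACILITY ID (ORISPL)")] = "PLANT_ID"
df.columns = cols

# Convert DATE to datetime and build an hourly DATETIME column
df["DATE"] = pd.to_datetime(df["DATE"])
df["DATETIME"] = df["DATE"] + pd.to_timedelta(df["HOUR"], unit="h")
```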
The DataFrames in epaDict contain all power plants in Texas. We can filter on NERC REGION so that it only includes ERCOT.
set(epaDict['2015 July-Dec'].loc[:,'NERC REGION'])
#Boolean filter to only keep ERCOT plants
for key, df in epaDict.iteritems():
    epaDict[key] = df[df["NERC REGION"] == "ERCOT"].reset_index(drop = True)
set(epaDict['2015 July-Dec'].loc[:,'NERC REGION'])
epaDict['2015 July-Dec'].head()
Export EPA data as a series of dataframes The whole dictionary is too big as a single pickle file.
# pickle with gzip, from http://stackoverflow.com/questions/18474791/decreasing-the-size-of-cpickle-objects
def save_zipped_pickle(obj, filename, protocol=-1):
    with gzip.open(filename, 'wb') as f:
        pickle.dump(obj, f, protocol)
filename = 'EPA hourly dictionary.pgz'
path = '../Clean Data'
fullpath = os.path...
Finally join the two data sources Switch to an inner join? No need to join. Can keep them as separate databases, since one is hourly data and the other is annual/monthly Create a clustering dataframe with index of all plant IDs (from the EPA hourly data), add columns with variables. Calculate the inputs in separate dat...
#Join the two data sources on PLANT_ID
fullData = {key:df.merge(aggEIADict[df["YEAR"][0]], on="PLANT_ID") for key, df in epaDict.iteritems()}
fullData[fullData.keys()[0]].head()
BIT, SUB, LIG, NG, DFO, RFO
[x for x in fullData[fullData.keys()[0]].columns]
Loading EIA 860 Data
# Iterate through the directory to find all the files to import
path = os.path.join('EIA Data', '860-No_Header')
full_path = os.path.join(path, '*.*')
eia860Names = os.listdir(path)
# Rename the keys for easier merging later
fileName860Map = {
    'GenY07.xls':2007,
    'GenY08.xls':2008,
    ...
Export EIA 860 data
filename = 'EIA 860.pkl'
path = '../Clean Data'
fullpath = os.path.join(path, filename)
pickle.dump(eia860Dict, open(fullpath, 'wb'))
Creating Final DataFrame for Clustering Algorithm: clusterDict {year : cluster_DF} For each PLANT_ID in aggEIADict, fetch the corresponding aggregated NAMEPLATE_CAPACITY(MW)
clusterDict = dict()
for key, df in eia860Dict.iteritems():
    clusterDict[key] = pd.merge(aggEIADict[key], eia860Dict[key], how='left', on='PLANT_ID')[['PLANT_ID', 'NAMEPLATE_CAPACITY(MW)']]
    clusterDict[key].rename(columns={'NAMEPLATE_CAPACITY(MW)': 'capacity', 'PLANT_ID': 'plant_id'}, inplace=True)
# verify for...
Function to get fuel type
fuel_cols = [col for col in aggEIADict[2008].columns if 'ELEC_FUEL_CONSUMPTION_MMBTU %' in col]
def top_fuel(row):
    #Fraction of largest fuel for electric heat input
    try:
        fuel = row.idxmax()[29:]
    except:
        return None
    return fuel
# clusterDict[2008]['fuel'] = aggEIADict[2008][fuel_cols]....
Calculate Capacity factor, Efficiency, Fuel type
for key, df in clusterDict.iteritems():
    clusterDict[key]['year'] = key
    # 8760 hours in a non-leap year
    clusterDict[key]['capacity_factor'] = aggEIADict[key]['NET_GENERATION_(MEGAWATTHOURS)'] / (8760*clusterDict[key]['capacity'])
    clusterDict[key]['efficiency'] = (aggEIADict[key]['NET_GENERATION_(MEGAWATTHOURS)']*3.412)/(1.0*aggEIADict[key]...
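The two derived metrics can be checked on a single invented plant: capacity factor is net generation divided by capacity times the 8760 hours in a year, and efficiency converts net generation to heat content at 3.412 MMBtu/MWh before dividing by fuel input. All the numbers below are made up for illustration:

```python
# One illustrative plant (invented values)
net_gen_mwh = 4_380_000   # annual net generation [MWh]
capacity_mw = 1000        # nameplate capacity [MW]
fuel_mmbtu = 33_000_000   # electric fuel consumption [MMBtu]

# Fraction of the year the plant effectively ran at full capacity
capacity_factor = net_gen_mwh / (8760 * capacity_mw)
# Electric output as a fraction of heat input (3.412 MMBtu per MWh)
efficiency = net_gen_mwh * 3.412 / fuel_mmbtu

print(round(capacity_factor, 3), round(efficiency, 3))  # 0.5 0.453
```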
Merge all EPA files into one df
columns = ['PLANT_ID', 'YEAR', 'DATE', 'HOUR', 'GROSS LOAD (MW)']
counter = 0
for key, df in epaDict.iteritems():
    if counter == 0:
        result = epaDict[key][columns]
        counter = 1
    else:
        result = result.append(epaDict[key][columns], ignore_index=True)
# Change nan to 0
result.fillna(0,...
Function to calculate the ramp rate for every hour
def plant_gen_delta(df):
    """
    For every plant in the input df, calculate the change in gross load (MW)
    from the previous hour.

    input:
        df: dataframe of EPA clean air markets data
    return:
        df: concatenated list of dataframes
    """
    df_list = []
    for plant in df['PLANT_ID'].u...
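The function above loops over plants; the same per-plant hourly delta can also be had in one line with a grouped diff. A sketch with invented data, assuming rows are already sorted in time within each plant:

```python
import pandas as pd

# Hourly gross load for two hypothetical plants (values invented)
df = pd.DataFrame({"PLANT_ID": [1, 1, 1, 2, 2],
                   "GROSS LOAD (MW)": [100, 150, 130, 40, 60]})

# Change in gross load from the previous hour, computed within each plant;
# the first hour of each plant is NaN since there is no previous value
df["Gen Change"] = df.groupby("PLANT_ID")["GROSS LOAD (MW)"].diff()
```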
Get the max ramp rate for every plant for each year
cols = ['PLANT_ID', 'YEAR', 'Gen Change']
ramp_rate_list = []
for year in ramp_df['YEAR'].unique():
    for plant in ramp_df.loc[ramp_df['YEAR']==year,'PLANT_ID'].unique():
        # 95th percentile ramp rate per plant per year
        ramp_95 = ramp_df.loc[(ramp_df['PLANT_ID']== plant) & ...
Save dict to csv
# re-arrange column order
columns = ['year', 'plant_id', 'capacity', 'capacity_factor', 'efficiency', 'ramp_rate', 'fuel_type']
filename = 'Cluster_Data_2.csv'
path = '../Clean Data'
fullpath = os.path.join(path, filename)
counter = 0
for key, df in clusterDict.iteritems():
    # create the csv file
    if counter ==...
First example
import numpy as np
import matplotlib.pyplot as plt
exercises_notebooks/chap5_4_waves.ipynb
Olsthoorn/TransientGroundwaterFlow
gpl-3.0
We'll show the wave of head in the subsurface due to a sinusoidal tide at $x=0$. So we choose the parameters that we consider known ($kD$, $S$, $\omega$ and the amplitude $A$). Then we choose values of $x$ for which to compute the head change $s(x, t)$ and some times, for each of which we compute $s(x,t)$. This gives...
times = np.arange(0, 24) / 24  # hourly values
x = np.linspace(0, 400, num=401)  # the x values packed in a numpy array
# Aquifer
kD = 600  # [m2/d], transmissivity
S = 0.1  # [-], storage coefficient
# Wave
A = 1.25  # [m] the amplitude
T = 0.5  # [d] cycle time
omega = 2 * np.pi / T  # [radians/d] angle velocity
# Co...
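The head change itself follows the classic damped-wave solution for a sinusoidal boundary, $s(x,t) = A e^{-ax}\sin(\omega t - ax)$ with $a = \sqrt{\omega S / (2kD)}$. A sketch using the parameters above (the formula is the standard textbook result, not copied from the truncated cell):

```python
import numpy as np

kD = 600           # [m2/d] transmissivity
S = 0.1            # [-] storage coefficient
A = 1.25           # [m] amplitude at x = 0
T = 0.5            # [d] cycle time
omega = 2 * np.pi / T

# Damping / phase-lag coefficient of the tidal-propagation solution
a = np.sqrt(omega * S / (2 * kD))

def s(x, t):
    """Head change at distance x [m] and time t [d] for a sinusoidal boundary."""
    return A * np.exp(-a * x) * np.sin(omega * t - a * x)

print(s(0.0, T / 4))  # at x = 0 the full amplitude A is felt
```

Farther from the boundary the wave is both damped (the exponential) and delayed (the $ax$ phase shift), which is exactly what the plotted curves show.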
This picture shows what we wanted, but it is not nice: there are too many times in the legend, and the picture is quite small. Further, it becomes tedious to type all these individual plot instructions every time. There is a better way: we could define a function that does all the setup work for us and ...
def newfig(title='forgot title?', xlabel='forgot xlabel?', ylabel='forgot ylabel?',
           xlim=None, ylim=None, xscale='linear', yscale='linear', size_inches=(12,7)):
    fig, ax = plt.subplots()
    fig.set_size_inches(size_inches)
    ax.set_title(title)
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax....
This looks good, both the more concise code and the picture's size and legend. The title now contains all information pertaining to the problem we want to solve, so it can never be forgotten. I used a string for the title and the subtitle (second line of the title). Adding two strings like str1 + str2 glues the two s...
kD = 600  # [m2/d], transmissivity
S = 0.1  # [-], storage coefficient
thetas = np.array([0.3, 0.7, 0.19, 0.6, 0.81]) * 2 * np.pi  # initial angles
amplitudes = [1.25, 0.6, 1.7, 0.8, 1.1]  # [m] the 5 amplitudes
Times = np.array([0.1, 0.25, 0.33, 0.67, 1.0])  # [d] cycle times
omegas = 2 * np.pi / Times  # [radians/d] angle...
Next we'll only show the total effect, i.e. the sum of all individual waves, but we'll do that for a sequence of times.
kD = 600  # [m2/d], transmissivity
S = 0.1  # [-], storage coefficient
thetas = np.array([0.3, 0.7, 0.19, 0.6, 0.81]) * 2 * np.pi  # initial angles
amplitudes = [1.25, 0.6, 1.7, 0.8, 1.1]  # [m] the 5 amplitudes
Times = np.array([0.1, 0.25, 0.33, 0.67, 1.0])  # [d] cycle times
omegas = 2 * np.pi / Times  # [radians/d] angle...
Finally, we'll animate the wave. You may read more about making animations using matplotlib here. Once the idea of animation is understood, animating functions becomes straightforward. The computation and saving of the animation may require some time, but once the computations are finished the animation will be shown ...
from matplotlib.animation import FuncAnimation
#import matplotlib.animation
# We need this to make sure the video is launched
%matplotlib
kD = 600  # [m2/d], transmissivity
S = 0.1  # [-], storage coefficient
thetas = np.array([0.3, 0.7, 0.19, 0.6, 0.81]) * 2 * np.pi  # initial angles
amplitudes = [1.25, 0.6, 1.7, 0.8, 1...
Predictions and residuals of the FIA model Remember that the model was: $$log(Biomass) \sim log(SppN) + Z(x) + \epsilon$$ Where Z(x) is a gaussian random field with mean 0 and $\Sigma^{+} = \rho(x,x^{'})$ We have done that analysis in former notebooks. This notebook considers that the file: /RawDataCSV/predictions_res...
path = "/RawDataCSV/predictions_residuals.csv"
data = pd.read_csv(path, index_col='Unnamed: 0')
data.shape
data.columns = ['Y','logBiomass','logSppN','modres']
data = data.dropna()
data.loc[:5]
plt.scatter(np.exp(data.logSppN), np.exp(data.Y))
from statsmodels.api import OLS
mod_lin = OLS.from_formula('log...
notebooks/.ipynb_checkpoints/predictions_and_residuals_of_FIA_logbiomasS_logsppn_with_spautocor-checkpoint.ipynb
molgor/spystats
bsd-2-clause
Standard Error Using White's (1980) heteroskedasticity-robust standard errors. I also used MacKinnon and White's (1985) alternative heteroskedasticity-robust standard errors; the values were the same. Standard Errors: * Intercept 0.011947 * logSppN 0.006440
path = "/RawDataCSV/predictions2.csv"
data2 = pd.read_csv(path, index_col='Unnamed: 0')
#data2 = data2.dropna()
#plt.scatter(data.logSppN, data.logBiomass, label="Observations")
plt.scatter(data.logSppN, data2.mean, c='Red', label="Predicted (GLS)")
#plt.scatter(data.logSppN, lnY, c='Green', label="Prediction (OLS)")
#plt.titl...
Igrid[atomI] appears to be off by a factor of 12.6!
# This is probably due to a unit conversion in a multiplicative prefactor
# This multiplicative prefactor is based on nanometers
r_min = 0.14
r_max = 1.0
print(1/r_min - 1/r_max)
# This multiplicative prefactor is based on angstroms
r_min = 1.4
r_max = 10.0
print(1/r_min - 1/r_max)
Example/test_GB.ipynb
CCBatIIT/AlGDock
mit
Switching from nanometers to angstroms makes the multiplicative prefactor smaller, which is opposite of the desired effect!
4*np.pi
Igrid[atomI] appears to be off by a factor of 4*pi!
# This is after multiplication by 4*pi
# Sum for atom 0: 1.55022, 2.96246
# Sum for atom 1: 1.56983, 2.96756
# Sum for atom 2: 1.41972, 2.90796
# Sum for atom 3: 1.45936, 3.02879
# Sum for atom 4: 2.05316, 3.32989
# Sum for atom 5: 1.5354, 3.06405
# Sum for atom 6: 1.43417, 3.02438
# Sum for atom 7: 1.85508, 3.21875
# ...
Next we are going to load our data and generate random negative data, aka gibberish data. The clean data files have negatives created from the data sets pulled from phosphoELM and dbptm. In generate_random_data, the amino acid parameter represents the amino acid being modified, aka the target amino acid modification; the ...
y = Predictor()
y.load_data(file="Data/Training/clean_Y.csv")
old/Tyrosine Phosphorylation Example.ipynb
vzg100/Post-Translational-Modification-Prediction
mit
Next we vectorize the sequences; we are going to use the sequence vector. Now we can apply a data balancing function: here we are using ADASYN, which generates synthetic examples of the minority (in this case positive) class. Setting random_data to 1 includes the randomly generated negative data.
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function="ADASYN", random_data=1)
Now we train the model. The array output contains the precision, recall, f-score, and total numbers correctly estimated.
y.supervised_training("mlp_adam")
Next we can check against the benchmarks pulled from dbptm.
y.benchmark("Data/Benchmarks/phos.csv", "Y")
Want to explore the data some more? You can easily generate PCA and t-SNE diagrams of the training set.
y.generate_pca()
y.generate_tsne()
Using A For Loop Create a for loop that goes through the list and capitalizes each item.
# create a variable for the for loop results
regimentNamesCapitalized_f = []
# for every item in regimentNames
for i in regimentNames:
    # capitalize the item and add it to regimentNamesCapitalized_f
    regimentNamesCapitalized_f.append(i.upper())
# View the outcome
regimentNamesCapitalized_f
python/applying_functions_to_list_items.ipynb
tpin3694/tpin3694.github.io
mit
Using Map() Create a lambda function that capitalizes x
capitalizer = lambda x: x.upper()
Map the capitalizer function to regimentNames, convert the map into a list, and view the variable
regimentNamesCapitalized_m = list(map(capitalizer, regimentNames)); regimentNamesCapitalized_m
Using List Comprehension Apply the expression x.upper() to each item in the list called regimentNames. Then view the output.
regimentNamesCapitalized_l = [x.upper() for x in regimentNames]; regimentNamesCapitalized_l
Let's grab all spaxels with an Ha-flux > 25 from MPL-5.
config.setRelease('MPL-5')
f = 'emline_gflux_ha_6564 > 25'
q = Query(search_filter=f)
print(q)
# let's run the query
r = q.run()
r.totalcount
r.results
docs/sphinx/jupyter/dap_spaxel_queries.ipynb
sdss/marvin
bsd-3-clause
Spaxel queries are queries on individual spaxels, and thus will always return a spaxel x and y satisfying your input condition. There is the potential of returning a large number of results that span only a few actual galaxies. Let's see how many...
# get a list of the plate-ifus
plateifu = r.getListOf('plateifu')
# look at the unique values with Python set
print('unique galaxies', set(plateifu), len(set(plateifu)))
Optimize your query Unless specified, spaxel queries will query across all bintypes and stellar templates. If you only want to search over a certain binning mode, this must be specified. If your query is taking too long, or returning too many results, consider filtering on a specific bintype and template.
f = 'emline_gflux_ha_6564 > 25 and bintype.name == SPX'
q = Query(search_filter=f, return_params=['template.name'])
print(q)
# run it
r = q.run()
r.results
Global+Local Queries To combine global and local searches, simply combine them together in one filter condition. Let's look for all spaxels that have an H-alpha EW > 3 in galaxies with NSA redshift < 0.1 and a log sersic_mass > 9.5
f = 'nsa.sersic_logmass > 9.5 and nsa.z < 0.1 and emline_sew_ha_6564 > 3'
q = Query(search_filter=f)
print(q)
r = q.run()
# Let's see how many spaxels we returned from how many galaxies
plateifu = r.getListOf('plateifu')
print('spaxels returned', r.totalcount)
print('from galaxies', len(set(plateifu)))
r.results[0:5...
Query Functions Marvin also contains more advanced queries in the form of predefined functions. For example, let's say you want to ask Marvin "Give me all galaxies that have an H-alpha flux > 25 in more than 20% of their good spaxels" you can do so using the query function npergood. npergood accepts as input a standa...
config.mode = 'remote'
config.setRelease('MPL-4')
f = 'npergood(emline_gflux_ha_6564 > 5) >= 20'
q = Query(search_filter=f)
r = q.run()
r.results
I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next...
def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape
    n_batches = int(slice_size/num_steps)
    for b in range(n_batches):
        yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]

def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2, le...
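get_batch can be exercised on a small array to see the sliding windows; this restates the generator so the snippet is self-contained (the toy arrays are invented):

```python
import numpy as np

def get_batch(arrs, num_steps):
    """Yield successive windows of width num_steps from each array in arrs."""
    batch_size, slice_size = arrs[0].shape
    n_batches = int(slice_size / num_steps)
    for b in range(n_batches):
        yield [x[:, b * num_steps:(b + 1) * num_steps] for x in arrs]

# Two aligned 2x6 arrays (inputs and shifted targets), windows of width 3
xs = np.arange(12).reshape(2, 6)
ys = xs + 1
batches = list(get_batch([xs, ys], num_steps=3))
print(len(batches))  # 2 batches, each a pair of (2, 3) arrays
```

Each batch keeps all rows (the "batch" dimension) and advances the column window by num_steps, which is exactly the sliding window the text describes.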
tensorboard/.ipynb_checkpoints/Anna KaRNNa Name Scoped-checkpoint.ipynb
kazzz24/deep-learning
mit
Define Common Parameters
# Probabilities
P_G = 0.8
# Return on investment rates
ROI_G = 1
ROI_L = -0.2
# Principal (initial capital)
P = 1
business_analysis.ipynb
aukintux/business_binomial_analysis
mit
Question 1. Starting with a principal P, after N iterations, what is the probability that the capital becomes O, for each possible O allowed by the binomial process? Define the functions that will evolve the principal capital P through a Binomial process.
# Takes the principal P and performs the evolution of the capital using
# the result x of the random binomial variable after n trials
def evolve_with_binomial(P, x, n):
    return P * ((1 + ROI_G) ** x) * ((1 + ROI_L) ** (n - x))
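Pairing each outcome with its binomial probability answers Question 1 directly: the outcome for x gains out of n trials is evolve_with_binomial(P, x, n), and its probability is the binomial pmf. A sketch using math.comb (Python 3.8+) with the parameters defined earlier:

```python
from math import comb

P_G = 0.8             # probability of a gain in one trial
ROI_G, ROI_L = 1, -0.2
P = 1                 # principal

def evolve_with_binomial(P, x, n):
    return P * ((1 + ROI_G) ** x) * ((1 + ROI_L) ** (n - x))

n = 10
# Binomial pmf for each possible number of gains x; these must sum to one
probs = [comb(n, x) * P_G**x * (1 - P_G)**(n - x) for x in range(n + 1)]
# The capital outcome O corresponding to each x
outcomes = [evolve_with_binomial(P, x, n) for x in range(n + 1)]
print(round(sum(probs), 6))  # 1.0
```

With ROI_G = 1, winning all 10 trials doubles the capital ten times, so the best outcome is 2**10 = 1024 times the principal.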
Run the simulation using the Binomial process, which is equivalent to performing a very large number (~1000s) of Bernoulli processes and grouping their results, since the order in which 1's and 0's occur in the sequence does not affect the final result.
# Number of iterations
years = 5
iterations_per_year = 2
n = iterations_per_year * (years)
# Sorted array of unique values occurring in instance of Binomial process
x_binomial = linspace(0, n, n+1)
# Arrays of data to plot
data_dict = {'x': [], 'y': []}
data_dict['x'] = [evolve_with_binomial(P, x, max(x_binomial)) for...
Question 2. Plot the time evolution of the principal P through the Binomial process, where a more intense color means a higher probability and a less intense color means a lower probability.
# Number of iterations
years = 5
iterations_per_year = 2
n = iterations_per_year * (years)
# Arrays of data to plot
data_dict = {'values': [], 'probs': np.array([]), 'iterations': [], 'mean': [], 'most_prob': [], 'uniq_iterations': []}
# For each iteration less than the maximum number of iterations
i = 1
while i <=...
The previous plot shows the evolution of the capital throughout the Binomial process; alongside we show the mean and the most probable value of the possible outcomes. As one increases the number of iterations, the mean surpasses the most probable value for good while maintaining a very close gap. Question 4. We want to...
# Calculate the possible capital declines and their respective probabilities
data_dict["decline_values"] = []
data_dict["decline_probs"] = []
data_dict["decline_iterations"] = []
for index, val in enumerate(data_dict["values"]):
    if val < 1:
        data_dict["decline_values"].append((1-val)*100)
        data_dict[...
Question 5. Obtain the probability of bankruptcy after N iterations; bankruptcy is defined for the purposes of this notebook as the event in which the principal perceives a capital decline bigger than or equal to X percent.
# Capital percentage decline of bankruptcy
CP_br = 20
# Variable to store the plot data
data_dict["bankruptcy_probs"] = []
data_dict["bankruptcy_iterations"] = []
# Calculate for each iteration the probability of bankruptcy
iter_counter = 0
for i, iteration in enumerate(data_dict["decline_iterations"]):
    if data_d...
Marvin Queries This tutorial goes through a few basics of how to perform queries on the MaNGA dataset using the Marvin Query tool. Please see the Marvin Query page for more details on how to use Queries. This tutorial covers the basics of: querying on metadata information from the NSA catalog how to combine multiple ...
# we should be using DR15 MaNGA data
from marvin import config
config.release
# import the Query tool
from marvin.tools.query import Query
docs/sphinx/tutorials/notebooks/marvin_queries.ipynb
sdss/marvin
bsd-3-clause
Query Basics Querying on Metadata Let's go through some Query basics of how to do a query on metadata. The two main keyword arguments to Query are search_filter and return_params. search_filter is a string representing the SQL where condition you'd like to filter on. This tutorial assumes a basic familiarity with th...
# filter for galaxies with a redshift < 0.1
my_filter = 'nsa.z < 0.1'
# construct the query
q = Query(search_filter=my_filter)
q
The Query tool works with a local or remote manga database. Without a database, the Query tool submits your inputs to the remote server using the API, rather than doing anything locally. We run the query with the run method. Your inputs are sent to the server where the query is built dynamically and run. The result...
# run the query
r = q.run()
# print some results information
print(r)
print('number of results:', r.totalcount)
After constructing queries, we can run them with q.run(). This returns a Marvin Results object. Let's take a look. This query returned 4275 objects. For queries with large results, the results are automatically paginated in sets of 100 objects. Default parameters returned in queries always include the mangaid and p...
# look at the current page of results (subset of 10)
print('number in current set:', len(r.results))
print(r.results[0:10])
Finding Available Parameters We can use the Query datamodel to look up all the available parameters one can use in the search_filter or in the return_params keyword arguments to Query. The Query datamodel contains many parameters, which we bundle into groups for easier navigation. We currently offer four groups of pa...
# look up the available query datamodel groups
q.datamodel.groups
Each group contains a list of QueryParameters. A QueryParameter contains the designation and syntax used for specifying parameters in your query. The full attribute indicates the table.parameter name needed as input into the search_filter or return_params keyword arguments to the Marvin Query. The redshift is a para...
# select and print the NSA parameter group
nsa = q.datamodel.groups['nsa']
nsa.parameters
Multiple Search Criteria and Returning Additional Parameters We can easily combine query filter conditions by constructing a boolean string using AND. Let's search for galaxies with a redshift < 0.1 and log M$_\star$ < 10. The NSA catalog contains the Sersic profile determination for stellar mass, which is the sersic...
my_filter = 'nsa.z < 0.1 and nsa.sersic_logmass < 10'
q = Query(search_filter=my_filter, return_params=['cube.ra', 'cube.dec'])
r = q.run()
print(r)
print('Number of objects:', r.totalcount)
This query returns 1932 objects and now includes the RA, Dec, redshift and log Sersic stellar mass parameters.
# print the first 10 rows
r.results[0:10]
Radial Queries in Marvin Cone searches can be performed with Marvin Queries using a special functional syntax in your SQL string: the radial string function. The syntax for a cone search query is radial(RA, Dec, radius). Let's search for all galaxies within 0.5 degrees cen...
# build the radial filter condition
my_filter = 'radial(232.5447, 48.6902, 0.5)'
q = Query(search_filter=my_filter)
r = q.run()
print(r)
print(r.results)
Queries using DAPall parameters. MaNGA provides derived analysis properties in its dapall summary file. Marvin allows for queries on any of the parameters in the file. The table name for these parameters is dapall. Let's find all galaxies that have a total measured star-formation rate > 5 M$_\odot$/year. The total S...
my_filter = 'dapall.sfr_tot > 5'
q = Query(search_filter=my_filter)
r = q.run()
print(r)
print(r.results)
The query returns 6 results, but looking at the plateifu, we see there are only 3 unique targets. This is because the DAPall file provides measurements for multiple bintypes and by default will return entries for all bintypes. We can select those out using the bintype.name parameter. Let's filter on only the HYB10 bi...
my_filter = 'dapall.sfr_tot > 5 and bintype.name==HYB10'
q = Query(search_filter=my_filter)
r = q.run()
print(r)
print(r.results)
Query on Quality and Target Flags Marvin includes the ability to perform queries using quality or target flag information. These work using the special quality and targets keyword arguments. These keywords accept a list of flag maskbit labels provided by the Maskbit Datamodel. These keywords are inclusive, meaning th...
# create the targets list of labels
targets = ['primary', 'secondary', 'color-enhanced']
q = Query(targets=targets)
r = q.run()
print(r)
print('There are {0} galaxies in the main sample'.format(r.totalcount))
print(r.results[0:5])
The targets keyword is equivalent to the cube.manga_targetX search parameter, where X is 1, 2, or 3. The bits for the primary, secondary, and color-enhanced samples are 10, 11, and 12, respectively. These combine into the value 7168. The above query is equivalent to the filter condition cube.manga_target1 & 7168
value = 1<<10 | 1<<11 | 1<<12
my_filter = 'cube.manga_target1 & {0}'.format(value)
q = Query(search_filter=my_filter)
r = q.run()
print(r)
Let's search only for galaxies that are Milky Way Analogs or Dwarfs ancillary targets.
targets = ['mwa', 'dwarf']
q = Query(targets=targets)
r = q.run()
print(r)
print('There are {0} galaxies from the Milky Way Analogs and Dwarfs ancillary target catalogs'.format(r.totalcount))
print(r.results)
Searching by Quality Flags The quality accepts all labels from the MANGA_DRPQUAL and MANGA_DAPQUAL maskbit schema. Let's find all galaxies that suffered from bad flux calibration. This is the flag BADFLUX (bit 8) from the MANGA_DRPQUAL maskbit schema.
quality = ['BADFLUX']
q = Query(quality=quality)
r = q.run()
print(r)
print('There are {0} galaxies with bad flux calibration'.format(r.totalcount))
print(r.results[0:10])
The quality keyword is equivalent to the search parameters cube.quality for DRP flags or the file.quality for DAP flags. The above query is equivalent to cube.quality & 256. You can also perform a NOT bitmask selection using the ~ symbol. To perform a NOT selection we can only use the cube.quality parameter. Let...
# the above query as a filter condition
q = Query(search_filter='cube.quality & 256')
r = q.run()
print('Objects with bad flux calibration:', r.totalcount)

# objects with bad quality other than bad flux calibration
q = Query(search_filter='cube.quality & ~256')
r = q.run()
print('Bad objects with no bad flux calibrati...
docs/sphinx/tutorials/notebooks/marvin_queries.ipynb
sdss/marvin
bsd-3-clause
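The flag selection above is ordinary bitmask logic. As a minimal pure-Python sketch (just bit math on a made-up value, not a Marvin call), `& 256` and `& ~256` partition a quality value like this:

```python
BADFLUX = 1 << 8  # bit 8 of MANGA_DRPQUAL -> 256

# A hypothetical quality value with BADFLUX plus one other bad bit set
quality = (1 << 8) | (1 << 3)

print(bool(quality & BADFLUX))   # True: BADFLUX is set
print(bool(quality & ~BADFLUX))  # True: some other bad bit is also set
print(quality == 0)              # False: not a clean object
```

An object matches `cube.quality & ~256` whenever any bad bit other than BADFLUX is set, which is why the two selections overlap rather than partition the sample exactly.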
To find exactly those objects with good quality and no bad flags set, use cube.quality == 0.
q = Query(search_filter='cube.quality == 0')
r = q.run()
print(r)
print('Objects with good quality:', r.totalcount)
docs/sphinx/tutorials/notebooks/marvin_queries.ipynb
sdss/marvin
bsd-3-clause
Establish a secure connection with HydroShare by instantiating the hydroshare class that is defined within hs_utils. In addition to connecting with HydroShare, this command also sets and prints environment variables for several parameters that will be useful for saving work back to HydroShare.
notebookdir = os.getcwd()
hs = hydroshare.hydroshare()
homedir = hs.getContentPath(os.environ["HS_RES_ID"])
os.chdir(homedir)
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
If you are curious about where the data is being downloaded, click on the Jupyter Notebook dashboard icon to return to the File System view. The homedir directory location printed above is where you can find the data and contents you will download to a HydroShare JupyterHub server. At the end of this work session, yo...
""" 1/16-degree Gridded cell centroids """ # List of available data hs.getResourceFromHydroShare('ef2d82bf960144b4bfb1bae6242bcc7f') NAmer = hs.content['NAmer_dem_list.shp'] """ Sauk """ # Watershed extent hs.getResourceFromHydroShare('c532e0578e974201a0bc40a37ef2d284') sauk = hs.content['wbdhub12_17110006_WGS84_Basi...
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
Summarize the file availability from each watershed mapping file
%%time
# map the mappingfiles from usecase1
mappingfile1 = ogh.treatgeoself(shapefile=sauk, NAmer=NAmer, buffer_distance=0.06,
                                mappingfile=os.path.join(homedir, 'Sauk_mappingfile.csv'))
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
3. Compare Hydrometeorology This section performs computations and generates plots of the Livneh 2013 and Salathe 2014 mean temperature and mean total monthly precipitation in order to compare them with each other. The generated plots are automatically downloaded and saved as .png files within the "homedir" directory....
help(ogh.getDailyMET_livneh2013)
help(oxl.get_x_dailymet_Livneh2013_raw)
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
NetCDF retrieval and clipping to a spatial extent The function get_x_dailywrf_salathe2014 retrieves and clips NetCDF files archived within the UW Rocinante NNRP repository. This archive contains daily data from January 1970 through December 1979 (10 years). Each netcdf file is comprised of meteorologic and VIC hydrolog...
maptable, nstations = ogh.mappingfileToDF(mappingfile1)
spatialbounds = {'minx': maptable.LONG_.min(), 'maxx': maptable.LONG_.max(),
                 'miny': maptable.LAT.min(), 'maxy': maptable.LAT.max()}
outputfiles = oxl.get_x_dailymet_Livneh2013_raw(homedir=homedir, subd...
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
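The bounding box in the cell above is just a min/max over the station coordinates from the mapping file. A toy version with made-up coordinates shows the resulting dictionary concretely:

```python
import pandas as pd

# Hypothetical station table with the same LONG_/LAT columns
maptable = pd.DataFrame({'LONG_': [-121.9, -121.3, -121.6],
                         'LAT': [48.1, 48.5, 48.3]})

spatialbounds = {'minx': maptable.LONG_.min(), 'maxx': maptable.LONG_.max(),
                 'miny': maptable.LAT.min(), 'maxy': maptable.LAT.max()}
print(spatialbounds)
# {'minx': -121.9, 'maxx': -121.3, 'miny': 48.1, 'maxy': 48.5}
```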
Convert collection of NetCDF files into a collection of ASCII files Provide the home and subdirectory where the ASCII files will be stored, the source_directory of netCDF files, and the mapping file to which the resulting ASCII files will be cataloged. Also, provide the Pandas Datetime code for the frequency of the tim...
%%time
# convert the netCDF files into daily ascii time-series files for each gridded location
outfilelist = oxl.netcdf_to_ascii(homedir=homedir,
                                 subdir='livneh2013/Daily_MET_1970_1970/raw_ascii',
                                 source_directory=os.path.join(homedir, 'livneh2013/Da...
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
Create a dictionary of climate variables for the long-term mean (ltm). INPUT: gridded meteorology ASCII files located from the Sauk-Suiattle Mapping file. The inputs to gridclim_dict() include the folder location and name of the hydrometeorology data, the file start and end, the analysis start and end, and the elevatio...
meta_file['sp_dailymet_livneh_1970_1970']['variable_list']

%%time
ltm = ogh.gridclim_dict(mappingfile=mappingfile1,
                        metadata=meta_file,
                        dataset='sp_dailymet_livneh_1970_1970',
                        variable_list=['Prec', 'Tmax', 'Tmin'])

sorted(ltm.keys())
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
Compute the total monthly and yearly precipitation, as well as the mean values across time and across stations INPUT: daily precipitation for each station from the long-term mean dictionary (ltm) <br/>OUTPUT: Append the computed dataframes and values into the ltm dictionary
# extract metadata
dr = meta_file['sp_dailymet_livneh_1970_1970']

# compute sums and mean monthly and yearly sums
ltm = ogh.aggregate_space_time_sum(df_dict=ltm,
                                   suffix='Prec_sp_dailymet_livneh_1970_1970',
                                   start_date=dr['start_date'], ...
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
Visualize the "average monthly total precipitations" INPUT: dataframe with each month as a row and each station as a column. <br/>OUTPUT: A png file that represents the distribution across stations (in Wateryear order)
# two lowest elevation locations
lowE_ref = ogh.findCentroidCode(mappingfile=mappingfile1, colvar='ELEV', colvalue=164)

# one highest elevation location
highE_ref = ogh.findCentroidCode(mappingfile=mappingfile1, colvar='ELEV', colvalue=2216)

# combine references together
reference_lines = highE_ref + lowE_ref
refer...
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
generate a raster
help(oxl.rasterDimensions)

# generate a raster
raster, row_list, col_list = oxl.rasterDimensions(minx=minx2, miny=miny2, maxx=maxx2, maxy=maxy2, dx=1000, dy=1000)
raster.shape
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
Cross-map the lower-resolution parent grid cells to their higher-resolution children, so that each child raster cell inherits data from its parent gridded cell.
help(oxl.mappingfileToRaster)

%%time
# landlab raster node crossmap to gridded cell id
nodeXmap, raster, m = oxl.mappingfileToRaster(mappingfile=mappingfile1, spatial_resolution=0.06250,
                                             minx=minx2, miny=miny2, maxx=maxx2, maxy=maxy2, dx=1000, dy=1000)

# print the raster d...
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
Visualize the "average monthly total precipitation" 5. Save the results back into HydroShare <a name="creation"></a> Using the hs_utils library, the results of the Geoprocessing steps above can be saved back into HydroShare. First, define all of the required metadata for resource creation, i.e. title, abstract, keywor...
len(files)

# for each file downloaded onto the server folder, move to a new HydroShare Generic Resource
title = 'Computed spatial-temporal summaries of two gridded data product data sets for Sauk-Suiattle'
abstract = 'This resource contains the computed summaries for the Meteorology data from Livneh et al. 2013 and th...
tutorials/Observatory_usecase7_xmapLandlab.ipynb
ChristinaB/Observatory
mit
Getting the data Here you can download the SVHN dataset. Run the cell below and it'll download to your machine.
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm

data_dir = 'data/'

if not isdir(data_dir):
    raise Exception("Data directory doesn't exist!")

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total...
dcgan-svhn/DCGAN_Exercises.ipynb
spencer2211/deep-learning
mit
Generator Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images. What's new here is we'll use convolutional layers to create our new images. The first layer is a full...
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer
        x1 = tf.layers.dense(z, 4*4*512)
        # transpose
        x1 = tf.reshape(x1, (-1, 4, 4, 512))
        x1 = tf.layers.batch_normalization(x1, traini...
dcgan-svhn/DCGAN_Exercises.ipynb
spencer2211/deep-learning
mit
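The spatial doubling described above (4x4 to 8x8 to 16x16 to 32x32 via stride-2, 'same'-padding transposed convolutions) can be sanity-checked with plain arithmetic; this is just the size formula, not TensorFlow code:

```python
# With 'same' padding, a stride-2 transposed convolution doubles the
# spatial size: out = in * stride.
size = 4
for layer in range(3):  # three transposed-conv layers after the dense layer
    size *= 2
print(size)  # 32, matching the 32x32 SVHN images
```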
Discriminator Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll n...
def discriminator(x, reuse=False, alpha=0.2):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 32x32x3
        x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
        relu1 = tf.maximum(alpha * x1, x1)
        # Now 16x16x64
        x2 = tf.layers.conv2d(relu1, 128,...
dcgan-svhn/DCGAN_Exercises.ipynb
spencer2211/deep-learning
mit
And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run...
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
    saver = tf.train.Saver()
    sample_z = np.random.uniform(-1, 1, size=(72, z_size))

    samples, losses = [], []
    steps = 0

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for ...
dcgan-svhn/DCGAN_Exercises.ipynb
spencer2211/deep-learning
mit
Hyperparameters GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them. Exercise: Find hyperparameters to t...
real_size = (32, 32, 3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5

# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)

# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset...
dcgan-svhn/DCGAN_Exercises.ipynb
spencer2211/deep-learning
mit
Morph volumetric source estimate This example demonstrates how to morph an individual subject's :class:mne.VolSourceEstimate to a common reference space. We achieve this using :class:mne.SourceMorph. Data will be morphed based on an affine transformation and a nonlinear registration method known as Symmetric Diffeomorp...
# Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD-3-Clause

import os

import nibabel as nib

import mne
from mne.datasets import sample, fetch_fsaverage
from mne.minimum_norm import apply_inverse, read_inverse_operator
from nilearn.plotting import plot_glass_brain

print(__doc__)
dev/_downloads/23237b92405a4b223d89222e217ffffd/morph_volume_stc.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Setup paths
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')

fname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif')
fname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif')

fname_t1_fsaverage = os.pa...
dev/_downloads/23237b92405a4b223d89222e217ffffd/morph_volume_stc.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute example data. For reference see ex-inverse-volume. Load data:
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)

# Apply inverse operator
stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM")

# To save time
stc.crop(0.09, 0.09)
dev/_downloads/23237b92405a4b223d89222e217ffffd/morph_volume_stc.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Get a SourceMorph object for VolSourceEstimate subject_from can typically be inferred from :class:`src <mne.SourceSpaces>`, and subject_to is set to 'fsaverage' by default. subjects_dir can be None when set in the environment. In that case SourceMorph can be initialized taking src as only argument. See :class:mne....
src_fs = mne.read_source_spaces(fname_src_fsaverage)
morph = mne.compute_source_morph(
    inverse_operator['src'], subject_from='sample', subjects_dir=subjects_dir,
    niter_affine=[10, 10, 5], niter_sdr=[10, 10, 5],  # just for speed
    src_to=src_fs, verbose=True)
dev/_downloads/23237b92405a4b223d89222e217ffffd/morph_volume_stc.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Apply morph to VolSourceEstimate The morph can be applied to the source estimate data, by giving it as the first argument to the :meth:`morph.apply() <mne.SourceMorph.apply>` method. <div class="alert alert-info"><h4>Note</h4><p>Volumetric morphing is much slower than surface morphing because the volume for ea...
stc_fsaverage = morph.apply(stc)
dev/_downloads/23237b92405a4b223d89222e217ffffd/morph_volume_stc.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Convert morphed VolSourceEstimate into NIfTI We can convert our morphed source estimate into a NIfTI volume using :meth:`morph.apply(..., output='nifti1') <mne.SourceMorph.apply>`.
# Create mri-resolution volume of results
img_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1')
dev/_downloads/23237b92405a4b223d89222e217ffffd/morph_volume_stc.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plot results
# Load fsaverage anatomical image
t1_fsaverage = nib.load(fname_t1_fsaverage)

# Plot glass brain (change to plot_anat to display an overlaid anatomical T1)
display = plot_glass_brain(t1_fsaverage, title='subject results to fsaverage',
                           draw_cross=False, ...
dev/_downloads/23237b92405a4b223d89222e217ffffd/morph_volume_stc.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Script settings
hp = houseprint.load_houseprint_from_file('new_houseprint.pkl')
hp.init_tmpo(path_to_tmpo_data=path_to_tmpo_data)
scripts/Demo_WaterGasElekVisualisation.ipynb
EnergyID/opengrid
gpl-2.0
Fill in the chosen type [0-2] below to select what type of data you'd like to plot:
chosen_type = 0  # 0 = water, 1 = gas, 2 = electricity

UtilityTypes = ['water', 'gas', 'electricity']  # {'water','gas','electricity'}
utility = UtilityTypes[chosen_type]  # here 'electricity'

# default values:
FL_units = ['l/day', 'm^3/day ~ 10 kWh/day', 'Ws/day']  # TODO, to be checked!!
Base_Units = ['l/min', 'kW', 'kW']...
scripts/Demo_WaterGasElekVisualisation.ipynb
EnergyID/opengrid
gpl-2.0
Available data is loaded into one big dataframe; the columns are the sensors of the chosen type. Also, the data is rescaled to more manageable units (to be verified!)
# load data
print 'Loading', utility, '-data and converting from', fl_unit, 'to', bUnit, ':'
df = hp.get_data(sensortype=utility)
df = df.diff()  # data is cumulative, we need to take the derivative
df = df[df > 0]  # filter out non-positive values

# conversion dependent on type of utility (to be checked!!)
df = df * bCorr

# plo...
scripts/Demo_WaterGasElekVisualisation.ipynb
EnergyID/opengrid
gpl-2.0
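The cumulative-to-rate conversion above (diff, then drop non-positive values) can be illustrated with a tiny pandas example using toy meter readings, not the actual sensor data:

```python
import pandas as pd

# Toy cumulative meter readings (e.g. litres counted so far)
cum = pd.Series([0.0, 5.0, 9.0, 9.0, 14.0])

rate = cum.diff()      # per-interval consumption
rate = rate[rate > 0]  # drops the initial NaN, zero readings, and meter resets
print(rate.tolist())   # [5.0, 4.0, 5.0]
```

Note that the `> 0` filter also removes genuinely idle intervals (zero consumption), which is why the comment above flags it for verification.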
Tests with the tmpo-based approach
start = pd.Timestamp('20150201')
end = pd.Timestamp('20150301')

dfcum = hp.get_data(sensortype='electricity', head=start, tail=end)
dfcum.shape
dfcum.columns
dfcum.tail()

dfi = dfcum.resample(rule='900s', how='max')
dfi = dfi.interpolate(method='time')
dfi = dfi.diff() * 3600 / 900
dfi.plot()
#dfi.ix['20150701'].plot...
scripts/Demo_WaterGasElekVisualisation.ipynb
EnergyID/opengrid
gpl-2.0
Data Exploration In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you w...
# Display a description of the dataset
display(data.describe())
Customer Segments/customer_segments.ipynb
simmy88/UdacityMLND
mit
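`describe()` reports count, mean, standard deviation, min/max, and quartiles for each numeric column. A toy illustration with made-up numbers (not the customer data):

```python
import pandas as pd

# Hypothetical single-feature dataset
toy = pd.DataFrame({'Milk': [100, 200, 300, 400]})
stats = toy.describe()

print(stats.loc['mean', 'Milk'])  # 250.0
print(stats.loc['50%', 'Milk'])   # 250.0 (the median)
```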