Extract Results
def extract_results(response):
    """
    Accepts a NOAA query response (JSON) and returns the results
    key values as well as the number of records (for use in validation).
    """
    data = response['results']
    # for quality control, to verify retrieval of all rows
    length = len(data)
    return ...
NOAA_sandbox.ipynb
baumanab/noaa_requests
gpl-3.0
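The cell above is truncated at its return statement. Based on the docstring, a plausible completion is the sketch below; the exact return signature (results list plus record count) is an assumption:

def extract_results(response):
    """
    Accepts a NOAA query response (JSON) and returns the results
    key values as well as the number of records (for use in validation).
    """
    data = response['results']
    # record count, for quality control to verify retrieval of all rows
    length = len(data)
    return data, length  # assumed return signature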
CSV Generator
def gen_csv(df, query_dict):
    """
    Arguments: PANDAS DataFrame, a query parameters dictionary
    Returns: A CSV of the df with dropped index and named by dict params
    """
    # extract params
    station = query_dict['stationid']
    start = query_dict['startdate']
    end = query_dict['enddate']
    #...
NOAA_sandbox.ipynb
baumanab/noaa_requests
gpl-3.0
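This cell is also cut off. A hedged sketch of how the body might continue, assuming the naming scheme implied by the docstring (station, start, and end joined into the file name):

def gen_csv(df, query_dict):
    """
    Arguments: PANDAS DataFrame, a query parameters dictionary
    Returns: A CSV of the df with dropped index and named by dict params
    """
    # extract params
    station = query_dict['stationid']
    start = query_dict['startdate']
    end = query_dict['enddate']
    # assumed naming scheme: join the params, write the CSV without the index
    file_name = '{}_{}_{}.csv'.format(station, start, end)
    df.to_csv(file_name, index=False)
    return file_name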
Not surprisingly, solving the example gives a very accurate result:
x = generate_variables('x')[0]

sdp = SdpRelaxation([x])
sdp.get_relaxation(1, objective=x**2)
sdp.solve()
print(sdp.primal, sdp.dual)
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Notice that even in this formulation, there is an implicit constraint on a moment: the top left element of the moment matrix is 1. Given a representing measure, this means that $\int_\mathbf{K} \mathrm{d}\mu=1$. It is actually because of this that a $\lambda$ dual variable appears in the dual formulation: $$\max_{\lambd...
moments = [x-1, 1-x]
sdp = SdpRelaxation([x])
sdp.get_relaxation(1, objective=x**2, momentinequalities=moments)
sdp.solve()
print(sdp.primal, sdp.dual)
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
The dual changed slightly. Let $\gamma_\beta=\int_\mathbf{K} x^\beta\mathrm{d}\mu$ for $\beta=0, 1$. Then the dual reads as $$\max_{\lambda_\beta, \sigma_0} \sum_{\beta=0}^1\lambda_\beta \gamma_\beta$$ such that $$2x^2 - \sum_{\beta=0}^1\lambda_\beta x^\beta = \sigma_0,\quad \sigma_0\in \Sigma[x],\ \mathrm{deg}\,\sigma_0\leq...
coeffs = [-sdp.extract_dual_value(0, range(1))]
coeffs += [sdp.y_mat[2*i+1][0][0] - sdp.y_mat[2*i+2][0][0]
           for i in range(len(moments)//2)]
sigma_i = sdp.get_sos_decomposition()
print(coeffs, [sdp.dual, sigma_i[1]-sigma_i[2]])
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Moment constraints play a crucial role in the joint+marginal approach to the SDP relaxation of polynomial optimization problems, and hence also indirectly in bilevel polynomial optimization problems.
Joint+marginal approach
In a parametric polynomial optimization problem, we can separate two sets of variables, and ...
def J(x):
    return -(1-x**2)*x

def Jk(x, coeffs):
    return sum(ci*x**i for i, ci in enumerate(coeffs))

x = generate_variables('x')[0]
y = generate_variables('y')[0]
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Next, we define the level of the relaxation and the moment constraints:
level = 4
gamma = [integrate(x**i, (x, 0, 1)) for i in range(1, 2*level+1)]
marginals = flatten([[x**i-N(gamma[i-1]), N(gamma[i-1])-x**i]
                     for i in range(1, 2*level+1)])
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Finally, we define the objective function and the constraints that define the semialgebraic sets, and we generate and solve the relaxation.
f = -x*y**2
inequalities = [1.0-x**2-y**2, 1-x, x, 1-y, y]

sdp = SdpRelaxation([x, y], verbose=0)
sdp.get_relaxation(level, objective=f, momentinequalities=marginals,
                   inequalities=inequalities)
sdp.solve()
print(sdp.primal, sdp.dual, sdp.status)

coeffs = [sdp.extract_dual_value(0, range(len(inequali...
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
To check the correctness of the approximation, we plot the optimal and the approximated functions over the domain.
x_domain = [i/100. for i in range(100)]
plt.plot(x_domain, [J(xi) for xi in x_domain], linewidth=2.5)
plt.plot(x_domain, [Jk(xi, coeffs) for xi in x_domain], linewidth=2.5)
plt.show()
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Example 2
The set $\mathbf{X}=[0,1]$ remains the same. Let $\mathbf{K}=\{(x,y): 1-y_1^2-y_2^2\geq 0\}$, and $f(x,y) = xy_1 + (1-x)y_2$. Now the optimal $J(x)$ will be $-\sqrt{x^2+(1-x)^2}$.
def J(x):
    return -sqrt(x**2+(1-x)**2)

x = generate_variables('x')[0]
y = generate_variables('y', 2)

f = x*y[0] + (1-x)*y[1]

gamma = [integrate(x**i, (x, 0, 1)) for i in range(1, 2*level+1)]
marginals = flatten([[x**i-N(gamma[i-1]), N(gamma[i-1])-x**i]
                     for i in range(1, 2*level+1)])
inequalities = [1-y[0]**2-y[1]...
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Example 3
Note that this is Example 4 in the paper. The set $\mathbf{X}=[0,1]$ remains the same, whereas $\mathbf{K}=\{(x,y): xy_1^2+y_2^2-x= 0, y_1^2+xy_2^2-x= 0\}$, and $f(x,y) = (1-2x)(y_1+y_2)$. The optimal $J(x)$ is $-2|1-2x|\sqrt{x/(1+x)}$. We enter the equalities as pairs of inequalities.
def J(x):
    return -2*abs(1-2*x)*sqrt(x/(1+x))

x = generate_variables('x')[0]
y = generate_variables('y', 2)

f = (1-2*x)*(y[0] + y[1])

gamma = [integrate(x**i, (x, 0, 1)) for i in range(1, 2*level+1)]
marginals = flatten([[x**i-N(gamma[i-1]), N(gamma[i-1])-x**i]
                     for i in range(1, 2*level+1)])
inequalities = [x*y[0...
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Bilevel problem of nonconvex lower level
We define the bilevel problem as follows: $$ \min_{x\in\mathbb{R}^n, y\in\mathbb{R}^m}f(x,y)$$ such that $$g_i(x, y) \geq 0, i=1,\ldots,s,\ y\in Y(x)=\mathrm{argmin}_{w\in\mathbb{R}^m}\{G(x,w): h_j(w)\geq 0, j=1,\ldots,r\}.$$ The more interesting case is when the lower level...
x = generate_variables('x')[0]
y = generate_variables('y')[0]

f = x + y
g = [x <= 1.0, x >= -1.0]
G = x*y**2/2.0 - y**3/3.0
h = [y <= 1.0, y >= -1.0]

epsilon = 0.001
M = 1.0
level = 3
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
We define the relaxation of the parametric polynomial optimization problem that returns an approximation of $J(x)$ from the dual:
def lower_level(k, G, h, M):
    gamma = [integrate(x**i, (x, -M, M))/(2*M) for i in range(1, 2*k+1)]
    marginals = flatten([[x**i-N(gamma[i-1]), N(gamma[i-1])-x**i]
                         for i in range(1, 2*k+1)])
    inequalities = h + [x**2 <= M**2]
    lowRelaxation = SdpRelaxation([x, y])
    lowRelaxation.get_relaxation(k, objective...
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Finally, we put it all together:
Jk = lower_level(level, G, h, M)
inequalities = g + h + [G - Jk <= epsilon]

highRelaxation = SdpRelaxation([x, y], verbose=0)
highRelaxation.get_relaxation(level, objective=f, inequalities=inequalities)
highRelaxation.solve()
print("High-level:", highRelaxation.primal, highRelaxation.statu...
Parameteric and Bilevel Polynomial Optimization Problems.ipynb
peterwittek/ipython-notebooks
gpl-3.0
Next, import the relevant modules.
import LMIPy as lmi

import os
import json
import shutil
from pprint import pprint
from datetime import datetime
from tqdm import tqdm
docs/Update_GFW_Layers_Vault.ipynb
Vizzuality/gfw
mit
First, pull the gfw repo and check that the following path correctly finds the data/layers folder, inside which you should find a production and a staging folder.
envs = ['staging', 'production']
path = './backup/configs'

# Create directory and archive previous datasets
with open(path + '/metadata.json') as f:
    date = json.load(f)[0]['updatedAt']

shutil.make_archive(f'./backup/archived/archive_{date}', 'zip', path)

# Check correct folders are found
if not all([folder...
docs/Update_GFW_Layers_Vault.ipynb
Vizzuality/gfw
mit
Run the following to save, build .json files, and log changes.
Update record
%%time
for env in envs:
    # Get all old ids
    old_ids = [file.split('.json')[0] for file in os.listdir(path + f'/{env}')
               if '_metadata' not in file]
    old_datasets = []
    files = os.listdir(path + f'/{env}')

    # Extract all old datasets
    for file in files:
        if '_metadata' not in file:...
docs/Update_GFW_Layers_Vault.ipynb
Vizzuality/gfw
mit
Load the head observations
The first step in time series analysis is to load a time series of head observations. The time series needs to be stored as a pandas.Series object where the index is the date (and time, if desired). pandas provides many options to load time series data, depending on the format of the file tha...
ho = pd.read_csv('../data/head_nb1.csv', parse_dates=['date'],
                 index_col='date', squeeze=True)
print('The data type of the oseries is:', type(ho))
examples/notebooks/02_fix_parameters.ipynb
pastas/pasta
mit
The variable ho is now a pandas Series object. To see the first five lines, type ho.head().
ho.head()
examples/notebooks/02_fix_parameters.ipynb
pastas/pasta
mit
The series can be plotted as follows
ho.plot(style='.', figsize=(12, 4))
plt.ylabel('Head [m]');
plt.xlabel('Time [years]');
examples/notebooks/02_fix_parameters.ipynb
pastas/pasta
mit
Load the stresses
The head variation shown above is believed to be caused by two stresses: rainfall and evaporation. Measured rainfall is stored in the file rain_nb1.csv and measured potential evaporation is stored in the file evap_nb1.csv. The rainfall and potential evaporation are loaded and plotted.
rain = pd.read_csv('../data/rain_nb1.csv', parse_dates=['date'],
                   index_col='date', squeeze=True)
print('The data type of the rain series is:', type(rain))

evap = pd.read_csv('../data/evap_nb1.csv', parse_dates=['date'],
                   index_col='date', squeeze=True)
print('The data type of the evap series is', type(evap))

plt.figur...
examples/notebooks/02_fix_parameters.ipynb
pastas/pasta
mit
Recharge
As a first simple model, the recharge is approximated as the measured rainfall minus the measured potential evaporation.
recharge = rain - evap
plt.figure(figsize=(12, 4))
recharge.plot()
plt.xlabel('Time [years]')
plt.ylabel('Recharge (m/d)');
examples/notebooks/02_fix_parameters.ipynb
pastas/pasta
mit
First time series model
Once the time series are read from the data files, a time series model can be constructed by going through the following three steps:
Create a Model object by passing it the observed head series. Store your model in a variable so that you can use it later on.
Add the stresses that are expected ...
ml = ps.Model(ho)
sm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm1)
ml.solve(tmin='1985', tmax='2010')
examples/notebooks/02_fix_parameters.ipynb
pastas/pasta
mit
The solve function has a number of default options that can be specified with keyword arguments. One of these options is that by default a fit report is printed to the screen. The fit report includes a summary of the fitting procedure, the optimal values obtained by the fitting routine, and some basic statistics. The m...
ml.plot(figsize=(12, 4));

ml = ps.Model(ho)
sm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm1)
ml.solve(tmin='1985', tmax='2010', solver=ps.LeastSquares)

ml = ps.Model(ho)
sm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm1)...
examples/notebooks/02_fix_parameters.ipynb
pastas/pasta
mit
A normal old python function to return the Nth fibonacci number. Iterative implementation of fibonacci; it just iteratively adds a and b to calculate the nth number in the sequence.
>> [software_fibonacci(x) for x in range(10)]
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
def software_fibonacci(n):
    a, b = 0, 1
    for i in range(n):
        a, b = b, a + b
    return a
ipynb-examples/introduction-to-hardware.ipynb
UCSBarchlab/PyRTL
bsd-3-clause
Attempt 1
Let's convert this into some hardware that computes the same thing. Our first go will be to just replace the 0 and 1 with WireVectors to see what happens.
def attempt1_hardware_fibonacci(n, bitwidth):
    a = pyrtl.Const(0)
    b = pyrtl.Const(1)
    for i in range(n):
        a, b = b, a + b
    return a
ipynb-examples/introduction-to-hardware.ipynb
UCSBarchlab/PyRTL
bsd-3-clause
The above looks really nice but does not really represent a hardware implementation of fibonacci. Let's reason through the code, line by line, to figure out what it would actually build.
a = pyrtl.Const(0)
This makes a wirevector of bitwidth=1 that is driven by a zero. Thus a is a wirevector. Seems good.
b = pyrtl....
def attempt2_hardware_fibonacci(n, bitwidth):
    a = pyrtl.Register(bitwidth, 'a')
    b = pyrtl.Register(bitwidth, 'b')

    a.next <<= b
    b.next <<= a + b

    return a
ipynb-examples/introduction-to-hardware.ipynb
UCSBarchlab/PyRTL
bsd-3-clause
This is looking much better. Two registers, a and b, store the values from which we can compute the series. The line a.next <<= b means that the value of a in the next cycle should simply be b from the current cycle. The line b.next <<= a + b says to build an adder, with inputs of a and b from the cur...
def attempt3_hardware_fibonacci(n, bitwidth):
    a = pyrtl.Register(bitwidth, 'a')
    b = pyrtl.Register(bitwidth, 'b')
    i = pyrtl.Register(bitwidth, 'i')

    i.next <<= i + 1
    a.next <<= b
    b.next <<= a + b

    return a, i == n
ipynb-examples/introduction-to-hardware.ipynb
UCSBarchlab/PyRTL
bsd-3-clause
Attempt 4
This is now far enough along that we can simulate the design and see what happens...
def attempt4_hardware_fibonacci(n, req, bitwidth):
    a = pyrtl.Register(bitwidth, 'a')
    b = pyrtl.Register(bitwidth, 'b')
    i = pyrtl.Register(bitwidth, 'i')
    local_n = pyrtl.Register(bitwidth, 'local_n')
    done = pyrtl.WireVector(bitwidth=1, name='done')

    with pyrtl.conditional_assignment:
        with...
ipynb-examples/introduction-to-hardware.ipynb
UCSBarchlab/PyRTL
bsd-3-clause
Define test tables
# Define test data and register it as tables
# This is a classic example of employee and department relational tables
# Test data will be used in the examples later in this notebook
from pyspark.sql import Row

Employee = Row("id", "name", "email", "manager_id", "dep_id")
df_emp = sqlContext.createDataFrame([
    ...
Pyspark_SQL_Magic_Jupyter/IPython_Pyspark_SQL_Magic.ipynb
LucaCanali/Miscellaneous
apache-2.0
Examples of how to use %SQL magic functions with Spark
Use %sql to run SQL and return a DataFrame (lazy evaluation)
# Example of line magic, a shortcut to run SQL in pyspark
# Pyspark has lazy evaluation, so the query is not executed in this example
df = %sql select * from employee
df
Pyspark_SQL_Magic_Jupyter/IPython_Pyspark_SQL_Magic.ipynb
LucaCanali/Miscellaneous
apache-2.0
Use %sql_show to run SQL and show the top lines of the result set
# Example of line magic, the SQL is executed and the result is displayed
# The maximum number of displayed lines is configurable (max_show_lines)
%sql_show select * from employee
Pyspark_SQL_Magic_Jupyter/IPython_Pyspark_SQL_Magic.ipynb
LucaCanali/Miscellaneous
apache-2.0
Example of cell magic to run SQL spanning multiple lines
%%sql_show
select emp.id, emp.name, emp.email, emp.manager_id, dep.dep_name
from employee emp, department dep
where emp.dep_id = dep.dep_id
Pyspark_SQL_Magic_Jupyter/IPython_Pyspark_SQL_Magic.ipynb
LucaCanali/Miscellaneous
apache-2.0
Use %sql_display to run SQL and display the results as an HTML table
Example of cell magic that runs SQL and then transforms the result to Pandas. This will display the output as an HTML table in Jupyter notebooks.
%%sql_display
select emp.id, emp.name, emp.email, emp2.name as manager_name, dep.dep_name
from employee emp
     left outer join employee emp2 on emp2.id = emp.manager_id
     join department dep on emp.dep_id = dep.dep_id
Pyspark_SQL_Magic_Jupyter/IPython_Pyspark_SQL_Magic.ipynb
LucaCanali/Miscellaneous
apache-2.0
Use %sql_explain to display the execution plan
%%sql_explain
select emp.id, emp.name, emp.email, emp2.name as manager_name, dep.dep_name
from employee emp
     left outer join employee emp2 on emp2.id = emp.manager_id
     join department dep on emp.dep_id = dep.dep_id
Pyspark_SQL_Magic_Jupyter/IPython_Pyspark_SQL_Magic.ipynb
LucaCanali/Miscellaneous
apache-2.0
TPOT uses a genetic algorithm (implemented with the DEAP library) to pick an optimal pipeline for a regression task.
What is a pipeline? A pipeline is composed of preprocessors:
* take polynomial transformations of features

TPOTBase is the key class. Its parameters:
population_size: int (default: 100)
    The number of pip...
!sudo pip install deap update_checker tqdm xgboost tpot

import pandas as pd
import numpy as np
import psycopg2
import os
import json
from tpot import TPOTClassifier
from sklearn.metrics import classification_report

conn = psycopg2.connect(
    user = os.environ['REDSHIFT_USER']
    ,password = os.environ['REDSHIFT_...
notebook_gallery/other_experiments/build-models/model-selection-and-tuning/current-solutions/TPOT/TPOT-demo.ipynb
pramitchoudhary/Experiments
unlicense
Sklearn model:
from sklearn.ensemble import RandomForestClassifier

sklearn_model = RandomForestClassifier()
sklearn_model.fit(X_train, y_train)
sklearn_predictions = sklearn_model.predict(X_test)
print(classification_report(y_test, sklearn_predictions))
notebook_gallery/other_experiments/build-models/model-selection-and-tuning/current-solutions/TPOT/TPOT-demo.ipynb
pramitchoudhary/Experiments
unlicense
TPOT Classifier
tpot_model = TPOTClassifier(generations=3, population_size=10, verbosity=2,
                            max_time_mins=10)
tpot_model.fit(X_train, y_train)
tpot_predictions = tpot_model.predict(X_test)
print(classification_report(y_test, tpot_predictions))
notebook_gallery/other_experiments/build-models/model-selection-and-tuning/current-solutions/TPOT/TPOT-demo.ipynb
pramitchoudhary/Experiments
unlicense
Export Pseudo Pipeline Code
tpot_model.export('optimal-saleability-model.py')
!cat optimal-saleability-model.py
notebook_gallery/other_experiments/build-models/model-selection-and-tuning/current-solutions/TPOT/TPOT-demo.ipynb
pramitchoudhary/Experiments
unlicense
Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
import tarfile

dataset_folder_path = 'flower_photos'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile('flower_ph...
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier. Here we're us...
import os

import numpy as np
import tensorflow as tf

from tensorflow_vgg import vgg16
from tensorflow_vgg import utils

data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []

codes = None

with tf.Session() as sess:
    # TODO: Build the vgg network here

    for each in classes:
        print("Starting {} images".format(each))
        class_path = data_dir + each
        ...
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
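For the TODO in the cell above, a minimal sketch of building the network with the tensorflow_vgg helper imported earlier might look like this; vgg16.Vgg16, vgg.build, and the relu6 attribute come from that repository, while the variable names here are assumptions:

# Build the VGG16 graph once, outside the image loop
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
    vgg.build(input_)

# Inside the loop, the codes for a batch of images would then be the
# (ReLUd) values of the first fully connected layer:
# codes_batch = sess.run(vgg.relu6, feed_dict={input_: images})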
Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
# read codes and labels from file
import csv

with open('labels') as f:
    reader = csv.reader(f, delimiter='\n')
    labels = np.array([each for each in reader]).squeeze()
with open('codes') as f:
    codes = np.fromfile(f, dtype=np.float32)
    codes = codes.reshape((len(labels), -1))
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
labels_vecs = # Your one-hot encoded labels array here
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
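A minimal sketch for the exercise above, using scikit-learn's LabelBinarizer (the variable name follows the notebook's placeholder):

from sklearn.preprocessing import LabelBinarizer

# Fit the binarizer on the label strings and turn each label into a
# one-hot encoded row, one per image.
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)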
Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typic...
train_x, train_y =
val_x, val_y =
test_x, test_y =

print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
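One way to fill in the split above is scikit-learn's StratifiedShuffleSplit, which preserves the class balance across splits. This is a sketch, not the notebook's own solution; the 80/10/10 proportions are chosen to match the expected shapes quoted below:

from sklearn.model_selection import StratifiedShuffleSplit

# Hold out 20% of the data, stratified by class, then split that
# holdout half-and-half into validation and test sets.
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half = len(val_idx) // 2
val_idx, test_idx = val_idx[:half], val_idx[half:]

train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]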
If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected...
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])

# TODO: Classifier layers and operations
logits =     # output layer logits
cost =       # cross entropy loss
optimizer =  # training optimizer

# Operations for validation/test accuracy
pre...
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
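A sketch of one way to fill in those TODOs with the TF1 layers API; the single 256-unit hidden layer is an arbitrary choice, not the notebook's solution:

# One hidden fully connected layer on top of the codes, then a linear
# output layer over the flower classes.
fc = tf.layers.dense(inputs_, 256, activation=tf.nn.relu)
logits = tf.layers.dense(fc, labels_vecs.shape[1], activation=None)

# Cross entropy against the one-hot labels (cast to float for the loss).
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.cast(labels_, tf.float32), logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))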
Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
def get_batches(x, y, n_batches=10):
    """ Return a generator that yields batches from arrays x and y. """
    batch_size = len(x)//n_batches

    for ii in range(0, n_batches*batch_size, batch_size):
        # If we're not on the last batch, grab data with size batch_size
        if ii != (n_batches-1)*batch_siz...
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
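The cell above is cut off. Based on the description (the last batch is extended to include the remaining data), a complete version would plausibly read:

def get_batches(x, y, n_batches=10):
    """ Return a generator that yields batches from arrays x and y. """
    batch_size = len(x) // n_batches

    for ii in range(0, n_batches * batch_size, batch_size):
        # If we're not on the last batch, grab data with size batch_size
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        # On the last batch, grab the rest of the data
        else:
            X, Y = x[ii:], y[ii:]
        yield X, Y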
Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to ...
saver = tf.train.Saver()
with tf.Session() as sess:
    # TODO: Your training code here
    saver.save(sess, "checkpoints/flowers.ckpt")
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
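A sketch of a training loop for that TODO, assuming the placeholders and ops defined earlier (inputs_, labels_, cost, optimizer, accuracy); the epoch count is arbitrary:

epochs = 10
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for e in range(epochs):
        for X, Y in get_batches(train_x, train_y):
            feed = {inputs_: X, labels_: Y}
            loss, _ = sess.run([cost, optimizer], feed_dict=feed)

        # Check performance on the validation set once per epoch
        val_acc = sess.run(accuracy, feed_dict={inputs_: val_x, labels_: val_y})
        print("Epoch {}/{}, validation accuracy {:.4f}".format(e + 1, epochs, val_acc))

    saver.save(sess, "checkpoints/flowers.ckpt")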
Testing
Below you see the test accuracy. You can also see the predictions returned for images.
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))

    feed = {inputs_: test_x, labels_: test_y}
    test_acc = sess.run(accuracy, feed_dict=feed)
    print("Test accuracy: {:.4f}".format(test_acc))

%matplotlib inline

import matplotlib.pyplot as plt
from sci...
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)

# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
    print('"vgg" object already exists.  Will not create again.')
else:
    # create vgg
    with tf.Session() as sess:
        ...
transfer-learning/Transfer_Learning.ipynb
efoley/deep-learning
mit
Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset
Trend and anomaly analyses are widely used in atmospheric and oceanographic research for detecting long term change. An example is presented in this notebook of a numerical analysis of Sea Surface Temperature (SST) where the global change rate per decade ha...
% matplotlib inline

from pylab import *
import numpy as np
import datetime

from netCDF4 import netcdftime
from netCDF4 import Dataset as netcdf  # netcdf4-python module
from netcdftime import utime

import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import matplotlib.dates as mdates
from matplot...
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
2. Read SST data and pick variables
2.1 Read SST
ncset = netcdf(r'data/sst.mnmean.nc')
lons = ncset['lon'][:]
lats = ncset['lat'][:]
sst = ncset['sst'][1:421,:,:]  # 1982-2016 to make it divisible by 12
nctime = ncset['time'][1:421]
t_unit = ncset['time'].units

try:
    t_cal = ncset['time'].calendar
except AttributeError:  # Attribute doesn't exist
    ...
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
2.2 Parse time
utime = netcdftime.utime(t_unit, calendar=t_cal)
datevar = utime.num2date(nctime)
print(datevar.shape)

datevar[0:5]
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
2.3 Read mask (1=ocean, 0=land)
lmfile = 'data/lsmask.nc'
lmset = netcdf(lmfile)
lsmask = lmset['mask'][0,:,:]
lsmask = lsmask - 1

num_repeats = nt
lsm = np.stack([lsmask]*num_repeats, axis=-1).transpose((2,0,1))
lsm.shape
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
2.4 Mask out Land
sst = np.ma.masked_array(sst, mask=lsm)
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
3. Trend Analysis
3.1 Linear trend calculation
#import scipy.stats as stats
sst_grd = sst.reshape((nt, ngrd), order='F')
x = np.linspace(1, nt, nt)  # .reshape((nt,1))

sst_rate = np.empty((ngrd, 1))
sst_rate[:,:] = np.nan

for i in range(ngrd):
    y = sst_grd[:,i]
    if not np.ma.is_masked(y):
        z = np.polyfit(x, y, 1)
        sst_rate[i,0]...
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
3.2 Visualize SST trend
m = Basemap(projection='cyl', llcrnrlon=min(lons), llcrnrlat=min(lats),
            urcrnrlon=max(lons), urcrnrlat=max(lats))
x, y = m(*np.meshgrid(lons, lats))

clevs = np.linspace(-0.5, 0.5, 21)
cs = m.contourf(x, y, sst_rate.squeeze(), clevs, cmap=plt.cm.RdBu_r)
m.drawcoastlines()
#m.fillcontinents(color='#000000',lake_...
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
4. Anomaly analysis
4.1 Convert sst data into nyear x 12 x lat x lon
sst_grd_ym = sst.reshape((12, nt//12, ngrd), order='F').transpose((1,0,2))
sst_grd_ym.shape
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
4.2 Calculate seasonal cycle
sst_grd_clm = np.mean(sst_grd_ym, axis=0)
sst_grd_clm.shape
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
4.3 Remove seasonal cycle
sst_grd_anom = (sst_grd_ym - sst_grd_clm).transpose((1,0,2)).reshape((nt, nlat, nlon), order='F')
sst_grd_anom.shape
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
4.4 Calculate area-weights
4.4.1 Check the lat-lon grid direction
print(lats[0:12])
print(lons[0:12])
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
4.4.2 Calculate area-weights with cos(lats)
lonx, latx = np.meshgrid(lons, lats)
weights = np.cos(latx * np.pi / 180.)
print(weights.shape)
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
4.4.3 Calculate total valid grid areas for Global, NH and SH
sst_glb_avg = np.zeros(nt)
sst_nh_avg = np.zeros(nt)
sst_sh_avg = np.zeros(nt)

for it in np.arange(nt):
    sst_glb_avg[it] = np.ma.average(sst_grd_anom[it, :], weights=weights)
    sst_nh_avg[it] = np.ma.average(sst_grd_anom[it, 0:nlat//2, :], weights=weights[0:nlat//2, :])
    sst_sh_avg[it] = np.ma.average(sst_gr...
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
5. Visualize monthly SST anomaly time series
fig, ax = plt.subplots(1, 1, figsize=(15,5))

ax.plot(datevar, sst_glb_avg, color='b', linewidth=2, label='GLB')
ax.plot(datevar, sst_nh_avg, color='r', linewidth=2, label='NH')
ax.plot(datevar, sst_sh_avg, color='g', linewidth=2, label='SH')
ax.axhline(0, linewidth=1, color='k')
ax.legend()
ax.set_title('Monthly S...
ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
defaultdict
The whole point is that it will always return a value even if you query for a key that doesn't exist. The value you set ahead of time is called a factory object. That key also gets turned into a new key/value pair with the factory object.
from collections import defaultdict

d = {'k1': 1}
d["k1"]
d["k2"]  # this will raise an error because the k2 key doesn't exist

d = defaultdict(object)
d['one']  # this doesn't exist, but calling for it will create a new element {'one': object}
d['two']  # same, this will add {'two': object}

for k, v in d.items(...
Advanced Modules/Collections Module.ipynb
spacedrabbit/PythonBootcamp
mit
OrderedDict
A dictionary subclass that remembers the order in which items were added.
d_norm = {}
d_norm['a'] = 1
d_norm['b'] = 2
d_norm['c'] = 3
d_norm['d'] = 4
d_norm['e'] = 5

# order isn't preserved since a dict is just a mapping
for k, v in d_norm.items():
    print(k, v)

from collections import OrderedDict

d_ordered = OrderedDict()
d_ordered['a'] = 1
d_ordered['b'] = 2
d_ordered['c'] = 3
d_ordered...
Advanced Modules/Collections Module.ipynb
spacedrabbit/PythonBootcamp
mit
Apache ORC Reader
!pip install tensorflow-io

import tensorflow as tf
import tensorflow_io as tfio
site/en-snapshot/io/tutorials/orc.ipynb
tensorflow/docs-l10n
apache-2.0
Download a sample dataset file in ORC
The dataset you will use here is the Iris Data Set from UCI. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. It has 4 attributes: (1) sepal length, (2) sepal width, (3) petal length, (4) petal width, and the last column contain...
!curl -OL https://github.com/tensorflow/io/raw/master/tests/test_orc/iris.orc
!ls -l iris.orc
site/en-snapshot/io/tutorials/orc.ipynb
tensorflow/docs-l10n
apache-2.0
Create a dataset from the file
dataset = tfio.IODataset.from_orc("iris.orc", capacity=15).batch(1)
site/en-snapshot/io/tutorials/orc.ipynb
tensorflow/docs-l10n
apache-2.0
Examine the dataset:
for item in dataset.take(1):
    print(item)
site/en-snapshot/io/tutorials/orc.ipynb
tensorflow/docs-l10n
apache-2.0
Let's walk through an end-to-end example of tf.keras model training with the ORC dataset, based on the iris dataset.
Data preprocessing
Configure which columns are features, and which column is the label:
feature_cols = ["sepal_length", "sepal_width", "petal_length", "petal_width"] label_cols = ["species"] # select feature columns feature_dataset = tfio.IODataset.from_orc("iris.orc", columns=feature_cols) # select label columns label_dataset = tfio.IODataset.from_orc("iris.orc", columns=label_cols)
site/en-snapshot/io/tutorials/orc.ipynb
tensorflow/docs-l10n
apache-2.0
A utility function to map the species to float numbers for model training:
vocab_init = tf.lookup.KeyValueTensorInitializer(
    keys=tf.constant(["virginica", "versicolor", "setosa"]),
    values=tf.constant([0, 1, 2], dtype=tf.int64))
vocab_table = tf.lookup.StaticVocabularyTable(
    vocab_init,
    num_oov_buckets=4)

label_dataset = label_dataset.map(vocab_table.lookup)
dataset = tf.data...
site/en-snapshot/io/tutorials/orc.ipynb
tensorflow/docs-l10n
apache-2.0
Build, compile and train the model
Finally, you are ready to build the model and train it! You will build a 3-layer keras model to predict the class of the iris plant from the dataset you just processed.
model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)),
        tf.keras.layers.Dense(10, activation=tf.nn.relu),
        tf.keras.layers.Dense(3),
    ]
)

model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(fro...
site/en-snapshot/io/tutorials/orc.ipynb
tensorflow/docs-l10n
apache-2.0
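The cell above is truncated mid-compile; the remainder presumably finishes the compile call and fits the model on the zipped dataset. A minimal sketch, with the epoch count as an assumption:

# Sparse categorical cross entropy fits the integer labels produced by
# the vocabulary table; from_logits=True because the last Dense layer
# has no softmax activation.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

model.fit(dataset, epochs=5)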
As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
%matplotlib inline

import phoebe
from phoebe import u  # units
import numpy as np
import matplotlib.pyplot as plt

logger = phoebe.logger()

b = phoebe.default_binary()
2.2/tutorials/requiv_crit_detached.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Detached Systems
Detached systems are the default case for default_binary. The requiv_max parameter is constrained to show the maximum value for requiv before the system will begin overflowing at periastron.
b['requiv_max@component@primary']
b['requiv_max@constraint@primary']
2.2/tutorials/requiv_crit_detached.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
We can see that the default system is well within this critical value by printing all radii and critical radii.
print(b.filter(qualifier='requiv*', context='component'))
2.2/tutorials/requiv_crit_detached.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
If we increase 'requiv' past the critical point, we'll receive a warning from the logger and would get an error if attempting to call b.run_compute().
b['requiv@primary'] = 2.2

print(b.run_checks())
2.2/tutorials/requiv_crit_detached.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Here, we load the DataFrame for the full training set and repeat the classification approach identified in step 2.
%%time
raw_input = pd.read_pickle('train_full.pkl')
raw_input.head()
raw_input.info()
text_classification_and_clustering/step_3_classification_of_full_dataset.ipynb
dipanjank/ml
gpl-3.0
Check for Class Imbalance
level_counts = raw_input.level.value_counts().sort_index()
group_counts = raw_input.group.value_counts().sort_index()

_, ax = plt.subplots(1, 2, figsize=(10, 5))
_ = level_counts.plot(kind='bar', title='Feature Instances per Level', ax=ax[0], rot=0)
_ = group_counts.plot(kind='bar', title='Feature Instances per Group...
text_classification_and_clustering/step_3_classification_of_full_dataset.ipynb
dipanjank/ml
gpl-3.0
Level Classification Based on Text
Here we apply the same approach of converting text to bag-of-words features and then using a maximum entropy classifier. The difference is we are now running on the full dataset which is much larger. The optimizer now requires more steps to converge, so we change the max_iters attribu...
import nltk
nltk.download('stopwords')
nltk.download('punkt')

from nltk.corpus import stopwords

en_stopwords = set(stopwords.words('english'))
print(en_stopwords)

from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix

def display_results(y, y_pred):
    """Given some predicat...
text_classification_and_clustering/step_3_classification_of_full_dataset.ipynb
dipanjank/ml
gpl-3.0
Group Classification Based on Text
%%time
groups, groups_predicted = classify(raw_input, target_label='group')
display_results(groups, groups_predicted)
text_classification_and_clustering/step_3_classification_of_full_dataset.ipynb
dipanjank/ml
gpl-3.0
Classification on Test Set
Finally we report the performance of our classifier on both the level and group classification tasks using the test dataset. For this we re-build the model using the hyperparameters used above, and train it using the entire train dataset.
from functools import lru_cache

@lru_cache(maxsize=1)
def get_test_dataset():
    return pd.read_pickle('test.pkl')

def report_test_perf(train_df, target_label='level'):
    """Produce classification report and confusion matrix
    on the test Dataset for a given ``target_label``."""
    test_df = get_test_dataset()
    ...
text_classification_and_clustering/step_3_classification_of_full_dataset.ipynb
dipanjank/ml
gpl-3.0
Level Classification on Test Set
%%time
train_df = raw_input
report_test_perf(train_df, 'level')
text_classification_and_clustering/step_3_classification_of_full_dataset.ipynb
dipanjank/ml
gpl-3.0
Group Classification on Test Set
%%time
report_test_perf(train_df, 'group')
text_classification_and_clustering/step_3_classification_of_full_dataset.ipynb
dipanjank/ml
gpl-3.0
Let's show the symbols data, to see how good the recommender has to be.
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(
    *value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))

# Simulate (with new envs, each time)
n_epochs = 4

for i in range(n_epochs):
    tic = time()
    env.reset(STARTING_DAYS_AHEAD)
    results_list = si...
notebooks/prod/n08_simple_q_learner_fast_learner_full_training.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Let's run the trained agent with the test set
First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising causality).
TEST_DAYS_AHEAD = 20

env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)

tic = time()
results_list = sim.simulate_period(total_data_test_df,
                                   SYMBOL,
                                   agents[0],
                                   learn=False,
                                   ...
notebooks/prod/n08_simple_q_learner_fast_learner_full_training.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)

tic = time()
results_list = sim.simulate_period(total_data_test_df,
                                   SYMBOL,
                                   agents[0],
                                   learn=True,
                                   starting_days_ahead=T...
notebooks/prod/n08_simple_q_learner_fast_learner_full_training.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
What are the metrics for "holding the position"?
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(
    *value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))

import pickle

with open('../../data/simple_q_learner_fast_learner_full_training.pkl', 'wb') as best_agent:
    pickle.dump(agents[0], best_agent)
notebooks/prod/n08_simple_q_learner_fast_learner_full_training.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Note that, unlike the simple method, each scenario (seed) produces a different situation, that is, the number of insured people who retire is different:
print(lista_nap)
notebooks/CalculoEstoqueMetodoProb.ipynb
cpatrickalves/simprev
gpl-3.0
If we compute the mean, we get a value very close to that of the deterministic method.
media = np.mean(lista_nap)
print('Média: {}'.format(media))
notebooks/CalculoEstoqueMetodoProb.ipynb
cpatrickalves/simprev
gpl-3.0
However, with different scenarios we can compute measures of dispersion, such as the standard deviation.
std = np.std(lista_nap)
print('Desvio padrão: {}'.format(std))
notebooks/CalculoEstoqueMetodoProb.ipynb
cpatrickalves/simprev
gpl-3.0
Visualizing in a chart:
import matplotlib.pyplot as plt
%matplotlib inline

medias = [350] * len(seeds)

fig, ax = plt.subplots()
ax.plot(seeds, lista_nap, '--', linewidth=2, label='Método Probabilístico')
ax.plot(seeds, medias, label='Método Determinístico')
ax.set_ylabel('Número de Aposentados')
ax.set_xlabel('Seed')
ax.set_title('Cálculo do...
notebooks/CalculoEstoqueMetodoProb.ipynb
cpatrickalves/simprev
gpl-3.0
Applying the probabilistic method to the stock calculation (where the probabilities are applied), we will have a different projection/result for each seed. On average the result will be the same as the one obtained by the original method, but we will have several curves or points for each year, which allows us to compute measures of dispersion ...
np.var(lista_nap)
notebooks/CalculoEstoqueMetodoProb.ipynb
cpatrickalves/simprev
gpl-3.0
Formula for standard deviation $$\sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \overline{x})^2}$$
distribution = np.random.normal(0.75, size=1000)

np.sqrt(np.sum((np.mean(distribution)-distribution)**2)/len(distribution))
np.std(distribution)

import scipy.stats as stats

stats.kurtosis(distribution)
stats.skew(distribution)

chi_squared_df2 = np.random.chisquare(10, size=10000)
stats.skew(chi_squared_df2)

chi_sq...
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week4/Week+4.ipynb
Z0m6ie/Zombie_Code
mit
Hypothesis Testing
df = pd.read_csv('grades.csv')
df.head()
len(df)

early = df[df['assignment1_submission'] <= '2015-12-31']
late = df[df['assignment1_submission'] > '2015-12-31']

early.mean()
late.mean()

from scipy import stats
stats.ttest_ind?

stats.ttest_ind(early['assignment1_grade'], late['assignment1_grade'])
stats.ttest_in...
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week4/Week+4.ipynb
Z0m6ie/Zombie_Code
mit
Let's point the distributed client to the Dask cluster on Coiled and output the link to the dashboard:
from dask.distributed import Client

client = Client(cluster)
print('Dashboard:', client.dashboard_link)
dask/create-cluster.ipynb
koverholt/notebooks
bsd-3-clause
Python Language Version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
Exercises
# Exercise 1 - Create a list of 3 elements and compute the third power of each element.
list1 = [3, 4, 5]
quadrado = [item**3 for item in list1]
print(quadrado)

# Exercise 2 - Rewrite the code below using the map() function. The final result must be the same!
palavras = 'A Data Science Academy oferce os melhor...
Cap04/Notebooks/DSA-Python-Cap04-Exercicios-Solucao.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Explore event-related dynamics for specific frequency bands
The objective is to show you how to explore spectrally localized effects. For this purpose we adapt the method described in [1]_ and use it on the somato dataset. The idea is to track the band-limited temporal evolution of spatial patterns by using the :term:G...
# Authors: Denis A. Engemann <denis.engemann@gmail.com>
#          Stefan Appelhoff <stefan.appelhoff@mailbox.org>
#
# License: BSD (3-clause)

import os.path as op

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.datasets import somato
from mne.baseline import rescale
from mne.stats import boots...
0.20/_downloads/05c57a644672d33707fd1264df7f5617/plot_time_frequency_global_field_power.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
                    'sub-{}_task-{}_meg.fif'.format(subject, task))

# let's explore some frequency bands
iter_freqs = [
    ('Theta', 4, 7),
    ('Alpha', 8, 12),
    ('Beta', 13, 25),
    ('Ga...
0.20/_downloads/05c57a644672d33707fd1264df7f5617/plot_time_frequency_global_field_power.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We create average power time courses for each frequency band
# set epoching parameters
event_id, tmin, tmax = 1, -1., 3.
baseline = None

# get the header to extract events
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')

frequency_map = list()

for band, fmin, fmax in iter_freqs:
    # (re)load the data to save memory
    raw = mne.io....
0.20/_downloads/05c57a644672d33707fd1264df7f5617/plot_time_frequency_global_field_power.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause