Local time The tz.tzlocal() class is a tzinfo implementation that uses the OS hooks in Python's time module to get the local system time.
# Temporarily changes the TZ file on *nix systems.
from helper_functions import TZEnvContext

print_tzinfo(dt.astimezone(tz.tzlocal()))

with TZEnvContext('UTC'):
    print_tzinfo(dt.astimezone(tz.tzlocal()))

with TZEnvContext('PST8PDT'):
    print_tzinfo((dt + timedelta(days=180)).astimezone(tz.tzlocal()))
timezone_troubles.ipynb
pganssle/pybay-2017-timezones-talk
cc0-1.0
Local time: Windows
tz.win.tzwinlocal() directly queries the Windows registry for its time zone data and uses that to construct a tzinfo. Fixes this bug:
```python
dt = datetime(2014, 2, 11, 17, 0)
print(dt.replace(tzinfo=tz.tzlocal()).tzname())
# Eastern Standard Time
print(dt.replace(tzinfo=tz.win.tzwinlocal()).tzna...
```
NYC = tz.gettz('America/New_York') NYC
timezone_troubles.ipynb
pganssle/pybay-2017-timezones-talk
cc0-1.0
The IANA database contains historical time zone transitions:
print_tzinfo(datetime(2017, 8, 12, 14, tzinfo=NYC))     # Eastern Daylight Time
print_tzinfo(datetime(1944, 1, 6, 12, 15, tzinfo=NYC))  # Eastern War Time
print_tzinfo(datetime(1901, 9, 6, 16, 7, tzinfo=NYC))   # Local solar mean
timezone_troubles.ipynb
pganssle/pybay-2017-timezones-talk
cc0-1.0
tz.gettz() The most general way to get a time zone is to pass the relevant time zone string to the gettz() function, which tries interpreting the string in a number of different ways until one succeeds.
tz.gettz()   # Passing nothing gives you local time

# If your TZSTR is an Olson file, it is prioritized over the /etc/localtime tzfile.
with TZEnvContext('CST6CDT'):
    print(gettz())

# If it doesn't find a tzfile, but it finds a valid abbreviation for the local zone,
# it returns tzlocal()
with TZEnvContext('...
timezone_troubles.ipynb
pganssle/pybay-2017-timezones-talk
cc0-1.0
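For comparison, the standard library's zoneinfo module (Python 3.9+) covers the common case of looking up an IANA key directly. This is a sketch using zoneinfo rather than dateutil, so behavior can differ at the edges (e.g. fold handling, abbreviation-only lookups):

```python
# Standard-library analogue of tz.gettz('America/New_York'); zoneinfo reads
# the same IANA database that dateutil's gettz() consults.
from datetime import datetime
from zoneinfo import ZoneInfo

NYC = ZoneInfo("America/New_York")
dt = datetime(2017, 8, 12, 14, tzinfo=NYC)
print(dt.tzname())  # EDT (daylight saving time in August)
```

Unlike gettz(), ZoneInfo raises if the key is unknown rather than falling back to other interpretations.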
elogs is a collection of KGS TIFF files; lases is KGS .las files
elogs = pd.read_csv('temp/ks_elog_scans.txt', parse_dates=True)
lases = pd.read_csv('temp/ks_las_files.txt', parse_dates=True)

elogs_mask = elogs['KID'].isin(lases['KGS_ID'])  # Create mask for elogs
both_elog = elogs[elogs_mask]                    # select items in elog that fall in both
both_elog_unique = both_elog.drop_duplicates('KID')...
data/kgs/DownloadLogs_v1.ipynb
willsa14/ras2las
mit
Trying again after filtering out any wells that have duplicate logs
elogs_nodup_mask = elogs_nodup['KID'].isin(lases_nodup['KGS_ID'])  # Create mask for elogs
both_elog_nodup = elogs_nodup[elogs_nodup_mask]                    # select items in elog that fall in both
print('How many logs fall in both and have unique KGS_ID? ' + str(both_elog_nodup.shape[0]))

lases_nodup_mask = lases_nodup['KGS_ID'].isin(elo...
data/kgs/DownloadLogs_v1.ipynb
willsa14/ras2las
mit
Select logs from 1980 onward
both_elog_nodup.loc['1980-1-1' : '2017-1-1'].shape
data/kgs/DownloadLogs_v1.ipynb
willsa14/ras2las
mit
We can now use this Environment class to define a grid world like so:
env = Environment([
    [   0,    0,    0,    0,    0, None, None],
    [   0,    0,    5,    0,    0,    0, None],
    [   0,    0, None,    5, None,    0, None],
    [None,    0,    5,    5, None,   10,    0]
])
examples/reinforcement_learning/q_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
The agent Now we'll move on to designing the agent. A few notes here: We have a learning_rate parameter here (it takes a value from 0 to 1). This is useful for stochastic (non-deterministic) environments; it allows you to control how much new information overwrites existing information about the environment. In stocha...
class QLearner():
    def __init__(self, state, environment, rewards, discount=0.5, explore=0.5, learning_rate=1):
        """
        - state: the agent's starting state
        - rewards: a reward function, taking a state as input, or a mapping of states to a reward value
        - discount: how much the agent values...
examples/reinforcement_learning/q_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
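The core update the learner performs can be sketched independently of the class above. This is a minimal sketch of the tabular Q-learning rule; the state encoding, action set, and reward values here are hypothetical, not taken from the notebook:

```python
# Q(s,a) <- Q(s,a) + lr * (r + discount * max_a' Q(s',a') - Q(s,a))
ACTIONS = ['up', 'down', 'left', 'right']

def q_update(Q, state, action, reward, next_state, discount=0.9, learning_rate=1.0):
    current = Q.get((state, action), 0.0)
    # best value achievable from the next state, over all actions
    future = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    Q[(state, action)] = current + learning_rate * (reward + discount * future - current)
    return Q[(state, action)]

Q = {}
# with learning_rate=1 (a deterministic environment), the old value is fully overwritten
q_update(Q, (3, 5), 'right', 10, (3, 6))
print(Q[((3, 5), 'right')])  # 10.0
```

A second update from a neighboring state then inherits the discounted value (0.9 * 10 = 9), which is how reward information propagates backward through the grid.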
With the agent defined, we can try running it in our environment:
import time
import random

# try discount=0.1 and discount=0.9
pos = random.choice(env.starting_positions)
agent = QLearner(pos, env, env.reward, discount=0.9, learning_rate=1)

print('before training...')
agent.explore = 0
for i in range(10):
    game_over = False
    # start at a random position
    pos = random.choi...
examples/reinforcement_learning/q_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Here we're training the agent for 500 episodes, which should be enough for it to thoroughly explore the space. If you don't train an agent enough it may fail to learn optimal behaviors - it hasn't experienced enough yet. The Q lookup table the agent has learned is a bit hard to parse. Let's visualize the policy it's le...
import math
import textwrap
from PIL import Image, ImageDraw, ImageFont

font = ImageFont.load_default()

class Renderer():
    """renders a grid with values (for the gridworld)"""
    def __init__(self, grid, cell_size=60):
        self.grid = grid
        self.cell_size = cell_size
        grid_h = len(grid)
        ...
examples/reinforcement_learning/q_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
A simple example: Gaussian mixture models Let's start off with a simple example. Perhaps we have a data set like the one below, which is made up of not a single blob, but many blobs. It doesn't seem like any of the simple distributions that pomegranate has implemented can fully capture what's going on in the data.
X = numpy.concatenate([numpy.random.normal((7, 2), 1, size=(100, 2)),
                       numpy.random.normal((2, 3), 1, size=(150, 2)),
                       numpy.random.normal((7, 7), 1, size=(100, 2))])

plt.figure(figsize=(8, 6))
plt.scatter(X[:,0], X[:,1])
plt.show()
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
It seems clear to us that this data is composed of three blobs. Accordingly, rather than trying to find some complex single distribution that can describe this data, we can describe it as a mixture of three Gaussian distributions. In the same way that we could initialize a basic distribution using the from_samples meth...
model = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 3, X)
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
Now we can look at the probability densities if we had used a single Gaussian distribution versus using this mixture of three Gaussian models.
x = numpy.arange(-1, 10.1, .1)
y = numpy.arange(-1, 10.1, .1)
xx, yy = numpy.meshgrid(x, y)
x_ = numpy.array(list(zip(xx.flatten(), yy.flatten())))

p1 = MultivariateGaussianDistribution.from_samples(X).probability(x_).reshape(len(x), len(y))
p2 = model.probability(x_).reshape(len(x), len(y))

plt.figure(figsize=(14,...
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
It looks like, unsurprisingly, the mixture model is able to better capture the structure of the data. The single model is so bad that the region of highest density (the darkest blue ellipse) has very few real points in it. In contrast, the darkest regions in the mixture densities correspond to where there are the most ...
mu = numpy.random.normal(7, 1, size=250)
std = numpy.random.lognormal(-2.0, 0.4, size=250)
std[::2] += 0.3
dur = numpy.random.exponential(250, size=250)
dur[::2] -= 140
dur = numpy.abs(dur)

data = numpy.concatenate([numpy.random.normal(mu_, std_, int(t)) for mu_, std_, t in zip(mu, std, dur)])

plt.figure(figsize=(1...
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
We can show the distribution of segment properties to highlight the differences.
plt.figure(figsize=(14, 4))
plt.subplot(131)
plt.title("Segment Means", fontsize=14)
plt.hist(mu[::2], bins=20, alpha=0.7)
plt.hist(mu[1::2], bins=20, alpha=0.7)
plt.subplot(132)
plt.title("Segment STDs", fontsize=14)
plt.hist(std[::2], bins=20, alpha=0.7)
plt.hist(std[1::2], bins=20, alpha=0.7)
plt.subplot(133)
plt....
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
In this situation, it looks like the segment noise is going to be the feature with the most difference between the learned distributions. Let's use pomegranate to learn this mixture, modeling each feature with an appropriate distribution.
X = numpy.array([mu, std, dur]).T.copy()
model = GeneralMixtureModel.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], 2, X)
model
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
It looks like the model is easily able to fit the differences that we observed in the data. The durations of the two components are similar, the means are almost identical to each other, but it's the noise that's the most different between the two. The API Initialization The API for general mixture models is similar to...
model = GeneralMixtureModel([NormalDistribution(4, 1), NormalDistribution(7, 1)])
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
There is no reason these distributions have to be the same type. You can create a non-homogenous mixture by passing in different types of distributions.
model2 = GeneralMixtureModel([NormalDistribution(5, 1), ExponentialDistribution(0.3)])
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
They will produce very different probability distributions, but both work as a mixture.
x = numpy.arange(0, 10.01, 0.05)

plt.figure(figsize=(14, 3))
plt.subplot(121)
plt.title("~Norm(4, 1) + ~Norm(7, 1)", fontsize=14)
plt.ylabel("Probability Density", fontsize=14)
plt.fill_between(x, 0, model.probability(x))
plt.ylim(0, 0.25)
plt.subplot(122)
plt.title("~Norm(5, 1) + ~Exp(0.3)", fontsize=14)
plt.ylabel(...
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
pomegranate offers a lot of flexibility when it comes to making mixtures directly from data. The normal option is just to specify the type of distribution, the number of components, and then pass in the data. Make sure that if you're making 1 dimensional data that you're still passing in a matrix whose second dimension...
X = numpy.random.normal(3, 1, size=(200, 1))
X[::2] += 3
model = GeneralMixtureModel.from_samples(NormalDistribution, 2, X)
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
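The shape requirement amounts to turning a flat sequence of samples into a column matrix. This is a minimal pure-Python sketch of that conversion; with NumPy the same step is `x.reshape(-1, 1)`:

```python
# 1-D samples as a flat list...
x = [2.7, 3.1, 6.0, 3.4]

# ...become an (n, 1) matrix: one row per sample, second dimension of size 1,
# which is the layout from_samples expects even for univariate data
X = [[v] for v in x]
print(len(X), len(X[0]))  # 4 1
```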
We can take a quick look at what it looks like:
plt.figure(figsize=(6, 3))
plt.title("Learned Model", fontsize=14)
plt.ylabel("Probability Density", fontsize=14)
plt.fill_between(x, 0, model.probability(x))
plt.ylim(0, 0.25)
plt.show()
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
However, we can also fit a non-homogenous mixture by passing in a list of univariate distributions to be fit to univariate data. Let's try to fit to data of which some of it is normally distributed and some is exponentially distributed.
X = numpy.concatenate([numpy.random.normal(5, 1, size=(200, 1)),
                       numpy.random.exponential(1, size=(50, 1))])
model = GeneralMixtureModel.from_samples([NormalDistribution, ExponentialDistribution], 2, X)

x = numpy.arange(0, 12.01, 0.01)
plt.figure(figsize=(6, 3))
plt.title("Learned Model", fontsize=14)
plt.ylabel("Pr...
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
It looks like the mixture is capturing the distribution fairly well. Next, if we want to make a mixture model that describes each feature using a different distribution, we can pass in a list of distributions, one for each feature, and it'll use that distribution to model the corresponding feature. For example, in the...
X = numpy.array([mu, std, dur]).T.copy()
model = GeneralMixtureModel.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], 2, X)
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
This is the same command we used in the second example. It creates two components, each of which models the three features respectively with those three distributions. The difference between this and the previous example is that the data here is multivariate and so univariate distributions are assumed to model different...
X = numpy.random.normal(5, 1, size=(500, 1))
X[::2] += 3
x = numpy.arange(0, 10, .01)
model = GeneralMixtureModel.from_samples(NormalDistribution, 2, X)

plt.figure(figsize=(8, 4))
plt.hist(X, bins=50, density=True)  # `normed` was removed in newer matplotlib; `density` is the equivalent
plt.plot(x, model.distributions[0].probability(x), label="Distribution 1")
plt.plot(x, model.distributi...
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
The prediction task is identifying, for each point, whether it falls under the orange distribution or the green distribution. This is done using Bayes' rule, where the probability of each sample under each component is divided by the probability of the sample under all components of the mixture. The posterior probabili...
X = numpy.arange(4, 9, 0.5).reshape(10, 1)
model.predict_proba(X)
tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
jmschrei/pomegranate
mit
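The posterior computation behind `predict_proba` can be worked by hand. This is a minimal sketch with made-up component densities and equal prior weights, not values from the fitted model above:

```python
# Bayes' rule for a mixture:
#   posterior_k = w_k * p_k(x) / sum_j w_j * p_j(x)
def posteriors(densities, weights):
    joint = [w * d for w, d in zip(weights, densities)]
    total = sum(joint)
    return [j / total for j in joint]

# a point twice as likely under the first component as under the second
print(posteriors([0.2, 0.1], [0.5, 0.5]))  # [0.666..., 0.333...]
```

The posteriors always sum to 1, which is why each row of `predict_proba` is a valid probability distribution over components.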
Creating pages and initialise the cellpy batch object If you need to create Journal Pages, please provide appropriate names for the project and the experiment to allow cellpy to build the pages.
# Please fill in here
project = "prebens_experiment"
name = "test"
batch_col = "b01"
dev_utils/batch_notebooks/cellpy_batch_processing_DEV.ipynb
jepegit/cellpy
mit
Initialisation
print(" INITIALISATION OF BATCH ".center(80, "="))
b = batch.init(name, project, batch_col=batch_col, log_level="INFO")
dev_utils/batch_notebooks/cellpy_batch_processing_DEV.ipynb
jepegit/cellpy
mit
Set parameters
# setting some prms
b.experiment.export_raw = False
b.experiment.export_cycles = False
b.experiment.export_ica = False
# b.experiment.all_in_memory = True  # store all data in memory, defaults to False
dev_utils/batch_notebooks/cellpy_batch_processing_DEV.ipynb
jepegit/cellpy
mit
Run
# load info from your db and write the journal pages
b.create_journal()
b.pages
b.link()

# create the appropriate folders
b.paginate()

# load the data (and save .csv-files if you have set export_(raw/cycles/ica) = True)
# (this might take some time)
b.update()

# collect summary-data (e.g. charge capacity vs cycle nu...
dev_utils/batch_notebooks/cellpy_batch_processing_DEV.ipynb
jepegit/cellpy
mit
4. Looking at the data Summaries
prms.Batch.summary_plot_height = 800
prms.Batch.summary_plot_width = 900

# Plot the charge capacity and the C.E. (and resistance) vs. cycle number (standard plot)
b.plot_summaries()

from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource, ColumnarDataSource

data = ColumnDataSource(
    pd.D...
dev_utils/batch_notebooks/cellpy_batch_processing_DEV.ipynb
jepegit/cellpy
mit
Cycles
labels = b.experiment.cell_names
labels
d = b.experiment.data["20160805_test001_47_cc"]
b.experiment.data.experiment.cell_data_frames

%%opts Curve (color=hv.Palette('Magma'))
voltage_curves = dict()
for label in b.experiment.cell_names:
    d = b.experiment.data[label]
    curves = d.get_cap(label_cycle_number=True,...
dev_utils/batch_notebooks/cellpy_batch_processing_DEV.ipynb
jepegit/cellpy
mit
Selecting specific cells and investigating them
# This will show you all your cell names
cell_labels = b.experiment.cell_names
cell_labels

# This is how to select the data (CellpyData-objects)
data1 = b.experiment.data["20160805_test001_45_cc"]
dev_utils/batch_notebooks/cellpy_batch_processing_DEV.ipynb
jepegit/cellpy
mit
To access the 3D ensemble mean final density contrast use:
# 3D ensemble mean for the final density contrast
mean = data['mean']
borg_sdss_density/borg_sdss_density.ipynb
florent-leclercq/borg_sdss_data_release
gpl-3.0
Individual voxels in this 3D volumetric data cube can be accessed as follows:
k, j, i = 10, 127, 243
delta_ijk = mean[k, j, i]
borg_sdss_density/borg_sdss_density.ipynb
florent-leclercq/borg_sdss_data_release
gpl-3.0
where i, j, and k index voxel positions along the x, y, and z axes, respectively. All indices run from 0 to 255. Similarly, to access voxel-wise standard deviations, use:
# 3D voxel-wise standard deviation for the final density contrast
stdv = data['stdv']
borg_sdss_density/borg_sdss_density.ipynb
florent-leclercq/borg_sdss_data_release
gpl-3.0
Individual voxels of this 3D volumetric data cube can be accessed as described above. The ranges describing the extent of the cubic cartesian volume along the x,y and z axes can be accessed as follows:
# Minimum and maximum position along the x-axis in Mpc/h
xmin = data['ranges'][0]
xmax = data['ranges'][1]
# Minimum and maximum position along the y-axis in Mpc/h
ymin = data['ranges'][2]
ymax = data['ranges'][3]
# Minimum and maximum position along the z-axis in Mpc/h
zmin = data['ranges'][4]
zmax = data['ranges'][5]
borg_sdss_density/borg_sdss_density.ipynb
florent-leclercq/borg_sdss_data_release
gpl-3.0
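Given these ranges, a voxel index can be mapped to a physical coordinate. A minimal sketch: the 256-voxel grid size comes from the text above, but the even-spacing assumption and the example range are mine, for illustration only:

```python
# Center coordinate (in Mpc/h) of voxel `index` along one axis,
# assuming the n_voxels voxels evenly span [vmin, vmax]
def voxel_center(index, vmin, vmax, n_voxels=256):
    width = (vmax - vmin) / n_voxels
    return vmin + (index + 0.5) * width

# with a hypothetical x-range of [-350, 350] Mpc/h, voxel 128 sits just past the middle
print(voxel_center(128, -350.0, 350.0))  # 1.3671875
```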
Units are Mpc/h. (Note that all the maps that are part of the BORG SDSS data products have consistent coordinate systems. The coordinate transform to change from Cartesian to spherical coordinates and vice versa is given in appendix B of Jasche et al. 2015). Example plot
from matplotlib import pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline

f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(10, 5))
im1 = ax1.imshow(mean[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], cmap="magma")
ax1.set_title("Density contrast: mean")
divider = ma...
borg_sdss_density/borg_sdss_density.ipynb
florent-leclercq/borg_sdss_data_release
gpl-3.0
getUserSessions tinkering
_rmDF = rmdf1522
userId = '8829514a-cb9f-47fb-aaeb-3167776f1062'
#userId = getRandomRedMetricsGUID(_rmDF)

#def getUserSessions( _rmDF, userId):
result = _rmDF.loc[:,['userId','sessionId']][_rmDF['userId']==userId]['sessionId'].drop_duplicates().dropna(how='any')
result

_sessionIndex = randint(0,len(result)-1)
_guid =...
v1.52.2/Tests/1.1 Game sessions tests.ipynb
CyberCRI/dataanalysis-herocoli-redmetrics
cc0-1.0
getTranslatedForm tinkering - from 0.4 GF correct answers

questionsAnswersTranslationsFR.T
questionsAnswersTranslationsFR.loc["Are you interested in video games?"]
questionsAnswersTranslationsFR.loc["Do you play video games?"]
localizedFormFR = gformFR

returns an English-indexed, English-localized answer dataframe from...
from random import randint

uniqueUsers = rmdf1522['userId'].dropna().unique()
userCount = len(uniqueUsers)
testlocalplayerguid = '0'
while (not isGUIDFormat(testlocalplayerguid)):
    userIndex = randint(0,userCount-1)
    testlocalplayerguid = uniqueUsers[userIndex]
testlocalplayerguid

sessionscount = rmdf1522["sessi...
v1.52.2/Tests/1.1 Game sessions tests.ipynb
CyberCRI/dataanalysis-herocoli-redmetrics
cc0-1.0
print("part100="+str(part100.head(1)))
print("part131="+str(part131.head(1)))
print("part132="+str(part132.head(1)))
print("part133="+str(part133.head(1)))
print("part140="+str(part140.head(1)))
print("part150="+str(part150.head(1)))
print("part151="+str(part151.head(1)))
print("part152="+str(part152.head(1)))
print("d...
testGUID = '"4dbc2f43-421c-4e23-85d4-f17723ff8c66"'
# includewithoutusers=True will count sessions that do not have any userId attached
getSessionsCount( rmdf1522, testGUID)
v1.52.2/Tests/1.1 Game sessions tests.ipynb
CyberCRI/dataanalysis-herocoli-redmetrics
cc0-1.0
print("part100="+str(part100.columns))
print("part131="+str(part131.columns))
print("part132="+str(part132.columns))
print("part133="+str(part133.columns))
print("part140="+str(part140.columns))
print("part150="+str(part150.columns))
print("part151="+str(part151.columns))
print("part152="+str(part152.columns))
print("d...
sessionsList = getUserSessions(rmdf1522, testGUID)
sessionsList

sessionsList = rmdf1522[rmdf1522['type']=='start']
sessionsList = sessionsList.drop('type', 1)
sessionsList = sessionsList.dropna(how='any')
userSessionsList = sessionsList[sessionsList['userId']==testGUID]
userSessionsList

#print(testGUID)
sessionsList ...
v1.52.2/Tests/1.1 Game sessions tests.ipynb
CyberCRI/dataanalysis-herocoli-redmetrics
cc0-1.0
getRandomSessionGUID tinkering
getRandomSessionGUID()
_userId = '"e8fed737-7c65-49c8-bf84-f8ae71c094f8"'
type(rmdf1522['userId'].dropna().unique()), type(getUserSessions( rmdf1522, _userId ))

_userId = 'e8fed737-7c65-49c8-bf84-f8ae71c094f8'
_uniqueSessions = getUserSessions( rmdf1522, _userId )
len(_uniqueSessions)
_uniqueSessions

#_userId = ''
_...
v1.52.2/Tests/1.1 Game sessions tests.ipynb
CyberCRI/dataanalysis-herocoli-redmetrics
cc0-1.0
getFirstEventDate tinkering
userId = testGUID
userId = getRandomRedMetricsGUID()
#print('----------------------uid='+str(uid)+'----------------------')
sessions = getUserSessions(rmdf1522, userId)
firstGameTime = pd.to_datetime('2050-12-31T12:59:59.000Z', utc=True)
for session in sessions:
    #print('-----------------------------------------s...
v1.52.2/Tests/1.1 Game sessions tests.ipynb
CyberCRI/dataanalysis-herocoli-redmetrics
cc0-1.0
This is pretty readable.
def is_prime(p):
    return all(p % divisor != 0 for divisor in range(2, p))

def gen_primes(start=2):
    return filter(is_prime, count(start))

check()
20170720-dojo-primes-revisited.ipynb
james-prior/cohpy
mit
Do it all in one expression. It works, but is ugly and hard to read.
def gen_primes(start=2):
    return filter(
        lambda p: all(p % divisor != 0 for divisor in range(2, p)),
        count(start)
    )

check()
20170720-dojo-primes-revisited.ipynb
james-prior/cohpy
mit
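The `check()` helper above is the notebook's own; the generator itself can be exercised directly with `itertools.islice`. A short usage sketch restating the readable version:

```python
from itertools import count, islice

def is_prime(p):
    # p=2 vacuously passes: range(2, 2) is empty, so all(...) is True
    return all(p % divisor != 0 for divisor in range(2, p))

def gen_primes(start=2):
    # lazy, infinite stream of primes >= start
    return filter(is_prime, count(start))

print(list(islice(gen_primes(), 8)))  # [2, 3, 5, 7, 11, 13, 17, 19]
```

Because `filter` over `count` is lazy, `islice` is what bounds the otherwise infinite iteration.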
With the exception of the random forest, we have so far considered machine learning models as stand-alone entities. Combinations of models that jointly produce a classification are known as ensembles. There are two main methodologies that create ensembles: bagging and boosting. Bagging Bagging refers to bootstrap aggr...
from sklearn.linear_model import Perceptron
p = Perceptron()
p
chapters/machine_learning/notebooks/ensemble.ipynb
unpingco/Python-for-Probability-Statistics-and-Machine-Learning
mit
The training data and the resulting perceptron separating boundary are shown in Figure. The circles and crosses are the sampled training data and the gray separating line is the perceptron's separating boundary between the two categories. The black squares are those elements in the training data that the perceptron mis...
from sklearn.ensemble import BaggingClassifier
bp = BaggingClassifier(Perceptron(), max_samples=0.50, n_estimators=3)
bp
chapters/machine_learning/notebooks/ensemble.ipynb
unpingco/Python-for-Probability-Statistics-and-Machine-Learning
mit
<!-- dom:FIGURE: [fig-machine_learning/ensemble_003.png, width=500 frac=0.85] -->
<div id="fig:ensemble_003"></div>
<p>Each panel with the single gray line is one of the perceptrons used for the ensemble bagging classifier on the lower right.</p>
from sklearn.ensemble import AdaBoostClassifier
clf = AdaBoostClassifier(Perceptron(), n_estimators=3,
                         algorithm='SAMME', learning_rate=0.5)
clf
chapters/machine_learning/notebooks/ensemble.ipynb
unpingco/Python-for-Probability-Statistics-and-Machine-Learning
mit
Load and process review dataset For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
products = graphlab.SFrame('datasets/')
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations: Remove punctuation using Python's built-in string functionality. Compute word counts (only for the important_words)
# feature processing
# ---------------------------------------------------------------
import json

with open('important_words.json', 'r') as f:
    # Reads the list of most frequent words
    important_words = json.load(f)
important_words = [str(s) for s in important_words]

def remove_punctuation(text):
    import strin...
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
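The two transformations named above can be sketched in plain Python. The notebook's own helpers are truncated, so the function bodies here are illustrative, not the assignment's exact code:

```python
import string

def remove_punctuation(text):
    # drop every ASCII punctuation character
    return text.translate(str.maketrans('', '', string.punctuation))

def word_counts(text, important_words):
    # count occurrences of each important word after stripping punctuation
    words = remove_punctuation(text).lower().split()
    return {w: words.count(w) for w in important_words}

print(word_counts("Great product, great price!", ["great", "bad"]))  # {'great': 2, 'bad': 0}
```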
Convert SFrame to NumPy array Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note: The feature matrix includes ...
from algorithms.sframe_get_numpy_data import get_numpy_data
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
Building on logistic regression with no L2 penalty assignment The link function for logistic regression can be defined as: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$ where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the rev...
def predict_probability(feature_matrix, coefficients):
    '''
    Produces a probabilistic estimate for P(y_i = +1 | x_i, w).
    The estimate ranges between 0 and 1.
    '''
    scores = np.dot(feature_matrix, coefficients)
    predictions = 1.0 / (1 + np.exp(-scores))
    return predictions
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
Adding L2 penalty Let us now work on extending logistic regression with L2 regularization. The L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail. Recall that for logistic regression without an L2 penalty, the derivative of the log likeli...
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
    derivative = np.dot(errors, feature)
    if not feature_is_constant:
        derivative = derivative - 2.0 * l2_penalty * coefficient
    return derivative
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
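The effect of the penalty term is easy to check with small numbers. This is a pure-Python restatement of the same rule (illustrative inputs, not the assignment's NumPy version):

```python
def feature_derivative_with_l2(errors, feature, coefficient, l2_penalty, feature_is_constant):
    # dot(errors, feature): the unregularized part of the derivative
    derivative = sum(e * f for e, f in zip(errors, feature))
    if not feature_is_constant:
        # the L2 term pulls non-intercept coefficients toward zero
        derivative -= 2.0 * l2_penalty * coefficient
    return derivative

# without a penalty: the plain dot product, 1*2 + (-0.5)*4 = 0
print(feature_derivative_with_l2([1.0, -0.5], [2.0, 4.0], 3.0, 0.0, False))  # 0.0
# with lambda=1: reduced by 2 * lambda * w_j = 6
print(feature_derivative_with_l2([1.0, -0.5], [2.0, 4.0], 3.0, 1.0, False))  # -6.0
```

Note the intercept (`feature_is_constant=True`) is deliberately left unpenalized, matching the derivation above.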
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood. $$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
    indicator = (sentiment == +1)
    scores = np.dot(feature_matrix, coefficients)
    lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
    return lp
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty.
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
    coefficients = np.array(initial_coefficients)  # make sure it's a numpy array
    for itr in xrange(max_iter):
        predictions = predict_probability(feature_matrix, coefficients)
        # ...
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
sorted_frame = table.sort(['coefficients [L2=0]'], ascending=False)
sorted_table = sorted_frame['word']
positive_words = sorted_table[:5]
print(positive_words)
negative_words = sorted_table[-5:]
print(negative_words)
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
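The selection itself is just a sort over (word, coefficient) pairs. A minimal pure-Python sketch with made-up coefficients, picking the top and bottom two instead of five:

```python
# hypothetical word -> coefficient mapping
coeffs = {'great': 1.2, 'awful': -1.5, 'ok': 0.1, 'love': 0.9, 'bad': -0.7}

# sort words by coefficient, largest first
ranked = sorted(coeffs, key=coeffs.get, reverse=True)
positive_words = ranked[:2]   # most positive coefficients
negative_words = ranked[-2:]  # most negative coefficients
print(positive_words, negative_words)  # ['great', 'love'] ['bad', 'awful']
```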
Run the following cell to generate the plot of the Coefficient path.
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
Measuring accuracy Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by $$ \mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}} $$ Recall from lecture that the class prediction is calculated using $$ \hat{y}_i = \left\{ \begin{array}{ll} +1 & \text{if } \mathbf{w}^T h(\mathbf{x}_i) > 0 \\ -1 & \text{otherwise} \end{array} \right. $$
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
    scores = np.dot(feature_matrix, coefficients)
    apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
    predictions = apply_threshold(scores)
    num_correct = (predictions == sentiment).sum()
    accuracy = num_correct / l...
notebooks/linear-classifier-regularization.ipynb
leon-adams/datascience
mpl-2.0
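The accuracy formula amounts to thresholding scores at zero and comparing against labels. A minimal pure-Python sketch with made-up scores and labels:

```python
def accuracy(scores, labels):
    # threshold the scores at 0 to get class predictions in {+1, -1}
    predictions = [1 if s > 0 else -1 for s in scores]
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# three of the four predictions ([1, -1, 1, -1]) match the labels
print(accuracy([0.3, -1.2, 0.8, -0.1], [1, -1, -1, -1]))  # 0.75
```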
Versions
import os

print("Path at terminal when executing this file")
print(os.getcwd() + "\n")

%matplotlib inline
%load_ext watermark
%watermark -d -v -m -p pandas,scipy,matplotlib
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Docker commands: building a CPU-based docker image. Install Docker:
Linux: `apt-get install docker.io`
OSX: https://download.docker.com/mac/stable/Docker.dmg, or install Homebrew (`ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`), install Cask (`brew install caskroom/cask/brew-cask`), install docker ...
Base images are images that have no parent image, usually images with an OS like ubuntu, busybox or debian. We start by specifying our base image, using the FROM keyword:
FROM ubuntu:16.04
docker pull ubuntu:12.04
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
The next step usually is to write the commands of copying the files and installing the dependencies. Install TensorFlow CPU version from central repo
RUN pip --no-cache-dir install \ http://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.0.0-cp27-none-linux_x86_64.whl
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Pick up TF dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \ build-essential \ curl \ libfreetype6-dev \ libpng12-dev \ libzmq3-dev \ pkg-config \ python \ python-dev \ rsync \ software-properties-common \ unzip \ && ...
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Install PIP
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \ python get-pip.py && \ rm get-pip.py
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Install Python libs
RUN pip --no-cache-dir install \ ipykernel \ jupyter \ matplotlib \ numpy \ scipy \ sklearn \ pandas \ Pillow \ && \ python -m ipykernel.kernelspec
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Set up our notebook config
COPY jupyter_notebook_config.py /root/.jupyter/
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Open ports
# TensorBoard EXPOSE 6006 # IPython EXPOSE 8888
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Jupyter config
# jupyter_notebook_config.py
import os
from IPython.lib import passwd

c.NotebookApp.ip = '*'
c.NotebookApp.port = int(os.getenv('PORT', 8888))
c.NotebookApp.open_browser = False

# sets a password if PASSWORD is set in the environment
if 'PASSWORD' in os.environ:
    c.NotebookApp.password = passwd(os.environ['PASSWORD...
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Docker components Docker consists of the following components: Images Containers Daemon Clients Registries Images Images are read-only templates which provide functionality for running an instance of this image (container). An example for a image is the latest release of Ubuntu. Images are defined as layers, for exampl...
root@shlomo:~# docker search --stars=5 "postgresql-9.3"
Flag --stars has been deprecated, use --filter=stars=3 instead
NAME                     DESCRIPTION                     STARS   OFFICIAL   AUTOMATED
helmi03/docker-postgis   PostGIS 2.1 in PostgreSQL 9.3   24                 [OK]
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Build an image
docker build -t quantscientist/deep-ml-meetups -f Dockerfile.cpu .
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
View available images
root@shlomo:~# docker images
REPOSITORY                       TAG      IMAGE ID       CREATED       VIRTUAL SIZE
quantscientist/deep-ml-meetups   latest   3871333c6375   5 weeks ago   5.146 GB
root@shlomo:~# docker images
REPOSITORY                       TAG ...
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Remove all Docker containers and images - Bash
#!/bin/bash

# Delete all containers
docker rm $(docker ps -a -q)

# Delete all images
docker rmi $(docker images -q)

# Delete all stopped containers
docker rm $(docker ps -q -f status=exited)

# Delete all dangling (unused) images
docker rmi $(docker images -q -f dangling=true)
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
"Login" into a container
docker exec -it <containerIdOrName> bash   # Docker version 1.3 or greater

docker network ls

FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
References

https://prakhar.me/docker-curriculum/

Verbatim bash script
shlomokashani@tf-docker-meetup:~/dev/deep-ml-meetups/nice-docker$ history
  1  mkdir dev
  2  cd dev/
  3  apt-get install docker.io
  4  sudo apt-get install docker.io
  5  git clone git@github.com:QuantScientist/deep-ml-meetups.git
  6  ssh-keygen -t rsa -b 4096 -C "shlomokashani@gmail.com"
  7  cat /ho...
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Data Lab - ON GOOGLE
shlomokashani@cloudshell:~$ gcloud config set project tf-docker
Updated property [core/project].
shlomokashani@tf-docker:~$ datalab create tf-docker-datalab
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Data Lab - LOCAL
git clone https://github.com/GoogleCloudPlatform/datalab.git
cd datalab/containers/datalab
# Replace the MyProjectID value in the next line with your project ID
PROJECT_ID=MyProjectID ./build.sh && ./run.sh
docker/dokcer_tf_meetup.ipynb
QuantScientist/Deep-Learning-Boot-Camp
mit
Download Isochrones We use a log-space age grid for ages less than a billion years, and a linear grid of every 0.5 Gyr thereafter.
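The combined grid described above can be sketched in plain NumPy. The endpoints of the log-spaced early portion below are illustrative assumptions for this sketch; the notebook's own cell builds the real grid:

```python
import numpy as np

# Illustrative sketch of the age grid described above: log-spaced
# log10(age) values below 1 Gyr, then linear 0.5 Gyr steps up to 13 Gyr
# (all ages stored as log10(age/yr)). The early-grid endpoints are assumed.
early_ages = np.linspace(6.6, 9.0, 25)  # log10(age/yr) from ~4 Myr to 1 Gyr
delta_gyr = 0.5
late_ages = np.log10(np.arange(1.5e9, 13e9, delta_gyr * 1e9))
age_grid = np.concatenate([early_ages, late_ages])
```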
from astropy.coordinates import Distance
import astropy.units as u
from padova import AgeGridRequest, IsochroneRequest
from starfisher import LibraryBuilder

z_grid = [0.012, 0.015, 0.019, 0.024, 0.03]

delta_gyr = 0.5
late_ages = np.log10(np.arange(1e9 + delta_gyr, 13e9, delta_gyr * 1e9))
print late_ages
if not os.pat...
notebooks/Brick 23 IR.ipynb
jonathansick/androcmd
mit
Build the Isochrone Library and Synthesize CMD planes
from collections import namedtuple
from starfisher import Lockfile
from starfisher import Synth
from starfisher import ExtinctionDistribution
from starfisher import ExtantCrowdingTable
from starfisher import ColorPlane
from m31hst.phatast import PhatAstTable

if not os.path.exists(os.path.join(STARFISH, synth_dir)):
    ...
notebooks/Brick 23 IR.ipynb
jonathansick/androcmd
mit
Export the dataset for StarFISH
from astropy.table import Table
from m31hst import phat_v2_phot_path

if not os.path.exists(os.path.join(STARFISH, fit_dir)):
    os.makedirs(os.path.join(STARFISH, fit_dir))

data_root = os.path.join(fit_dir, "b23ir.")
full_data_path = os.path.join(STARFISH, '{0}f110f160'.format(data_root))
brick_table = Table.read(ph...
notebooks/Brick 23 IR.ipynb
jonathansick/androcmd
mit
Run StarFISH SFH
from starfisher import SFH, Mask

mask = Mask(colour_planes)
sfh = SFH(data_root, synth, mask, fit_dir)
if not os.path.exists(sfh.full_outfile_path):
    sfh.run_sfh()
sfh_table = sfh.solution_table()
print(sfh_table)

from starfisher.sfhplot import SFHCirclePlot

fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(1...
notebooks/Brick 23 IR.ipynb
jonathansick/androcmd
mit
Encounter view Note that the first time the patient_query object is created, it also starts the Spark environment, which takes some time. The total time for this step plus loading Encounters is ~25 seconds.
patient_query = query_lib.patient_query_factory(
    query_lib.Runner.SPARK, BASE_DIR, CODE_SYSTEM)
flat_enc_df = patient_query.get_patient_encounter_view(BASE_URL)
flat_enc_df.head()
flat_enc_df[flat_enc_df['locationId'].notna()].head()
dwh/test_query_lib.ipynb
GoogleCloudPlatform/openmrs-fhir-analytics
apache-2.0
Adding an encounter location constraint
# Add encounter location constraint
patient_query.encounter_constraints(locationId=['58c57d25-8d39-41ab-8422-108a0c277d98'])
flat_enc_df = patient_query.get_patient_encounter_view(BASE_URL)
flat_enc_df.head()
flat_enc_df[flat_enc_df['encPatientId'] == '8295eb5b-fba6-4e83-a5cb-2817b135cd27']
flat_enc = patient_query._...
dwh/test_query_lib.ipynb
GoogleCloudPlatform/openmrs-fhir-analytics
apache-2.0
Observation view Loading all Observation data needed for the view generation takes ~50 seconds.
_VL_CODE = '856'     # HIV VIRAL LOAD
_ARV_PLAN = '1255'   # ANTIRETROVIRAL PLAN
end_date = '2018-01-01'
start_date = '1998-01-01'
old_start_date = '1978-01-01'

# Creating a new `patient_query` to drop all previous constraints
# and recreate flat views.
patient_query = query_lib.patient_query_factory(
    query_lib.Runner.SPARK...
dwh/test_query_lib.ipynb
GoogleCloudPlatform/openmrs-fhir-analytics
apache-2.0
Inspecting underlying Spark data-frames The user of the library does not need to deal with the underlying distributed query processing system. However, the developer of the library needs an easy way to inspect the internal data of these systems. Here is how:
_DRUG1 = '1256'  # START DRUGS
_DRUG2 = '1260'  # STOP ALL MEDICATIONS

patient_query._obs_df.head().asDict()
exp_obs = patient_query._obs_df.withColumn('coding', F.explode('code.coding'))
exp_obs.head().asDict()
exp_obs.where('coding.code = "159800AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"').head().asDict()
exp_obs.where('cod...
dwh/test_query_lib.ipynb
GoogleCloudPlatform/openmrs-fhir-analytics
apache-2.0
Indicator library development This is an example to show how the indicator_lib.py functions can be incrementally developed based on the query library DataFrames.
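A helper like the `_find_age_band` function being developed here can be sketched in isolation; the band cut-offs below are illustrative placeholders, not the real PEPFAR disaggregation bands:

```python
from datetime import datetime

# Hypothetical sketch of an age-band helper: map a birth date to a
# coarse age band at a given end date. Cut-offs here are illustrative.
def find_age_band(birth_date: str, end_date: datetime) -> str:
    birth = datetime.strptime(birth_date, '%Y-%m-%d')
    age = int((end_date - birth).days / 365.25)
    if age < 25:
        return '0-24'
    if age < 50:
        return '25-49'
    return '50+'

print(find_age_band('1980-06-15', datetime(2018, 1, 1)))  # → 25-49
```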
patient_query._flat_obs.head().asDict()
agg_df[(agg_df['code'] == _VL_CODE)].head()

def _find_age_band(birth_date: str, end_date: datetime) -> str:
    """Given the birth date, finds the age_band for PEPFAR disaggregation."""
    age = None
    try:
        # TODO handle all different formats (issues #174)
        birth = datetime...
dwh/test_query_lib.ipynb
GoogleCloudPlatform/openmrs-fhir-analytics
apache-2.0
Data loading This guide uses the 102 Category Flower Dataset for demonstration purposes. To get started, we first load the dataset:
BATCH_SIZE = 32
AUTOTUNE = tf.data.AUTOTUNE
tfds.disable_progress_bar()
data, dataset_info = tfds.load("oxford_flowers102", with_info=True, as_supervised=True)
train_steps_per_epoch = dataset_info.splits["train"].num_examples // BATCH_SIZE
val_steps_per_epoch = dataset_info.splits["test"].num_examples // BATCH_SIZE
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Next, we resize the images to a constant size, (224, 224), and one-hot encode the labels. Please note that keras_cv.layers.CutMix and keras_cv.layers.MixUp expect targets to be one-hot encoded. This is because they modify the values of the targets in a way that is not possible with a sparse label representation.
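The one-hot requirement can be seen directly: MixUp blends label *vectors*, which has no meaning for sparse integer labels. A NumPy stand-in for `tf.one_hot` keeps this sketch self-contained:

```python
import numpy as np

# Blending two one-hot labels produces a valid soft label; blending two
# sparse integer labels (e.g. 0.7 * 1 + 0.3 * 3) would be meaningless.
num_classes = 5

def one_hot(label, n):
    vec = np.zeros(n, dtype=np.float32)
    vec[label] = 1.0
    return vec

lam = 0.7  # mixing coefficient drawn by MixUp
mixed = lam * one_hot(1, num_classes) + (1 - lam) * one_hot(3, num_classes)
# mixed puts ~0.7 on class 1 and ~0.3 on class 3, summing to 1
```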
IMAGE_SIZE = (224, 224)
num_classes = dataset_info.features["label"].num_classes

def prepare(image, label):
    image = tf.image.resize(image, IMAGE_SIZE)
    image = tf.cast(image, tf.float32)
    label = tf.one_hot(label, num_classes)
    return {"images": image, "labels": label}

def prepare_dataset(dataset, spli...
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Let's inspect some samples from our dataset:
def visualize_dataset(dataset, title):
    plt.figure(figsize=(6, 6)).suptitle(title, fontsize=18)
    for i, samples in enumerate(iter(dataset.take(9))):
        images = samples["images"]
        plt.subplot(3, 3, i + 1)
        plt.imshow(images[0].numpy().astype("uint8"))
        plt.axis("off")
    plt.show()

v...
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Great! Now we can move onto the augmentation step. RandAugment RandAugment has been shown to provide improved image classification results across numerous datasets. It performs a standard set of augmentations on an image. To use RandAugment in KerasCV, you need to provide a few values: value_range describes the range ...
rand_augment = keras_cv.layers.RandAugment(
    value_range=(0, 255),
    augmentations_per_image=3,
    magnitude=0.3,
    magnitude_stddev=0.2,
    rate=0.5,
)

def apply_rand_augment(inputs):
    inputs["images"] = rand_augment(inputs["images"])
    return inputs

train_dataset = load_dataset().map(apply_rand_augm...
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Finally, let's inspect some of the results:
visualize_dataset(train_dataset, title="After RandAugment")
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Try tweaking the magnitude settings to see a wider variety of results. CutMix and MixUp: generate high-quality inter-class examples CutMix and MixUp allow us to produce inter-class examples. CutMix randomly cuts out portions of one image and places them over another, and MixUp interpolates the pixel values between two ...
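A rough sketch of the CutMix patch geometry, under two stated assumptions (a Beta(1, 1) mixing coefficient and a square-ish patch; KerasCV's internals may differ): the cut region's side lengths scale with sqrt(1 − λ), so the pasted area fraction is about 1 − λ:

```python
import numpy as np

# Sketch of the CutMix patch geometry under the stated assumptions.
rng = np.random.default_rng(0)
H = W = 224
lam = rng.beta(1.0, 1.0)           # mixing coefficient; Beta(1, 1) is uniform
cut_h = int(H * np.sqrt(1 - lam))  # patch height
cut_w = int(W * np.sqrt(1 - lam))  # patch width
area_fraction = (cut_h * cut_w) / (H * W)
# area_fraction is approximately 1 - lam, up to integer rounding
```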
cut_mix = keras_cv.layers.CutMix()
mix_up = keras_cv.layers.MixUp()

def cut_mix_and_mix_up(samples):
    samples = cut_mix(samples, training=True)
    samples = mix_up(samples, training=True)
    return samples

train_dataset = load_dataset().map(cut_mix_and_mix_up, num_parallel_calls=AUTOTUNE)

visualize_dataset(tr...
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Great! Looks like we have successfully added CutMix and MixUp to our preprocessing pipeline. Customizing your augmentation pipeline Perhaps you want to exclude an augmentation from RandAugment, or perhaps you want to include the GridMask() as an option alongside the default RandAugment augmentations. KerasCV allows you...
layers = keras_cv.layers.RandAugment.get_standard_policy(
    value_range=(0, 255), magnitude=0.75, magnitude_stddev=0.3
)
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
First, let's filter out RandomRotation layers
layers = [
    layer for layer in layers if not isinstance(layer, keras_cv.layers.RandomRotation)
]
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Next, let's add GridMask to our layers:
layers = layers + [keras_cv.layers.GridMask()]
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Finally, we can put together our pipeline
pipeline = keras_cv.layers.RandomAugmentationPipeline(
    layers=layers, augmentations_per_image=3
)
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Let's check out the results!
def apply_pipeline(inputs):
    inputs["images"] = pipeline(inputs["images"])
    return inputs

train_dataset = load_dataset().map(apply_pipeline, num_parallel_calls=AUTOTUNE)
visualize_dataset(train_dataset, title="After custom pipeline")
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Awesome! As you can see, no images were randomly rotated. You can customize the pipeline however you like:
pipeline = keras_cv.layers.RandomAugmentationPipeline(
    layers=[keras_cv.layers.GridMask(), keras_cv.layers.Grayscale(output_channels=3)],
    augmentations_per_image=1,
)
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
This pipeline will apply either GridMask or Grayscale:
def apply_pipeline(inputs):
    inputs["images"] = pipeline(inputs["images"])
    return inputs

train_dataset = load_dataset().map(apply_pipeline, num_parallel_calls=AUTOTUNE)
visualize_dataset(train_dataset, title="After custom pipeline")
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Looks great! You can use RandomAugmentationPipeline however you want. Training a CNN As a final exercise, let's take some of these layers for a spin. In this section, we will use CutMix, MixUp, and RandAugment to train a state-of-the-art ResNet50 image classifier on the Oxford flowers dataset.
def preprocess_for_model(inputs):
    images, labels = inputs["images"], inputs["labels"]
    images = tf.cast(images, tf.float32)
    return images, labels

train_dataset = (
    load_dataset()
    .map(apply_rand_augment, num_parallel_calls=AUTOTUNE)
    .map(cut_mix_and_mix_up, num_parallel_calls=AUTOTUNE)
)

visu...
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0
Next, we create the model itself. Notice that we use label_smoothing=0.1 in the loss function. When using MixUp, label smoothing is highly recommended.
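Label smoothing with ε = 0.1 replaces a hard one-hot target with y·(1 − ε) + ε/K. A quick NumPy illustration (not KerasCV code; ε and K here are illustrative):

```python
import numpy as np

# Label smoothing: soften a one-hot target so the model is never asked
# to output exact 0/1 probabilities. eps and num_classes are illustrative.
eps = 0.1
num_classes = 4
y = np.array([0.0, 1.0, 0.0, 0.0])
y_smooth = y * (1 - eps) + eps / num_classes
# y_smooth is [0.025, 0.925, 0.025, 0.025] and still sums to 1
```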
input_shape = IMAGE_SIZE + (3,)

def get_model():
    inputs = keras.layers.Input(input_shape)
    x = applications.ResNet50V2(
        input_shape=input_shape, classes=num_classes, weights=None
    )(inputs)
    model = keras.Model(inputs, x)
    model.compile(
        loss=losses.CategoricalCrossentropy(label_smooth...
guides/ipynb/keras_cv/cut_mix_mix_up_and_rand_augment.ipynb
keras-team/keras-io
apache-2.0