Moreover, because these projectors were created using epochs chosen specifically because they contain time-locked artifacts, we can do a joint plot of the projectors and their effect on the time-averaged epochs. This figure has three columns: The left shows the data traces before (black) and after (green) projectio...
# ideally here we would just do `picks_trace='ecg'`, but this dataset did not
# have a dedicated ECG channel recorded, so we just pick a channel that was
# very sensitive to the artifact
fig = mne.viz.plot_projs_joint(ecg_projs, ecg_evoked, picks_trace='MEG 0111')
fig.suptitle('ECG projectors')
dev/_downloads/d194fa1c3e67c7ada95e4807f9e5f3e5/50_artifact_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Finally, note that above we passed reject=None to the mne.preprocessing.compute_proj_ecg function, meaning that all detected ECG epochs would be used when computing the projectors (regardless of signal quality in the data sensors during those epochs). The default behavior is to reject epochs based on signal amplitude:...
eog_evoked = create_eog_epochs(raw).average(picks='all')
eog_evoked.apply_baseline((None, None))
eog_evoked.plot_joint()
And we can do a joint image:
fig = mne.viz.plot_projs_joint(eog_projs, eog_evoked, 'eog')
fig.suptitle('EOG projectors')
And finally, we can make a joint visualization with our EOG evoked. We will also make a bad choice here and select two EOG projectors for EEG and magnetometers, and we will see them show up as noise in the plot. Even though the projected time course (left column) looks perhaps okay, problems show up in the center (topo...
eog_projs_bad, _ = compute_proj_eog(
    raw, n_grad=1, n_mag=2, n_eeg=2, reject=None, no_proj=True)
fig = mne.viz.plot_projs_joint(eog_projs_bad, eog_evoked, picks_trace='eog')
fig.suptitle('Too many EOG projectors')
Raw Counts
# Calculate Maren TIG equations by mating status and exonic region
marenRawCounts = marenEq(dfGt10, Eii='sum_line', Eti='sum_tester',
                         group=['mating_status', 'fusion_id'])
marenRawCounts['mag_cis'] = abs(marenRawCounts['cis_line'])
marenRawCounts.columns
fear_ase_2016/scripts/cis_summary/maren_equations_summary.ipynb
McIntyre-Lab/papers
lgpl-3.0
F10482_SI This fusion has a weaker cis-line effect, but the trans-line effects look more linear.
# Plot F10482_SI
FUSION = 'F10482_SI'
dfFus = marenRawCounts[marenRawCounts['fusion_id'] == FUSION].copy()
dfFus['prop'] = 1 - dfFus['sum_tester'] / (dfFus['sum_line'] + dfFus['sum_tester'])

# Generate 3 panel plot
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
fig.suptitle(FUSION, fontsize=14, fontweight='bold')
for...
1. PCA Let us begin by running PCA; we are 'satisfied' once we find the smallest $n$ such that the summed explained variance ratios satisfy $\sum_i r_i > \alpha$.
from sklearn.decomposition import PCA

alpha = 0.99
n = 0
ratios = [0]
while sum(ratios) < alpha:
    n += 1
    pca = PCA(n_components=n)
    pca.fit(frame_z)
    ratios = pca.explained_variance_ratio_
print("{}% accounted for with n={} PCs".format(alpha, n))
code/[Assignment 10] JM .ipynb
Upward-Spiral-Science/uhhh
apache-2.0
2. Orientation in Brain Space We believe this to be a sample of V1, and as such, we believe it to conform closely to the layering structure of mammalian cortex. To orient ourselves in cortical space, we anticipate finding a pial surface. Using Bock et al. Nature (2011), we can establish our bearings using ndv...
y_sum = [0] * len(vol[0, :, 0])
for i in range(len(vol[0, :, 0])):
    y_sum[i] = sum(sum(vol[:, i, :]))

sns.barplot(x=range(len(y_sum[:40])), y=y_sum[:40])
sns.distplot(y_sum, bins=len(y_sum))
Above, we see a histogram of y_sum that indicates that there is a local minimum at the 12th layer of y-sampling, which colocates with where we anticipate seeing the boundary between layers I and II. Here is the biological substantiation: As we can see, at about 1/3 of the 'depth' into cortex is the boundary to layer I...
sns.barplot(x=range(len(y_sum[:40])), y=y_sum[:40])
Using these local minima as delimiters, we can see boundaries at (12, 18, 24, 30), and then what is likely the boundary of Layer IVa/b at 33. 5. Do these layers stay approximately the same on different subvolumes? Let's examine the layering, reviewed above:
from scipy.signal import argrelextrema

def local_minima(a):
    return argrelextrema(a, np.less)

whole_volume_minima = local_minima(np.array(y_sum))
whole_volume_minima
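scipy's `argrelextrema(a, np.less)` returns the indices that are strictly smaller than both neighbors, with endpoints excluded. For intuition, here is a dependency-free sketch of the same rule (not part of the original notebook):

```python
def local_minima_py(a):
    """Indices i with a[i-1] > a[i] < a[i+1], matching argrelextrema(a, np.less)."""
    return [i for i in range(1, len(a) - 1)
            if a[i - 1] > a[i] < a[i + 1]]

print(local_minima_py([3, 1, 2, 0, 5, 5, 4]))  # [1, 3]
```

Note that plateaus (the repeated 5 above) are not minima under the strict comparison, just as with `np.less`.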
Now let's examine halves of the volume:
y_sum_left = [0] * len(vol[0, :, 5:])
for i in range(len(vol[0, :, 5:])):
    y_sum_left[i] = sum(sum(vol[:, i, 5:]))
left_volume_minima = local_minima(np.array(y_sum_left))

y_sum_right = [0] * len(vol[0, :, :5])
for i in range(len(vol[0, :, :5])):
    y_sum_right[i] = sum(sum(vol[:, i, :5]))
right_volume_minima = local...
Segmenting the picture of a raccoon face in regions This example uses spectral_clustering on a graph created from voxel-to-voxel difference on an image to break this image into multiple partly-homogeneous regions. This procedure (spectral clustering on an image) is an efficient approximate solution for finding nor...
print(__doc__)

# Author: Gael Varoquaux <gael.varoquaux@normalesup.org>, Brian Cheung
# License: BSD 3 clause

import time
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from sklearn.feature_extraction import image
from sklearn.cluster import spectral_clustering
from sklearn.utils.testing import...
programming/python/notebooks/scikit/clustering/plot_face_segmentation.ipynb
DoWhatILove/turtle
mit
Visualize the resulting regions
for assign_labels in ('kmeans', 'discretize'):
    t0 = time.time()
    labels = spectral_clustering(graph, n_clusters=N_REGIONS,
                                 assign_labels=assign_labels, random_state=1)
    t1 = time.time()
    labels = labels.reshape(face.shape)
    plt.figure(figsize=(5, 5))
    plt.imshow(face, cmap=plt.cm.gray)
    for l in ra...
2. Helper methods
def load_results(infile, quartile, scores, metric, granularity):
    next(infile)
    for line in infile:
        queryid, numreplaced, match, score = line.strip().split()
        numreplaced = int(numreplaced)
        if metric not in scores:
            scores[metric] = dict()
        if quartile not in scores[metric]:
            ...
src/Notebooks/MetricComparison.ipynb
prashanti/similarity-experiment
mit
2. Plot decay and noise
scores = dict()
quartile = 50
granularity = 'E'
f, axarr = plt.subplots(3, 3)
i = j = 0
titledict = {'BPSym__Jaccard': 'Jaccard', 'BPSym_AIC_Resnik': 'Resnik',
             'BPSym_AIC_Lin': 'Lin', 'BPSym_AIC_Jiang': 'Jiang',
             '_AIC_simGIC': 'simGIC', 'BPAsym_AIC_HRSS': 'HRSS',
             'Groupwise_Jaccard': 'Groupwise_Jaccard'}...
We follow the state-action pairs formulation approach.
L = n * m  # Number of feasible state-action pairs
s_indices, a_indices = sa_indices(n, m)

# Reward vector
R = np.zeros(L)

# Transition probability array
Q = sp.lil_matrix((L, n))
it = np.nditer((s_indices, a_indices), flags=['c_index'])
for s, k in it:
    i = it.index
    if s == 0:
        Q[i, 0] = 1
    elif s =...
ddp_ex_MF_7_6_6_py.ipynb
QuantEcon/QuantEcon.notebooks
bsd-3-clause
Let us use the backward_induction routine to solve our finite-horizon problem.
vs, sigmas = backward_induction(ddp, T, v_term)

fig, axes = plt.subplots(1, 2, figsize=(12, 4))
ts = [0, 5]
for i, t in enumerate(ts):
    axes[i].bar(np.arange(n), vs[t], align='center', width=1, edgecolor='k')
    axes[i].set_xlim(0-0.5, emax+0.5)
    axes[i].set_ylim(0, 1)
    axes[i].set_xlabel('Stock of Energy')...
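backward_induction here comes from quantecon. The recursion it implements, $v_t(s) = \max_a \, [r(s,a) + v_{t+1}(s'(s,a))]$, can be sketched by hand on a hypothetical deterministic problem (this is an illustration of the recursion, not the quantecon API or the notebook's ddp):

```python
import numpy as np

# Hypothetical problem: states 0..2; action 0 = stay (state 2 pays 2, others 0),
# action 1 = move up one state (cost 1, capped at state 2)
R = np.array([[0.0, -1.0],
              [0.0, -1.0],
              [2.0, -1.0]])
next_state = np.array([[0, 1],
                       [1, 2],
                       [2, 2]])
T = 3                          # decisions at t = 0, 1, 2
v = np.zeros(3)                # terminal value v_T = 0
vs, sigmas = [], []
for t in reversed(range(T)):   # backward induction
    q = R + v[next_state]      # action values q_t(s, a)
    sigmas.insert(0, q.argmax(axis=1))
    v = q.max(axis=1)
    vs.insert(0, v)
print(vs[0], sigmas[0])        # values and optimal policy at t = 0
```

From state 1 at t = 0 it pays to climb (pay 1, then 1 again) to reach the rewarding state, hence the value 3 and action 1 there.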
Here follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,
n = 10
x = np.random.normal(size=n)
print(x)
print(x[1])
doc/pub/How2ReadData/ipynb/How2ReadData.ipynb
CompPhysics/MachineLearning
cc0-1.0
or initializing a matrix, for example as the identity matrix:

import numpy as np
n = 10
# define the n x n identity matrix
A = np.eye(n)
print(A)
Here are other examples where we use the DataFrame functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix of dimensionality $10\times 5$ and compute the mean value and standard deviation of each column. Similarly, we can perform mathematical operations like squarin...
import numpy as np
import pandas as pd
from IPython.display import display

np.random.seed(100)
# setting up a 10 x 5 matrix
rows = 10
cols = 5
a = np.random.randn(rows, cols)
df = pd.DataFrame(a)
display(df)
print(df.mean())
print(df.std())
display(df**2)
print(df - df.mean())
and many other operations. The Series class is another important class included in pandas. You can view it as a specialization of DataFrame, but where we have just a single column of data. It shares many of the same features as DataFrame. As with DataFrame, most operations are vectorized, thereby achieving high perf...
# Importing various packages
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

x = np.random.rand(100, 1)
y = 2*x + 0.01*np.random.randn(100, 1)
linreg = LinearRegression()
linreg.fit(x, y)
#ynew = linreg.predict(x)
#xnew = np.array([[0],[1]])
ypredict = linreg.predict(x)
...
We have now read in the data, grouped them according to the variables we are interested in. We see how easy it is to reorganize the data using pandas. If we were to do these operations in C/C++ or Fortran, we would have had to write various functions/subroutines which perform the above reorganizations for us. Having ...
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
#print(Masses)
xx = A
print(xx)
Code cells are numbered in the sequence in which they are executed. Magics can be used to insert and execute code not written in Python, e.g. HTML:
%%html
<style>
div.text_cell_render h3 {
    color: #c60;
}
</style>
pugmuc2015.ipynb
gertingold/lit2015
mit
Text cells Formatting can be done in Markdown and HTML. Examples: * text in italics or text in italics * text in bold or text in bold * code * <span style="color:white;background-color:#c00">emphasized text</span> Mathematical typesetting LaTeX syntax can be used in text cells to display mathematical symbols like $...
from IPython.external import mathjax
mathjax?
Selected features of the IPython shell Help
import numpy as np
np.tensordot?
Description including code (if available)
np.tensordot??
Code completion with TAB
np.ALLOW_THREADS
Reference to earlier results
2**3
_ - 8
__**2
Access to all earlier input and output
In, Out
Magics in IPython
%lsmagic
Quick reference
%quickref
Timing of code execution
%timeit 2.5**100

import math

%%timeit
result = []
nmax = 100000
dx = 0.001
for n in range(nmax):
    result.append(math.sin(n*dx))

%%timeit
nmax = 100000
dx = 0.001
x = np.arange(nmax)*dx
result = np.sin(x)
Extended representations IPython allows for the representation of objects in formats as different as HTML Markdown SVG PNG JPEG LaTeX
from IPython.display import Image
Image("./images/ipython_logo.png")

from IPython.display import HTML
HTML('<iframe src="http://www.ipython.org" width="700" height="500"></iframe>')
Even the embedding of audio and video files is possible.
from IPython.display import YouTubeVideo
YouTubeVideo('F4rFuIb1Ie4')
Python allows for a textual representation of objects by means of the __repr__ method. example:
class MyObject(object):
    def __init__(self, obj):
        self.obj = obj

    def __repr__(self):
        return ">>> {0!r} / {0!s} <<<".format(self.obj)

x = MyObject('Python')
print(x)
A rich representation of objects is possible in the IPython notebook, provided the corresponding methods are defined: _repr_pretty_, _repr_html_, _repr_markdown_, _repr_latex_, _repr_svg_, _repr_json_, _repr_javascript_, _repr_png_, _repr_jpeg_. Note: In contrast to __repr__, only one underscore is used.
class RGBColor(object):
    def __init__(self, r, g, b):
        self.colordict = {"r": r, "g": g, "b": b}

    def _repr_svg_(self):
        return '''<svg height="50" width="50">
                    <rect width="50" height="50" fill="rgb({r},{g},{b})" />
                  </svg>'''.format(**self.colordict)

c ...
Interaction with widgets
from IPython.html.widgets import interact

@interact(x=(0., 10.), y=(0, 10))
def power(y, x=2):
    print(x**y)
Data types and their associated widgets String (str, unicode) → Text Dictionary (dict) → Dropdown Boolean variable (bool) → Checkbox Float (float) → FloatSlider Integer (int) → IntSlider
@interact(x=(0, 5), text="Python is great!!!")
def f(text, x=0):
    for _ in range(x):
        print(text)

from IPython.html import widgets
import numpy as np
import matplotlib.pyplot as plt
# otherwise matplotlib graphs will be displayed in an external window
%matplotlib inline

@interact(harmonics=widgets...
The last word in our vocabulary shows up 30 times in 25000 reviews. I think it's fair to say this is a tiny proportion. We are probably fine with this number of words. Note: When you run this, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied ...
word2idx = dict(zip(list(vocab), range(len(vocab))))
sentiment-analysis/Sentiment Analysis with TFLearn.ipynb
BeatHubmann/17F-U-DLND
mit
Text to vector function Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the word counts. Here's the general algorithm to do this: Initialize the word vector with np.zeros, it should be the length of the vocabulary. ...
def text_to_vector(text):
    # dtype int (np.int is removed in recent NumPy)
    word_vector = np.zeros(len(vocab), dtype=int)
    for word in text.split(' '):
        if word in word2idx:
            word_vector[word2idx[word]] += 1
    return word_vector
Building the network TFLearn lets you build the network by defining the layers. Input layer For the input layer, you just need to tell it how many units you have. For example, net = tflearn.input_data([None, 100]) would create a network with 100 input units. The first element in the list, None in this case, sets the ...
# Network building
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()

    #### Your code ####
    net = tflearn.input_data([None, len(vocab)])                 # Input
    net = tflearn.fully_connected(net, 100, activation='ReLU')   # Hidd...
Create TensorFlow Session
config = tf.ConfigProto(
    log_device_placement=True,
)
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.4
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
print(config)

sess = tf.Session(config=config)
print(sess)
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/tensorflow/optimize/06a_Train_Model_XLA_GPU.ipynb
fluxcapacitor/source.ml
apache-2.0
Load Model Training and Test/Validation Data
num_samples = 100000

x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)

noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)

pylab.plot(x_train, y_train, '.')

x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise =...
Train Model
%%time
with tf.device("/device:XLA_GPU:0"):
    run_metadata = tf.RunMetadata()
    max_steps = 401
    for step in range(max_steps):
        if (step < max_steps - 1):
            test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op],
                                           feed_dict={x_observed: x_test, y_observed: y_test})
            train...
1. Understanding currents, fields, charges and potentials Cylinder app survey: * Type of survey * A: (+) Current electrode location * B: (-) Current electrode location * M: (+) Potential electrode location * N: (-) Potential electrode location * r: radius of cylinder * xc: x location of cylinder center * zc: z location of cylinde...
cylinder_app()
notebooks/dcip/DC_SurveyDataInversion.ipynb
geoscixyz/gpgLabs
mit
2. Potential differences and Apparent Resistivities Using the widgets contained in this notebook you will develop a better understanding of what values are actually measured in a DC resistivity survey and how these measurements can be processed, plotted, inverted, and interpreted. Computing Apparent Resistivity In practic...
plot_layer_potentials_app()
3. Building Pseudosections 2D profiles are often plotted as pseudo-sections by extending $45^{\circ}$ lines downwards from the A-B and M-N midpoints and plotting the corresponding $\Delta V_{MN}$, $\rho_a$, or misfit value at the intersection of these lines as shown below. For pole-dipole or dipole-pole surveys the $45...
MidpointPseudoSectionWidget()
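The plotting-point construction described above reduces to simple arithmetic: the two $45^{\circ}$ lines dropped from the A-B midpoint and the M-N midpoint intersect halfway between the midpoints, at a pseudo-depth equal to half their separation. A small sketch (the electrode midpoints are made-up numbers, not from the widget):

```python
def pseudosection_point(mid_ab, mid_mn):
    """Intersection of the two 45-degree lines dropped from the midpoints.

    The line from mid_ab satisfies depth = x - mid_ab, the one from
    mid_mn satisfies depth = mid_mn - x; solving gives the point below.
    """
    x = 0.5 * (mid_ab + mid_mn)
    depth = 0.5 * abs(mid_mn - mid_ab)
    return x, depth

# hypothetical spread: A-B midpoint at 10 m, M-N midpoint at 30 m
print(pseudosection_point(10.0, 30.0))  # (20.0, 10.0)
```

Widening the A-B to M-N separation therefore pushes the plotted value deeper, which is why larger n-spacings populate the lower rows of the pseudosection.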
DC pseudo-section app * $\rho_1$: Resistivity of the first layer (thickness of the first layer is 5 m) * $\rho_2$: Resistivity of the cylinder (resistivity of the second layer is 1000 $\Omega$m) * xc: x location of cylinder center * zc: z location of cylinder center * r: radius of cylinder * surveyType: Type of survey
DC2DPseudoWidget()
4. Parametric Inversion In this final widget you are able to forward model the apparent resistivity of a cylinder embedded in a two layered earth. Pseudo-sections of the apparent resistivity can be generated using dipole-dipole, pole-dipole, or dipole-pole arrays to see how survey geometry can distort the size, shape, ...
DC2DfwdWidget()
Searching for GALEX visits Looks like there are 4 visits available in the database
exp_data = gFind(band='NUV', skypos=[ra, dec], exponly=True)
exp_data
explore.ipynb
jradavenport/GALEX_Boyajian
mit
... and they seem to be spaced over about 2 months time. Alas, not the multi-year coverage I'd hoped for to compare with the results from Montet & Simon (2016)
(exp_data['NUV']['t0'] - exp_data['NUV']['t0'][0]) / (60. * 60. * 24. * 365.)
Making light curves Following examples in the repo...
# step_size = 20.  # the time resolution in seconds
target = 'KIC8462852'
# phot_rad = 0.0045  # in deg
# ap_in = 0.0050  # in deg
# ap_out = 0.0060  # in deg

# print(datetime.datetime.now())
# for k in range(len(exp_data['NUV']['t0'])):
#     photon_events = gAperture(band='NUV', skypos=[ra, dec], stepsz=step_size, rad...
Huh... that 3rd panel looks like a nice long visit. Let's take a slightly closer look!
# k = 2
# data = read_lc(target + '_' + str(k) + "_lc.csv")
# t0k = int(data['t_mean'][0])
# plt.figure(figsize=(14,5))
# plt.errorbar(data['t_mean'] - t0k, data['flux_bgsub'], yerr=data['flux_bgsub_err'],
#              marker='.', linestyle='none')
# plt.xlabel('GALEX time (sec - '+str(t0k)+')')
# plt.ylabel('NUV Flux')
Any short timescale variability of note? Let's use a Lomb-Scargle to make a periodogram! (limited to the 10sec windowing I imposed... NOTE: gPhoton could easily go shorter, but S/N looks dicey) Answer: Some interesting structure around 70-80 sec, but nothing super strong Update: David Wilson says that although there ar...
# try cutting on flags=0
flg0 = np.where((data['flags'] == 0))[0]

plt.figure(figsize=(14,5))
plt.errorbar(data['t_mean'][flg0] - t0k, data['flux_bgsub'][flg0]/(1e-15),
             yerr=data['flux_bgsub_err'][flg0]/(1e-15),
             marker='.', linestyle='none')
plt.xlabel('GALEX time (sec - '+str(t0k)+')')
# plt.ylabel('NUV F...
How about the long-term evolution? Answer: looks flat!
t_unix = Time(exp_data['NUV']['t0'] + 315964800, format='unix')
mjd_time_med = t_unix.mjd
t0k = mjd_time_med[0]

plt.figure(figsize=(9,5))
plt.errorbar(mjd_time_med - mjd_time_med[0], med_flux/1e-15, yerr=med_flux_err/1e-15,
             linestyle='none', marker='o')
plt.xlabel('MJD - '+format(mjd_time_med[0], '9.3f')+' (day...
Conclusion...? Based on data from only 4 GALEX visits, spaced over ~70 days, we can't say much about possible evolution of this star with GALEX.
# average time of the gPhoton data
print(np.mean(exp_data['NUV']['t0']))

t_unix = Time(np.mean(exp_data['NUV']['t0']) + 315964800, format='unix')
t_date = t_unix.yday
print(t_date)

mjd_date = t_unix.mjd
print(mjd_date)
The visits are centered in mid 2011 (Quarter 9 and 10, I believe). Note: there was a special GALEX pointing at the Kepler field that overlapped with Quarter 14, approximately 1 year later. This data is not available via gPhoton, but it may still be usable! The gPhoton data shown here occurs right before the "knee" i...
plt.errorbar([10, 14], [16.46, 16.499], yerr=[0.01, 0.006], linestyle='none', marker='o')
plt.xlabel('Quarter (approx)')
plt.ylabel(r'$m_{NUV}$ (mag)')
plt.ylim(16.52, 16.44)
For time comparison, here is an example MJD from scan 15 of the GKM data. (note: I grabbed a random time-like number from here. YMMV, but it's probably OK for comparing to the Kepler FFI results)
gck_time = Time(1029843320.995 + 315964800, format='unix')
gck_time.mjd

# and to push the comparison to absurd places...
# http://astro.uchicago.edu/~bmontet/kic8462852/reduced_lc.txt
df = pd.read_table('reduced_lc.txt', delim_whitespace=True, skiprows=1,
                   names=('time', 'raw_flux', 'norm_flux', 'mo...
Thinking about Dust - v1 Use basic values for the relative extinction in each band at a "standard" R_v = 3.1 Imagine if the long-term Kepler fading was due to dust. What is the extinction we'd expect in the NUV? (A: Much greater)
# considering extinction...
# w/ thanks to the Padova Isochrone page for easy shortcut to getting these extinction values:
# http://stev.oapd.inaf.it/cgi-bin/cmd
A_NUV = 2.27499  # actually A_NUV / A_V, in magnitudes, for R_V = 3.1
A_Kep = 0.85946  # actually A_Kep / A_V, in magnitudes, for R_V = 3.1
A_W1 = 0.07134   # a...
Same plot as above, but with WISE W1 band, and considering a different time window unfortunately
plt.errorbar([wave_Kep, wave_W1], [1-frac_kep_w, wise_flux[1]],
             yerr=[0, np.sqrt(np.sum(wise_err**2))],
             label='Observed', marker='o', c='purple')
plt.plot([wave_Kep, wave_W1], [1-frac_kep_w, frac_w1], '--o',
         label='Extinction Model', c='green')
plt.legend(fontsize=10, loc='lower right')
plt.xlabel(r'$Wavel...
Combining the fading and dust model for both the NUV and W1 data. In the IR we can't say much... so maybe toss it out since it doesn't constrain dust model one way or another
plt.errorbar([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]],
             yerr=[0, np.sqrt(np.sum(gerr**2))],
             label='Observed1', marker='o')
plt.plot([wave_Kep, wave_NUV], [1-frac_kep, frac_nuv], '--o', label='Extinction Model1')
plt.errorbar([wave_Kep, wave_W1], [1-frac_kep_w, wise_flux[1]],
             yerr=[0, np.sqrt(np.sum(w...
Dust 2.0: Let's fit some models to the Optical-NUV data! Start with a simple dust model and try to fit for R_V. The procedure will be: for each R_V I want to test, * figure out what A_Kep / A_V is for this R_V * solve for A_NUV(R_V) given the measured A_Kep
# the "STANDARD MODEL" for extinction
A_V = 0.0265407
R_V = 3.1
ext_out = extinction.ccm89(np.array([wave_Kep, wave_NUV]), A_V, R_V)
# (ext_out[1] - ext_out[0]) / ext_out[1]

print(10**(ext_out[0]/(-2.5)), (1-frac_kep))  # these need to match (within < 1%)
print(10**(ext_out[1]/(-2.5)), gflux[1])     # and then these won't...
CCM89 gives us R_V = 5.02097489191 +/- 0.938304455977, which satisfies both the Kepler and NUV fading we see. Such a high value of R_V ~ 5 is not unheard of, particularly in protostars; however, Boyajian's Star does not show any other indications of being such a source. NOTE: If we re-run using extinction.fitzpatrick99 instead of...
plt.errorbar([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]],
             yerr=[0, np.sqrt(np.sum(gerr**2))],
             label='Observed', marker='o', linestyle='none', zorder=0, markersize=10)
plt.plot([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]],
         label=r'$R_V$=5.0 Model', c='r', lw=3, alpha=0.7, zorder=1)
plt.plot([wave_Kep, wav...
Another simple model: changes in blackbody temperature. We have 2 bands; that's enough to constrain how the temperature should have changed for a blackbody. So the procedure is: * assume the stellar radius doesn't change, only the temperature (a good approximation) * given the drop in optical luminosity, what change in temp is nee...
# do the simple thing first: a grid of temperatures starting at the T_eff of the star (SpT = F3, T_eff = 6750)
temp0 = 6750 * u.K
wavelengths = [wave_Kep, wave_NUV] * u.AA
wavegrid = np.arange(wave_NUV, wave_Kep) * u.AA

flux_lam0 = blackbody_lambda(wavelengths, temp0)
flux_lamgrid = blackbody_lambda(wavegrid, temp0)

plt.plot...
Notice how we don't have the topics of the articles! Let's use NMF to attempt to figure out clusters of the articles. Preprocessing
from sklearn.feature_extraction.text import TfidfVectorizer
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/01-Non-Negative-Matrix-Factorization.ipynb
rishuatgithub/MLPy
apache-2.0
max_df: float in range [0.0, 1.0] or int, default=1.0 When building the vocabulary, ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vo...
tfidf = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english')
dtm = tfidf.fit_transform(npr['Article'])
dtm
NMF
from sklearn.decomposition import NMF

nmf_model = NMF(n_components=7, random_state=42)
# This can take a while, we're dealing with a large amount of documents!
nmf_model.fit(dtm)
Displaying Topics
len(tfidf.get_feature_names())

import random
for i in range(10):
    random_word_id = random.randint(0, 54776)
    print(tfidf.get_feature_names()[random_word_id])

for i in range(10):
    random_word_id = random.randint(0, 54776)
    print(tfidf.get_feature_names()[random_word_id])

len(nmf_model.components_)
nmf_mod...
These look like business articles, perhaps... Let's confirm by using .transform() on our vectorized articles to attach a topic label number. But first, let's view all 7 topics found.
for index, topic in enumerate(nmf_model.components_):
    print(f'THE TOP 15 WORDS FOR TOPIC #{index}')
    print([tfidf.get_feature_names()[i] for i in topic.argsort()[-15:]])
    print('\n')
Attaching Discovered Topic Labels to Original Articles
dtm
dtm.shape
len(npr)

topic_results = nmf_model.transform(dtm)
topic_results.shape
topic_results[0]
topic_results[0].round(2)
topic_results[0].argmax()
The sequence of procedures below is based on the one explained in the Mapping Reddit demo notebook. First we import the raw data and make a sparse matrix out of it.
print mldb.put('/v1/procedures/import_rcp', {
    "type": "import.text",
    "params": {
        "headers": ["user_id", "recipe_id"],
        "dataFileUrl": "file://mldb/mldb_test_data/favorites.csv.gz",
        "outputDataset": "rcp_raw",
        "runOnCreation": True
    }
})

print mldb.post('/v1/procedures', {
    ...
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
We then train an SVD decomposition and do K-Means clustering
print mldb.post('/v1/procedures', {
    "id": "rcp_svd",
    "type": "svd.train",
    "params": {
        "trainingData": "select * from recipes",
        "columnOutputDataset": "rcp_svd_embedding_raw",
        "runOnCreation": True
    }
})

num_centroids = 16
print mldb.post('/v1/procedures', {
    "id": "rcp_km...
Now we import the actual recipe names, clean them up a bit, and get a version of our SVD embedding with the recipe names as column names.
print mldb.put('/v1/procedures/import_rcp_names_raw', {
    'type': 'import.text',
    'params': {
        'dataFileUrl': 'file://mldb/mldb_test_data/recipes.csv.gz',
        'outputDataset': "rcp_names_raw",
        'delimiter': '',
        'quoteChar': '',
        'runOnCreation': True
    }
})

print mldb.put('/v1/pro...
With all that pre-processing done, let's look at the names of the 3 closest recipes to each cluster centroid to try to get a sense of what kind of clusters we got.
mldb.put("/v1/functions/nearestRecipe", {
    "type": "embedding.neighbors",
    "params": {
        "dataset": "rcp_svd_embedding",
        "defaultNumNeighbors": 3
    }
})

mldb.query("""
    select nearestRecipe({coords: {*}})[neighbors] as *
    from rcp_kmeans_centroids
""").applymap(lambda x: x.split('-')[1])
We can see a bit of pattern just from the names of the recipes nearest to the centroids, but we can probably do better! Let's try to extract the most characteristic words used in the recipe names for each cluster. Topic Extraction with TF-IDF We'll start by preprocessing the recipe names a bit: taking out a few punctua...
print mldb.put('/v1/procedures/sum_words_per_cluster', {
    'type': 'transform',
    'params': {
        'inputData': """
            select sum({tokens.* as *}) as * named c.cluster
            from (
                SELECT lower(n.name),
                       tokenize('recipe ' + lower(n.name), {splitChar...
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
We can use this to create a TF-IDF score for each word in the cluster. Basically, this score will give us an idea of the relative importance of each word in a given cluster.
print mldb.put('/v1/procedures/train_tfidf', { 'type': 'tfidf.train', 'params': { 'trainingData': "select * from rcp_cluster_word_counts", 'modelFileUrl': 'file:///mldb_data/models/rcp_tfidf.idf', 'runOnCreation': True } }) print mldb.put('/v1/functions/rcp_tfidf', { 'type...
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
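As a plain-Python illustration of what the `tfidf.train` procedure computes, here is a minimal TF-IDF score by hand. The word counts below are hypothetical stand-ins for `rcp_cluster_word_counts`:

```python
import math

# Toy per-cluster word counts (hypothetical stand-ins for
# rcp_cluster_word_counts): one dict per cluster.
clusters = [
    {"chicken": 5, "salad": 1, "easy": 3},
    {"chocolate": 4, "cake": 6, "easy": 2},
    {"chicken": 2, "soup": 5, "easy": 4},
]

n_docs = len(clusters)
# Document frequency: in how many clusters each word appears.
df = {}
for counts in clusters:
    for word in counts:
        df[word] = df.get(word, 0) + 1

def tfidf(word, counts):
    # Term frequency scaled by how rare the word is across clusters.
    tf = counts[word] / sum(counts.values())
    idf = math.log(n_docs / df[word])
    return tf * idf

scores = {w: tfidf(w, clusters[1]) for w in clusters[1]}
print(scores["easy"])                    # 0.0 -- appears in every cluster
print(scores["cake"] > scores["easy"])   # True -- unique to this cluster
```

A word like "easy" that appears in every cluster gets a score of zero, while a word unique to one cluster scores highly, which is exactly why TF-IDF surfaces each cluster's characteristic words.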
If we transpose that dataset, we will be able to get the highest scored words for each cluster, and we can display them nicely in a word cloud.
import json from ipywidgets import interact from IPython.display import IFrame, display html = """ <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.6/d3.min.js"></script> <script src="https://static.mldb.ai/d3.layout.cloud.js"></script> <script src="https://static.mldb.ai/wordcloud.js"></script> <body> <scri...
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
Load the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() y_train = tf.keras.utils.to_categorical(y_train, 10) y_test = tf.keras.utils.to_categorical(y_test, 10) print(f"Total training examples: {len(x_train)}") print(f"Total test examples: {len(x_test)}")
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Define constants
BATCH_SIZE = 128 * strategy.num_replicas_in_sync EPOCHS = 90 START_LR = 0.1 AUTO = tf.data.AUTOTUNE
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Prepare data loaders
# Augmentation pipeline data_augmentation = tf.keras.Sequential( [ layers.experimental.preprocessing.Normalization(), layers.experimental.preprocessing.RandomCrop(32, 32), layers.experimental.preprocessing.RandomFlip("horizontal"), layers.experimental.preprocessing.RandomRotation(fac...
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Model utilities
def get_model(n_classes=10): n = 2 depth = n * 9 + 2 n_blocks = ((depth - 2) // 9) - 1 # The input tensor inputs = layers.Input(shape=(32, 32, 3)) x = data_augmentation(inputs) # Normalize and augment # The Stem Convolution Group x = resnet20.stem(x) # The learner x = resnet20...
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Model training
# LR Scheduler reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="loss", patience=3) # Optimizer and loss function optimizer = tf.keras.optimizers.SGD(learning_rate=START_LR, momentum=0.9) loss_fn = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
One can obtain the initial weights that I used from here.
with strategy.scope(): rn_model = tf.keras.models.load_model("initial_model_resnet20") rn_model.compile(loss=loss_fn, optimizer=optimizer, metrics=["accuracy"]) history = rn_model.fit(train_ds, validation_data=test_ds, epochs=EPOCHS, callbacks=[reduce_lr]) plt.plot(history.history["loss"], label="...
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Intermediate Pandas Pandas is a powerful Python library for working with data. For this lab, you should already know what a DataFrame and Series are and how to do some simple analysis of the data contained in those structures. In this lab we'll look at some more advanced capabilities of Pandas, such as filtering, group...
import pandas as pd airport_df = pd.DataFrame.from_records(( ('Atlanta', 498044, 2), ('Austin', 964254, 2), ('Kansas City', 491918, 8), ('New York City', 8398748, 3), ('Portland', 653115, 1), ('San Francisco', 883305, 3), ('Seattle', 744955, 2), ), columns=("City Name", "Population", "Airports")) airpo...
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
If you aren't familiar with the from_records() method, it is a way to create a DataFrame from data formatted in a tabular manner. In this case we have a tuple-of-tuples where each inner-tuple is a row of data for a city. Shape One interesting fact about a DataFrame is its shape. What is shape? Shape is the number of ro...
airport_df.shape
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
The DataFrame has a shape of (7, 3). This means that the DataFrame has seven rows and three columns. If you are familiar with NumPy, you probably are also familiar with shape. NumPy arrays can have n-dimensional shapes while DataFrame objects tend to stick to two dimensions: rows and columns. Exercise 1: Finding Shape ...
url = "https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv" # Download the housing data # Print the shape of the data
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
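The contrast with NumPy mentioned above can be seen directly. This is just a toy illustration:

```python
import numpy as np
import pandas as pd

# A NumPy array can have any number of dimensions...
arr = np.zeros((2, 3, 4))
print(arr.shape)  # (2, 3, 4)

# ...while a DataFrame's shape is always (rows, columns).
df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
print(df.shape)   # (3, 2)
```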
Columns Speaking of columns, it's possible to ask a DataFrame what columns it contains using the columns attribute:
airport_df.columns
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Notice that the columns are contained in an Index object. An Index wraps the list of columns. For basic usage, like loops, you can just use the Index directly:
for c in airport_df.columns: print(c)
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
If you do need the columns in a lower level format, you can use .values to get a NumPy array:
type(airport_df.columns.values)
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
If you need a basic Python list, you can then call .tolist() to get the core Python list of column names:
type(airport_df.columns.values.tolist())
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exercise 2: Pretty Print Columns The columns in the California housing dataset are not necessarily easy on the eyes. Columns like housing_median_age would be easier to read if they were presented as Housing Median Age. In the code block below, download the California housing dataset. Then find the names of the columns ...
import pandas as pd url = "https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv" df = pd.read_csv(url) # Your Code Goes Here
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Missing Values It is common to find datasets with missing data. When this happens it's good to know that the data is missing so you can determine how to handle the situation. Let's recreate our city data but set some values to None:
import pandas as pd airport_df = pd.DataFrame.from_records(( ('Atlanta', 498044, 2), (None, 964254, 2), ('Kansas City', 491918, 8), ('New York City', None, 3), ('Portland', 653115, 1), ('San Francisco', 883305, None), ('Seattle', 744955, 2), ), columns=("City Name", "Population", "Airports")) airport_...
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
You can see that the population of New York and the number of airports in San Francisco are now represented by NaN values. This stands for 'Not a Number', which means that the value is an unknown numeric value. You'll also see that where 'Austin' once was, we now have a None value. This means that we are missing a non-...
airport_df.isna()
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Here we get True values where a data point is missing and False values where we have data. Using this, we can do powerful things like select all rows whose population or airport value is missing:
airport_df[airport_df['Population'].isna() | airport_df['Airports'].isna()]
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
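When manual fixes aren't practical, two common ways to act on this `isna()` information are dropping the incomplete rows or filling the gaps with a placeholder. A minimal sketch using the same city data:

```python
import pandas as pd

airport_df = pd.DataFrame.from_records((
    ('Atlanta', 498044, 2),
    (None, 964254, 2),
    ('Kansas City', 491918, 8),
    ('New York City', None, 3),
    ('Portland', 653115, 1),
    ('San Francisco', 883305, None),
    ('Seattle', 744955, 2),
), columns=("City Name", "Population", "Airports"))

# Drop every row that still has a missing value...
print(len(airport_df.dropna()))       # 4

# ...or fill missing numeric values with a placeholder such as 0.
filled = airport_df.fillna({"Population": 0, "Airports": 0})
print(filled.isna().sum().sum())      # 1 -- the missing city name remains
```

Whether dropping or filling is appropriate depends on the data: filling with 0 here would falsely claim San Francisco has no airports, so a domain-sensible placeholder (or a lookup) is usually better.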
Now that we know that we are missing the population of New York and the number of airports in San Francisco, we can look up that data and manually fix it. Sometimes the fixes aren't so easy. The data might be impossible to find, or there might be so many missing values that you can't individually fix them all. In these...
airport_df[airport_df['Airports'] > 2]
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Let's deconstruct this statement. At its core we have: `airport_df['Airports'] > 2`. This expression compares every 'Airports' value in the airport_df DataFrame and returns True if there are more than two airports, False otherwise.
airport_df['Airports'] > 2
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
This data is returned as a Pandas Series. The series is then used as a boolean index for the airport_df DataFrame. Boolean index is just a term used to refer to a Series (or other list-like structure) of boolean values used in the index operator, [], for the DataFrame. Ideally the boolean index length should be equ...
has_many_airports = airport_df['Airports'] > 2 airport_df[has_many_airports]
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
If you are familiar with Boolean logic and Python, you probably know that you can create compound expressions using the or and and keywords. You can also use the keyword not to reverse an expression.
print(True and False) print(True or False) print(not True)
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
You can do similar things in Pandas with boolean indices. However, and, or, and not don't work as expected. Instead you need to use the &, |, and ~ operators. and changes to & or changes to | not changes to ~ For normal numbers in Python, these are actually the 'bitwise logical operators'. When working on Pan...
has_many_airports = airport_df['Airports'] > 2 has_many_airports
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Now we can find the rows that represent a city with fewer than a million residents:
small_cities = airport_df['Population'] < 1000000 small_cities
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
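Putting the pieces together, the two masks can be combined element-wise. A short sketch repeating the small DataFrame from earlier:

```python
import pandas as pd

airport_df = pd.DataFrame.from_records((
    ('Atlanta', 498044, 2),
    ('Austin', 964254, 2),
    ('Kansas City', 491918, 8),
    ('New York City', 8398748, 3),
    ('Portland', 653115, 1),
    ('San Francisco', 883305, 3),
    ('Seattle', 744955, 2),
), columns=("City Name", "Population", "Airports"))

has_many_airports = airport_df['Airports'] > 2
small_cities = airport_df['Population'] < 1000000

# & combines the masks element-wise: small cities with many airports.
print(airport_df[small_cities & has_many_airports]['City Name'].tolist())
# ['Kansas City', 'San Francisco']

# ~ negates a mask: everything that is NOT a small city.
print(airport_df[~small_cities]['City Name'].tolist())
# ['New York City']
```

Note the parentheses implied by precomputing the masks: if you inline the comparisons, you must write `(airport_df['Population'] < 1000000) & (airport_df['Airports'] > 2)`, because `&` binds more tightly than `<` and `>`.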