Moreover, because these projectors were created from epochs chosen specifically for their time-locked artifacts, we can make a joint plot of the projectors and their effect on the time-averaged epochs. This figure has three columns:

- The left column shows the data traces before (black) and after (green) projection. We can see that the ECG artifact is well suppressed by one projector per channel type.
- The center column shows the topomaps associated with the projectors, in this case just a single topography for our one projector per channel type.
- The right column again shows the data traces (black), but this time with those traces also projected onto the first projector for each channel type (red), plus one surrogate ground truth for an ECG channel (MEG 0111).
# ideally here we would just do `picks_trace='ecg'`, but this dataset did not
# have a dedicated ECG channel recorded, so we just pick a channel that was
# very sensitive to the artifact
fig = mne.viz.plot_projs_joint(ecg_projs, ecg_evoked, picks_trace='MEG 0111')
fig.suptitle('ECG projectors')
dev/_downloads/d194fa1c3e67c7ada95e4807f9e5f3e5/50_artifact_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Finally, note that above we passed reject=None to the ~mne.preprocessing.compute_proj_ecg function, meaning that all detected ECG epochs would be used when computing the projectors (regardless of signal quality in the data sensors during those epochs). The default behavior is to reject epochs based on signal amplitude: epochs with peak-to-peak amplitudes exceeding 50 µV in EEG channels, 250 µV in EOG channels, 2000 fT/cm in gradiometer channels, or 3000 fT in magnetometer channels. You can change these thresholds by passing a dictionary with keys eeg, eog, mag, and grad (though be sure to pass the threshold values in volts, teslas, or teslas/meter). Generally, it is a good idea to reject such epochs when computing the ECG projectors (since presumably the high-amplitude fluctuations in the channels are noise, not reflective of brain activity); passing reject=None above was done simply to avoid the dozens of extra lines of output (enumerating which sensor(s) were responsible for each rejected epoch) from cluttering up the tutorial.

<div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.compute_proj_ecg` has a similar parameter ``flat`` for specifying the *minimum* acceptable peak-to-peak amplitude for each channel type.</p></div>

While ~mne.preprocessing.compute_proj_ecg conveniently combines several operations into a single function, MNE-Python also provides functions for performing each part of the process. Specifically:

- mne.preprocessing.find_ecg_events for detecting heartbeats in a ~mne.io.Raw object and returning a corresponding :term:events array
- mne.preprocessing.create_ecg_epochs for detecting heartbeats in a ~mne.io.Raw object and returning an ~mne.Epochs object
- mne.compute_proj_epochs for creating projector(s) from any ~mne.Epochs object

See the documentation of each function for further details.

Repairing EOG artifacts with SSP

Once again let's visualize our artifact before trying to repair it. We've seen above the large deflections in frontal EEG channels in the raw data; here is how the ocular artifact manifests across all the sensors:
eog_evoked = create_eog_epochs(raw).average(picks='all')
eog_evoked.apply_baseline((None, None))
eog_evoked.plot_joint()
dev/_downloads/d194fa1c3e67c7ada95e4807f9e5f3e5/50_artifact_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
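As an aside, returning to the reject parameter discussed earlier: the default peak-to-peak thresholds quoted above could also be passed explicitly. The following is only a hedged sketch, not part of the original tutorial; the values are simply the stated defaults converted to SI units, and the n_grad/n_mag/n_eeg choices mirror the one-projector-per-channel-type setup described above.

# Hypothetical sketch: explicit peak-to-peak rejection thresholds
# (units are SI: V for eeg/eog, T for mag, T/m for grad).
reject = dict(grad=2000e-13, mag=3000e-15, eeg=50e-6, eog=250e-6)
ecg_projs, ecg_events = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1,
                                         reject=reject)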
And we can do a joint image:
fig = mne.viz.plot_projs_joint(eog_projs, eog_evoked, 'eog')
fig.suptitle('EOG projectors')
dev/_downloads/d194fa1c3e67c7ada95e4807f9e5f3e5/50_artifact_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
And finally, we can make a joint visualization with our EOG evoked. We will also make a bad choice here and select two EOG projectors for EEG and magnetometers, and we will see them show up as noise in the plot. Even though the projected time course (left column) looks perhaps okay, problems show up in the center (topomaps) and right plots (projection of channel data onto the projection vector):

- The second magnetometer topomap has a bilateral auditory field pattern.
- The uniformly-scaled projected time courses (solid lines) show that, while the first projector trace (red) has a large EOG-like amplitude, the second projector trace (blue-green) is much smaller.
- The re-normalized projected time courses show that the second PCA trace is very noisy relative to the EOG channel data (yellow).
eog_projs_bad, _ = compute_proj_eog(
    raw, n_grad=1, n_mag=2, n_eeg=2, reject=None, no_proj=True)
fig = mne.viz.plot_projs_joint(eog_projs_bad, eog_evoked, picks_trace='eog')
fig.suptitle('Too many EOG projectors')
dev/_downloads/d194fa1c3e67c7ada95e4807f9e5f3e5/50_artifact_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Raw Counts
# Calculate Maren TIG equations by mating status and exonic region
marenRawCounts = marenEq(dfGt10, Eii='sum_line', Eti='sum_tester',
                         group=['mating_status', 'fusion_id'])
marenRawCounts['mag_cis'] = abs(marenRawCounts['cis_line'])
marenRawCounts.columns
fear_ase_2016/scripts/cis_summary/maren_equations_summary.ipynb
McIntyre-Lab/papers
lgpl-3.0
F10482_SI: This fusion has weaker cis-line effects, but the trans-line effects look more linear.
# Plot F10482_SI
FUSION = 'F10482_SI'
dfFus = marenRawCounts[marenRawCounts['fusion_id'] == FUSION].copy()
dfFus['prop'] = 1 - dfFus['sum_tester'] / (dfFus['sum_line'] + dfFus['sum_tester'])

# Generate 3 panel plot
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
fig.suptitle(FUSION, fontsize=14, fontweight='bold')

for n, mdf in dfFus.groupby('mating_status'):
    # Plot the cis-line effects x proportion by fusion
    scatPlt(mdf, x='cis_line', y='prop', ax=axes[0], c='color', cmap=CMAP, marker=SHAPES[n],
            title='cis-line', xlab='cis-line', ylab='prop')
    # Plot the trans-line effects x proportion by fusion
    scatPlt(mdf, x='trans_line', y='prop', ax=axes[1], c='color', cmap=CMAP, marker=SHAPES[n],
            title='trans-line', xlab='trans-line', ylab='prop', diag='neg')
    # Plot the Tester effects x proportion by fusion
    scatPlt(mdf, x='cis_tester', y='prop', ax=axes[2], c='color', cmap=CMAP, marker=SHAPES[n],
            title='Tester', xlab='cis-tester', ylab='prop', diag=None)

fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_{}_effects_v_prop.png'.format(FUSION))
plt.savefig(fname, bbox_inches='tight')
print("Saved figure to: " + fname)
plt.close()

f1005

marenRawCounts.columns

meanByMsLine = marenRawCounts[['mean_apn', 'cis_line', 'mating_status', 'line']].groupby(['mating_status', 'line']).mean()
meanByMsLine.columns
meanByMsLine.plot(kind='scatter', x='mean_apn', y='cis_line')


def cisAPN(df, fusion, value='cis_line', xcutoff='>=150', ycutoff='<=-180'):
    """Plot effects vs mean apn"""
    # Pull out fusion of interest
    dfSub = marenRawCounts[marenRawCounts['fusion_id'] == fusion]

    # Make scatter plot
    fig, ax = plt.subplots(1, 1, figsize=(10, 10))
    dfSub.plot(kind='scatter', x='mean_apn', y='cis_line', ax=ax, title=fusion)

    # Annotate outliers: apply the cutoff strings (e.g. 'cis_line<=-180') with DataFrame.eval
    mask = dfSub.eval(value + ycutoff) | dfSub.eval('mean_apn' + xcutoff)
    filt = dfSub.loc[mask, ['line', 'mating_status', 'mean_apn', 'cis_line']]
    for row in filt.values:
        line, ms, apn, cis = row
        ax.annotate(line + '_' + ms, xy=(apn, cis))

    fname = pjoin(PROJ, 'pipeline_output/cis_effects/scatter_plot_{}_{}_v_meanApn.png'.format(fusion, value))
    plt.savefig(fname, bbox_inches='tight')


# boolean mask for mated samples
marenRawCounts['mating_status'] == 'M'
fear_ase_2016/scripts/cis_summary/maren_equations_summary.ipynb
McIntyre-Lab/papers
lgpl-3.0
1. PCA

Let us begin by running PCA — we are 'satisfied' once we find the smallest $n$ such that $\sum_{i \le n} r_i > \alpha$, where $r_i$ are the explained-variance ratios.
from sklearn.decomposition import PCA

alpha = 0.99
n = 0
pca = PCA(n_components=n)
ratios = [0]
while sum(ratios) < alpha:
    n += 1
    pca = PCA(n_components=n)
    pca.fit(frame_z)
    ratios = pca.explained_variance_ratio_

print "{}% accounted for with n={} PCs".format(alpha, n)
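Since the loop above refits PCA for every candidate $n$, the same $n$ can be found from a single full fit via the cumulative explained-variance ratio. A minimal sketch, assuming the same `frame_z` and `alpha` as in the cell above:

# Loop-free alternative: fit once with all components, then find the smallest
# n whose cumulative explained-variance ratio exceeds alpha.
import numpy as np
from sklearn.decomposition import PCA

pca_full = PCA().fit(frame_z)
cumulative = np.cumsum(pca_full.explained_variance_ratio_)
n = np.searchsorted(cumulative, alpha) + 1
print(n)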
code/[Assignment 10] JM .ipynb
Upward-Spiral-Science/uhhh
apache-2.0
2. Orientation in Brain Space

We believe this to be a sample of V1, and as such we expect it to conform closely to the layering structure of mammalian cortex. To orient ourselves in cortical space, we anticipate finding a pial surface. Using Bock et al., Nature (2011), we can establish our bearings using ndviz. This epithelial surface is clearly visible in this view, indicating that our $y$ axis is the cortical depth axis. We expect PCA to give a better estimate of this axis than simply snapping to a cardinal direction.

3. Layer I Delimitation

We believe that this dataset extends through layers I, II, III, and possibly part of IV. To determine this, we rely on the fact that Layer I of mammalian cortex is the molecular layer and is thus low in synaptic density compared to layer II. So there should be a clear boundary along our $y$ axis:
y_sum = [0] * len(vol[0,:,0])
for i in range(len(vol[0,:,0])):
    y_sum[i] = sum(sum(vol[:,i,:]))

sns.barplot(x=range(len(y_sum[:40])), y=y_sum[:40])
sns.distplot(y_sum, bins=len(y_sum))
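The same depth profile can be computed in a single vectorized call; a small sketch, assuming vol is a 3-D NumPy array with depth along axis 1 as above:

import numpy as np

# Sum over the x and z axes, leaving one total per y (depth) slice;
# equivalent to the explicit loop in the previous cell.
y_sum = np.asarray(vol).sum(axis=(0, 2))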
code/[Assignment 10] JM .ipynb
Upward-Spiral-Science/uhhh
apache-2.0
Above, we see a histogram of y_sum indicating a local minimum at the 12th layer of y-sampling, which coincides with where we anticipate the boundary between layers I and II. Here is the biological substantiation: as we can see, the boundary to layer II sits at about 1/3 of the 'depth' into cortex.

4. Local maxima to find other cortical boundaries

Consider the lamination illustrations (Ramón y Cajal) that show a 'wavelike' pattern in cellular density (and thus, inversely, in synaptic density). We see that our histogram follows this pattern:
sns.barplot(x=range(len(y_sum[:40])), y=y_sum[:40])
code/[Assignment 10] JM .ipynb
Upward-Spiral-Science/uhhh
apache-2.0
Using these local extrema as delimiters, we see boundaries at (12, 18, 24, 30), and then what is likely the Layer IVa/b boundary at 33.

5. Do these layers stay approximately the same in different subvolumes?

Let's examine the layering reviewed above:
from scipy.signal import argrelextrema

def local_minima(a):
    return argrelextrema(a, np.less)

whole_volume_minima = local_minima(np.array(y_sum))
whole_volume_minima
code/[Assignment 10] JM .ipynb
Upward-Spiral-Science/uhhh
apache-2.0
Now let's examine halves of the volume:
y_sum_left = [0] * len(vol[0,:,5:])
for i in range(len(vol[0,:,5:])):
    y_sum_left[i] = sum(sum(vol[:,i,5:]))
left_volume_minima = local_minima(np.array(y_sum_left))

y_sum_right = [0] * len(vol[0,:,:5])
for i in range(len(vol[0,:,:5])):
    y_sum_right[i] = sum(sum(vol[:,i,:5]))
right_volume_minima = local_minima(np.array(y_sum_right))

left_volume_minima, right_volume_minima
code/[Assignment 10] JM .ipynb
Upward-Spiral-Science/uhhh
apache-2.0
Segmenting the picture of a raccoon face into regions

This example uses :ref:spectral_clustering on a graph created from voxel-to-voxel difference on an image to break this image into multiple partly-homogeneous regions. This procedure (spectral clustering on an image) is an efficient approximate solution for finding normalized graph cuts.

There are two options to assign labels: with 'kmeans', spectral clustering will cluster samples in the embedding space using a k-means algorithm, whereas 'discretize' will iteratively search for the closest partition space to the embedding space.
print(__doc__)

# Author: Gael Varoquaux <gael.varoquaux@normalesup.org>, Brian Cheung
# License: BSD 3 clause

import time

import numpy as np
import scipy as sp
import matplotlib.pyplot as plt

from sklearn.feature_extraction import image
from sklearn.cluster import spectral_clustering
from sklearn.utils.testing import SkipTest
from sklearn.utils.fixes import sp_version

if sp_version < (0, 12):
    raise SkipTest("Skipping because SciPy version earlier than 0.12.0 and "
                   "thus does not include the scipy.misc.face() image.")

# load the raccoon face as a numpy array
try:
    face = sp.face(gray=True)
except AttributeError:
    # Newer versions of scipy have face in misc
    from scipy import misc
    face = misc.face(gray=True)

# Resize it to 10% of the original size to speed up the processing
face = sp.misc.imresize(face, 0.10) / 255.

# Convert the image into a graph with the value of the gradient on the
# edges.
graph = image.img_to_graph(face)

# Take a decreasing function of the gradient: an exponential
# The smaller beta is, the more independent the segmentation is of the
# actual image. For beta=1, the segmentation is close to a voronoi
beta = 5
eps = 1e-6
graph.data = np.exp(-beta * graph.data / graph.data.std()) + eps

# Apply spectral clustering (this step goes much faster if you have pyamg
# installed)
N_REGIONS = 25
programming/python/notebooks/scikit/clustering/plot_face_segmentation.ipynb
DoWhatILove/turtle
mit
Visualize the resulting regions
for assign_labels in ('kmeans', 'discretize'):
    t0 = time.time()
    labels = spectral_clustering(graph, n_clusters=N_REGIONS,
                                 assign_labels=assign_labels, random_state=1)
    t1 = time.time()
    labels = labels.reshape(face.shape)

    plt.figure(figsize=(5, 5))
    plt.imshow(face, cmap=plt.cm.gray)
    for l in range(N_REGIONS):
        plt.contour(labels == l, contours=1,
                    colors=[plt.cm.spectral(l / float(N_REGIONS))])
    plt.xticks(())
    plt.yticks(())
    title = 'Spectral clustering: %s, %.2fs' % (assign_labels, (t1 - t0))
    print(title)
    plt.title(title)
plt.show()
programming/python/notebooks/scikit/clustering/plot_face_segmentation.ipynb
DoWhatILove/turtle
mit
2. Helper methods
def load_results(infile, quartile, scores, metric, granularity):
    next(infile)
    for line in infile:
        queryid, numreplaced, match, score = line.strip().split()
        numreplaced = int(numreplaced)
        if metric not in scores:
            scores[metric] = dict()
        if quartile not in scores[metric]:
            scores[metric][quartile] = dict()
        if granularity not in scores[metric][quartile]:
            scores[metric][quartile][granularity] = dict()
        if numreplaced not in scores[metric][quartile][granularity]:
            scores[metric][quartile][granularity][numreplaced] = []
        scores[metric][quartile][granularity][numreplaced].append(float(score))
    infile.close()
    return scores


def error(scorelist):
    return 2 * (np.std(scorelist) / math.sqrt(len(scorelist)))
src/Notebooks/MetricComparison.ipynb
prashanti/similarity-experiment
mit
2. Plot decay and noise
scores = dict()
quartile = 50
granularity = 'E'
f, axarr = plt.subplots(3, 3)
i = j = 0
titledict = {'BPSym__Jaccard': 'Jaccard', 'BPSym_AIC_Resnik': 'Resnik', 'BPSym_AIC_Lin': 'Lin',
             'BPSym_AIC_Jiang': 'Jiang', '_AIC_simGIC': 'simGIC', 'BPAsym_AIC_HRSS': 'HRSS',
             'Groupwise_Jaccard': 'Groupwise_Jaccard'}
lines = []
legend = []
for profilesize in [10]:
    for metric in ['BPSym_AIC_Resnik', 'BPSym_AIC_Lin', 'BPSym_AIC_Jiang', '_AIC_simGIC',
                   'BPSym__Jaccard', 'Groupwise_Jaccard', 'BPAsym_AIC_HRSS']:
        # plotting annotation replacement
        infile = open("../../results/FullDistribution/AnnotationReplacement/E_Decay_Quartile50_ProfileSize" +
                      str(profilesize) + "_" + metric + "_Results.tsv")
        scores = load_results(infile, quartile, scores, metric, granularity)
        infile.close()
        signallist = []
        errorlist = []
        numreplacedlist = sorted(scores[metric][quartile][granularity].keys())
        for numreplaced in numreplacedlist:
            signallist.append(np.mean(scores[metric][quartile][granularity][numreplaced]))
            errorlist.append(error(scores[metric][quartile][granularity][numreplaced]))
        line = axarr[i][j].errorbar(numreplacedlist, signallist, yerr=errorlist, color='blue', linewidth=3)
        if len(lines) == 0:
            lines.append(line)
            legend.append("Annotation Replacement")
        axarr[i][j].set_title(titledict[metric])
        axarr[i][j].set_ylim(0, 1)

        # plotting Ancestral Replacement
        ancestralreplacementfile = ("../../results/FullDistribution/AncestralReplacement/E_Decay_Quartile50_ProfileSize" +
                                    str(profilesize) + "_" + metric + "_Results.tsv")
        if os.path.isfile(ancestralreplacementfile):
            infile = open(ancestralreplacementfile)
            scores = load_results(infile, quartile, scores, metric, granularity)
            infile.close()
            signallist = []
            errorlist = []
            numreplacedlist = sorted(scores[metric][quartile][granularity].keys())
            for numreplaced in numreplacedlist:
                signallist.append(np.mean(scores[metric][quartile][granularity][numreplaced]))
                errorlist.append(error(scores[metric][quartile][granularity][numreplaced]))
            line = axarr[i][j].errorbar(numreplacedlist, signallist, yerr=errorlist, color='green', linewidth=3)
            if len(lines) == 1:
                lines.append(line)
                legend.append("Ancestral Replacement")

        # plotting noise
        decaytype = "AnnotationReplacement"
        if "simGIC" in metric or "Groupwise_Jaccard" in metric:
            noisefile = ("../../results/FullDistribution/" + decaytype + "/Noise/Distributions/" + granularity +
                         "_Noise_Quartile" + str(quartile) + "_ProfileSize" + str(profilesize) + "_" + metric + "_Results.tsv")
        else:
            noisefile = ("../../results/FullDistribution/" + decaytype + "/Noise/Distributions/" + granularity +
                         "_NoiseDecay_Quartile" + str(quartile) + "_ProfileSize" + str(profilesize) + "_" + metric + "_Results.tsv")
        if os.path.isfile(noisefile):
            noisedist = json.load(open(noisefile))
            line = axarr[i][j].axhline(y=np.percentile(noisedist, 99.9), linestyle='--', color='black', label='_nolegend_')
            if len(lines) == 2:
                lines.append(line)
                legend.append("99.9 percentile noise")

        if j == 2:
            j = 0
            i += 1
        else:
            j += 1
src/Notebooks/MetricComparison.ipynb
prashanti/similarity-experiment
mit
We follow the state-action pairs formulation approach.
L = n * m  # Number of feasible state-action pairs
s_indices, a_indices = sa_indices(n, m)

# Reward vector
R = np.zeros(L)

# Transition probability array
Q = sp.lil_matrix((L, n))
it = np.nditer((s_indices, a_indices), flags=['c_index'])
for s, k in it:
    i = it.index
    if s == 0:
        Q[i, 0] = 1
    elif s == 1:
        Q[i, np.minimum(emax, s-1+e[k])] = p[k] * q[k]
        Q[i, 0] = 1 - p[k] * q[k]
    else:
        Q[i, np.minimum(emax, s-1+e[k])] = p[k] * q[k]
        Q[i, s-1] = p[k] * (1 - q[k])
        Q[i, 0] = 1 - p[k]

# Discount factor
beta = 1

# Create a DiscreteDP
ddp = DiscreteDP(R, Q, beta, s_indices, a_indices)
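As a quick sanity check (not part of the original notebook), each row of the transition array should define a probability distribution over next states, so the row sums should all be one, provided no two of the assignments above target the same column:

import numpy as np

# Row sums of the state-action transition matrix; both printed values should be 1.0.
row_sums = np.asarray(Q.toarray()).sum(axis=1)
print(row_sums.min(), row_sums.max())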
ddp_ex_MF_7_6_6_py.ipynb
QuantEcon/QuantEcon.notebooks
bsd-3-clause
Let us use the backward_induction routine to solve our finite-horizon problem.
vs, sigmas = backward_induction(ddp, T, v_term)

fig, axes = plt.subplots(1, 2, figsize=(12, 4))
ts = [0, 5]
for i, t in enumerate(ts):
    axes[i].bar(np.arange(n), vs[t], align='center', width=1, edgecolor='k')
    axes[i].set_xlim(0-0.5, emax+0.5)
    axes[i].set_ylim(0, 1)
    axes[i].set_xlabel('Stock of Energy')
    axes[i].set_ylabel('Probability')
    axes[i].set_title('Survival Probability, Period {t}'.format(t=t))
plt.show()
ddp_ex_MF_7_6_6_py.ipynb
QuantEcon/QuantEcon.notebooks
bsd-3-clause
Here follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,
n = 10
x = np.random.normal(size=n)
print(x)
print(x[1])
doc/pub/How2ReadData/ipynb/How2ReadData.ipynb
CompPhysics/MachineLearning
cc0-1.0
or setting up a matrix, here a $10\times 10$ identity matrix with all diagonal elements set to one:
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set it equal to the identity matrix
A = np.eye(n)
print(A)
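Other common initializations work the same way; a small sketch (not from the original text) using NumPy's zeros and ones constructors:

import numpy as np
n = 10
B = np.zeros((n, n))   # all elements set to zero
C = np.ones((n, n))    # all elements set to one
print(B[0, 0], C[0, 0])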
doc/pub/How2ReadData/ipynb/How2ReadData.ipynb
CompPhysics/MachineLearning
cc0-1.0
Here are other examples where we use the DataFrame functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix of dimensionality $10\times 5$ and compute the mean value and standard deviation of each column. Similarly, we can perform mathematical operations like squaring the matrix elements and many other operations.
import numpy as np
import pandas as pd
from IPython.display import display

np.random.seed(100)
# setting up a 10 x 5 matrix
rows = 10
cols = 5
a = np.random.randn(rows, cols)
df = pd.DataFrame(a)
display(df)
print(df.mean())
print(df.std())
display(df**2)
print(df - df.mean())
doc/pub/How2ReadData/ipynb/How2ReadData.ipynb
CompPhysics/MachineLearning
cc0-1.0
and many other operations.

The Series class is another important class included in pandas. You can view it as a specialization of DataFrame where we have just a single column of data. It shares many of the same features as DataFrame. As with DataFrame, most operations are vectorized, thereby achieving high performance when dealing with computations of arrays, in particular labeled arrays. As we will see below, it also leads to very concise code close to the mathematical operations we may be interested in.

For multidimensional arrays, we strongly recommend xarray. xarray has much of the same flexibility as pandas, but allows for the extension to higher dimensions than two. We will see examples later of the usage of both pandas and xarray.

Reading Data and fitting

In order to study various Machine Learning algorithms, we need to access data. Accessing data is an essential step in all machine learning algorithms. In particular, setting up the so-called design matrix (to be defined below) is often the first element we need in order to perform our calculations. To set up the design matrix means reading (and later, when the calculations are done, writing) data in various formats. The formats span from reading files from disk, loading data from databases, and interacting with online sources like web application programming interfaces (APIs).

In handling various input formats, as discussed above, we will mainly stay with pandas, a Python package which allows us, in a seamless and painless way, to deal with a multitude of formats, from standard csv (comma separated values) files, via excel and html to hdf5 formats. With pandas and the DataFrame and Series functionalities we are able to convert text data into the calculational formats we need for a specific algorithm. And our code is going to be pretty close to the basic mathematical expressions.

Our first data set is going to be a classic from nuclear physics, namely all available data on binding energies. Don't be intimidated if you are not familiar with nuclear physics. It serves simply as an example here of a data set. We will show some of the strengths of packages like Scikit-Learn in fitting nuclear binding energies to specific functions using linear regression first. Then, as a teaser, we will show you how you can easily implement other algorithms like decision trees, random forests, and neural networks. But before we really start with nuclear physics data, let's just look at some simpler polynomial fitting cases, such as (don't be offended) fitting straight lines!

Simple linear regression model using scikit-learn

We start with perhaps our simplest possible example, using Scikit-Learn to perform linear regression analysis on a data set produced by us. What follows is a simple Python code where we have defined a function $y$ in terms of the variable $x$. Both are defined as vectors with $100$ entries. The numbers in the vector $\hat{x}$ are given by random numbers generated with a uniform distribution with entries $x_i \in [0,1]$ (more about probability distribution functions later). These values are then used to define a function $y(x)$ (tabulated again as a vector) with a linear dependence on $x$ plus random noise added via the normal distribution. The Numpy functions are imported using the import numpy as np statement, and the random number generator for the uniform distribution is called using the function np.random.rand(), where we specify that we want $100$ random variables.
Using Numpy we automatically define an array with the specified number of elements, $100$ in our case. With the Numpy function randn() we can compute random numbers with the normal distribution (mean value $\mu$ equal to zero and variance $\sigma^2$ set to one) and produce the values of $y$ assuming a linear dependence as a function of $x$,

$$ y = 2x + N(0,1), $$

where $N(0,1)$ represents random numbers generated by the normal distribution. From Scikit-Learn we then import the LinearRegression functionality and make a prediction $\tilde{y} = \alpha + \beta x$ using the function fit(x,y). We call the set of data $(\hat{x},\hat{y})$ our training data. The Python package scikit-learn also has functionality which extracts the above fitting parameters $\alpha$ and $\beta$ (see below). Later we will distinguish between training data and test data.

For plotting we use the Python package matplotlib which produces publication-quality figures. Feel free to explore the extensive gallery of examples. In this example we plot our original values of $x$ and $y$ as well as the prediction ypredict ($\tilde{y}$), which attempts to fit our data with a straight line. The Python code follows here.
# Importing various packages
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

x = np.random.rand(100,1)
y = 2*x+0.01*np.random.randn(100,1)
linreg = LinearRegression()
linreg.fit(x,y)
#ynew = linreg.predict(x)
#xnew = np.array([[0],[1]])
ypredict = linreg.predict(x)

plt.plot(x, ypredict, "r-")
plt.plot(x, y ,'ro')
plt.axis([0,1.0,0, 5.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Simple Linear Regression')
plt.show()
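The fitting parameters $\alpha$ and $\beta$ mentioned above can be read directly off the fitted LinearRegression object; a minimal sketch, assuming the linreg object from the cell above:

# intercept_ is the fitted alpha, coef_ the fitted beta (close to 2 here)
print("alpha =", linreg.intercept_)
print("beta  =", linreg.coef_)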
doc/pub/How2ReadData/ipynb/How2ReadData.ipynb
CompPhysics/MachineLearning
cc0-1.0
We have now read in the data and grouped them according to the variables we are interested in. We see how easy it is to reorganize the data using pandas. If we were to do these operations in C/C++ or Fortran, we would have had to write various functions/subroutines which perform the above reorganizations for us. Having reorganized the data, we can now start to make some simple fits, using the functionalities in numpy first and Scikit-Learn afterwards. Now we define five variables which contain the number of nucleons $A$, the number of protons $Z$, the number of neutrons $N$, the element name, and finally the energies themselves.
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
#print(Masses)
xx = A
print(xx)
doc/pub/How2ReadData/ipynb/How2ReadData.ipynb
CompPhysics/MachineLearning
cc0-1.0
Code cells are numbered in the sequence in which they are executed. Magics can be used to insert and execute code not written in Python, e.g. HTML:
%%html
<style>
div.text_cell_render h3 {
    color: #c60;
}
</style>
pugmuc2015.ipynb
gertingold/lit2015
mit
Text cells

Formatting can be done in markdown and HTML. Examples:
* text in italics or text in italics
* text in bold or text in bold
* code
* <span style="color:white;background-color:#c00">emphasized text</span>

Mathematical typesetting

LaTeX syntax can be used in text cells to display mathematical symbols like $\ddot x$ or entire formulae:

$$\mathcal{L}\{f(t)\} = \int_0^\infty\text{d}t\,\text{e}^{-zt}f(t)$$

Mathematics is displayed with MathJax (www.mathjax.org) and requires either an internet connection or a local installation. Instructions for a local installation can be obtained as follows:
from IPython.external import mathjax
mathjax?
pugmuc2015.ipynb
gertingold/lit2015
mit
Selected features of the IPython shell

Help
import numpy as np
np.tensordot?
pugmuc2015.ipynb
gertingold/lit2015
mit
Description including code (if available)
np.tensordot??
pugmuc2015.ipynb
gertingold/lit2015
mit
Code completion with TAB
np.ALLOW_THREADS
pugmuc2015.ipynb
gertingold/lit2015
mit
Reference to earlier results
2**3
_-8
__**2
pugmuc2015.ipynb
gertingold/lit2015
mit
Access to all earlier input and output
In, Out
pugmuc2015.ipynb
gertingold/lit2015
mit
Magics in IPython
%lsmagic
pugmuc2015.ipynb
gertingold/lit2015
mit
Quick reference
%quickref
pugmuc2015.ipynb
gertingold/lit2015
mit
Timing of code execution
%timeit 2.5**100

import math

%%timeit
result = []
nmax = 100000
dx = 0.001
for n in range(nmax):
    result.append(math.sin(n*dx))

%%timeit
nmax = 100000
dx = 0.001
x = np.arange(nmax)*dx
result = np.sin(x)
pugmuc2015.ipynb
gertingold/lit2015
mit
Extended representations

IPython allows for the representation of objects in formats as different as
* HTML
* Markdown
* SVG
* PNG
* JPEG
* LaTeX
from IPython.display import Image
Image("./images/ipython_logo.png")

from IPython.display import HTML
HTML('<iframe src="http://www.ipython.org" width="700" height="500"></iframe>')
pugmuc2015.ipynb
gertingold/lit2015
mit
Even the embedding of audio and video files is possible.
from IPython.display import YouTubeVideo
YouTubeVideo('F4rFuIb1Ie4')
pugmuc2015.ipynb
gertingold/lit2015
mit
Python allows for a textual representation of objects by means of the __repr__ method. Example:
class MyObject(object):
    def __init__(self, obj):
        self.obj = obj

    def __repr__(self):
        return ">>> {0!r} / {0!s} <<<".format(self.obj)

x = MyObject('Python')
print(x)
pugmuc2015.ipynb
gertingold/lit2015
mit
A rich representation of objects is possible in the IPython notebook provided the corresponding methods are defined:
* _repr_pretty_
* _repr_html_
* _repr_markdown_
* _repr_latex_
* _repr_svg_
* _repr_json_
* _repr_javascript_
* _repr_png_
* _repr_jpeg_

Note: In contrast to __repr__ only one underscore is used.
class RGBColor(object):
    def __init__(self, r, g, b):
        self.colordict = {"r": r, "g": g, "b": b}

    def _repr_svg_(self):
        return '''<svg height="50" width="50">
                  <rect width="50" height="50" fill="rgb({r},{g},{b})" />
                  </svg>'''.format(**self.colordict)

c = RGBColor(205, 128, 255)
c

from fractions import Fraction

class MyFraction(Fraction):
    def _repr_html_(self):
        return "<sup>%s</sup>&frasl;<sub>%s</sub>" % (self.numerator, self.denominator)

    def _repr_latex_(self):
        return r"$\frac{%s}{%s}$" % (self.numerator, self.denominator)

    def __add__(a, b):
        """a + b"""
        return MyFraction(a.numerator * b.denominator + b.numerator * a.denominator,
                          a.denominator * b.denominator)

MyFraction(12, 345)+MyFraction(67, 89)

from IPython.display import display_latex
display_latex(MyFraction(12, 345)+MyFraction(67, 89))
pugmuc2015.ipynb
gertingold/lit2015
mit
Interaction with widgets
from IPython.html.widgets import interact

@interact(x=(0., 10.), y=(0, 10))
def power(y, x=2):
    print(x**y)
pugmuc2015.ipynb
gertingold/lit2015
mit
Data types and their associated widgets
* String (str, unicode) → Text
* Dictionary (dict) → Dropdown
* Boolean variable (bool) → Checkbox
* Float (float) → FloatSlider
* Integer (int) → IntSlider
@interact(x=(0, 5), text="Python is great!!!")
def f(text, x=0):
    for _ in range(x):
        print(text)

from IPython.html import widgets
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# otherwise matplotlib graphs will be displayed in an external window

@interact(harmonics=widgets.IntSlider(min=1, max=10,
                                      description='Number of harmonics',
                                      padding='2ex'),
          function=widgets.RadioButtons(options=("square", "sawtooth", "triangle"),
                                        description='Function')
          )
def f(harmonics, function):
    params = {"square": {"sign": 1, "stepsize": 2, "func": np.sin, "power": 1},
              "sawtooth": {"sign": -1, "stepsize": 1, "func": np.sin, "power": 1},
              "triangle": {"sign": 1, "stepsize": 2, "func": np.cos, "power": 2}
              }
    p = params[function]
    xvals, nvals = np.ogrid[-2*np.pi:2*np.pi:100j, 1:harmonics+1:p["stepsize"]]
    yvals = np.sum(p["sign"]**nvals*p["func"](nvals*xvals)/nvals**p["power"], axis=1)
    plt.plot(xvals, yvals)
pugmuc2015.ipynb
gertingold/lit2015
mit
The last word in our vocabulary shows up 30 times in 25000 reviews. I think it's fair to say this is a tiny proportion. We are probably fine with this number of words. Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie. Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension. Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
word2idx = dict(zip(list(vocab), range(len(vocab))))
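The same mapping written as a dictionary comprehension, as the exercise text suggests (equivalent to the zip-based version above):

word2idx = {word: i for i, word in enumerate(vocab)}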
sentiment-analysis/Sentiment Analysis with TFLearn.ipynb
BeatHubmann/17F-U-DLND
mit
Text to vector function

Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:

Initialize the word vector with np.zeros; it should be the length of the vocabulary. Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here. For each word in that list, increment the element at the index associated with that word, which you get from word2idx.

Note: Since not all words are in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you hit a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
def text_to_vector(text):
    word_vector = np.zeros(len(vocab), dtype=np.int_)
    for word in text.split(' '):
        if word in word2idx:
            word_vector[word2idx[word]] += 1
    return word_vector
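A usage sketch (hypothetical: it assumes a reviews sequence of raw review strings, as prepared elsewhere in the surrounding notebook), building one word vector per review:

# Convert every review into a bag-of-words count vector.
word_vectors = np.array([text_to_vector(review) for review in reviews])
print(word_vectors.shape)  # (number of reviews, vocabulary size)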
sentiment-analysis/Sentiment Analysis with TFLearn.ipynb
BeatHubmann/17F-U-DLND
mit
Building the network

TFLearn lets you build the network by defining the layers.

Input layer

For the input layer, you just need to tell it how many units you have. For example, net = tflearn.input_data([None, 100]) would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size. The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.

Adding layers

To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU'). This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).

Output layer

The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax: net = tflearn.fully_connected(net, 2, activation='softmax')

Training

To set how you train the network, use net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy'). Again, this is passing in the network you've been building. The keywords:
* optimizer sets the training method, here stochastic gradient descent
* learning_rate is the learning rate
* loss determines how the network error is calculated, in this example with the categorical cross-entropy

Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like

net = tflearn.input_data([None, 10])                        # Input
net = tflearn.fully_connected(net, 5, activation='ReLU')    # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)

Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
# Network building
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()

    #### Your code ####
    net = tflearn.input_data([None, len(vocab)])                  # Input
    net = tflearn.fully_connected(net, 100, activation='ReLU')    # Hidden 1
    net = tflearn.fully_connected(net, 10, activation='ReLU')     # Hidden 2
    net = tflearn.fully_connected(net, 2, activation='softmax')   # Output
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01,
                             loss='categorical_crossentropy')
    model = tflearn.DNN(net)
    return model
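A training sketch (not part of the original cell; trainX and trainY are assumed to be the word-vector features and one-hot labels prepared elsewhere in the notebook):

model = build_model()
# TFLearn's DNN.fit handles batching, validation split, and progress display.
model.fit(trainX, trainY, validation_set=0.1, show_metric=True,
          batch_size=128, n_epoch=10)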
sentiment-analysis/Sentiment Analysis with TFLearn.ipynb
BeatHubmann/17F-U-DLND
mit
Create TensorFlow Session
config = tf.ConfigProto(
    log_device_placement=True,
)
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.4
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
print(config)

sess = tf.Session(config=config)
print(sess)
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/tensorflow/optimize/06a_Train_Model_XLA_GPU.ipynb
fluxcapacitor/source.ml
apache-2.0
Load Model Training and Test/Validation Data
num_samples = 100000

x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)

noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)

pylab.plot(x_train, y_train, '.')

x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)

noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)

pylab.plot(x_test, y_test, '.')

with tf.device("/cpu:0"):
    W = tf.get_variable(shape=[], name='weights')
    print(W)
    b = tf.get_variable(shape=[], name='bias')
    print(b)

with tf.device("/device:XLA_GPU:0"):
    x_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='x_observed')
    print(x_observed)
    y_pred = W * x_observed + b
    print(y_pred)

learning_rate = 0.025

with tf.device("/device:XLA_GPU:0"):
    y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
    print(y_observed)
    loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
    optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
    train_op = optimizer_op.minimize(loss_op)
    print("Loss Scalar: ", loss_op)
    print("Optimizer Op: ", optimizer_op)
    print("Train Op: ", train_op)
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/tensorflow/optimize/06a_Train_Model_XLA_GPU.ipynb
fluxcapacitor/source.ml
apache-2.0
Train Model
%%time

with tf.device("/device:XLA_GPU:0"):
    run_metadata = tf.RunMetadata()
    max_steps = 401
    for step in range(max_steps):
        if (step < max_steps - 1):
            test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op],
                                           feed_dict={x_observed: x_test, y_observed: y_test})
            train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op],
                                            feed_dict={x_observed: x_train, y_observed: y_train})
        else:
            test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op],
                                           feed_dict={x_observed: x_test, y_observed: y_test})
            train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op],
                                            feed_dict={x_observed: x_train, y_observed: y_train},
                                            options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
                                            run_metadata=run_metadata)

            trace = timeline.Timeline(step_stats=run_metadata.step_stats)
            with open('timeline-xla-gpu.json', 'w') as trace_file:
                trace_file.write(trace.generate_chrome_trace_format(show_memory=True))

        if step % 10 == 0:
            print(step, sess.run([W, b]))
            train_summary_writer.add_summary(train_summary_log, step)
            train_summary_writer.flush()
            test_summary_writer.add_summary(test_summary_log, step)
            test_summary_writer.flush()

pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train,
           sess.run(y_pred, feed_dict={x_observed: x_train, y_observed: y_train}),
           ".", label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/tensorflow/optimize/06a_Train_Model_XLA_GPU.ipynb
fluxcapacitor/source.ml
apache-2.0
1. Understanding currents, fields, charges and potentials

Cylinder app
* survey: Type of survey
* A: (+) Current electrode location
* B: (-) Current electrode location
* M: (+) Potential electrode location
* N: (-) Potential electrode location
* r: radius of cylinder
* xc: x location of cylinder center
* zc: z location of cylinder center
* $\rho_1$: Resistivity of the halfspace
* $\rho_2$: Resistivity of the cylinder
* Field: Field to visualize
* Type: which part of the field
* Scale: Linear or Log Scale visualization
cylinder_app()
notebooks/dcip/DC_SurveyDataInversion.ipynb
geoscixyz/gpgLabs
mit
2. Potential differences and Apparent Resistivities

Using the widgets contained in this notebook you will develop a better understanding of what values are actually measured in a DC resistivity survey and how these measurements can be processed, plotted, inverted, and interpreted.

Computing Apparent Resistivity

In practice we cannot measure the potentials everywhere; we are limited to those locations where we place electrodes. For each source (current electrode pair) many potential differences are measured between M and N electrode pairs to characterize the overall distribution of potentials. The widget below allows you to visualize the potentials, electric fields, and current densities from a dipole source in a simple model with 2 layers. For different electrode configurations you can measure the potential differences and see the calculated apparent resistivities.

In a uniform halfspace the potential differences can be computed by summing up the potentials at each measurement point from the different current sources based on the following equations:

\begin{align}
V_M &= \frac{\rho I}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} \right] \\
V_N &= \frac{\rho I}{2 \pi} \left[ \frac{1}{AN} - \frac{1}{NB} \right]
\end{align}

where $AM$, $MB$, $AN$, and $NB$ are the distances between the corresponding electrodes. The potential difference $\Delta V_{MN}$ in a dipole-dipole survey can therefore be expressed as follows,

\begin{equation}
\Delta V_{MN} = V_M - V_N = \rho I \underbrace{\frac{1}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} - \frac{1}{AN} + \frac{1}{NB} \right]}_{G}
\end{equation}

and the resistivity of the halfspace $\rho$ is equal to

$$ \rho = \frac{\Delta V_{MN}}{I G} $$

In this equation $G$ is often referred to as the geometric factor. In the case where we are not in a uniform halfspace, the above equation is used to compute the apparent resistivity ($\rho_a$), which is the resistivity of the uniform halfspace which best reproduces the measured potential difference.

In the top plot the location of the A electrode is marked by the red +, the B electrode is marked by the blue -, and the M/N potential electrodes are marked by the black dots. The $V_M$ and $V_N$ potentials are printed just above and to the right of the black dots. The calculated apparent resistivity is shown in the grey box to the right. The bottom plot can show the resistivity model, the electric fields (e), potentials, or current densities (j) depending on which toggle button is selected. Some patience may be required for the plots to update after parameters have been changed.

Two layer app
* A: (+) Current electrode location
* B: (-) Current electrode location
* M: (+) Potential electrode location
* N: (-) Potential electrode location
* $\rho_1$: Resistivity of the top layer
* $\rho_2$: Resistivity of the bottom layer
* h: thickness of the first layer
* Plot: Field to visualize
* Type: which part of the field
plot_layer_potentials_app()
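As a small numerical sketch of the geometric-factor equation above (the electrode positions and measured values below are made up for illustration, not taken from the app):

import numpy as np

def apparent_resistivity(A, B, M, N, dV, I=1.0):
    """Apparent resistivity rho_a = dV / (I * G) for surface electrodes at
    1-D positions A, B, M, N (in metres), following the equations above."""
    AM, MB = abs(M - A), abs(B - M)
    AN, NB = abs(N - A), abs(B - N)
    G = (1.0 / (2 * np.pi)) * (1.0/AM - 1.0/MB - 1.0/AN + 1.0/NB)
    return dV / (I * G)

# Hypothetical dipole-dipole measurement: a 100 ohm-m halfspace would give
# dV = rho * I * G, so feeding that dV back in should recover ~100 ohm-m.
A, B, M, N, I = 0.0, 10.0, 20.0, 30.0, 1.0
G = (1.0 / (2 * np.pi)) * (1.0/abs(M-A) - 1.0/abs(B-M) - 1.0/abs(N-A) + 1.0/abs(B-N))
print(apparent_resistivity(A, B, M, N, dV=100.0 * I * G, I=I))  # ~100.0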
notebooks/dcip/DC_SurveyDataInversion.ipynb
geoscixyz/gpgLabs
mit
3. Building Pseudosections

2D profiles are often plotted as pseudo-sections by extending $45^{\circ}$ lines downwards from the A-B and M-N midpoints and plotting the corresponding $\Delta V_{MN}$, $\rho_a$, or misfit value at the intersection of these lines, as shown below. For pole-dipole or dipole-pole surveys the $45^{\circ}$ line is simply extended from the location of the pole. By using this method of plotting, the long-offset electrodes plot deeper than those with short offsets. This provides a rough idea of the region sampled by each data point, but the vertical axis of a pseudo-section is not a true depth.

In the widget below the red dot marks the midpoint of the current dipole or the location of the A electrode in a pole-dipole array, while the green dots mark the midpoints of the potential dipoles or M electrode locations in a dipole-pole array. The blue dots then mark the location in the pseudo-section where the lines from the Tx and Rx midpoints intersect and the data is plotted. By stepping through the Tx (current electrode pairs) using the slider you can see how the pseudo-section is built up.

The figures shown below show how the points in a pseudo-section are plotted for pole-dipole, dipole-pole, and dipole-dipole arrays. The color coding of the dots matches those shown in the widget.

<img style="float: center; width: 60%; height: 60%" src="https://github.com/geoscixyz/geosci-labs/blob/main/images/dc/PoleDipole.png?raw=true">
<center>Basic schematic for a uniformly spaced pole-dipole array.</center>

<img style="float: center; width: 60%; height: 60%" src="https://github.com/geoscixyz/geosci-labs/blob/main/images/dc/DipolePole.png?raw=true">
<center>Basic schematic for a uniformly spaced dipole-pole array.</center>

<img style="float: center; width: 60%; height: 60%" src="https://github.com/geoscixyz/geosci-labs/blob/main/images/dc/DipoleDipole.png?raw=true">
<center>Basic schematic for a uniformly spaced dipole-dipole array.</center>

Pseudo-section app
MidpointPseudoSectionWidget()
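The 45-degree construction described above can be written out directly. This is only a sketch of one common plotting convention (it is not necessarily the exact convention the widget uses), with made-up dipole-dipole electrode positions:

import numpy as np

def pseudo_location(A, B, M, N):
    """Return (x, pseudo-depth) where 45-degree lines drawn down from the
    source midpoint and the receiver midpoint intersect."""
    src_mid = 0.5 * (A + B)
    rx_mid = 0.5 * (M + N)
    x = 0.5 * (src_mid + rx_mid)
    depth = 0.5 * abs(rx_mid - src_mid)  # 45-degree lines meet at half the separation
    return x, depth

print(pseudo_location(A=0.0, B=10.0, M=20.0, N=30.0))  # (15.0, 10.0)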
notebooks/dcip/DC_SurveyDataInversion.ipynb
geoscixyz/gpgLabs
mit
DC pseudo-section app
* $\rho_1$: Resistivity of the first layer (thickness of the first layer is 5 m)
* $\rho_2$: Resistivity of the cylinder (the resistivity of the second layer is 1000 $\Omega$m)
* xc: x location of cylinder center
* zc: z location of cylinder center
* r: radius of cylinder
* surveyType: Type of survey
DC2DPseudoWidget()
notebooks/dcip/DC_SurveyDataInversion.ipynb
geoscixyz/gpgLabs
mit
4. Parametric Inversion

In this final widget you are able to forward model the apparent resistivity of a cylinder embedded in a two-layered earth. Pseudo-sections of the apparent resistivity can be generated using dipole-dipole, pole-dipole, or dipole-pole arrays to see how survey geometry can distort the size, shape, and location of conductive bodies in a pseudo-section. Due to the distortion and artifacts present in pseudo-sections, trying to interpret them directly is typically difficult and dangerous due to the risk of misinterpretation. Inverting the data to find a model which fits the observed data and is geologically reasonable should be standard practice.

By systematically varying the model parameters and comparing the plots of observed vs. predicted apparent resistivity, a parametric inversion can be performed by hand to find the "best" fitting model. Normalized data misfits, which provide a numerical measure of the difference between the observed and predicted data, are useful for quantifying how well an inversion model fits the observed data. The manual inversion process can be difficult and time consuming even with small examples such as the one presented here. Therefore, numerical optimization algorithms are typically used to minimize the data misfit and a model objective function, which provides information about the model structure and complexity, in order to find an optimal solution.

Parametric DC inversion app

Definition of variables:
- $\rho_1$: Resistivity of the first layer
- $\rho_2$: Resistivity of the cylinder
- xc: x location of cylinder center
- zc: z location of cylinder center
- r: radius of cylinder
- predmis: toggle which allows you to switch the bottom panel from predicted apparent resistivity to normalized data misfit
- surveyType: toggle which allows you to switch between survey types

Known information:
- resistivity of the second layer is 1000 $\Omega$m
- thickness of the first layer is known: 5 m

Unknowns are: $\rho_1$, $\rho_2$, xc, zc, and r
DC2DfwdWidget()
notebooks/dcip/DC_SurveyDataInversion.ipynb
geoscixyz/gpgLabs
mit
Searching for GALEX visits Looks like there are 4 visits available in the database
exp_data = gFind(band='NUV', skypos=[ra, dec], exponly=True)
exp_data
explore.ipynb
jradavenport/GALEX_Boyajian
mit
... and they seem to be spaced over about 2 months time. Alas, not the multi-year coverage I'd hoped for to compare with the results from Montet & Simon (2016)
(exp_data['NUV']['t0'] - exp_data['NUV']['t0'][0]) / (60. * 60. * 24. * 365.)
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Making light curves Following examples in the repo...
# step_size = 20.  # the time resolution in seconds
target = 'KIC8462852'
# phot_rad = 0.0045  # in deg
# ap_in = 0.0050  # in deg
# ap_out = 0.0060  # in deg

# print(datetime.datetime.now())
# for k in range(len(exp_data['NUV']['t0'])):
#     photon_events = gAperture(band='NUV', skypos=[ra, dec], stepsz=step_size, radius=phot_rad,
#                               annulus=[ap_in, ap_out], verbose=3, csvfile=target+ '_' +str(k)+"_lc.csv",
#                               trange=[int(exp_data['NUV']['t0'][k]), int(exp_data['NUV']['t1'][k])+1],
#                               overwrite=True)
#     print(datetime.datetime.now(), k)

med_flux = np.array(np.zeros(4), dtype='float')
med_flux_err = np.array(np.zeros(4), dtype='float')

time_big = np.array([], dtype='float')
mag_big = np.array([], dtype='float')
flux_big = np.array([], dtype='float')

for k in range(4):
    data = read_lc(target+ '_' +str(k)+"_lc.csv")

    med_flux[k] = np.nanmedian(data['flux_bgsub'])
    med_flux_err[k] = np.std(data['flux_bgsub'])

    time_big = np.append(time_big, data['t_mean'])
    flux_big = np.append(flux_big, data['flux_bgsub'])
    mag_big = np.append(mag_big, data['mag'])

    # t0k = Time(int(data['t_mean'][0]) + 315964800, format='unix').mjd
    flg0 = np.where((data['flags'] == 0))[0]

    # for Referee: convert GALEX time -> MJD
    t_unix = Time(data['t_mean'] + 315964800, format='unix')
    mjd_time = t_unix.mjd
    t0k = (mjd_time[0])

    plt.figure()
    plt.errorbar((mjd_time - t0k)*24.*60.*60., data['flux_bgsub']/(1e-15),
                 yerr=data['flux_bgsub_err']/(1e-15),
                 marker='.', linestyle='none', c='k', alpha=0.75, lw=0.5, markersize=2)
    plt.errorbar((mjd_time[flg0] - t0k)*24.*60.*60., data['flux_bgsub'][flg0]/(1e-15),
                 yerr=data['flux_bgsub_err'][flg0]/(1e-15),
                 marker='.', linestyle='none')
    # plt.xlabel('GALEX time (sec - '+str(t0k)+')')
    plt.xlabel('MJD - '+ format(t0k, '9.3f') +' (seconds)')
    plt.ylabel('NUV Flux \n' r'(x10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ ${\rm\AA}^{-1}$)')
    plt.savefig(target+ '_' +str(k)+"_lc.pdf", dpi=150, bbox_inches='tight', pad_inches=0.25)

    flagcol = np.zeros_like(mjd_time)
    flagcol[flg0] = 1
    dfout = pd.DataFrame(data={'MJD': mjd_time,
                               'flux': data['flux_bgsub']/(1e-15),
                               'fluxerr': data['flux_bgsub_err']/(1e-15),
                               'flag': flagcol})
    dfout.to_csv(target+ '_' +str(k)+'data.csv', index=False,
                 columns=('MJD', 'flux', 'fluxerr', 'flag'))
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Huh... that 3rd panel looks like a nice long visit. Let's take a slightly closer look!
# k=2
# data = read_lc(target+ '_' +str(k)+"_lc.csv")
# t0k = int(data['t_mean'][0])

# plt.figure(figsize=(14,5))
# plt.errorbar(data['t_mean'] - t0k, data['flux_bgsub'], yerr=data['flux_bgsub_err'],
#              marker='.', linestyle='none')
# plt.xlabel('GALEX time (sec - '+str(t0k)+')')
# plt.ylabel('NUV Flux')
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Any short timescale variability of note? Let's use a Lomb-Scargle to make a periodogram! (limited to the 10sec windowing I imposed... NOTE: gPhoton could easily go shorter, but S/N looks dicey) Answer: Some interesting structure around 70-80 sec, but nothing super strong Update: David Wilson says that although there are significant pointing motions (which Scott Flemming says do occur), they don't align with the ~80sec signal here. Short timescale may be interesting! However, Keaton Bell says he saw no short timescale variations in optical last week... Update 2: This ~80 sec structure seems to be present in the gPhoton data at all three of (9,10,11) second sampling, suggesting it is real.
# try cutting on flags=0
flg0 = np.where((data['flags'] == 0))[0]

plt.figure(figsize=(14,5))
plt.errorbar(data['t_mean'][flg0] - t0k, data['flux_bgsub'][flg0]/(1e-15),
             yerr=data['flux_bgsub_err'][flg0]/(1e-15),
             marker='.', linestyle='none')
plt.xlabel('GALEX time (sec - '+str(t0k)+')')
# plt.ylabel('NUV Flux')
plt.ylabel('NUV Flux \n' r'(x10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ ${\rm\AA}^{-1}$)')
plt.title('Flags = 0')

minper = 10  # my windowing
maxper = 200000
nper = 1000

pgram = LombScargleFast(fit_offset=False)
pgram.optimizer.set(period_range=(minper, maxper))
pgram = pgram.fit(time_big - min(time_big), flux_big - np.nanmedian(flux_big))

df = (1./minper - 1./maxper) / nper
f0 = 1./maxper
pwr = pgram.score_frequency_grid(f0, df, nper)

freq = f0 + df * np.arange(nper)
per = 1./freq

##
plt.figure()
plt.plot(per, pwr, lw=0.75)
plt.xlabel('Period (seconds)')
plt.ylabel('L-S Power')
plt.xscale('log')
plt.xlim(10, 500)
plt.savefig('periodogram.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
explore.ipynb
jradavenport/GALEX_Boyajian
mit
How about the long-term evolution? Answer: looks flat!
t_unix = Time(exp_data['NUV']['t0'] + 315964800, format='unix')
mjd_time_med = t_unix.mjd
t0k = (mjd_time[0])

plt.figure(figsize=(9,5))
plt.errorbar(mjd_time_med - mjd_time_med[0], med_flux/1e-15, yerr=med_flux_err/1e-15,
             linestyle='none', marker='o')
plt.xlabel('MJD - '+format(mjd_time[0], '9.3f')+' (days)')
# plt.ylabel('NUV Flux')
plt.ylabel('NUV Flux \n' r'(x10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ ${\rm\AA}^{-1}$)')
# plt.title(target)
plt.savefig(target+'.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Conclusion...? Based on data from only 4 GALEX visits, spaced over ~70 days, we can't say much about possible evolution of this star with GALEX.
# average time of the gPhoton data
print(np.mean(exp_data['NUV']['t0']))

t_unix = Time(np.mean(exp_data['NUV']['t0']) + 315964800, format='unix')
t_date = t_unix.yday
print(t_date)

mjd_date = t_unix.mjd
print(mjd_date)
explore.ipynb
jradavenport/GALEX_Boyajian
mit
The visits are centered in mid-2011 (Quarter 9 and 10, I believe).

Note: there was a special GALEX pointing at the Kepler field that overlapped with Quarter 14 - approximately 1 year later. This data is not available via gPhoton, but it may be able to be used! The gPhoton data shown here occurs right before the "knee" in Figure 3 of Montet & Simon (2016), and Quarter 14 is well after. Therefore a ~3% dip in the flux should be observed between this data and the Q14 visit.

However: the per-visit errors shown here (std dev) are around 6-10% for this target. If we co-add it all, we may get enough precision. The Q14 data apparently has 15 total scans... so the measurement may be borderline possible!

Long timescale variability!

I followed up on both the GALEX archival flux measurement and the published scan-mode flux. The GALEX source database from MAST (from which I believe gPhoton is derived) says m_NUV = 16.46 +/- 0.01. The "Deep GALEX NUV survey of the Kepler field" catalog by Olmedo (2015), aka GALEX CAUSE Kepler, says m_NUV = 16.499 +/- 0.006.

Converting these <a href="https://en.wikipedia.org/wiki/Magnitude_(astronomy)">magnitudes</a> to a change in flux: 10^((16.46 - 16.499) / (-2.5)) = 1.03657

And if you trust all those catalog values as stated, here is a highly suggestive plot:
plt.errorbar([10, 14], [16.46, 16.499], yerr=[0.01, 0.006], linestyle='none', marker='o')
plt.xlabel('Quarter (approx)')
plt.ylabel(r'$m_{NUV}$ (mag)')
plt.ylim(16.52, 16.44)
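A quick check of the magnitude-to-flux-ratio arithmetic quoted above (values copied from the two catalogs):

m_mast, m_gck = 16.46, 16.499
flux_ratio = 10 ** ((m_mast - m_gck) / (-2.5))
print(flux_ratio)  # ~1.0366: the earlier (MAST) epoch is ~3.7% brighter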
explore.ipynb
jradavenport/GALEX_Boyajian
mit
For time comparison, here is an example MJD from scan 15 of the GKM data. (note: I grabbed a random time-like number from here. YMMV, but it's probably OK for comparing to the Kepler FFI results)
gck_time = Time(1029843320.995 + 315964800, format='unix')
gck_time.mjd

# and to push the comparison to absurd places...
# http://astro.uchicago.edu/~bmontet/kic8462852/reduced_lc.txt
df = pd.read_table('reduced_lc.txt', delim_whitespace=True, skiprows=1,
                   names=('time', 'raw_flux', 'norm_flux', 'model_flux'))
# time = BJD-2454833
# *MJD = JD - 2400000.5

plt.figure()
plt.plot(df['time'] + 2454833 - 2400000.5, df['model_flux'], c='grey', lw=0.2)

gtime = [mjd_date, gck_time.mjd]
gmag = np.array([16.46, 16.499])
gflux = np.array([1, 10**((gmag[1] - gmag[0]) / (-2.5))])
gerr = np.abs(np.array([0.01, 0.006]) * np.log(10) / (-2.5) * gflux)

plt.errorbar(gtime, gflux, yerr=gerr, linestyle='none', marker='o')
plt.ylim(0.956, 1.012)
plt.xlabel('MJD (days)')
plt.ylabel('Relative Flux')
# plt.savefig(target+'_compare.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)

####################
# add in WISE
plt.figure()
plt.plot(df['time'] + 2454833 - 2400000.5, df['model_flux'], c='grey', lw=0.2)
plt.errorbar(gtime, gflux, yerr=gerr, linestyle='none', marker='o')

# the WISE W1-band results from another notebook
wise_time = np.array([55330.86838, 55509.906929000004])
wise_flux = np.array([1., 0.98627949])
wise_err = np.array([0.02011393, 0.02000256])

plt.errorbar(wise_time, wise_flux, yerr=wise_err, linestyle='none', marker='o')
plt.ylim(0.956, 1.025)
plt.xlabel('MJD (days)')
plt.ylabel('Relative Flux')
# plt.savefig(target+'_compare2.png', dpi=150, bbox_inches='tight', pad_inches=0.25)

ffi_file = '8462852.txt'
ffi = pd.read_table(ffi_file, delim_whitespace=True, names=('mjd', 'flux', 'err'))

plt.figure()
# plt.plot(df['time'] + 2454833 - 2400000.5, df['model_flux'], c='grey', lw=0.2)
plt.errorbar(ffi['mjd'], ffi['flux'], yerr=ffi['err'], linestyle='none', marker='s',
             c='gray', zorder=0, alpha=0.7)

gtime = [mjd_date, gck_time.mjd]
gmag = np.array([16.46, 16.499])
gflux = np.array([1, 10**((gmag[1] - gmag[0]) / (-2.5))])
gerr = np.abs(np.array([0.01, 0.006]) * np.log(10) / (-2.5) * gflux)

plt.errorbar(gtime, gflux, yerr=gerr, linestyle='none', marker='o', zorder=1, markersize=10)
plt.xlabel('MJD (days)')
plt.ylabel('Relative Flux')

# plt.errorbar(mjd_time_med, med_flux/np.mean(med_flux), yerr=med_flux_err/np.mean(med_flux),
#              linestyle='none', marker='o', markerfacecolor='none', linewidth=0.5)
# print(np.sqrt(np.sum((med_flux_err / np.mean(med_flux))**2) / len(med_flux)))

plt.ylim(0.956, 1.012)
# plt.ylim(0.9,1.1)
plt.savefig(target+'_compare.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)

print('gflux: ', gflux, gerr)
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Thinking about Dust - v1

Use basic values for the relative extinction in each band at a "standard" R_V = 3.1. Imagine the long-term Kepler fading was due to dust: what extinction would we expect in the NUV? (A: much greater.)
# considering extinction... # w/ thanks to the Padova Isochrone page for easy shortcut to getting these extinction values: # http://stev.oapd.inaf.it/cgi-bin/cmd A_NUV = 2.27499 # actually A_NUV / A_V, in magnitudes, for R_V = 3.1 A_Kep = 0.85946 # actually A_Kep / A_V, in magnitudes, for R_V = 3.1 A_W1 = 0.07134 # actually A_W1 / A_V, in magnitudes, for R_V = 3.1 wave_NUV = 2556.69 # A wave_Kep = 6389.68 # A wave_W1 = 33159.26 # A print('nuv') ## use the Long Cadence data. frac_kep = (np.median(df['model_flux'][np.where((np.abs(df['time']+ 2454833 - 2400000.5 -gtime[0])) < 25)[0]]) - np.median(df['model_flux'][np.where((np.abs(df['time']+ 2454833 - 2400000.5 -gtime[1])) < 25)[0]])) ## could use the FFI data, but it slightly changes the extinction coefficients and they a pain in the butt ## to adjust manually because I was an idiot how i wrote this # frac_kep = (np.median(ffi['flux'][np.where((np.abs(ffi['mjd'] -gtime[0])) < 75)[0]]) - # np.median(ffi['flux'][np.where((np.abs(ffi['mjd'] -gtime[1])) < 75)[0]])) print(frac_kep) mag_kep = -2.5 * np.log10(1.-frac_kep) print(mag_kep) mag_nuv = mag_kep / A_Kep * A_NUV print(mag_nuv) frac_nuv = 10**(mag_nuv / (-2.5)) print(1-frac_nuv) frac_kep_w = (np.median(df['model_flux'][np.where((np.abs(df['time']+ 2454833 - 2400000.5 -wise_time[0])) < 25)[0]]) - np.median(df['model_flux'][np.where((np.abs(df['time']+ 2454833 - 2400000.5 -wise_time[1])) < 25)[0]])) print('w1') print(frac_kep_w) mag_kep_w = -2.5 * np.log10(1.-frac_kep_w) print(mag_kep_w) mag_w1 = mag_kep_w / A_Kep * A_W1 print(mag_w1) frac_w1 = 10**(mag_w1 / (-2.5)) print(1-frac_w1) plt.errorbar([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]], yerr=[0, np.sqrt(np.sum(gerr**2))], label='Observed', marker='o') plt.plot([wave_Kep, wave_NUV], [1-frac_kep, frac_nuv], '--o', label=r'$R_V$=3.1 Model') plt.legend(fontsize=10, loc='lower right') plt.xlabel(r'Wavelength ($\rm\AA$)') plt.ylabel('Relative Flux Decrease') plt.ylim(0.93,1) # plt.savefig(target+'_extinction_model_1.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Same plot as above, but with the WISE W1 band; unfortunately this comparison requires a different time window.
plt.errorbar([wave_Kep, wave_W1], [1-frac_kep_w, wise_flux[1]], yerr=[0, np.sqrt(np.sum(wise_err**2))], label='Observed', marker='o', c='purple') plt.plot([wave_Kep, wave_W1], [1-frac_kep_w, frac_w1], '--o', label='Extinction Model', c='green') plt.legend(fontsize=10, loc='lower right') plt.xlabel(r'Wavelength ($\rm\AA$)') plt.ylabel('Relative Flux') plt.ylim(0.93,1.03) # plt.savefig(target+'_extinction_model_2.png', dpi=150, bbox_inches='tight', pad_inches=0.25)
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Combining the fading and dust model for both the NUV and W1 data. In the IR we can't say much... so maybe toss it out, since it doesn't constrain the dust model one way or the other.
plt.errorbar([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]], yerr=[0, np.sqrt(np.sum(gerr**2))], label='Observed1', marker='o') plt.plot([wave_Kep, wave_NUV], [1-frac_kep, frac_nuv], '--o', label='Extinction Model1') plt.errorbar([wave_Kep, wave_W1], [1-frac_kep_w, wise_flux[1]], yerr=[0, np.sqrt(np.sum(wise_err**2))], label='Observed2', marker='o', c='purple') plt.plot([wave_Kep, wave_W1], [1-frac_kep_w, frac_w1], '--o', label='Extinction Model2') plt.legend(fontsize=10, loc='upper left') plt.xlabel(r'Wavelength ($\rm\AA$)') plt.ylabel('Relative Flux') plt.ylim(0.93,1.03) plt.xscale('log') # plt.savefig(target+'_extinction_model_2.png', dpi=150, bbox_inches='tight', pad_inches=0.25)
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Dust 2.0: Let's fit some models to the Optical-NUV data!

Start with a simple dust model and try to fit for R_V. The procedure will be, for each R_V I want to test:

- figure out what A_Kep / A_V is for this R_V
- solve for A_NUV(R_V) given the measured A_Kep
# the "STANDARD MODEL" for extinction A_V = 0.0265407 R_V = 3.1 ext_out = extinction.ccm89(np.array([wave_Kep, wave_NUV]), A_V, R_V) # (ext_out[1] - ext_out[0]) / ext_out[1] print(10**(ext_out[0]/(-2.5)), (1-frac_kep)) # these need to match (within < 1%) print(10**(ext_out[1]/(-2.5)), gflux[1]) # and then these won't, as per our previous plot # print(10**((ext_out[1] - ext_out[0])/(-2.5)) / 10**(ext_out[0]/(-2.5))) # now find an R_V (and A_V) that gives matching extinctions in both bands # start by doing a grid over plasible A_V values at each R_V I care about... we doing this brute force! ni=50 nj=50 di = 0.2 dj = 0.0003 ext_out_grid = np.zeros((2,ni,nj)) for i in range(ni): R_V = 1.1 + i*di for j in range(nj): A_V = 0.02 + j*dj ext_out_ij = extinction.ccm89(np.array([wave_Kep, wave_NUV]), A_V, R_V) ext_out_grid[:,i,j] = 10**(ext_out_ij/(-2.5)) R_V_grid = 1.1 + np.arange(ni)*di A_V_grid = 0.02 + np.arange(nj)*dj # now plot where the Kepler extinction (A_Kep) matches the measured value, for each R_V plt.figure() plt.contourf( A_V_grid, R_V_grid, ext_out_grid[0,:,:], origin='lower' ) cb = plt.colorbar() cb.set_label('A_Kep (flux)') A_V_match = np.zeros(ni) ext_NUV = np.zeros(ni) for i in range(ni): xx = np.interp(1-frac_kep, ext_out_grid[0,i,:][::-1], A_V_grid[::-1]) plt.scatter(xx, R_V_grid[i], c='r', s=10) A_V_match[i] = xx ext_NUV[i] = 10**(extinction.ccm89(np.array([wave_NUV]),xx, R_V_grid[i]) / (-2.5)) plt.ylabel('R_V') plt.xlabel('A_V (mag)') plt.show() # Finally: at what R_V do we both match A_Kep (as above), and *now* A_NUV? RV_final = np.interp(gflux[1], ext_NUV, R_V_grid) print(RV_final) # this is the hacky way to sorta do an error propogation.... RV_err = np.mean(np.interp([gflux[1] + np.sqrt(np.sum(gerr**2)), gflux[1] - np.sqrt(np.sum(gerr**2))], ext_NUV, R_V_grid)) - RV_final print(RV_err) AV_final = np.interp(gflux[1], ext_NUV, A_V_grid) print(AV_final) plt.plot(R_V_grid, ext_NUV) plt.errorbar(RV_final, gflux[1], yerr=np.sqrt(np.sum(gerr**2)), xerr=RV_err, marker='o') plt.xlabel('R_V') plt.ylabel('A_NUV (flux)')
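The same answer can be reached without the brute-force grid by root-finding on R_V directly. Below is a sketch of that approach; it reuses wave_Kep, wave_NUV, frac_kep, and gflux from earlier cells, uses the same extinction package, and adds scipy as a dependency not otherwise used in this notebook.

import numpy as np
from scipy.optimize import brentq
import extinction

def nuv_flux_ratio(R_V):
    # find the A_V that reproduces the observed Kepler flux drop at this R_V...
    def kep_resid(A_V):
        model = 10 ** (extinction.ccm89(np.array([wave_Kep]), A_V, R_V)[0] / -2.5)
        return model - (1 - frac_kep)
    A_V = brentq(kep_resid, 1e-4, 1.0)
    # ...then predict the NUV flux ratio for that (A_V, R_V) pair
    return 10 ** (extinction.ccm89(np.array([wave_NUV]), A_V, R_V)[0] / -2.5)

# solve for the R_V whose predicted NUV flux ratio matches the observed GALEX ratio
RV_root = brentq(lambda R_V: nuv_flux_ratio(R_V) - gflux[1], 1.5, 9.0)
print(RV_root)  # should land near the grid-based answer of R_V ~ 5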
explore.ipynb
jradavenport/GALEX_Boyajian
mit
CCM89 says R_V = 5.02 +/- 0.94 satisfies both the Kepler and NUV fading we see. Such a high value of R_V ~ 5 is not unheard of, particularly in protostars; however, Boyajian's Star does not show any other indications of being such a source.

NOTE: If we re-run using extinction.fitzpatrick99 instead of extinction.ccm89, we get R_V = 5.80 +/- 1.58
plt.errorbar([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]], yerr=[0, np.sqrt(np.sum(gerr**2))], label='Observed', marker='o', linestyle='none', zorder=0, markersize=10) plt.plot([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]], label=r'$R_V$=5.0 Model', c='r', lw=3, alpha=0.7,zorder=1) plt.plot([wave_Kep, wave_NUV], [1-frac_kep, frac_nuv], '--', label=r'$R_V$=3.1 Model',zorder=2) plt.legend(fontsize=10, loc='lower right') plt.xlabel(r'Wavelength ($\rm\AA$)') plt.ylabel('Relative Flux Decrease') plt.ylim(0.93,1) plt.savefig(target+'_extinction_model_2.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25) # For referee: compute how many sigma away the Rv=3.1 model is from the Rv=5 print( (gflux[1] - frac_nuv) / np.sqrt(np.sum(gerr**2)), np.sqrt(np.sum(gerr**2)) ) print( (gflux[1] - frac_nuv) / 3., 3. * np.sqrt(np.sum(gerr**2)) ) # how much Hydrogen would you need to cause this fading? # http://www.astronomy.ohio-state.edu/~pogge/Ast871/Notes/Dust.pdf # based on data from Rachford et al. (2002) http://adsabs.harvard.edu/abs/2002ApJ...577..221R A_Ic = extinction.ccm89(np.array([8000.]), AV_final, RV_final) N_H = A_Ic / ((2.96 - 3.55 * ((3.1 / RV_final)-1)) * 1e-22) print(N_H[0] , 'cm^-2') # see also http://adsabs.harvard.edu/abs/2009MNRAS.400.2050G for R_V=3.1 only print(2.21e21 * AV_final, 'cm^-2') 1-gflux[1]
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Another simple model: changes in blackbody temperature

We have 2 bands; that's enough to constrain how the temperature would have to change for a blackbody. So the procedure is:
- assume the stellar radius doesn't change, only the temperature (a good approximation)
- given the drop in optical luminosity, what change in temperature is needed?
- given that change in temperature, what is the predicted drop in the NUV? Does it match the data?

NOTE: a better model would be changing the T_eff of a Phoenix model grid to match, since the star isn't a blackbody through the NUV-Optical (b/c opacities!)
# do simple thing first: a grid of temperatures starting at T_eff of the star (SpT = F3, T_eff = 6750) temp0 = 6750 * u.K wavelengths = [wave_Kep, wave_NUV] * u.AA wavegrid = np.arange(wave_NUV, wave_Kep) * u.AA flux_lam0 = blackbody_lambda(wavelengths, temp0) flux_lamgrid = blackbody_lambda(wavegrid, temp0) plt.plot(wavegrid, flux_lamgrid/1e6) plt.scatter(wavelengths, flux_lam0/1e6) Ntemps = 50 dT = 5 * u.K flux_lam_out = np.zeros((2,Ntemps)) for k in range(Ntemps): flux_new = blackbody_lambda(wavelengths, (temp0 - dT*k) ) flux_lam_out[:,k] = flux_new # [1-frac_kep, gflux[1]] yy = flux_lam_out[0,:] / flux_lam_out[0,0] xx = temp0 - np.arange(Ntemps)*dT temp_new = np.interp(1-frac_kep, yy[::-1], xx[::-1] ) # this is the hacky way to sorta do an error propogation.... err_kep = np.mean(ffi['err'][np.where((np.abs(ffi['mjd'] -gtime[0])) < 50)[0]]) temp_err = (np.interp([1-frac_kep - err_kep, 1-frac_kep + err_kep], yy[::-1], xx[::-1])) temp_err = (temp_err[1] - temp_err[0])/2. print(temp_new, temp_err) yy2 = flux_lam_out[1,:] / flux_lam_out[1,0] NUV_new = np.interp(temp_new, xx[::-1], yy2[::-1]) print(NUV_new) print(gflux[1], np.sqrt(np.sum(gerr**2))) plt.plot(temp0 - np.arange(Ntemps)*dT, flux_lam_out[0,:]/flux_lam_out[0,0], label='Blackbody model (Kep)') plt.plot(temp0 - np.arange(Ntemps)*dT, flux_lam_out[1,:]/flux_lam_out[1,0],ls='--', label='Blackbody model (NUV)') plt.errorbar(temp_new, gflux[1], yerr=np.sqrt(np.sum(gerr**2)), marker='o', label='Observed NUV' ) plt.scatter([temp_new], [1-frac_kep], s=60, marker='s') plt.scatter([temp_new], [NUV_new], s=60, marker='s') plt.legend(fontsize=10, loc='upper left') plt.xlim(6650,6750) plt.ylim(.9,1) plt.ylabel('Fractional flux') plt.xlabel('Temperature') # plt.title('Tuned to Kepler Dimming') plt.savefig(target+'_blackbody.png', dpi=150, bbox_inches='tight', pad_inches=0.25)
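As a cross-check on the temperature grid above, the temperature drop can also be found by root-finding on the Planck function directly. This is only a sketch: it hand-rolls the Planck function (so it does not rely on blackbody_lambda, which is deprecated in newer astropy versions), reuses wave_Kep, wave_NUV, and frac_kep from earlier cells, and adds scipy as an extra dependency.

import numpy as np
from scipy.optimize import brentq
from scipy.constants import h, c, k as k_B  # renamed so we don't clobber the loop variable k above

def planck_lambda(wave_aa, T):
    # Planck spectral radiance B_lambda for wavelength in Angstroms and temperature in Kelvin
    lam = wave_aa * 1e-10  # Angstrom -> meters
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T))

T0 = 6750.0  # K, same starting T_eff as above

# temperature drop that reproduces the observed Kepler flux fraction (1 - frac_kep)
dT = brentq(lambda d: planck_lambda(wave_Kep, T0 - d) / planck_lambda(wave_Kep, T0) - (1 - frac_kep),
            0.0, 500.0)
print('dT =', dT, 'K')

# predicted NUV flux fraction for that same temperature drop
print('predicted NUV fraction:', planck_lambda(wave_NUV, T0 - dT) / planck_lambda(wave_NUV, T0))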
explore.ipynb
jradavenport/GALEX_Boyajian
mit
Notice how we don't have the topics of the articles! Let's use non-negative matrix factorization (NMF) to attempt to figure out clusters of the articles.

Preprocessing
from sklearn.feature_extraction.text import TfidfVectorizer
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/01-Non-Negative-Matrix-Factorization.ipynb
rishuatgithub/MLPy
apache-2.0
max_df: float in range [0.0, 1.0] or int, default=1.0<br> When building the vocabulary ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None. min_df: float in range [0.0, 1.0] or int, default=1<br> When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.
tfidf = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english') dtm = tfidf.fit_transform(npr['Article']) dtm
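To see what those two thresholds do in practice, here is a tiny toy corpus (purely illustrative, not part of the NPR data): with min_df=2 a word must appear in at least two documents to make it into the vocabulary, and max_df=0.95 drops words that appear in more than 95% of documents.

toy_docs = [
    'cats and dogs',
    'dogs and birds',
    'birds and fish',
]

toy_tfidf = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english')
toy_tfidf.fit(toy_docs)

# 'cats' and 'fish' appear in only one document each (dropped by min_df=2),
# and 'and' is an English stop word, so only 'birds' and 'dogs' survive.
# (Newer scikit-learn versions use get_feature_names_out() instead.)
print(toy_tfidf.get_feature_names())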
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/01-Non-Negative-Matrix-Factorization.ipynb
rishuatgithub/MLPy
apache-2.0
NMF
from sklearn.decomposition import NMF nmf_model = NMF(n_components=7,random_state=42) # This can take awhile, we're dealing with a large amount of documents! nmf_model.fit(dtm)
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/01-Non-Negative-Matrix-Factorization.ipynb
rishuatgithub/MLPy
apache-2.0
Displaying Topics
len(tfidf.get_feature_names()) import random for i in range(10): random_word_id = random.randint(0,54776) print(tfidf.get_feature_names()[random_word_id]) for i in range(10): random_word_id = random.randint(0,54776) print(tfidf.get_feature_names()[random_word_id]) len(nmf_model.components_) nmf_model.components_ len(nmf_model.components_[0]) single_topic = nmf_model.components_[0] # Returns the indices that would sort this array. single_topic.argsort() # Word least representative of this topic single_topic[18302] # Word most representative of this topic single_topic[42993] # Top 10 words for this topic: single_topic.argsort()[-10:] top_word_indices = single_topic.argsort()[-10:] for index in top_word_indices: print(tfidf.get_feature_names()[index])
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/01-Non-Negative-Matrix-Factorization.ipynb
rishuatgithub/MLPy
apache-2.0
These look like business articles perhaps... Let's confirm by using .transform() on our vectorized articles to attach a topic label number. But first, let's view all 7 topics found.
for index,topic in enumerate(nmf_model.components_): print(f'THE TOP 15 WORDS FOR TOPIC #{index}') print([tfidf.get_feature_names()[i] for i in topic.argsort()[-15:]]) print('\n')
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/01-Non-Negative-Matrix-Factorization.ipynb
rishuatgithub/MLPy
apache-2.0
Attaching Discovered Topic Labels to Original Articles
dtm dtm.shape len(npr) topic_results = nmf_model.transform(dtm) topic_results.shape topic_results[0] topic_results[0].round(2) topic_results[0].argmax()
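To actually attach a topic label to each article, take the argmax across every row of topic_results and store it in a new column (a short sketch; npr is the DataFrame of articles loaded earlier in the notebook, and 'Topic' is just a column name of our choosing):

# index of the highest-weight topic for each article
npr['Topic'] = topic_results.argmax(axis=1)

npr.head(10)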
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/01-Non-Negative-Matrix-Factorization.ipynb
rishuatgithub/MLPy
apache-2.0
The sequence of procedures below is based on the one explained in the Mapping Reddit demo notebook. First we import the raw data and make a sparse matrix out of it.
print mldb.put('/v1/procedures/import_rcp', { "type": "import.text", "params": { "headers": ["user_id", "recipe_id"], "dataFileUrl": "file://mldb/mldb_test_data/favorites.csv.gz", "outputDataset": "rcp_raw", "runOnCreation": True } }) print mldb.post('/v1/procedures', { "id": "rcp_import", "type": "transform", "params": { "inputData": "select pivot(recipe_id, 1) as * named user_id from rcp_raw group by user_id", "outputDataset": "recipes", "runOnCreation": True } })
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
We then train an SVD decomposition and do K-Means clustering
print mldb.post('/v1/procedures', { "id": "rcp_svd", "type" : "svd.train", "params" : { "trainingData": "select * from recipes", "columnOutputDataset" : "rcp_svd_embedding_raw", "runOnCreation": True } }) num_centroids = 16 print mldb.post('/v1/procedures', { "id" : "rcp_kmeans", "type" : "kmeans.train", "params" : { "trainingData" : "select * from rcp_svd_embedding_raw", "outputDataset" : "rcp_kmeans_clusters", "centroidsDataset" : "rcp_kmeans_centroids", "numClusters" : num_centroids, "runOnCreation": True } })
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
Now we import the actual recipe names, clean them up a bit, and get a version of our SVD embedding with the recipe names as column names.
print mldb.put('/v1/procedures/import_rcp_names_raw', { 'type': 'import.text', 'params': { 'dataFileUrl': 'file://mldb/mldb_test_data/recipes.csv.gz', 'outputDataset': "rcp_names_raw", 'delimiter':'', 'quoteChar':'', 'runOnCreation': True } }) print mldb.put('/v1/procedures/rcp_names_import', { 'type': 'transform', 'params': { 'inputData': ''' select jseval( 'return s.substr(s.indexOf(",") + 1) .replace(/&#34;/g, "") .replace(/&#174;/g, "");', 's', lineText) as name named implicit_cast(rowName()) - 1 from rcp_names_raw ''', 'outputDataset': 'rcp_names', 'runOnCreation': True } }) print mldb.put('/v1/procedures/rcp_clean_svd', { 'type': 'transform', 'params': { 'inputData': """ select rcp_svd_embedding_raw.* as * named rcp_names.rowName()+'-'+rcp_names.name from rcp_svd_embedding_raw join rcp_names on (rcp_names.rowName() = rcp_svd_embedding_raw.rowPathElement(0)) """, 'outputDataset': {'id': 'rcp_svd_embedding', 'type': 'embedding', 'params': {'metric': 'cosine'}}, 'runOnCreation': True } })
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
With all that pre-processing done, let's look at the names of the 3 closest recipes to each cluster centroid to try to get a sense of what kind of clusters we got.
mldb.put("/v1/functions/nearestRecipe", { "type":"embedding.neighbors", "params": { "dataset": "rcp_svd_embedding", "defaultNumNeighbors": 3 } }) mldb.query(""" select nearestRecipe({coords: {*}})[neighbors] as * from rcp_kmeans_centroids """).applymap(lambda x: x.split('-')[1])
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
We can see a bit of pattern just from the names of the recipes nearest to the centroids, but we can probably do better! Let's try to extract the most characteristic words used in the recipe names for each cluster. Topic Extraction with TF-IDF We'll start by preprocessing the recipe names a bit: taking out a few punctuations and convert to lowercase. And then for a given cluster, we will count the words taken from the recipe names. This is all done in one query.
print mldb.put('/v1/procedures/sum_words_per_cluster', { 'type': 'transform', 'params': { 'inputData': """ select sum({tokens.* as *}) as * named c.cluster from ( SELECT lower(n.name), tokenize('recipe ' + lower(n.name), {splitChars:' -.;&!''()",', minTokenLength: 4}) as tokens, c.cluster FROM rcp_names as n JOIN rcp_kmeans_clusters as c ON (n.rowName() = c.rowPathElement(0)) order by n.rowName() ) group by c.cluster """, 'outputDataset': 'rcp_cluster_word_counts', 'runOnCreation': True } }) mldb.query("""select * from rcp_cluster_word_counts order by implicit_cast(rowName())""")
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
We can use this to create a TF-IDF score for each word in the cluster. Basically this score will give us an idea of the relative importance of each word in a given cluster.
print mldb.put('/v1/procedures/train_tfidf', { 'type': 'tfidf.train', 'params': { 'trainingData': "select * from rcp_cluster_word_counts", 'modelFileUrl': 'file:///mldb_data/models/rcp_tfidf.idf', 'runOnCreation': True } }) print mldb.put('/v1/functions/rcp_tfidf', { 'type': 'tfidf', 'params': { 'modelFileUrl': 'file:///mldb_data/models/rcp_tfidf.idf', 'tfType': 'log', 'idfType': 'inverse' } }) print mldb.put('/v1/procedures/apply_tfidf', { 'type': 'transform', 'params': { 'inputData': "select rcp_tfidf({input: {*}})[output] as * from rcp_cluster_word_counts", 'outputDataset': 'rcp_cluster_word_scores', 'runOnCreation': True } }) mldb.query("select * from rcp_cluster_word_scores order by implicit_cast(rowName())")
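To build some intuition for what the tfidf function is doing, here is a rough back-of-the-envelope version in plain Python. This is a toy illustration with made-up counts, and it assumes the conventional log term frequency log(1 + tf) and a simple inverse document frequency N/df; MLDB's exact 'log'/'inverse' formulas may differ in detail.

import numpy as np

# made-up word counts: 4 clusters (rows) x 3 words (columns)
counts = np.array([
    [10, 0, 2],
    [ 0, 8, 1],
    [ 1, 0, 9],
    [ 2, 1, 1],
], dtype=float)

n_clusters = counts.shape[0]
df = (counts > 0).sum(axis=0)   # number of clusters each word appears in
tf = np.log1p(counts)           # damped term frequency
idf = n_clusters / df           # "inverse" document frequency

# words that are frequent in one cluster but rare across clusters score highest
print(tf * idf)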
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
If we transpose that dataset, we will be able to get the highest scored words for each cluster, and we can display them nicely in a word cloud.
import json from ipywidgets import interact from IPython.display import IFrame, display html = """ <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.6/d3.min.js"></script> <script src="https://static.mldb.ai/d3.layout.cloud.js"></script> <script src="https://static.mldb.ai/wordcloud.js"></script> <body> <script>drawCloud(%s)</script> </body> """ @interact def cluster_word_cloud(cluster=[0, num_centroids-1]): num_words = 20 cluster_words = mldb.get( '/v1/query', q=""" SELECT rowName() as text FROM transpose(rcp_cluster_word_scores) ORDER BY "{0}" DESC LIMIT {1} """.format(cluster, num_words), format='aos', rowNames=0 ).json() for i,x in enumerate(cluster_words): x['size'] = num_words - i display( IFrame("data:text/html," + (html % json.dumps(cluster_words)).replace('"',"'"), 850, 350) )
container_files/demos/Exploring Favourite Recipes.ipynb
mldbai/mldb
apache-2.0
Load the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() y_train = tf.keras.utils.to_categorical(y_train, 10) y_test = tf.keras.utils.to_categorical(y_test, 10) print(f"Total training examples: {len(x_train)}") print(f"Total test examples: {len(x_test)}")
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Define constants
BATCH_SIZE = 128 * strategy.num_replicas_in_sync EPOCHS = 90 START_LR = 0.1 AUTO = tf.data.AUTOTUNE
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Prepare data loaders
# Augmentation pipeline data_augmentation = tf.keras.Sequential( [ layers.experimental.preprocessing.Normalization(), layers.experimental.preprocessing.RandomCrop(32, 32), layers.experimental.preprocessing.RandomFlip("horizontal"), layers.experimental.preprocessing.RandomRotation(factor=0.02), layers.experimental.preprocessing.RandomZoom( height_factor=0.2, width_factor=0.2 ) ] ) # Now, map the augmentation pipeline to our training dataset train_ds = ( tf.data.Dataset.from_tensor_slices((x_train, y_train)) .shuffle(BATCH_SIZE * 100) .map(lambda x, y: (tf.image.convert_image_dtype(x, tf.float32), y), num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # Test dataset test_ds = ( tf.data.Dataset.from_tensor_slices((x_test, y_test)) .map(lambda x, y: (tf.image.convert_image_dtype(x, tf.float32), y), num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # Compute the mean and the variance of the training data for normalization data_augmentation.layers[0].adapt(x_train/255.) # Notice the scaling step
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Model utilities
def get_model(n_classes=10): n = 2 depth = n * 9 + 2 n_blocks = ((depth - 2) // 9) - 1 # The input tensor inputs = layers.Input(shape=(32, 32, 3)) x = data_augmentation(inputs) # Normalize and augment # The Stem Convolution Group x = resnet20.stem(x) # The learner x = resnet20.learner(x, n_blocks) # The Classifier for 10 classes outputs = resnet20.classifier(x, 10) # Instantiate the Model model = tf.keras.Model(inputs, outputs) return model # Serialize the initial model for better reproducibility with strategy.scope(): get_model().save("initial_model_resnet20")
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Model training
# LR Scheduler reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="loss", patience=3) # Optimizer and loss function optimizer = tf.keras.optimizers.SGD(learning_rate=START_LR, momentum=0.9) loss_fn = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
One can obtain the initial weights that I used from here.
with strategy.scope(): rn_model = tf.keras.models.load_model("initial_model_resnet20") rn_model.compile(loss=loss_fn, optimizer=optimizer, metrics=["accuracy"]) history = rn_model.fit(train_ds, validation_data=test_ds, epochs=EPOCHS, callbacks=[reduce_lr]) plt.plot(history.history["loss"], label="train loss") plt.plot(history.history["val_loss"], label="test loss") plt.grid() plt.legend() plt.show() rn_model.save("resnet20_classifier") with strategy.scope(): _, train_acc = rn_model.evaluate(train_ds, verbose=0) _, test_acc = rn_model.evaluate(test_ds, verbose=0) print("Train accuracy: {:.2f}%".format(train_acc * 100)) print("Test accuracy: {:.2f}%".format(test_acc * 100))
research/third_party/denoised_smoothing/notebooks/Train_Classifier.ipynb
tensorflow/neural-structured-learning
apache-2.0
Intermediate Pandas Pandas is a powerful Python library for working with data. For this lab, you should already know what a DataFrame and Series are and how to do some simple analysis of the data contained in those structures. In this lab we'll look at some more advanced capabilities of Pandas, such as filtering, grouping, merging, and sorting. DataFrame Information DataFrame objects are rich containers that allow us to explore and modify data. In this lab we will learn powerful techniques for working with the data contained in DataFrame objects. To begin, let's create a DataFrame containing information about populations and airports in a few select cities.
import pandas as pd airport_df = pd.DataFrame.from_records(( ('Atlanta', 498044, 2), ('Austin', 964254, 2), ('Kansas City', 491918, 8), ('New York City', 8398748, 3), ('Portland', 653115, 1), ('San Francisco', 883305, 3), ('Seattle', 744955, 2), ), columns=("City Name", "Population", "Airports")) airport_df
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
If you aren't familiar with the from_records() method, it is a way to create a DataFrame from data formatted in a tabular manner. In this case we have a tuple-of-tuples where each inner-tuple is a row of data for a city. Shape One interesting fact about a DataFrame is its shape. What is shape? Shape is the number of rows and columns contained in the dataframe. Let's find the shape of the airport_df:
airport_df.shape
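Since shape is just a tuple, you can also unpack it directly if you need the row and column counts separately (a small aside):

n_rows, n_cols = airport_df.shape
print(n_rows, n_cols)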
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
The DataFrame has a shape of (7, 3). This means that the DataFrame has seven rows and three columns. If you are familiar with NumPy, you probably are also familiar with shape. NumPy arrays can have n-dimensional shapes while DataFrame objects tend to stick to two dimensions: rows and columns. Exercise 1: Finding Shape Download the California housing data referenced below into a DataFrame, and print out the shape of the data. Student Solution
url = "https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv" # Download the housing data # Print the shape of the data
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Columns Speaking of columns, it's possible to ask a DataFrame what columns it contains using the columns attribute:
airport_df.columns
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Notice that the columns are contained in an Index object. An Index wraps the list of columns. For basic usage, like loops, you can just use the Index directly:
for c in airport_df.columns: print(c)
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
If you do need the columns in a lower level format, you can use .values to get a NumPy array:
type(airport_df.columns.values)
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
If you need a basic Python list, you can then call .tolist() to get the core Python list of column names:
type(airport_df.columns.values.tolist())
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Exercise 2: Pretty Print Columns

The columns in the California housing dataset are not necessarily easy on the eyes. Columns like housing_median_age would be easier to read if they were presented as Housing Median Age.

In the code block below, download the California housing dataset. Then find the names of the columns in the dataset and convert them from "snake case" to regular English. For instance, housing_median_age becomes Housing Median Age and total_rooms becomes Total Rooms. Print the human-readable names one per line. You can find Python string methods that might be helpful here.

Write your code so that it can handle any column name in "snake case":

Underscores should be replaced by spaces.
The first letter of each word should be capitalized.

Be sure to get the column names from the DataFrame.

Student Solution
import pandas as pd url = "https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv" df = pd.read_csv(url) # Your Code Goes Here
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Missing Values It is common to find datasets with missing data. When this happens it's good to know that the data is missing so you can determine how to handle the situation. Let's recreate our city data but set some values to None:
import pandas as pd airport_df = pd.DataFrame.from_records(( ('Atlanta', 498044, 2), (None, 964254, 2), ('Kansas City', 491918, 8), ('New York City', None, 3), ('Portland', 653115, 1), ('San Francisco', 883305, None), ('Seattle', 744955, 2), ), columns=("City Name", "Population", "Airports")) airport_df
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
You can see that the population of New York and the number of airports in San Francisco are now represented by NaN values. This stands for 'Not a Number', which means that the value is an unknown numeric value. You'll also see that where 'Austin' once was, we now have a None value. This means that we are missing a non-numeric value. If we want to ask the DataFrame what values are present or missing, we can use the isna() method:
airport_df.isna()
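A common follow-up is to count how many values are missing in each column; isna() combined with sum() does this in one line (a small sketch using the same DataFrame):

# True counts as 1 and False as 0, so the sum is the number of missing values per column
airport_df.isna().sum()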
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Here we get True values where a data point is missing and False values where we have data. Using this, we can do powerful things like select all of the rows where the population or airport count is missing:
airport_df[airport_df['Population'].isna() | airport_df['Airports'].isna()]
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Now that we know that we are missing the population of New York and the number of airports in San Francisco, we can look up that data and manually fix it. Sometimes the fixes aren't so easy. The data might be impossible to find, or there might be so many missing values that you can't individually fix them all. In these cases you have two options: completely remove the offending rows or columns, or patch the data in some way. Throughout this course we will work with many datasets that have missing or obviously invalid values, and we will discuss mitigation strategies.

Filtering

Filtering is an important concept in data analysis and processing. When you think of filtering in the real world, you likely think of an object that blocks undesired things while allowing desired things to pass through. Imagine a coffee filter. It stops the coffee grounds from getting into the coffee pot, but it allows the water bound to coffee's chemical compounds to pass through into your perfect brew.

Filtering a DataFrame is similar. A DataFrame contains rows of data. Some of these rows might be important to you, and some you might want to discard. Filtering allows you to select only the data that you care about and put that data in a new DataFrame.

In the example below, we filter our airport_df to select only cities that have more than two airports. In return we get a DataFrame that contains only information about those cities.
airport_df[airport_df['Airports'] > 2]
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Let's deconstruct this statement. At its core we have:

airport_df['Airports'] > 2

This expression compares every 'Airports' value in the airport_df DataFrame and returns True if there are more than two airports, False otherwise.
airport_df['Airports'] > 2
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
This data is returned as a Pandas Series. The Series is then used as a boolean index for the airport_df DataFrame. Boolean index is just a term used to refer to a Series (or other list-like structure) of boolean values used in the index operator, [], for the DataFrame. The boolean index must have the same length as the number of rows in the DataFrame being indexed. DataFrame rows that map to True values in the index are retained, while rows that map to False values are filtered out.
has_many_airports = airport_df['Airports'] > 2 airport_df[has_many_airports]
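As mentioned above, the boolean index doesn't have to be a Series; any list-like structure of booleans with one entry per row works. Here is a small illustration with a plain Python list (the mask values are arbitrary, just to show the mechanics):

# keep the 1st, 3rd, and 5th of the seven rows
mask = [True, False, True, False, True, False, False]
airport_df[mask]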
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
If you are familiar with Boolean logic and Python, you probably know that you can create compound expressions using the or and and keywords. You can also use the keyword not to reverse an expression.
print(True and False) print(True or False) print(not True)
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
You can do similar things in Pandas with boolean indices. However, and, or, and not don't work as expected. Instead you need to use the &, |, and ~ operators.

and changes to &
or changes to |
not changes to ~

For plain integers in Python, these are actually the bitwise operators. When working on Pandas boolean objects, they don't perform bitwise arithmetic but instead element-wise Boolean logic.

Let's see this in action with an example. Imagine we want to find all cities with more than two airports and fewer than a million inhabitants. First, let's find the rows with more than two airports:
has_many_airports = airport_df['Airports'] > 2 has_many_airports
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0
Now we can find the rows that represent a city with less than a million residents:
small_cities = airport_df['Population'] < 1000000 small_cities
content/02_data/02_intermediate_pandas/colab.ipynb
google/applied-machine-learning-intensive
apache-2.0