5.1 Sample data Loads a data sample and draws n_samples from the field. For sampling the field, random samples from a gamma distribution with a fairly high scale are drawn, to ensure there are some outliers in the sample. The values are then re-scaled to the shape of the random field and the values are extracted from it. You can use either of the next two cells to work on either the pancake or the Meuse dataset.
N = 80
pan = skg.data.pancake_field().get('sample')
coords, vals = skg.data.pancake(N=N, seed=1312).get('sample')

fig = make_subplots(1, 2, shared_xaxes=True, shared_yaxes=True)
fig.add_trace(
    go.Scatter(x=coords[:, 0], y=coords[:, 1], mode='markers',
               marker=dict(color=vals, cmin=0, cmax=255), name='samples'),
    row=1, col=1
)
fig.add_trace(go.Heatmap(z=pan, name='field'), row=1, col=2)
fig.update_layout(width=900, height=450, template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
Uncomment this cell to work on the Meuse dataset instead:
coords, vals = skg.data.meuse().get('sample')
vals = vals.flatten()

fig = go.Figure(go.Scatter(x=coords[:, 0], y=coords[:, 1], mode='markers',
                           marker=dict(color=vals), name='samples'))
fig.update_layout(width=450, height=450, template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.2 Lag class binning - fixed N Apply different lag class binning methods and visualize their histograms. In this section, the distance matrix between all point pair combinations (NxN) is binned using each method. The plots visualize the histogram of the distance matrix of the variogram, not the variogram lag classes themselves.
N = 15

# use a nugget
V = skg.Variogram(coords, vals, n_lags=N, use_nugget=True)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.2.1 default 'even' lag classes The default binning method will find N equidistant bins. This is the default behavior and is used in almost all geostatistical publications. It should not be used without a maxlag (as is done in the plot below).
# apply binning
bins, _ = skg.binning.even_width_lags(V.distance, N, None)

# get the histogram
count, _ = np.histogram(V.distance, bins=bins)

fig = go.Figure(
    go.Bar(x=bins, y=count),
    layout=dict(template='plotly_white', title=r"$\texttt{'even'}~~binning$")
)
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.2.2 'uniform' lag classes The uniform method adjusts the lag class widths so that each lag class holds the same sample size. This can be used when there must not be any empty lag classes on small data samples, or when comparable sample sizes are desirable for the semi-variance estimator.
# apply binning
bins, _ = skg.binning.uniform_count_lags(V.distance, N, None)

# get the histogram
count, _ = np.histogram(V.distance, bins=bins)

fig = go.Figure(
    go.Bar(x=bins, y=count),
    layout=dict(template='plotly_white', title=r"$\texttt{'uniform'}~~binning$")
)
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.2.3 'kmeans' lag classes The distance matrix is clustered by a K-Means algorithm. The centroids are taken as a good guess for the lag class centers. Each lag class is then formed by taking half the distance to each sorted neighboring centroid as a bound. This will most likely result in non-equidistant lag classes. One important note about K-Means clustering is that it is not a deterministic method, as the starting points for clustering are chosen randomly. Thus, the decision was made to seed the random start values. Therefore, the K-Means implementation in SciKit-GStat is deterministic and will always return the same lag classes for the same distance matrix.
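The edge construction described above can be sketched in plain NumPy. This is a simplified illustration, not SciKit-GStat's actual implementation; the centroid values and the maximum distance are made up:

```python
import numpy as np

# hypothetical sorted cluster centers, e.g. from a seeded K-Means run
centers = np.array([10.0, 35.0, 80.0, 120.0])

# upper bin edges: halfway between neighboring centroids,
# with the maximum distance closing the last lag class
max_dist = 150.0
edges = np.concatenate([(centers[:-1] + centers[1:]) / 2, [max_dist]])
# edges: 22.5, 57.5, 100.0, 150.0 -- clearly non-equidistant
```

Because the centroids follow the density of the distance matrix, the resulting classes are narrower where point pairs are abundant.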
# apply binning
bins, _ = skg.binning.kmeans(V.distance, N, None)

# get the histogram
count, _ = np.histogram(V.distance, bins=bins)

fig = go.Figure(
    go.Bar(x=bins, y=count),
    layout=dict(template='plotly_white', title=r"$\texttt{'K-Means'}~~binning$")
)
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.2.4 'ward' lag classes The other clustering approach is a hierarchical, agglomerative algorithm. It groups values together based on their similarity, which is expressed by Ward's criterion. Agglomerative algorithms work iteratively and deterministically: at the first iteration each value forms a cluster on its own. Each cluster is then merged with the most similar other cluster, one at a time, until all clusters are merged or the clustering is interrupted. Here, the clustering is interrupted as soon as the specified number of lag classes is reached. The lag classes are then formed similarly to the K-Means method, by taking either the cluster mean or median as center. Ward's criterion defines as closest the cluster whose merger results in the smallest intra-cluster variance for the merged pair. The main downside is processing speed: you will see a significant difference for 'ward', and should not use it on medium or large datasets.
# apply binning
bins, _ = skg.binning.ward(V.distance, N, None)

# get the histogram
count, _ = np.histogram(V.distance, bins=bins)

fig = go.Figure(
    go.Bar(x=bins, y=count),
    layout=dict(template='plotly_white', title=r"$\texttt{'ward'}~~binning$")
)
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.3 Lag class binning - adjustable N 5.3.1 'sturges' lag classes Sturges' rule is well known and pretty straightforward. It is the default method for histograms in R. The number of equidistant lag classes is defined as: $$ n = \log_2 x + 1 $$ Sturges' rule works well for small, normally distributed datasets.
# apply binning
bins, n = skg.binning.auto_derived_lags(V.distance, 'sturges', None)

# get the histogram
count, _ = np.histogram(V.distance, bins=bins)

fig = go.Figure(
    go.Bar(x=bins, y=count),
    layout=dict(template='plotly_white', title=r"$\texttt{'sturges'}~~binning~~%d~classes$" % n)
)
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.3.2 'scott' lag classes Scott's rule is another quite popular approach to estimate histograms. The rule is defined as: $$ h = \sigma \left( \frac{24 \sqrt{\pi}}{x} \right)^{\frac{1}{3}} $$ Unlike Sturges' rule, it estimates the lag class width from the sample standard deviation. Thus, it is also quite sensitive to outliers.
# apply binning
bins, n = skg.binning.auto_derived_lags(V.distance, 'scott', None)

# get the histogram
count, _ = np.histogram(V.distance, bins=bins)

fig = go.Figure(
    go.Bar(x=bins, y=count),
    layout=dict(template='plotly_white', title=r"$\texttt{'scott'}~~binning~~%d~classes$" % n)
)
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.3.3 'sqrt' lag classes The only advantage of this method is its speed. The number of lag classes is simply defined as: $$ n = \sqrt{x} $$ Thus, it is usually not a good choice unless you have a lot of samples.
# apply binning
bins, n = skg.binning.auto_derived_lags(V.distance, 'sqrt', None)

# get the histogram
count, _ = np.histogram(V.distance, bins=bins)

fig = go.Figure(
    go.Bar(x=bins, y=count),
    layout=dict(template='plotly_white', title=r"$\texttt{'sqrt'}~~binning~~%d~classes$" % n)
)
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.3.4 'fd' lag classes The Freedman-Diaconis estimator can be used to derive the number of lag classes again from an optimal lag class width: $$ h = 2\frac{IQR}{x^{1/3}} $$ As it is based on the interquartile range (IQR), it is very robust to outliers. That makes it a suitable method to estimate lag classes on non-normal distance matrices. On the other hand, it usually over-estimates $n$ for small datasets. Thus it should only be used on medium to large datasets.
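The Freedman-Diaconis width can be computed directly with NumPy as a rough illustration. This is a sketch on synthetic, skewed "distances", not the library's code path:

```python
import numpy as np

# synthetic skewed distances, drawn from a gamma distribution
rng = np.random.default_rng(42)
d = rng.gamma(shape=2.0, scale=10.0, size=500)

# Freedman-Diaconis: bin width from the interquartile range
iqr = np.subtract(*np.percentile(d, [75, 25]))
h = 2 * iqr / len(d) ** (1 / 3)

# number of lag classes needed to cover the full distance range
n = int(np.ceil((d.max() - d.min()) / h))
```

Because the IQR ignores the tails entirely, a handful of extreme distances barely changes `h`, which is exactly the robustness the text describes.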
# apply binning
bins, n = skg.binning.auto_derived_lags(V.distance, 'fd', None)

# get the histogram
count, _ = np.histogram(V.distance, bins=bins)

fig = go.Figure(
    go.Bar(x=bins, y=count),
    layout=dict(template='plotly_white', title=r"$\texttt{'fd'}~~binning~~%d~classes$" % n)
)
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.3.5 'doane' lag classes Doane's rule is an extension to Sturges' rule that takes the skewness of the distance matrix into account. It was found to be a very reasonable choice on most datasets where the other estimators didn't yield good results. It is defined as: $$ \begin{split} n = 1 + \log_{2}(s) + \log_2\left(1 + \frac{|g|}{k}\right) \\ g = E\left[\left(\frac{x - \mu_g}{\sigma}\right)^3\right] \\ k = \sqrt{\frac{6(s - 2)}{(s + 1)(s + 3)}} \end{split} $$
# apply binning
bins, n = skg.binning.auto_derived_lags(V.distance, 'doane', None)

# get the histogram
count, _ = np.histogram(V.distance, bins=bins)

fig = go.Figure(
    go.Bar(x=bins, y=count),
    layout=dict(template='plotly_white', title=r"$\texttt{'doane'}~~binning~~%d~classes$" % n)
)
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4 Variograms The following section will give an overview of the influence of the chosen binning method on the resulting variogram. All parameters will be the same for all variograms, so any change is due to the lag class binning. The variogram will use a maximum lag (the median of the distance matrix) to get rid of the very thin last bins at large distances. The maxlag is very close to the effective range of the variogram, thus you can only see differences in the sill. But variogram fitting is not the focus of this tutorial. You can also change the parameters and fit a more suitable spatial model.
# use a spherical model
V.set_model('spherical')

# set the maxlag
V.maxlag = 'median'
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4.1 'even' lag classes
# set the new binning method
V.bin_func = 'even'

# plot
fig = V.plot(show=False)
print(f'"{V._bin_func_name}" - range: {np.round(V.cof[0], 1)} sill: {np.round(V.cof[1], 1)}')
fig.update_layout(template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4.2 'uniform' lag classes
# set the new binning method
V.bin_func = 'uniform'

# plot
fig = V.plot(show=False)
print(f'"{V._bin_func_name}" - range: {np.round(V.cof[0], 1)} sill: {np.round(V.cof[1], 1)}')
fig.update_layout(template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4.3 'kmeans' lag classes
# set the new binning method
V.bin_func = 'kmeans'

# plot
fig = V.plot(show=False)
print(f'"{V._bin_func_name}" - range: {np.round(V.cof[0], 1)} sill: {np.round(V.cof[1], 1)}')
fig.update_layout(template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4.4 'ward' lag classes
# set the new binning method
V.bin_func = 'ward'

# plot
fig = V.plot(show=False)
print(f'"{V._bin_func_name}" - range: {np.round(V.cof[0], 1)} sill: {np.round(V.cof[1], 1)}')
fig.update_layout(template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4.5 'sturges' lag classes
# set the new binning method
V.bin_func = 'sturges'

# plot
fig = V.plot(show=False)
print(f'"{V._bin_func_name}" adjusted {V.n_lags} lag classes - range: {np.round(V.cof[0], 1)} sill: {np.round(V.cof[1], 1)}')
fig.update_layout(template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4.6 'scott' lag classes
# set the new binning method
V.bin_func = 'scott'

# plot
fig = V.plot(show=False)
print(f'"{V._bin_func_name}" adjusted {V.n_lags} lag classes - range: {np.round(V.cof[0], 1)} sill: {np.round(V.cof[1], 1)}')
fig.update_layout(template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4.7 'fd' lag classes
# set the new binning method
V.bin_func = 'fd'

# plot
fig = V.plot(show=False)
print(f'"{V._bin_func_name}" adjusted {V.n_lags} lag classes - range: {np.round(V.cof[0], 1)} sill: {np.round(V.cof[1], 1)}')
fig.update_layout(template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4.8 'sqrt' lag classes
# set the new binning method
V.bin_func = 'sqrt'

# plot
fig = V.plot(show=False)
print(f'"{V._bin_func_name}" adjusted {V.n_lags} lag classes - range: {np.round(V.cof[0], 1)} sill: {np.round(V.cof[1], 1)}')
fig.update_layout(template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
5.4.9 'doane' lag classes
# set the new binning method
V.bin_func = 'doane'

# plot
fig = V.plot(show=False)
print(f'"{V._bin_func_name}" adjusted {V.n_lags} lag classes - range: {np.round(V.cof[0], 1)} sill: {np.round(V.cof[1], 1)}')
fig.update_layout(template='plotly_white')
iplot(fig)
tutorials/05_binning.ipynb
mmaelicke/scikit-gstat
mit
Rate of change of Laura's emotional state $\frac{dL(t)}{dt}=-\alpha_{1}L(t)+R_{L}(P(t))+\beta_{1}A_{P}$ Rate of change of Petrarch's emotional state $\frac{dP(t)}{dt}=-\alpha_{2}L(t)+R_{P}(L(t))+\beta_{2}\frac{A_{L}}{1+\delta Z(t)}$ Rate of change of the Poet's inspiration $\frac{dZ(t)}{dt}=-\alpha_{3}Z(t)+\beta_{3}P(t)$ The Poet's reaction to Laura $R_{P}(L)=\gamma_{2}L$ The Beauty's reaction to the Poet $R_{L}(P)=\beta_{1}P\left(1-\left(\frac{P}{\gamma}\right)^{2}\right)$ with $\alpha_{1}>\alpha_{2}>\alpha_{3}$ Computational model The model is simplified by substituting, into the system of equations for the rate of emotional change and inspiration, the arguments of Laura's and Petrarch's reactions to each other. In this way the only variables are $L$, $P$ and $Z$.
import numpy as np
import matplotlib.pyplot as plt

def dL(t):
    return -3.6 * L[t] + 1.2 * (P[t] * (1 - P[t]**2) - 1)

def dP(t):
    return -1.2 * L[t] + 6 * L[t] + 12 / (1 + Z[t])

def dZ(t):
    return -0.12 * Z[t] + 12 * P[t]

years = 20
dt = 0.01
steps = int(years / dt)

L = np.zeros(steps)
P = np.zeros(steps)
Z = np.zeros(steps)

# forward Euler integration
for t in range(steps - 1):
    L[t+1] = L[t] + dt * dL(t)
    P[t+1] = P[t] + dt * dP(t)
    Z[t+1] = Z[t] + dt * dZ(t)

plt.plot(range(steps), P, color='blue')
#plt.plot(range(steps), Z/12.0, color='teal')
plt.plot(range(steps), L, color='deeppink')
plt.xlabel("time (years/100)")
plt.ylabel("emotional state")

plt.plot(range(steps), Z)
plt.xlabel("time (years/100)")
plt.ylabel("inspiration")
dinamica_amor.ipynb
rgarcia-herrera/sistemas-dinamicos
gpl-3.0
Phase space
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=(11, 11))
ax = fig.gca(projection='3d')

# Make the grid
l, p, z = np.meshgrid(np.linspace(-1, 2, 11),
                      np.linspace(-1, 1, 11),
                      np.linspace(0, 18, 11))

# Make the direction data for the arrows
u = -3.6 * l + 1.2 * (p * (1 - p**2) - 1)
v = -1.2 * l + 6 * l + 12 / (1 + z)
w = -0.12 * z + 12 * p

ax.quiver(l, p, z, u, v, w, length=0.1, normalize=True,
          arrow_length_ratio=1, colors='deeppink')
ax.set_xlabel("P")
ax.set_ylabel("L")
ax.set_zlabel("Z")
ax.plot(P, L, Z)
plt.show()

fig = plt.figure(figsize=(11, 11))
ax = fig.gca(projection='3d')

# Make the grid
l, p, z = np.meshgrid(np.linspace(-2, 2, 11),
                      np.linspace(-2, 5, 11),
                      np.linspace(-130, -100, 11))

# Make the direction data for the arrows
u = -3.6 * l + 1.2 * (p * (1 - p**2) - 1)
v = -1.2 * l + 6 * l + 12 / (1 + z)
w = -0.12 * z + 12 * p

ax.quiver(l, p, z, u, v, w, length=0.1, normalize=True,
          arrow_length_ratio=1, colors='deeppink')

years = 20
dt = 0.01
steps = int(years / dt)

L = np.zeros(steps)
P = np.zeros(steps)
Z = np.zeros(steps)
L[0] = 5
P[0] = -2
Z[0] = -120

for t in range(steps - 1):
    L[t+1] = L[t] + dt * dL(t)
    P[t+1] = P[t] + dt * dP(t)
    Z[t+1] = Z[t] + dt * dZ(t)

ax.set_xlabel("P")
ax.set_ylabel("L")
ax.set_zlabel("Z")
ax.plot(P, L, Z)
plt.show()
dinamica_amor.ipynb
rgarcia-herrera/sistemas-dinamicos
gpl-3.0
Problem 2) Unsupervised Machine Learning Unsupervised machine learning, sometimes referred to as clustering or data mining, aims to group or classify sources in the multidimensional feature space. The "unsupervised" comes from the fact that there are no target labels provided to the algorithm, so the machine is asked to cluster the data "on its own." The lack of labels means there is no (simple) method for validating the accuracy of the solution provided by the machine (though sometimes simple examination can show the results are terrible). For this reason [note - this is my (AAM) opinion and there may be many others who disagree], unsupervised methods are not particularly useful for astronomy. Supposing one did find some useful clustering structure, an adversarial researcher could always claim that the current feature space does not accurately capture the physics of the system, and as such the clustering result is not interesting or, worse, erroneous. The one potentially powerful exception to this broad statement is outlier detection, which can be a branch of both unsupervised and supervised learning. Finding weirdo objects is an astronomical pastime, and there are unsupervised methods that may help in that regard in the LSST era. To begin today we will examine one of the most famous, and simple, clustering algorithms: $k$-means. $k$-means clustering looks to identify $k$ convex clusters, where $k$ is a user-defined number. And herein lies the rub: if we truly knew the number of clusters in advance, we likely wouldn't need to perform any clustering in the first place. This is the major downside to $k$-means. Operationally, pseudocode for the algorithm can be summarized as the following:

initiate search by identifying k points (i.e. the cluster centers)
loop:
    assign each point in the data set to the closest cluster center
    calculate new cluster centers based on mean position of all cluster points
    if diff(new center - old center) < threshold:
        stop (i.e. clusters are defined)

The threshold is defined by the user, though in some cases a maximum total number of iterations is also specified. An advantage of $k$-means is that the solution will always converge, though the solution may only be a local minimum. Disadvantages include the assumption of convexity, i.e. it is difficult to capture complex geometry, and the curse of dimensionality (though you can combat that with dimensionality reduction, after yesterday). In scikit-learn the KMeans algorithm is implemented as part of the sklearn.cluster module. Problem 2a Fit two different $k$-means models to the iris data, one with 2 clusters and one with 3 clusters. Plot the resulting clusters in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications? Hint - the .labels_ attribute of the KMeans object will return the clusters measured by the algorithm.
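The pseudocode above can be sketched as a minimal NumPy implementation. This is for illustration only (made-up toy blobs; scikit-learn's KMeans is far more robust, with smarter initialization and restarts):

```python
import numpy as np

def kmeans(X, k, threshold=1e-6, seed=0):
    """Minimal k-means following the pseudocode: assign, re-center, repeat."""
    rng = np.random.default_rng(seed)
    # initiate search: pick k data points as the cluster centers
    centers = X[rng.choice(len(X), size=k, replace=False)]
    while True:
        # assign each point to the closest cluster center
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)
        # new centers from the mean position of each cluster's points
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.linalg.norm(new_centers - centers) < threshold:
            return new_centers, labels
        centers = new_centers

# two well-separated toy blobs
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5, 0.1, (20, 2))])
centers, labels = kmeans(X, k=2)
```

On this toy data the two blobs are recovered exactly; on real data the result depends on the initial centers, which is why seeding (or multiple restarts) matters.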
from sklearn.cluster import KMeans

Kcluster = KMeans( # complete
# complete

plt.figure()
plt.scatter( # complete
plt.xlabel('sepal length')
plt.ylabel('sepal width')

Kcluster = KMeans( # complete
# complete

plt.figure()
plt.scatter( # complete
plt.xlabel('sepal length')
plt.ylabel('sepal width')
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
With 3 clusters the algorithm does a good job of separating the three classes. However, without the a priori knowledge that there are 3 different types of iris, the 2 cluster solution would appear to be superior. Problem 2b How do the results change if the 3 cluster model is called with the n_init = 1 and init = 'random' options? Use rs for the random state [this allows me to cheat in service of making a point]. Note - the defaults for these two parameters are 10 and 'k-means++', respectively. Read the docs to see why those defaults are, likely, better choices than the ones used here.
rs = 14
Kcluster1 = KMeans( # complete

plt.figure()
plt.scatter( # complete
plt.xlabel('sepal length')
plt.ylabel('sepal width')
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
A random aside that is not particularly relevant here: $k$-means evaluates the Euclidean distance between individual sources and cluster centers; thus, the magnitude of the individual features has a strong effect on the final clustering outcome. Problem 2c Calculate the mean, standard deviation, min, and max of each feature in the iris data set. Based on these summaries, which feature is most important for clustering?
print("feature\t\t\tmean\tstd\tmin\tmax")
for featnum, feat in enumerate(iris.feature_names):
    print("{:s}\t{:.2f}\t{:.2f}\t{:.2f}\t{:.2f}".format( # complete
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Petal length has the largest range and standard deviation; thus, it will have the most "weight" when determining the $k$ clusters. The truth is that the iris data set is fairly small and straightforward. Nevertheless, we will now examine the clustering results after re-scaling the features. [Some algorithms, cough Support Vector Machines cough, are notoriously sensitive to feature scaling, so it is important to know about this step.] Imagine you are classifying stellar light curves: the data set will include contact binaries with periods of $\sim 0.1 \; \mathrm{d}$ and Mira variables with periods of $\gg 100 \; \mathrm{d}$. Without re-scaling, this feature that covers 4 orders of magnitude may dominate all others in the final model projections. The two most common forms of re-scaling are to rescale to a Gaussian with mean $= 0$ and variance $= 1$, or to rescale the min and max of the feature to $[0, 1]$. The best normalization is problem dependent. The sklearn.preprocessing module makes it easy to re-scale the feature set. It is essential that the same scaling used for the training set be used for all other data run through the model. The testing, validation, and field observations cannot be re-scaled independently. This would result in meaningless final classifications/predictions. Problem 2d Re-scale the features to normal distributions, and perform $k$-means clustering on the iris data. How do the results compare to those obtained earlier? Hint - you may find StandardScaler() within the sklearn.preprocessing module useful.
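A minimal illustration of that last point — fit the scaler once on the training data and reuse the same transformation for everything else (toy data, not the iris set):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# toy training set: two features on very different scales
X_train = np.array([[0.1, 100.0], [0.2, 300.0], [0.3, 500.0]])
X_new = np.array([[0.2, 300.0]])  # "new" observation

scaler = StandardScaler().fit(X_train)   # learn mean/std from training data only
X_train_s = scaler.transform(X_train)    # each feature -> mean 0, variance 1
X_new_s = scaler.transform(X_new)        # SAME scaling applied to new data
```

Refitting the scaler on the new data instead would place identical measurements at different locations in feature space, which is exactly the inconsistency the text warns about.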
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit( # complete

Kcluster = KMeans( # complete

plt.figure()
plt.scatter( # complete
plt.xlabel('sepal length')
plt.ylabel('sepal width')
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
These results are almost identical to those obtained without scaling. This is due to the simplicity of the iris data set. How do I test the accuracy of my clusters? Essentially - you don't. There are some methods that are available, but they essentially compare clusters to labeled samples, and if the samples are labeled it is likely that supervised learning is more useful anyway. If you are curious, scikit-learn does provide some built-in functions for analyzing clustering, but again, it is difficult to evaluate the validity of any newly discovered clusters. What if I don't know how many clusters are present in the data? An excellent question, as you will almost never know this a priori. Many algorithms, like $k$-means, do require the number of clusters to be specified, but some other methods do not. One example is DBSCAN. In brief, DBSCAN requires two parameters: minPts, the minimum number of points necessary for a cluster, and $\epsilon$, a distance measure. Clusters are grown by identifying core points, objects that have at least minPts located within a distance $\epsilon$. Reachable points are those within a distance $\epsilon$ of at least one core point but fewer than minPts core points. In effect, these points define the outskirts of the clusters. Finally, there are also outliers, which are points that are $> \epsilon$ away from any core points. Thus, DBSCAN naturally identifies clusters, does not assume clusters are convex, and even provides a notion of outliers. The downsides to the algorithm are that the results are highly dependent on the two tuning parameters, and that clusters of highly different densities can be difficult to recover (because $\epsilon$ and minPts are specified once for all clusters). In scikit-learn the DBSCAN algorithm is part of the sklearn.cluster module. $\epsilon$ and minPts are set by eps and min_samples, respectively. Problem 2e Cluster the iris data using DBSCAN. Play around with the tuning parameters to see how they affect the final clustering results. How does the use of DBSCAN compare to $k$-means? Can you obtain 3 clusters with DBSCAN? If not, given the knowledge that the iris dataset has 3 classes - does this invalidate DBSCAN as a viable algorithm? Note - DBSCAN labels outliers as $-1$, and thus plt.scatter() will plot all these points in the same color.
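A minimal sketch of this behavior on synthetic data (made-up blobs; the eps and min_samples values are arbitrary choices for this toy example, not recommendations):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (30, 2)),   # dense cluster 1
               rng.normal(4, 0.2, (30, 2)),   # dense cluster 2
               [[2.0, 2.0]]])                  # a lone outlier between them

# core points need >= 5 neighbors within a distance of 0.5
dbs = DBSCAN(eps=0.5, min_samples=5).fit(X)
labels = dbs.labels_   # cluster ids, with -1 marking outliers
```

Note that the number of clusters (here 2) comes out of the density structure; it was never specified, and the isolated point is flagged as noise.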
from sklearn.cluster import DBSCAN

dbs = DBSCAN( # complete
dbs.fit( # complete
dbs_outliers = # complete

plt.figure()
plt.scatter( # complete
plt.scatter( # complete
plt.xlabel('sepal length')
plt.ylabel('sepal width')
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
I have used my own domain knowledge to specifically choose features that may be useful when clustering galaxies. If you know a bit about SDSS and can think of other features that may be useful, feel free to add them to the query. One nice feature of astropy tables is that they can readily be turned into pandas DataFrames, which can in turn easily be turned into a sklearn X array with NumPy. For example: X = np.array(SDSSgals.to_pandas()) And you are ready to go. Challenge Problem Using the SDSS dataset above, identify interesting clusters within the data [this is intentionally very open ended; if you uncover anything especially exciting you'll have a chance to share it with the group]. Feel free to use the algorithms discussed above, or any other packages available via sklearn. Can you make sense of the clusters in the context of galaxy evolution? Hint - don't fret if you know nothing about galaxy evolution (neither do I!). Just take a critical look at the clusters that are identified.
# complete
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Note - the above solution seems to separate out elliptical galaxies from blue star forming galaxies, however, the results are highly, highly dependent upon the tuning parameters. Problem 3) Supervised Machine Learning Supervised machine learning, on the other hand, aims to predict a target class or produce a regression result based on the location of labelled sources (i.e. the training set) in the multidimensional feature space. The "supervised" comes from the fact that we are specifying the allowed outputs from the model. As there are labels available for the training set, it is possible to estimate the accuracy of the model (though there are generally important caveats about generalization, which we will explore in further detail later). The details and machinations of supervised learning will be explored further during the following break-out session. Here, we will simply introduce some of the basics as a point of comparison to unsupervised machine learning. We will begin with a simple, but nevertheless, elegant algorithm for classification and regression: $k$-nearest-neighbors ($k$NN). In brief, the classification or regression output is determined by examining the $k$ nearest neighbors in the training set, where $k$ is a user defined number. Typically, though not always, distances between sources are Euclidean, and the final classification is assigned to whichever class has a plurality within the $k$ nearest neighbors (in the case of regression, the average of the $k$ neighbors is the output from the model). We will experiment with the steps necessary to optimize $k$, and other tuning parameters, in the detailed break-out problem. In scikit-learn the KNeighborsClassifer algorithm is implemented as part of the sklearn.neighbors module. Problem 3a Fit two different $k$NN models to the iris data, one with 3 neighbors and one with 10 neighbors. Plot the resulting class predictions in the sepal length-sepal width plane (same plot as above). 
How do the results compare to the true classifications? Is there any reason to be suspect of this procedure? Hint - after you have constructed the model, it is possible to obtain model predictions using the .predict() method, which requires a feature array, same features and order as the training set, as input. Hint that isn't essential, but is worth thinking about - should the features be re-scaled in any way?
from sklearn.neighbors import KNeighborsClassifier

KNNclf = KNeighborsClassifier( # complete
preds = # complete

plt.figure()
plt.scatter( # complete

KNNclf = KNeighborsClassifier( # complete
preds = # complete

plt.figure()
plt.scatter( # complete
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
These results are almost identical to the training classifications. However, we have cheated! In this case we are evaluating the accuracy of the model (98% in this case) using the same data that defines the model. Thus, what we have really evaluated here is the training error. The relevant parameter, however, is the generalization error: how accurate are the model predictions on new data? Without going into too much detail, we will test this using cross validation (CV), which will be explored in more detail later. In brief, CV provides predictions for the training set using a subset of the data to generate a model that predicts the class of the remaining sources. Using cross_val_predict, we can get a better sense of the model accuracy. Predictions from cross_val_predict are produced in the following manner: from sklearn.model_selection import cross_val_predict CVpreds = cross_val_predict(sklearn.model(), X, y) where sklearn.model() is the desired model, X is the feature array, and y is the label array. Problem 3b Produce cross-validation predictions for the iris dataset and a $k$NN with 5 neighbors. Plot the resulting classifications, as above, and estimate the accuracy of the model as applied to new data. How does this accuracy compare to a $k$NN with 50 neighbors?
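A self-contained sketch of this pattern on synthetic data (two made-up classes, not the iris set), so the exercise below remains yours to solve:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

# two toy classes, well separated in a 2-feature space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)

# each prediction comes from a model that never saw that source
cv_preds = cross_val_predict(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
cv_acc = np.mean(cv_preds == y)
```

Comparing `cv_acc` to the training-set accuracy of the same model gives a first feel for the gap between training error and generalization error.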
from sklearn.model_selection import cross_val_predict

CVpreds = cross_val_predict( # complete

plt.figure()
plt.scatter( # complete

print("The accuracy of the kNN = 5 model is ~{:.4}".format( # complete

CVpreds50 = cross_val_predict( # complete

print("The accuracy of the kNN = 50 model is ~{:.4}".format( # complete
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
While it is useful to understand the overall accuracy of the model, it is even more useful to understand the nature of the misclassifications that occur. Problem 3c Calculate the accuracy for each class in the iris set, as determined via CV for the $k$NN = 50 model.
# complete
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
We just found that the classifier does a much better job classifying setosa and versicolor than it does for virginica. The main reason for this is that some virginica flowers lie far outside the main virginica locus, and within predominantly versicolor "neighborhoods". In addition to knowing the accuracy for the individual classes, it is also useful to know the class predictions for the misclassified sources, or in other words where there is "confusion" for the classifier. The best way to summarize this information is with a confusion matrix. In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal. Like almost everything else we have encountered during this exercise, scikit-learn makes it easy to compute a confusion matrix. This can be accomplished with the following: from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) Problem 3d Calculate the confusion matrix for the iris training set and the $k$NN = 50 model.
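As a toy illustration with made-up labels (three classes, where class 1 and class 2 get confused with each other):

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2, 1]

# rows = true class, columns = predicted class
cm = confusion_matrix(y_true, y_pred)
# [[2 0 0]
#  [0 1 1]
#  [0 1 2]]
```

All the power for class 0 sits on the diagonal, while the off-diagonal entries in rows 1 and 2 show exactly which classes are being mixed up.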
from sklearn.metrics import confusion_matrix

cm = confusion_matrix( # complete
print(cm)
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
The normalization makes it easier to compare the classes, since each class has a different number of sources. Now we can proceed with a visual representation of the confusion matrix. This is best done using imshow() within pyplot. You will also need to plot a colorbar, and labeling the axes will also be helpful. Problem 3e Plot the confusion matrix. Be sure to label each of the axes. Hint - you might find the sklearn confusion matrix tutorial helpful for making a nice plot.
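The row normalization itself is one line of NumPy broadcasting (toy matrix for illustration; the counts are made up):

```python
import numpy as np

# hypothetical raw confusion matrix (rows = true class)
cm = np.array([[50,  0,  0],
               [ 0, 30, 20],
               [ 0, 10, 40]])

# divide each row by the number of true sources in that class
cm_norm = cm / cm.sum(axis=1, keepdims=True)
# each row now sums to 1, e.g. row 1 -> [0.0, 0.6, 0.4]
```

With every row summing to 1, the color scale of the imshow() plot is directly comparable across classes of different sizes.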
plt.imshow( # complete
Sessions/Session01/Day4/IntroToMachineLearning.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
First: load the test image and run a Gaussian filter on it
%%time
X1 = plt.imread('input.png')
X1 = rgb2gray(X1)
s = 16.  # sigma for testing filtering
X2 = np.copy(X1).astype(np.float64)
# Gaussian filter runs with zero-border
X2 = ndimage.filters.gaussian_filter(X1, sigma=s, mode='constant')
python/alg5.ipynb
andmax/gpufilter
mit
Second: set up basic parameters from the input image
%%time
b = 32  # squared block size (b,b)
w = [weights1(s), weights2(s)]  # weights of the recursive filter
width, height = X1.shape[1], X1.shape[0]
m_size, n_size = get_mn(X1, b)
blocks = break_blocks(X1, b, m_size, n_size)
# Pre-computation of matrices and pre-allocation of carries
alg5m1 = build_alg5_matrices(b, 1, w[0], width, height)
alg5m2 = build_alg5_matrices(b, 2, w[1], width, height)
alg5c1 = build_alg5_carries(m_size, n_size, b, 1)
alg5c2 = build_alg5_carries(m_size, n_size, b, 2)
python/alg5.ipynb
andmax/gpufilter
mit
Third: run alg5 with filter order 1 then 2
%%time
# Running alg5 with filter order r = 1
alg5_stage1(m_size, n_size, 1, w[0], alg5m1, alg5c1, blocks)
alg5_stage23(m_size, n_size, alg5m1, alg5c1)
alg5_stage45(m_size, n_size, alg5m1, alg5c1)
alg5_stage6(m_size, n_size, w[0], alg5c1, blocks)
# Running alg5 with filter order r = 2
alg5_stage1(m_size, n_size, 2, w[1], alg5m2, alg5c2, blocks)
alg5_stage23(m_size, n_size, alg5m2, alg5c2)
alg5_stage45(m_size, n_size, alg5m2, alg5c2)
alg5_stage6(m_size, n_size, w[1], alg5c2, blocks)
# Join blocks back together
X3 = join_blocks(blocks, b, m_size, n_size, X1.shape)
python/alg5.ipynb
andmax/gpufilter
mit
Data generation Let's first load some data!
# open a pandas dataframe for use below
from histogrammar import resources
df = pd.read_csv(resources.data("test.csv.gz"), parse_dates=["date"])
df.head()
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Let's fill a histogram! Histogrammar treats histograms as objects. You will see this has various advantages. Let's fill a simple histogram with a numpy array.
# this creates a histogram with 100 even-sized bins in the (closed) range [-5, 5]
hist1 = hg.Bin(num=100, low=-5, high=5)

# filling it with one data point:
hist1.fill(0.5)
hist1.entries

# filling the histogram with an array:
hist1.fill.numpy(np.random.normal(size=10000))
hist1.entries

# let's plot it
hist1.plot.matplotlib();

# Alternatively, you can call this to make the same histogram:
# hist1 = hg.Histogram(num=100, low=-5, high=5)
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Histogrammar also supports open-ended histograms, which are sparsely represented. Open-ended histograms are used when you have a distribution of known scale (bin width) but unknown domain (lowest and highest bin index). Bins in a sparse histogram only get created and filled if the corresponding data points are encountered. A sparse histogram has a binWidth, and optionally an origin parameter. The origin is the left edge of the bin whose index is 0 and is set to 0.0 by default. Sparse histograms are nice if you don't want to restrict the range, for example for tracking data distributions over time, which may have large, sudden outliers.
hist2 = hg.SparselyBin(binWidth=10, origin=0)
hist2.fill.numpy(df['age'].values)
hist2.plot.matplotlib();

# Alternatively, you can call this to make the same histogram:
# hist2 = hg.SparselyHistogram(binWidth=10)
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
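Conceptually, the sparse bookkeeping is simple: each value maps to an integer bin index, and only touched bins are stored. A stdlib-only sketch of the idea (an illustration, not histogrammar's actual implementation):

```python
import math
from collections import defaultdict

def sparse_fill(values, bin_width, origin=0.0):
    """Count values per integer bin index; only bins that receive data exist."""
    counts = defaultdict(int)
    for v in values:
        counts[math.floor((v - origin) / bin_width)] += 1
    return dict(counts)

# a large outlier (1000) costs one extra dict entry, not a huge dense array
print(sparse_fill([3, 12, 19, 1000], bin_width=10))
# → {0: 1, 1: 2, 100: 1}
```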
Filling from a dataframe Let's make the same 1d (sparse) histogram directly from a (pandas) dataframe.
hist3 = hg.SparselyBin(binWidth=10, origin=0, quantity='age')
hist3.fill.numpy(df)
hist3.plot.matplotlib();
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
When importing histogrammar, pandas (and spark) dataframes get extra functions to create histograms that all start with "hg_". For example: hg_Bin or hg_SparselyBin. Note that the column "age" is picked by setting quantity="age", and also that the filling step is done automatically.
# Alternatively, do:
hist3 = df.hg_SparselyBin(binWidth=10, origin=0, quantity='age')
# ... where hist3 automatically picks up column age from the dataframe,
# ... and does not need to be filled by calling fill.numpy() explicitly.
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Handy histogram methods For any 1-dimensional histogram extract the bin entries, edges and centers as follows:
# full range of bin entries, and those in a specified range:
(hist3.bin_entries(), hist3.bin_entries(low=30, high=80))

# full range of bin edges, and those in a specified range:
(hist3.bin_edges(), hist3.bin_edges(low=31, high=71))

# full range of bin centers, and those in a specified range:
(hist3.bin_centers(), hist3.bin_centers(low=31, high=80))

hsum = hist2 + hist3
hsum.entries

hsum *= 4
hsum.entries
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
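For evenly binned histograms, the centers are just the midpoints of adjacent edges. A quick numpy sketch of that relation (not the histogrammar API itself):

```python
import numpy as np

# 100 even bins on [-5, 5], like hg.Bin(num=100, low=-5, high=5)
edges = np.linspace(-5, 5, 101)
centers = 0.5 * (edges[:-1] + edges[1:])  # midpoint of each adjacent edge pair

print(len(edges), len(centers))  # 101 100
print(centers[0], centers[-1])   # -4.95 4.95
```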
Irregular bin histogram variants There are two other open-ended histogram variants in addition to the SparselyBin we have seen before. Whereas SparselyBin is used when bins have equal width, the others offer similar alternatives to a single fixed bin width. There are two ways: - CentrallyBin histograms, defined by specifying bin centers; - IrregularlyBin histograms, with irregular bin edges. They both partition a space into irregular subdomains with no gaps and no overlaps.
hist4 = hg.CentrallyBin(centers=[15, 25, 35, 45, 55, 65, 75, 85, 95], quantity='age')
hist4.fill.numpy(df)
hist4.plot.matplotlib();

hist4.bin_edges()
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Note the slightly different plotting style for CentrallyBin histograms (e.g. x-axis labels are central values instead of edges). Multi-dimensional histograms Let's make a multi-dimensional histogram. In Histogrammar, a multi-dimensional histogram is composed by nesting histograms: the outer histogram holds a copy of the inner histogram in each of its bins. We will use histograms with irregular binning in this example.
edges1 = [-100, -75, -50, -25, 0, 25, 50, 75, 100]
edges2 = [-200, -150, -100, -50, 0, 50, 100, 150, 200]

hist1 = hg.IrregularlyBin(edges=edges1, quantity='latitude')
hist2 = hg.IrregularlyBin(edges=edges2, quantity='longitude', value=hist1)
# for 3 dimensions or higher simply add the 2-dim histogram to the value argument
hist3 = hg.SparselyBin(binWidth=10, quantity='age', value=hist2)

hist1.bin_centers()
hist2.bin_centers()

hist2.fill.numpy(df)
hist2.plot.matplotlib();

# number of dimensions per histogram
(hist1.n_dim, hist2.n_dim, hist3.n_dim)
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
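Stripped of the API, the nesting is just routing: the outer binning decides which inner histogram a data point lands in. A tiny pure-Python sketch of a 2-d count with made-up bin widths (conceptual only, not histogrammar code):

```python
from collections import defaultdict

# outer axis: bin x with width 50; inner axis: bin y with width 25
hist2d = defaultdict(lambda: defaultdict(int))
for x, y in [(10, 5), (10, 30), (60, 5)]:
    hist2d[x // 50][y // 25] += 1  # outer index picks the inner histogram

print({k: dict(v) for k, v in hist2d.items()})
# → {0: {0: 1, 1: 1}, 1: {0: 1}}
```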
Accessing bin entries For most 2+ dimensional histograms, one can get the bin entries and centers as follows:
from histogrammar.plot.hist_numpy import get_2dgrid
x_labels, y_labels, grid = get_2dgrid(hist2)
y_labels, grid
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Accessing a sub-histogram Depending on the histogram type of the first axis, hg.Bin or other, one can access the sub-histograms directly from: hist.values or hist.bins
# Access sub-histograms of an IrregularlyBin from hist.bins
# The first item of each tuple is the lower bin-edge of the bin.
hist2.bins[1]

h = hist2.bins[1][1]
h.plot.matplotlib()
h.bin_entries()
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Histogram types recap So far we have covered the histogram types: - Bin histograms: with a fixed range and even-sized bins, - SparselyBin histograms: open-ended and with a fixed bin-width, - CentrallyBin histograms: open-ended and using bin centers, - IrregularlyBin histograms: open-ended and using (irregular) bin edges. All of these process numeric variables only. Categorical variables For categorical variables use the Categorize histogram - Categorize histograms: accepting categorical variables such as strings and booleans.
histy = hg.Categorize('eyeColor')
histx = hg.Categorize('favoriteFruit', value=histy)
histx.fill.numpy(df)
histx.plot.matplotlib();

# show the datatype(s) of the histogram
histx.datatype
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Categorize histograms also accept booleans:
histy = hg.Categorize('isActive')
histy.fill.numpy(df)
histy.plot.matplotlib();

histy.bin_entries()
histy.bin_labels()
# histy.bin_centers() will work as well for Categorize histograms
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Other histogram functionality There are several more histogram types: - Minimize, Maximize: keep track of the min or max value of a numeric distribution, - Average, Deviate: keep track of the mean, or the mean and standard deviation, of a numeric distribution, - Sum: keep track of the sum of a numeric distribution, - Stack: keep track of how many data points pass certain thresholds. - Bag: works like a dict; it keeps track of all unique values encountered in a column, and can also do this for vectors of numbers. For strings, Bag works just like the Categorize histogram.
hmin = df.hg_Minimize('latitude')
hmax = df.hg_Maximize('longitude')
(hmin.min, hmax.max)

havg = df.hg_Average('latitude')
hdev = df.hg_Deviate('longitude')
(havg.mean, hdev.mean, hdev.variance)

hsum = df.hg_Sum('age')
hsum.sum

# let's illustrate the Stack histogram with the longitude distribution
# first we plot the regular distribution
hl = df.hg_SparselyBin(25, 'longitude')
hl.plot.matplotlib();

# Stack counts how often data points are greater or equal to the provided thresholds
thresholds = [-200, -150, -100, -50, 0, 50, 100, 150, 200]
hs = df.hg_Stack(thresholds=thresholds, quantity='longitude')
hs.thresholds
hs.bin_entries()
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
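The Stack bookkeeping, counting how many points are greater than or equal to each threshold, can be sketched in plain Python (an illustration of the idea, not histogrammar's code):

```python
def stack_entries(values, thresholds):
    """For each threshold, count values >= threshold (cumulative counts)."""
    return [sum(1 for v in values if v >= t) for t in thresholds]

values = [-120, -30, 10, 60, 180]
print(stack_entries(values, thresholds=[-200, -100, 0, 100]))
# → [5, 4, 3, 1]
```

Because the counts are cumulative, dividing consecutive entries gives the kind of efficiency curve mentioned next.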
Stack histograms are useful to make efficiency curves. With all these histograms you can make multi-dimensional histograms. For example, you can evaluate the mean and standard deviation of one feature as a function of bins of another feature. (A "profile" plot, similar to a box plot.)
hav = hg.Deviate('age')
hlo = hg.SparselyBin(25, 'longitude', value=hav)
hlo.fill.numpy(df)
hlo.bins
hlo.plot.matplotlib();
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Convenience functions There are several convenience functions to make such composed histograms. These are: - Profile: Convenience function for creating binwise averages. - SparselyProfile: Convenience function for creating sparsely binned binwise averages. - ProfileErr: Convenience function for creating binwise averages and variances. - SparselyProfileErr: Convenience function for creating sparsely binned binwise averages and variances. - TwoDimensionallyHistogram: Convenience function for creating a conventional, two-dimensional histogram. - TwoDimensionallySparselyHistogram: Convenience function for creating a sparsely binned, two-dimensional histogram.
# For example, call this convenience function to make the same histogram as above:
hlo = df.hg_SparselyProfileErr(25, 'longitude', 'age')
hlo.plot.matplotlib();
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
Overview of histograms Here you can find the list of all available histograms and aggregators and how to use each one: https://histogrammar.github.io/histogrammar-docs/specification/1.0/ The most useful aggregators are the following. Tinker with them to get familiar; building up an analysis is easier when you know "there's an app for that." Simple counters: Count: just counts. Every aggregator has an entries field, but Count only has this field. Average and Deviate: add mean and variance, cumulatively. Minimize and Maximize: lowest and highest value seen. Histogram-like objects: Bin and SparselyBin: split a numerical domain into uniform bins and redirect aggregation into those bins. Categorize: split a string-valued domain by unique values; good for making bar charts (which are histograms with a string-valued axis). CentrallyBin and IrregularlyBin: split a numerical domain into arbitrary subintervals, usually for separate plots like particle pseudorapidity or collision centrality. Collections: Label, UntypedLabel, and Index: bundle objects with string-based keys (Label and UntypedLabel) or simply an ordered array (effectively, integer-based keys) consisting of a single type (Label and Index) or any types (UntypedLabel). Branch: for the fourth case, an ordered array of any types. A Branch is useful as a "cable splitter". For instance, to make a histogram that tracks both the minimum and maximum value, put a Branch of a Minimize and a Maximize inside each bin. Making many histograms at once There is a nice method to make many histograms in one go. See here. By default automagical binning is applied to make the histograms. More details on how to use this function are found in the advanced tutorial.
hists = df.hg_make_histograms()
hists.keys()

h = hists['transaction']
h.plot.matplotlib();

h = hists['date']
h.plot.matplotlib();

# you can also select which features to use and make multi-dimensional histograms
hists = df.hg_make_histograms(features=['longitude:age'])
hist = hists['longitude:age']
hist.plot.matplotlib();
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
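The Branch "cable splitter" mentioned above fans a single fill out to several aggregators at once. The bookkeeping can be sketched in a few lines of plain Python (an illustration of the idea, not histogrammar's implementation):

```python
class MinMaxBranch:
    """Feed each value to two trackers at once, like a Branch of Minimize and Maximize."""
    def __init__(self):
        self.entries = 0
        self.min = float('inf')
        self.max = float('-inf')

    def fill(self, x):
        # one fill call updates every tracked aggregate
        self.entries += 1
        self.min = min(self.min, x)
        self.max = max(self.max, x)

b = MinMaxBranch()
for x in [3.0, -1.5, 7.2]:
    b.fill(x)
print(b.entries, b.min, b.max)  # 3 -1.5 7.2
```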
Storage Histograms can be easily stored and retrieved in/from the json format.
# storage
hist.toJsonFile('long_age.json')

# retrieval
factory = hg.Factory()
hist2 = factory.fromJsonFile('long_age.json')
hist2.plot.matplotlib();

# we can store the histograms if we want to
import json
from histogrammar.util import dumper

# store
with open('histograms.json', 'w') as outfile:
    json.dump(hists, outfile, default=dumper)

# and load again
with open('histograms.json') as handle:
    hists2 = json.load(handle)

hists.keys()
histogrammar/notebooks/histogrammar_tutorial_basic.ipynb
histogrammar/histogrammar-python
apache-2.0
For the arrival rate, let's set $N = 10,000$ users, and our time interval $T = 2.0$ hours. From that, we can calculate an arrival rate of $\lambda = N / T = 5,000$ per hour or $\lambda = 1.388$ users / second. Now for the times. Starting at $T_0$ we have no arrivals, but as time passes the probability of an event increases, until it reaches a near-certainty. If we randomly choose a value $U$ between 0 and 1, then we can calculate a random time interval as $$ I_n = \frac{-\ln U}{\lambda} $$ Let's validate this by generating a large sample of intervals and taking their average.
count = int(1E6)
x = np.arange(count)
y = -np.log(1.0 - np.random.random_sample(len(x))) / lmbda
np.average(y)

y[:10]
src/articles/poisson/index.ipynb
bradhowes/keystrokecountdown
mit
So with a rate of $\lambda = 1.388$ new events would arrive on average $I = 0.72$ seconds apart (or $1 / \lambda$). We can plot the distribution of these random times, where we should see an exponential distribution.
plt.hist(y, 10)
plt.show()
src/articles/poisson/index.ipynb
bradhowes/keystrokecountdown
mit
Random Generation Python contains the random.expovariate method which should give us similar intervals. Let's see by averaging a large sum of them.
from random import expovariate
sum([expovariate(lmbda) for i in range(count)]) / count
src/articles/poisson/index.ipynb
bradhowes/keystrokecountdown
mit
For completeness, we can also use NumPy's random.exponential method if we pass in $1 / \lambda$
y = np.random.exponential(1.0 / lmbda, count)
np.cumsum(y)[:10]
np.average(y)
src/articles/poisson/index.ipynb
bradhowes/keystrokecountdown
mit
Again, this is in agreement with our expected average interval. Note the numbers (and histogram plots) won't match exactly as we are dealing with random time intervals.
x = range(count)
y = [expovariate(lmbda) for i in x]
plt.hist(y, 10)
plt.show()
src/articles/poisson/index.ipynb
bradhowes/keystrokecountdown
mit
Event Times For a timeline of events, we can simply generate a sequence of independent intervals, and then compute a running sum of them for absolute timestamps.
intervals = [expovariate(lmbda) for i in range(1000)]
timestamps = [0.0]
timestamp = 0.0
for t in intervals:
    timestamp += t
    timestamps.append(timestamp)
timestamps[:10]

deltas = [y - x for x, y in zip(timestamps, timestamps[1:])]
deltas[:10]
sum(deltas) / len(deltas)

plt.figure(figsize=(16, 4))
plt.plot(deltas, 'r+')
plt.show()
src/articles/poisson/index.ipynb
bradhowes/keystrokecountdown
mit
Here we can readily see how the time between events is distributed, with most of the deltas below 1.0 and some fairly large outliers. This is to be expected, as $T_n$ will always be greater than $T_{n-1}$, but perhaps not by much. Finally, let's generate $T = 2.0$ hours worth of timestamps and see if we have close to our desired $N$ value. We will do this 100 times and then average the counts. We should have a value that is very close to $N = 10,000$.
limit = T * 60 * 60
counts = []
for iter in range(100):
    count = 0
    timestamp = 0.0
    while timestamp < limit:
        timestamp += expovariate(lmbda)
        count += 1
    counts.append(count)
sum(counts) / len(counts)
src/articles/poisson/index.ipynb
bradhowes/keystrokecountdown
mit
PyTorch Tutorial This post explains how to train a network using transfer learning. Since it is rarely possible to collect a dataset of sufficient size, a well-trained network is often used as a pretrained starting point. convnet finetuning: instead of random initialization, the network is initialized with one trained on the ImageNet 1000-class dataset, and training then proceeds as usual. fixed feature extraction: all network weights except the final fully connected layer are frozen; the last layer is replaced with a new one and only this layer is trained. We want to build a model that classifies ants and bees. There are 120 training images per class and 75 validation images. This is a very small dataset, but transfer learning will let us build something reasonable with it.
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomSizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Scale(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

data_dir = 'hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                              shuffle=True, num_workers=4)
               for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes

use_gpu = torch.cuda.is_available()
pytorch/Transfer-Learning.ipynb
zzsza/TIL
mit
Visualization
def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))  # convert to a numpy array, then transpose
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated

# Get a batch of training data
# dataloader objects are iterable
inputs, classes = next(iter(dataloaders['train']))

# Make a grid from batch
'''
torchvision.utils.make_grid(tensor, nrow=8, padding=2, normalize=False,
                            range=None, scale_each=False, pad_value=0)
Makes a grid of images.
Args:
    tensor (Tensor or list): 4D mini-batch Tensor of shape (B x C x H x W)
        or a list of images all of the same size.
    nrow (int, optional): Number of images displayed in each row of the grid.
        The final grid size is (B / nrow, nrow). Default is 8.
    padding (int, optional): amount of padding. Default is 2.
    normalize (bool, optional): If True, shift the image to the range (0, 1),
        by subtracting the minimum and dividing by the maximum pixel value.
    range (tuple, optional): tuple (min, max) where min and max are numbers,
        then these numbers are used to normalize the image. By default, min
        and max are computed from the tensor.
    scale_each (bool, optional): If True, scale each image in the batch of
        images separately rather than the (min, max) over all images.
    pad_value (float, optional): Value for the padded pixels.
'''
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])

class_names
classes
pytorch/Transfer-Learning.ipynb
zzsza/TIL
mit
Schedules the learning rate and saves the best-performing model.
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train(True)  # training mode
            else:
                model.eval()  # same as model.train(False), evaluation mode

            running_loss = 0.0
            running_corrects = 0

            for data in dataloaders[phase]:
                inputs, labels = data
                inputs, labels = Variable(inputs), Variable(labels)

                optimizer.zero_grad()

                outputs = model(inputs)
                _, preds = torch.max(outputs.data, 1)  # argmax
                loss = criterion(outputs, labels)

                if phase == 'train':
                    loss.backward()
                    optimizer.step()

                running_loss += loss.data[0] * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects / dataset_sizes[phase]

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model


def visualize_model(model, num_images=6):
    images_so_far = 0
    fig = plt.figure()

    for i, data in enumerate(dataloaders['val']):
        inputs, labels = data
        inputs, labels = Variable(inputs), Variable(labels)

        outputs = model(inputs)
        _, preds = torch.max(outputs.data, 1)

        for j in range(inputs.size()[0]):
            images_so_far += 1
            ax = plt.subplot(num_images // 2, 2, images_so_far)
            ax.axis('off')
            ax.set_title('predicted: {}'.format(class_names[preds[j]]))
            imshow(inputs.cpu().data[j])

            if images_so_far == num_images:
                return
pytorch/Transfer-Learning.ipynb
zzsza/TIL
mit
Fine-tuning
model_ft = models.resnet18(pretrained=True)
model_ft
type(model_ft)
dir(model_ft)

num_ftrs = model_ft.fc.in_features
num_ftrs

model_ft.fc = nn.Linear(num_ftrs, 2)

if use_gpu:
    model_ft = model_ft.cuda()

model_ft.parameters()
[i for i in model_ft.parameters()]

criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

%%time
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
# TODO: use tensorboardX to visualize the falling loss

visualize_model(model_ft)
pytorch/Transfer-Learning.ipynb
zzsza/TIL
mit
ConvNet as fixed feature extractor We need to freeze the whole network except the final layer. Set requires_grad == False on the parameters to freeze them, so that no gradients are computed for them during backward(). See the Autograd documentation for details.
model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False

# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)

if use_gpu:
    model_conv = model_conv.cuda()

criterion = nn.CrossEntropyLoss()

# Observe that only parameters of the final layer are being optimized,
# as opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)

model_conv = train_model(model_conv, criterion, optimizer_conv, exp_lr_scheduler, num_epochs=25)

visualize_model(model_conv)

plt.ioff()
plt.show()
pytorch/Transfer-Learning.ipynb
zzsza/TIL
mit
Load all the data
# First we load the file
file_location = '../results_database/text_wall_street_big.hdf5'
f = h5py.File(file_location, 'r')

# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)

# Load the particular example
Nspatial_clusters = 8
Ntime_clusters = 40
Nembedding = 3

parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
presentations/2016-01-28(Wall-Street-Nexa-Maximal-Lag-Analysis).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
Latency analysis
# Set the parameters for the simulation
maximal_lags = np.arange(8, 21, 3)

# Run the delay analysis
N = 50000
delays = np.arange(0, 25, 1)
accuracy_matrix = np.zeros((maximal_lags.size, delays.size))

for maximal_lag_index, maximal_lag in enumerate(maximal_lags):
    # Extract the appropriate database
    run_name = '/low-resolution' + str(maximal_lag)
    nexa = f[run_name + parameters_string]

    # Now we load the time and the code vectors
    time = nexa['time']
    code_vectors = nexa['code-vectors']
    code_vectors_distance = nexa['code-vectors-distance']
    code_vectors_softmax = nexa['code-vectors-softmax']
    code_vectors_winner = nexa['code-vectors-winner']

    for delay_index, delay in enumerate(delays):
        X = code_vectors_softmax[:(N - delay)]
        y = letters_sequence[delay:N]
        X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)

        clf = svm.SVC(C=1.0, cache_size=200, kernel='linear')
        clf.fit(X_train, y_train)
        score = clf.score(X_test, y_test) * 100.0
        accuracy_matrix[maximal_lag_index, delay_index] = score

        print('delay_index', delay_index)
        print('maximal_lag_index', maximal_lag_index)
        print('maximal_lag', maximal_lag)
        print('delay', delay)
        print('score', score)
        print('-------------')
presentations/2016-01-28(Wall-Street-Nexa-Maximal-Lag-Analysis).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
Plot it
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)

for maximal_lag_index in range(maximal_lags.size):
    ax.plot(delays, accuracy_matrix[maximal_lag_index, :], 'o-', lw=2, markersize=10,
            label=str(maximal_lags[maximal_lag_index]))

ax.set_xlabel('Delays')
ax.set_ylabel('Accuracy')
ax.set_ylim([0, 105])
ax.set_title('Latency analysis for different lags')
ax.legend()
presentations/2016-01-28(Wall-Street-Nexa-Maximal-Lag-Analysis).ipynb
h-mayorquin/time_series_basic
bsd-3-clause
Here we define a string of text by enclosing it with quotation marks and assigning it to a variable or container called text. The fact that Python still sees this piece of text as a contiguous string of characters becomes evident when we ask Python to print out the length of text, using the len() function:
print(len(text))
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
One could say that characters are the 'atoms' or smallest meaningful units in computational text processing. Just as computer images use pixels as their fundamental building blocks, all digital text processing applications start from raw characters, and it is these characters that are physically stored on your machines in bits and bytes. DIY Define a string containing your own name in the code block below and print its length. Insert a number of whitespaces at the end of your name (i.e. tap the space bar a couple of times): does this change the length of the string?
# your code goes here
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
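If you try the DIY above, note that trailing whitespace characters do count toward a string's length. A quick check (with a made-up name):

```python
name = "Ada"            # hypothetical name
padded = name + "   "   # three trailing spaces added
print(len(name), len(padded))  # 3 6
```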
Many people find it more intuitive to consider texts as a strings of words, rather than plain characters, because words correspond to more concrete entities. In Python, we can easily turn our original 'string' into a list of words:
words = text.split()
print(words)
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
Using the split() method, we split our original sentence into a word list along instances of whitespace characters. Note that, in technical terms, the variable type of text is different from that of the newly created variable words:
print(type(text))
print(type(words))
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
Likewise, they evidently differ in length:
print(len(text))
print(len(words))
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
By using 'indexing' (with square brackets), we can now access individual words in our word list. Check out the following print statements:
print(words[3])
print(words[5])
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
DIY Try out the indexes [0] and [-1] for the example with words. Which words are being printed when you use these indices? Do this makes sense? What is peculiar about the way Python 'counts'? Note that words is a so-called list variable in technical terms, but that the individual elements of words are still plain strings:
print(type(words[3]))
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
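Python counts from zero, and negative indices count backwards from the end of a list. A quick illustration with a throwaway list:

```python
items = "the quick brown fox".split()
print(items[0])   # 'the' — indexing starts at 0, so [0] is the first element
print(items[-1])  # 'fox' — [-1] is the last element
```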
Tokenization In the previous paragraph, we have adopted an extremely crude definition of a 'word', namely as a string of characters that doesn't contain any whitespace. There are of course many problems that arise if we use such a naive definition. Can you think of some? In computer science, and computational linguistics in particular, people have come up with much smarter ways to divide texts into words. This process is called tokenization, which refers to the fact that it divides strings of characters into a list of more meaningful tokens. One interesting package we can use for this is nltk (the Natural Language Toolkit), a package which has been specifically designed to deal with language problems. First, we have to import it, since it isn't part of the standard library of Python:
import nltk
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
We can now apply nltk's functionality, for instance, its (default) tokenizer for English:
tokens = nltk.word_tokenize(text)
print(tokens)
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
Note how the function word_tokenize() neatly splits off punctuation! Many improvements nevertheless remain. To collapse the difference between uppercase and lowercase variants, for instance, we could first lowercase the original input string:
lower_str = text.lower()
lower_tokens = nltk.word_tokenize(lower_str)
print(lower_tokens)
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
Many applications will not be very interested in punctuation marks, so we can try to remove these as well. The isalpha() method allows you to determine whether a string only contains alphabetic characters:
print(lower_tokens[1].isalpha())
print(lower_tokens[-1].isalpha())
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
Functions like isalpha() return something that is called a 'boolean' value, a kind of variable that can only take two values, i.e. True or False. Such values are useful because you can use them to test whether some condition is true or not. For example, if isalpha() evaluates to False for a word, we can have Python ignore such a word. DIY Using some more complicated Python syntax (a so-called 'list generator'), it is very easy to filter out non-alphabetic strings. In the example below, I inserted a logical thinking error on purpose: can you adapt the line below and make it output the correct result? You will note that Python is really a super-intuitive programming language, because it almost reads like plain English.
clean_tokens = [w for w in lower_tokens if not w.isalpha()]
print(clean_tokens)
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
Counting words Once we have come up with a good way to split texts into individual tokens, it is time to start thinking about how we can represent texts via these tokens. One popular approach to this problem is called the bag-of-words model (BOW): this is a very old (and slightly naive) strategy for text representation, but it is still surprisingly popular. Many spam filters, for instance, will still rely on a bag-of-words model when deciding which email messages will show up in your Junk folder. The intuition behind this model is very simple: to represent a document, we consider it a 'bag' containing tokens in no particular order. We then characterize a particular text by counting how often each term occurs in it. Counting how often each word occurs in the token list from above is child's play in Python. For this purpose, we copy some of the code from the previous section and apply it to a larger paragraph:
text = """It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. However little known the feelings or views of such a man may be on his first entering a neighbourhood, this truth is so well fixed in the minds of the surrounding families, that he is considered the rightful property of some one or other of their daughters. "My dear Mr. Bennet," said his lady to him one day, "have you heard that Netherfield Park is let at last?" Mr. Bennet replied that he had not. "But it is," returned she; "for Mrs. Long has just been here, and she told me all about it." Mr. Bennet made no answer. "Do you not want to know who has taken it?" cried his wife impatiently. "_You_ want to tell me, and I have no objection to hearing it." This was invitation enough.""" lower_str = text.lower() lower_tokens = nltk.word_tokenize(lower_str) clean_tokens = [w for w in lower_tokens if w.isalpha()] print('Word count:', len(clean_tokens))
We obtain a list of 148 tokens. Counting how often each individual token occurs in this 'document' is trivial, using the Counter object, which we can import from Python's collections module:
from collections import Counter

bow = Counter(clean_tokens)
print(bow)
Let us have a look at the three most frequent items in the text:
print(bow.most_common(3))
Obviously, the most common items in such a frequency list are typically small, grammatical words that are very frequent throughout all the texts in a language. Let us add a small visualisation of this information. We can use a barplot to show the top-frequency items in a more pleasing manner. In the following block, we use the matplotlib package for this purpose, which is a popular graphics package in Python. To make sure that it shows up neatly in our notebook, first execute this cell:
%matplotlib inline
And then execute the following blocks -- and try to understand the intuition behind them:
# first, we extract the counts:
nb_words = 8
wrds, cnts = zip(*bow.most_common(nb_words))
print(wrds)
print(cnts)

# now the plotting part:
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
bar_width = 0.5
idxs = np.arange(nb_words)
#print(idxs)
ax.bar(idxs, cnts, bar_width, color='blue', align='center')
plt.xticks(idxs, wrds)
plt.show()
DIY Can you try to increase the number of words plotted? And change the color of the bars? And the width of the bars plotted? The Bag of Words Model We are almost there: we now know how to split documents into tokens and how we count (and even visualize!) the frequencies of these items. Now it is only a small step towards a 'real' bag-of-words model. If we represent a collection of texts under a bag-of-words model, what we really want to end up with is a frequency table, which has a row for each document and a column for each token that occurs in the collection, which is also called the vocabulary of the corpus. Each cell is then filled with the frequency of the corresponding vocabulary item, so that the final matrix looks like a standard, two-dimensional table which you all know from spreadsheet applications. While creating such a matrix yourself isn't too difficult in Python, here we will rely on an external package, which makes it really simple to create such matrices efficiently. The zipped folder which you downloaded for this course contains a small corpus of novels by a number of famous Victorian novelists. Under data/victorian_small you will find a number of files; the filenames indicate the author and (abbreviated) title of the novel contained in that file (e.g. Austen_Pride.txt). In the block below, I prepared some code to load these files from your hard drive into Python, which you can execute now:
# we import some modules which we need
import glob
import os

# we create three empty lists
authors, titles, texts = [], [], []

# we loop over the filenames under the directory:
for filename in sorted(glob.glob('data/victorian_small/*.txt')):
    # we open a file and read the contents from it:
    with open(filename, 'r') as f:
        text = f.read()
    # we derive the title and author from the filename
    author, title = os.path.basename(filename).replace('.txt', '').split('_')
    # we add to the lists:
    authors.append(author)
    titles.append(title)
    texts.append(text)
This code makes use of a so-called for-loop: after retrieving the list of relevant file names, we load the content of each file and add it to a list called texts, using the append() function. Additionally, we also create lists in which we store the authors and titles of the novels:
print(authors) print(titles)
Note that these three lists can be neatly zipped together, so that the third item in authors corresponds to the third item in titles (remember: Python starts counting at zero!):
print('Title:', titles[2], '- by:', authors[2])
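The zipping mentioned above can also be made explicit with Python's built-in zip() function; a small sketch, using invented stand-in lists rather than the real ones built by the loading loop:

```python
# invented stand-in lists, purely for illustration
authors = ['Austen', 'Bronte', 'Dickens']
titles = ['Emma', 'Jane', 'Bleak']

# zip() walks over both lists in lockstep, pairing items by position
for author, title in zip(authors, titles):
    print(author, '-', title)
```

This pairing only makes sense because the three lists were filled in the same order inside the loading loop.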
To have a peek at the content of this novel, we can now 'stack' indices as follows. Using the first square brackets ([2]) we select the third novel in the list, i.e. Sense and Sensibility by Jane Austen. Then, using a second index ([:300]), we print the first 300 characters of that novel.
print(texts[2][:300])
After loading these documents, we can now represent them using a bag-of-words model. To this end, we will use a library called scikit-learn, or sklearn for short, which is increasingly popular in text analysis nowadays. As you will see below, we import its CountVectorizer object and apply it to our corpus, specifying that we would like to extract a maximum of 10 features from the texts. This means that we will only consider the frequencies of 10 words (to keep our model small enough to be workable for now).
from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer(max_features=10)
BOW = vec.fit_transform(texts).toarray()
print(BOW.shape)
The code block above creates a matrix which has a 9x10 shape: this means that the resulting matrix has 9 rows and 10 columns. Can you figure out where these numbers come from? To find out which words are included, we can inspect the newly created vec object as follows:
print(vec.get_feature_names())
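The '9x10' question above can also be answered with a tiny hand-rolled version of the same counting idea. The mini-corpus below is invented (it is not the Victorian data), but it shows where the rows and columns come from: one row per document, one column per vocabulary item.

```python
from collections import Counter

# invented three-document mini-corpus, purely for illustration
docs = ["the cat sat", "the cat ran", "a dog ran fast"]

counts = [Counter(d.split()) for d in docs]
vocab = sorted(set(w for c in counts for w in c))  # one column per vocabulary item
matrix = [[c[w] for w in vocab] for c in counts]   # one row per document
print(vocab)
print(matrix)
```

With 3 documents and 7 distinct words, this toy matrix has a 3x7 shape; the sklearn matrix above is 9x10 for exactly the same reason (9 novels, 10 retained words).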
As you can see, the max_features argument which we used above restricts the model to the n words which are most frequent throughout our texts. These are typically smallish function words. Funnily, sklearn uses its own tokenizer, and this default tokenizer ignores certain words that are, surprisingly enough, absent in the vocabulary list we just inspected. Can you figure out which words? Why are they absent? Luckily, sklearn is flexible enough to allow us to use our own tokenizer. To use the nltk tokenizer, for instance, we can simply pass it as an argument when we create the CountVectorizer. (Note that, depending on the speed of your machine, the following code block might actually take a while to execute, because the tokenizer now has to process entire novels, instead of a single sentence. Sit tight!)
from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer(max_features=10, tokenizer=nltk.word_tokenize)
BOW = vec.fit_transform(texts).toarray()
print(vec.get_feature_names())
Finally, let us visually inspect the BOW model which we have constructed. To this end, we make use of pandas, a Python package that is nowadays often used to work with all sorts of data tables in Python. In the code block below, we create a new table or 'DataFrame' and add the correct row and column names:
import pandas as pd  # conventional shorthand!

df = pd.DataFrame(BOW, columns=vec.get_feature_names(), index=titles)
print(df)
After creating this DataFrame, it becomes very easy to retrieve specific information from our corpus. What is the frequency, for instance, of the word 'the' in each text?
print(df['the'])
Or the frequency of 'of' in 'Emma'?
print(df['of']['Emma'])
Text analysis Distance metrics Now that we have converted our small dummy corpus to a bag-of-words matrix, we can finally start actually analyzing it! One very common technique to visualize texts is to render a cluster diagram or dendrogram. Such a tree-like visualization (an example will follow shortly) can be used to obtain a rough-and-ready first idea of the (dis)similarities between the texts in our corpus. Texts that cluster together under a similar branch in the resulting diagram can be argued to be stylistically closer to each other than texts which occupy completely different places in the tree. Texts by the same authors, for instance, will often form tight clades in the tree, because they are written in a similar style. However, when comparing texts, we should be aware of the fact that documents can strongly vary in length. The bag-of-words model which we created above does not take into account that some texts might be longer than others, because it simply uses absolute frequencies, which will be much higher in the case of longer documents. Before comparing texts on the basis of word frequencies, it therefore makes sense to apply some sort of normalization. One very common type of normalization is to use relative, instead of absolute, word frequencies: that means that we have to divide the original, absolute frequencies in a document by the total number of words in that document. Remember that we are dealing with a 9x10 matrix at this stage: each of the 9 document rows which we obtain should now be normalized by dividing each word count by the total number of words which we recorded for that document. First, we therefore need to calculate the row-wise sum in our table.
totals = BOW.sum(axis=1, keepdims=True)
print(totals)
Now, we can efficiently 'scale' or normalize the matrix using these sums:
BOW = BOW / totals
print(BOW.shape)
If we inspect our new frequency table, we can see that the values are now neatly in the 0-1 range:
print(BOW)
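As a quick sanity check on the normalization step: after dividing by the row-wise totals, every row of relative frequencies should sum to exactly one. A sketch with an invented count matrix standing in for the real BOW matrix:

```python
import numpy as np

# invented count matrix, purely for illustration
counts = np.array([[2., 1., 1.],
                   [3., 1., 0.]])

# keepdims=True preserves the column shape, so broadcasting divides row-wise
rel = counts / counts.sum(axis=1, keepdims=True)
print(rel)
print(np.allclose(rel.sum(axis=1), 1.0))  # every row sums to 1
```

If this check ever prints False, the normalization was applied along the wrong axis.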