Moreover, if we now print the sum of the word frequencies for each of our nine texts, we see that the relative values sum to 1:
print(BOW.sum(axis=1))
Digital Text Analysis.ipynb
mikekestemont/leyden-workshop
mit
That looks great. Let us now build a model with a more serious vocabulary size (300 words) for the actual cluster analysis:
vec = CountVectorizer(max_features=300, tokenizer=nltk.word_tokenize)
BOW = vec.fit_transform(texts).toarray()
BOW = BOW / BOW.sum(axis=1, keepdims=True)
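The effect of this row-wise normalization can be sanity-checked on a tiny toy matrix (a sketch with made-up counts, not part of our corpus):

```python
import numpy as np

# dummy word counts: 2 texts over a 3-word vocabulary
counts = np.array([[2.0, 1.0, 1.0],
                   [1.0, 0.0, 3.0]])

# divide every row by its sum, exactly as for BOW above
rel = counts / counts.sum(axis=1, keepdims=True)
print(rel.sum(axis=1))  # every row now sums to 1
```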
Clustering algorithms are essentially based on the distances between texts: they typically start by calculating the distance between each pair of texts in a corpus, so that for each text we know how (dis)similar it is from every other text. Only after these pairwise distances have been calculated can the clustering algorithm start building a tree representation, in which similar texts are joined together and merged into new nodes. To create a distance matrix, we use a number of functions from scipy (Scientific Python), a commonly used package for scientific applications.
from scipy.spatial.distance import pdist, squareform
The function pdist() ('pairwise distances') is a function which we can use to calculate the distance between each pair of texts in our corpus. Using the squareform() function, we will eventually obtain a 9x9 matrix, the structure of which is conceptually easy to understand: this square distance matrix (named dm) will hold, for each of our 9 texts, the distance to every other text in the corpus. Naturally, the diagonal of this matrix holds all zeroes (since the distance from a text to itself is zero). We create this distance matrix as follows:
dm = squareform(pdist(BOW))
print(dm.shape)
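To see what pdist() and squareform() do, here is a minimal sketch on three made-up 2D points (with pdist's default Euclidean metric):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

points = np.array([[0, 0], [3, 4], [0, 4]])
condensed = pdist(points)        # the 3 pairwise distances as a flat vector
dm_toy = squareform(condensed)   # the full, symmetric 3x3 distance matrix
print(dm_toy)
```

Note the 3-4-5 triangle: the matrix is symmetric and its diagonal is all zeroes, just as described above.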
As is clear from the shape info, we have obtained a 9 by 9 matrix, which holds the distance between each pair of texts. Note that the distance from a text to itself is of course zero (cf. diagonal cells):
print(dm[3][3])
print(dm[8][8])
Additionally, we can observe that the distance from text A to text B is equal to the distance from B to A:
print(dm[2][3])
print(dm[3][2])
We can visualize this distance matrix as a square heatmap, where darker cells indicate a larger distance between texts. Again, we use the matplotlib package to achieve this:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
heatmap = ax.pcolor(dm, cmap=plt.cm.Blues)
ax.set_xticks(np.arange(dm.shape[0]) + 0.5, minor=False)
ax.set_yticks(np.arange(dm.shape[1]) + 0.5, minor=False)
ax.set_xticklabels(titles, minor=False, rotation=90)
ax.set_yticklabels(authors, minor=False)
plt.show()
As you can see, the little squares representing texts by the same author already show a tendency towards lower distance scores. But how are these distances calculated exactly? Each text in our document matrix is represented as a row consisting of 300 numbers. Such a list of numbers is also called a document vector, which is why the document modeling process described above is sometimes also called vectorization (cf. CountVectorizer). In digital text analysis, documents are compared by applying standard metrics from geometry to these document vectors containing word frequencies. Let us have a closer look at one popular but intuitively simple distance metric, the Manhattan city block distance. The formula behind this metric is very simple (don't be afraid of the mathematical notation; it won't bite): $$manhattan(x, y) = \sum_{i=1}^{n} \left| x_i - y_i \right|$$ What this formula expresses is that, to calculate the distance between two documents, we loop over each word column in both texts and calculate the absolute difference between the values for that word in each text. Afterwards we simply sum all the absolute differences. DIY Consider the following two dummy vectors:
a = [2, 5, 1, 6, 7]
b = [4, 5, 1, 7, 3]
Can you calculate the Manhattan distance between a and b by hand? Compare the result you obtain to this line of code:
from scipy.spatial.distance import cityblock as manhattan

print(manhattan(a, b))
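The formula translates almost literally into Python. The sketch below reimplements it and checks the result against scipy (the helper name manhattan_by_hand is ours, not scipy's):

```python
from scipy.spatial.distance import cityblock

def manhattan_by_hand(x, y):
    # sum of the absolute coordinate-wise differences
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

a = [2, 5, 1, 6, 7]
b = [4, 5, 1, 7, 3]
print(manhattan_by_hand(a, b) == cityblock(a, b))  # True
```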
This is an example of one popular distance metric which is currently used a lot in digital text analysis. Alternatives (which might ring a bell from math classes in high school) include the Euclidean distance or the cosine distance. Our dm distance matrix from above can be created with any of these options, by specifying the correct metric when calling pdist(). Try out some of them!
# note: the metric is an argument of pdist(), not of squareform()
dm = squareform(pdist(BOW, 'cosine'))  # or 'euclidean', 'cityblock', etc.
fig, ax = plt.subplots()
heatmap = ax.pcolor(dm, cmap=plt.cm.Reds)
ax.set_xticks(np.arange(dm.shape[0]) + 0.5, minor=False)
ax.set_yticks(np.arange(dm.shape[1]) + 0.5, minor=False)
ax.set_xticklabels(titles, minor=False, rotation=90)
ax.set_yticklabels(authors, minor=False)
plt.show()
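scipy also exposes these metrics as standalone functions, which is handy for checking single pairs by hand (a small illustration on dummy vectors, not on our corpus):

```python
from scipy.spatial.distance import euclidean, cosine

# a 3-4-5 right triangle gives Euclidean distance 5
d_euc = euclidean([0, 0], [3, 4])
# orthogonal vectors have cosine distance 1 (= 1 - cosine similarity)
d_cos = cosine([1, 0], [0, 1])
print(d_euc, d_cos)
```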
Cluster trees Now that we have learned how to calculate the pairwise distances between texts, we are very close to the dendrogram that I promised you a while back. To be able to visualize a dendrogram, we must first figure out the (branch) linkages in the tree, because we have to determine which texts are most similar to each other. Our clustering procedure therefore starts by merging (or 'linking') the most similar texts in the corpus into a new node; only at a later stage in the tree are these new nodes of very similar texts joined together with nodes representing other texts. We perform this - fairly abstract - step on our distance matrix as follows:
from scipy.cluster.hierarchy import linkage

# linkage() expects a condensed distance matrix, so we convert our
# square matrix dm back to its condensed form with squareform()
linkage_object = linkage(squareform(dm))
We are now ready to draw the actual dendrogram, which we do in the following code block. Note that we annotate the outer leaf nodes in our tree (i.e. the actual texts) using the labels argument. With the orientation argument, we make sure that our dendrogram can be easily read from left to right:
from scipy.cluster.hierarchy import dendrogram

d = dendrogram(Z=linkage_object, labels=titles, orientation='right')
Using the authors as labels is of course also a good idea:
d = dendrogram(Z=linkage_object, labels=authors, orientation='right')
As we can see, Jane Austen's novels form a tight and distinctive cloud; an author like Thackeray is apparently more difficult to tell apart. The actual distance between nodes is reflected in the horizontal length of the branches (i.e. the values on the x-axis in this plot). Note that in this code block too we can easily switch to, for instance, the Euclidean distance. Does the code block below produce better results?
condensed = pdist(BOW, 'euclidean')
dm = squareform(condensed)
# linkage() expects the condensed distance matrix
linkage_object = linkage(condensed, method='ward')
d = dendrogram(Z=linkage_object, labels=authors, orientation='right')
Exercise The code repository also contains a larger folder of novels, called victorian_large. Use the code block below to copy and paste code snippets from above, which you can slightly adapt to do the following things:
1. Read in the texts, producing 3 lists of texts, authors and titles. How many texts did you load (use the len() function)?
2. Select the text of David Copperfield by Charles Dickens and find out how often the word "and" is used in this text. Hint: use the nltk tokenizer and the Counter object from the collections module. Make sure no punctuation is included in your counts.
3. Vectorize the texts using the CountVectorizer with the 250 most frequent words.
4. Normalize the resulting document matrix and draw a heatmap using blue colors.
5. Draw a cluster diagram and experiment with the distance metrics: which distance metric produces the 'best' result from the point of view of authorship clustering?
# exercise code goes here
Topic modeling Up until now, we have been working with fairly small, dummy-size corpora to introduce you to some standard methods for text analysis in Python. When working with real-world data, however, we are often confronted with much larger and noisier datasets, sometimes even datasets that are too large to read or inspect manually. To deal with such huge datasets, researchers in fields such as computer science have come up with a number of techniques that allow us to nevertheless get a grasp of the kind of texts that are contained in a document collection, as well as their content. For this part of the tutorial, I have included a set of over 3,000 documents under the folder 'data/newsgroups'. The so-called "20 newsgroups dataset" is a very famous dataset in computational linguistics (see this website): it refers to a collection of approximately 20,000 newsgroup documents, divided into 20 categories, each corresponding to a different topic. The topics are very diverse and range from science to politics. I have subsampled a number of these categories in the repository for this tutorial, but I won't tell you which... The idea is that we will use topic modelling so that you can find out for yourself which topics are discussed in this dataset! First, we start by loading the documents, using code that is very similar to the text loading code we used above:
import os

documents, names = [], []
for filename in sorted(os.listdir('data/newsgroups')):
    try:
        with open('data/newsgroups/' + filename, 'r') as f:
            text = f.read()
        documents.append(text)
        names.append(filename)
    except:
        # skip files that cannot be read or decoded
        continue
print(len(documents))
As you can see, we are dealing with 3,551 documents. Have a look at some of the documents and try to find out what they are about. Vary the index used to select a random document and print out its first 1000 characters or so:
print(documents[3041][:1000])
You might already get a sense of the kind of topics that are being discussed. You will also notice that this is rather noisy data, which is challenging for humans to process manually. In the last part of this tutorial we will use a technique called topic modelling, which automatically determines a number of topics, or semantic word clusters, that seem to be important in a document collection. The nice thing about topic modelling is that it is a largely unsupervised technique, meaning that it does not need prior information about the document collection or the language it uses. It simply inspects which words often co-occur in documents and are therefore more likely to be semantically related. After fitting a topic model to a document collection, we can use it to inspect which topics have been detected. Additionally, we can use the model to infer to what extent these topics are present in new documents. Interestingly, the model does not assume that texts are always about a single topic; rather, it assumes that documents contain a mixture of different topics. A text about the transfer of a football player, for instance, might contain 80% of a 'sports' topic, 15% of a 'finance'-related topic, and 5% of a topic about 'Spanish lifestyle'. For topic modelling too, we first need to convert our corpus to a numerical format (i.e. 'vectorize' it as we did above). Luckily, we already know how to do that:
vec = CountVectorizer(max_df=0.95, min_df=5, max_features=2000, stop_words='english')
BOW = vec.fit_transform(documents)
print(BOW.shape)
Note that we make use of a couple of additional bells and whistles that ship with sklearn's CountVectorizer. Can you figure out what they mean (hint: df here stands for document frequency)? In topic modelling we are not interested in the kind of high-frequency grammatical words that we have used up until now. Such words are typically called function words in Information Retrieval, and they are mostly ignored completely in topic modelling. Have a look at the 2000 features extracted: are these indeed content words?
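A tiny made-up corpus shows what min_df and max_df do: min_df drops words that occur in too few documents, and a fractional max_df drops words that occur in too large a share of them (the three-document corpus below is our own illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer

toy_corpus = ["apple banana", "apple cherry", "apple banana"]
# keep words occurring in at least 2 docs, but in at most 95% of them
toy_vec = CountVectorizer(max_df=0.95, min_df=2)
toy_vec.fit(toy_corpus)

# 'cherry' is too rare (1 doc), 'apple' is too frequent (all 3 docs)
print(sorted(toy_vec.vocabulary_))  # ['banana']
```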
print(vec.get_feature_names())
We are now ready to start modelling the topics in this text collection. For this we make use of a popular technique called Latent Dirichlet Allocation or LDA, which is also included in the sklearn library. In the code block below, you can safely ignore most of the settings which we use when we initialize the model, but you should pay attention to the n_topics and max_iter parameters. The former controls how many topics we will extract from the document collection (this is one of the few parameters which the model, sadly, does not learn itself). We start with a fairly small number of topics, but if you want a more fine-grained analysis of your corpus, you can always increase this parameter. The max_iter setting, finally, controls how long we let the model 'think': the more iterations we allow, the better the model will get, but because LDA is, as you will see, fairly computationally intensive, it makes sense to start with a relatively low number here. You can now execute the following code block -- you will see that it might take several minutes to complete.
from sklearn.decomposition import LatentDirichletAllocation

lda = LatentDirichletAllocation(n_topics=50, max_iter=10,
                                learning_method='online',
                                learning_offset=50., random_state=0)
lda.fit(BOW)
After the model has (finally!) been fitted, we can now inspect our topics. We do this by finding out which items in our vocabulary have the highest score for each topic. The topics are available as lda.components_ after the model has been fitted.
feature_names = vec.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
    print('Topic', topic_idx, '> ', end='')
    print(' '.join([feature_names[i] for i in topic.argsort()[:-12 - 1:-1]]))
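The slicing trick topic.argsort()[:-12 - 1:-1] simply picks the indices of the 12 highest-scoring words. On a dummy array it is easier to see what happens:

```python
import numpy as np

scores = np.array([3, 1, 4, 1, 5])
# argsort() sorts ascending; the reversed slice takes the top 3 indices
top3 = scores.argsort()[:-3 - 1:-1]
print(top3)  # indices of the values 5, 4 and 3, in that order
```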
Can you make sense of these topics? Which are the main thematic categories that you can discern? DIY Try to run the algorithm with more topics and allow more iterations (but don't exaggerate!): do the results get more interpretable? Now that we have built a topic model, we can use it to represent our corpus. Instead of representing each document as a vector containing word frequencies, we represent it as a vector containing topic scores. To achieve this, we simply call the transform() function on the bag-of-words representation of our documents:
topic_repr = lda.transform(BOW)
print(topic_repr.shape)
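Because each document is modelled as a mixture of topics, the rows of topic_repr are (approximately) probability distributions. A minimal standalone sketch with dummy counts illustrates this property:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# dummy bag-of-words counts: 4 documents over a 5-word vocabulary
X = np.array([[3, 0, 1, 0, 0],
              [2, 1, 0, 0, 1],
              [0, 0, 0, 4, 2],
              [0, 1, 0, 3, 3]])
toy_lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = toy_lda.fit_transform(X)
print(doc_topics.sum(axis=1))  # each row sums to (roughly) 1
```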
As you can see, we obtain another sort of document matrix, where the number of columns corresponds to the number of topics we extracted. Let us now find out whether this representation yields anything useful. It is difficult to visualize 3,000+ documents all at once, so in the code block below, I select a smaller subset of 30 documents (and the corresponding filenames), using the random module.
import random

comb = list(zip(names, topic_repr))
random.seed(10000)
random.shuffle(comb)
comb = comb[:30]
subset_names, subset_topic_repr = zip(*comb)
We can now use our clustering algorithm from above in exactly the same way. Go on and try it (because of the random aspect of the previous code block, it is possible that you obtain a different random selection).
# note: the metric is an argument of pdist(); linkage() expects the condensed matrix
condensed = pdist(subset_topic_repr, 'cosine')  # or 'euclidean', etc.
dm = squareform(condensed)
linkage_object = linkage(condensed, method='ward')
fig_size = plt.rcParams["figure.figsize"]
plt.rcParams["figure.figsize"] = [15, 9]
d = dendrogram(Z=linkage_object, labels=subset_names, orientation='right')
Timoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right) = u\left( \alpha_1 \right) + \alpha_3 \gamma \left( \alpha_1 \right)$
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right) = 0$
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right) = w\left( \alpha_1 \right)$
$$\left( \begin{array}{c} u_1 \\ \frac{\partial u_1}{\partial \alpha_1} \\ \frac{\partial u_1}{\partial \alpha_2} \\ \frac{\partial u_1}{\partial \alpha_3} \\ u_2 \\ \frac{\partial u_2}{\partial \alpha_1} \\ \frac{\partial u_2}{\partial \alpha_2} \\ \frac{\partial u_2}{\partial \alpha_3} \\ u_3 \\ \frac{\partial u_3}{\partial \alpha_1} \\ \frac{\partial u_3}{\partial \alpha_2} \\ \frac{\partial u_3}{\partial \alpha_3} \end{array} \right) = T \cdot \left( \begin{array}{c} u \\ \frac{\partial u}{\partial \alpha_1} \\ \gamma \\ \frac{\partial \gamma}{\partial \alpha_1} \\ w \\ \frac{\partial w}{\partial \alpha_1} \end{array} \right)$$
# A, K, alpha3 and rho are sympy symbols defined earlier in the notebook
T = zeros(12, 6)
T[0, 0] = 1
T[0, 2] = alpha3
T[1, 1] = 1
T[1, 3] = alpha3
T[3, 2] = 1
T[8, 4] = 1
T[9, 5] = 1
T

B = Matrix([[0, 1/(A*(K*alpha3 + 1)), 0, 0, 0, 0, 0, 0, K/(K*alpha3 + 1), 0, 0, 0],
            [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
            [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
            [0, 0, 0, 0, 0, 1/(A*(K*alpha3 + 1)), 0, 0, 0, 0, 0, 0],
            [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
            [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
            [-K/(K*alpha3 + 1), 0, 0, 0, 0, 0, 0, 0, 0, 1/(A*(K*alpha3 + 1)), 0, 0],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]])
B

E = zeros(6, 9)
E[0, 0] = 1
E[1, 4] = 1
E[2, 8] = 1
E[3, 1] = 1
E[3, 3] = 1
E[4, 2] = 1
E[4, 6] = 1
E[5, 5] = 1
E[5, 7] = 1
E

simplify(E*B*T)

mu = Symbol('mu')
la = Symbol('lambda')
C_tensor = getIsotropicStiffnessTensor(mu, la)
C = convertStiffnessTensorToMatrix(C_tensor)
C

S = T.T*B.T*E.T*C*E*B*T*A*(1 + alpha3*K)**2
S = simplify(S)
S

h = Symbol('h')
S_in = integrate(S*(1 - alpha3*K + (alpha3**2)*K), (alpha3, -h/2, h/2))
S_in

E, nu = symbols('E nu')
lambda_elastic = E*nu/((1 + nu)*(1 - 2*nu))
mu_elastic = E/(2*(1 + nu))
S_ins = simplify(S_in.subs(A, 1).subs(la, lambda_elastic).subs(mu, mu_elastic))
S_ins

a11 = E/(1 - nu**2)
a44 = 5*E/(12*(1 + nu))
AM = Matrix([[a11, 0], [0, a44]])
strainT = Matrix([[1, alpha3, 0], [0, 0, 1]])
AT = strainT.T*AM*strainT
integrate(AT, (alpha3, -h/2, h/2))

# diagonal mass density matrix: rho at the positions of u_1, u_2 and u_3
M = zeros(12, 12)
M[0, 0] = rho
M[4, 4] = rho
M[8, 8] = rho
M = T.T*M*T*A*(1 + alpha3*K)
M

M_in = integrate(M, (alpha3, -h/2, h/2))
M_in
py/notebooks/.ipynb_checkpoints/LinearSolShellsFEM-checkpoint.ipynb
tarashor/vibrations
mit
Cartesian coordinates
import fem.geometry as g
import fem.model as m
import fem.material as mat
import fem.shell.shellsolver as s
import fem.shell.mesh1D as me
import plot

stiffness_matrix_func = lambdify([A, K, mu, la, h], S_in, "numpy")
mass_matrix_func = lambdify([A, K, rho, h], M_in, "numpy")

def stiffness_matrix(material, geometry, x1, x2, x3):
    A, K = geometry.get_A_and_K(x1, x2, x3)
    return stiffness_matrix_func(A, K, material.mu(), material.lam(), thickness)

def mass_matrix(material, geometry, x1, x2, x3):
    A, K = geometry.get_A_and_K(x1, x2, x3)
    return mass_matrix_func(A, K, material.rho, thickness)

def generate_layers(thickness, layers_count, material):
    layer_top = thickness / 2
    layer_thickness = thickness / layers_count
    layers = set()
    for i in range(layers_count):
        layer = m.Layer(layer_top - layer_thickness, layer_top, material, i)
        layers.add(layer)
        layer_top -= layer_thickness
    return layers

def solve(geometry, thickness, linear, N_width, N_height):
    layers_count = 1
    layers = generate_layers(thickness, layers_count, mat.IsotropicMaterial.steel())
    model = m.Model(geometry, layers, m.Model.FIXED_BOTTOM_LEFT_RIGHT_POINTS)
    mesh = me.Mesh1D.generate(width, layers, N_width, m.Model.FIXED_BOTTOM_LEFT_RIGHT_POINTS)
    lam, vec = s.solve(model, mesh, stiffness_matrix, mass_matrix)
    return lam, vec, mesh, geometry

width = 2
curvature = 0.8
thickness = 0.05
corrugation_amplitude = 0.05
corrugation_frequency = 20

# geometry = g.CorrugatedCylindricalPlate(width, curvature, corrugation_amplitude, corrugation_frequency)
geometry = g.CylindricalPlate(width, curvature)
# geometry = g.Plate(width)

N_width = 100
N_height = 4

lam, vec, mesh, geometry = solve(geometry, thickness, False, N_width, N_height)
results = s.convert_to_results(lam, vec, mesh, geometry)

results_index = 0
plot.plot_init_and_deformed_geometry_in_cartesian(results[results_index], 0, width,
                                                  -thickness / 2, thickness / 2, 0,
                                                  geometry.to_cartesian_coordinates)

to_print = min(20, len(results))
for i in range(to_print):
    print(results[i].rad_per_sec_to_Hz(results[i].freq))
Exercise Simplify the quotients of factorials: - $\frac{7!}{6!}$ - $\frac{8!}{9!}$ - $\frac{9!}{5!\cdot 4!}$ - $\frac{m!}{(m - 1)!}$ - $\frac{(m + 1)!}{(m - 1)!}$
from sympy import latex, simplify
from sympy.parsing.latex import parse_latex
from IPython.display import display, Markdown as md

enunciado = [r'\frac{7!}{6!}', r'\frac{8!}{9!}', r'\frac{9!}{5!\cdot 4!}',
             r'\frac{m!}{(m - 1)!}', r'\frac{( m + 1 )!}{( m - 1 )!}']

enunciado_sympy = []
for i in enunciado:
    enunciado_sympy.append(parse_latex(i))
enunciado_sympy

for i in range(len(enunciado_sympy)):
    display(md("$" + enunciado[i] + " \\rightarrow " + latex(simplify(enunciado_sympy[i])) + "$"))
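The purely numerical quotients can also be double-checked with math.factorial (a quick sketch, independent of the symbolic solution):

```python
from math import factorial

q1 = factorial(7) // factorial(6)                    # 7!/6! = 7
q3 = factorial(9) // (factorial(5) * factorial(4))   # 9!/(5! * 4!) = 126
print(q1, q3)
```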
tmp/Ejercicios 1_3.ipynb
crdguez/mat4ac
gpl-3.0
Exercise Calculate the following operations: - $\binom{252}{250}$ - $\binom{25}{3} + \binom{25}{4} = \binom{26}{4}$ - $\binom{9}{6} + \binom{9}{7} + \binom{10}{2} = \binom{10}{7} + \binom{10}{8} = \binom{11}{8}$ - $\binom{4}{2} + \binom{4}{3} + \binom{5}{4} + \binom{6}{5} + \binom{7}{6} + \binom{8}{7} = \binom{9}{7}$ - $\binom{4}{0} + \binom{4}{1} + \binom{4}{2} + \binom{4}{3} = 2^4 - 1$
from sympy import factorial
from sympy.functions.combinatorial.numbers import nC, nP, nT
from IPython.display import display

nC(5, 3)

enunciado = [[252, 250], [25, 3], [25, 4]]
for i in range(len(enunciado)):
    display(nC(enunciado[i][0], enunciado[i][1]))

nC(enunciado[0][0], enunciado[0][1])
factorial(252) / (factorial(250) * factorial(2))
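Most of these identities follow from Pascal's rule, $\binom{n}{k} + \binom{n}{k+1} = \binom{n+1}{k+1}$, and from the symmetry $\binom{n}{k} = \binom{n}{n-k}$, which math.comb verifies directly:

```python
from math import comb

# Pascal's rule: C(25,3) + C(25,4) == C(26,4)
pascal_ok = comb(25, 3) + comb(25, 4) == comb(26, 4)
# symmetry: C(252,250) == C(252,2)
c = comb(252, 250)
print(pascal_ok, c)
```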
Set up our toy problem (1D optimisation of the Forrester function) and collect 3 initial points.
target_function, space = forrester_function()
x_plot = np.linspace(space.parameters[0].min, space.parameters[0].max, 200)[:, None]
y_plot = target_function(x_plot)

X_init = np.array([[0.2], [0.6], [0.9]])
Y_init = target_function(X_init)

plt.figure(figsize=(12, 8))
plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.scatter(X_init, Y_init)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
notebooks/Emukit-tutorial-Max-Value-Entropy-Search-Example.ipynb
EmuKit/emukit
apache-2.0
Fit our GP model to the observed data.
gpy_model = GPy.models.GPRegression(X_init, Y_init,
                                    GPy.kern.RBF(1, lengthscale=0.08, variance=20),
                                    noise_var=1e-10)
emukit_model = GPyModelWrapper(gpy_model)
Let's plot the resulting acquisition functions for the chosen model on the collected data. Note that MES takes a fraction of the time of ES to compute (plotted on a log scale). This difference becomes even more apparent as you increase the dimensions of the sample space.
ei_acquisition = ExpectedImprovement(emukit_model)
es_acquisition = EntropySearch(emukit_model, space)
mes_acquisition = MaxValueEntropySearch(emukit_model, space)

# time each acquisition function; each duration is the difference between
# consecutive timestamps (the original subtracted a duration from a timestamp)
t_0 = time.time()
ei_plot = ei_acquisition.evaluate(x_plot)
t_1 = time.time()
es_plot = es_acquisition.evaluate(x_plot)
t_2 = time.time()
mes_plot = mes_acquisition.evaluate(x_plot)
t_3 = time.time()
t_ei, t_es, t_mes = t_1 - t_0, t_2 - t_1, t_3 - t_2

plt.figure(figsize=(12, 8))
plt.plot(x_plot, (es_plot - np.min(es_plot)) / (np.max(es_plot) - np.min(es_plot)), "green", label="Entropy Search")
plt.plot(x_plot, (ei_plot - np.min(ei_plot)) / (np.max(ei_plot) - np.min(ei_plot)), "blue", label="Expected Improvement")
plt.plot(x_plot, (mes_plot - np.min(mes_plot)) / (np.max(mes_plot) - np.min(mes_plot)), "red", label="Max Value Entropy Search")
plt.legend(loc=1, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()

plt.figure(figsize=(12, 8))
plt.bar(["ei", "es", "mes"], [t_ei, t_es, t_mes])
plt.xlabel("Acquisition Choice")
plt.yscale('log')
plt.ylabel("Calculation Time (secs)")
<table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/automl/automl-tabular-classification.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/automl/automl-tabular-classification.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table>
Vertex AI SDK for Python: AutoML Tabular Training and Prediction
To use this Colaboratory notebook, copy it to your own Google Drive and open it with Colaboratory (or Colab). You can run each step, or cell, and see its results. To run a cell, use Shift+Enter. Colab automatically displays the return value of the last line in each cell. For more information about running notebooks in Colab, see the Colab welcome page.
Overview This tutorial demonstrates how to use the Vertex AI Python client library to train and deploy a tabular classification model for online prediction. Note: you may incur charges for training, prediction, storage, or usage of other GCP products in connection with testing this SDK.
Dataset The dataset we are using is the PetFinder Dataset, available locally in Colab. To learn more about this dataset, visit https://www.kaggle.com/c/petfinder-adoption-prediction.
Objective This notebook demonstrates, using the Vertex AI Python client library, how to train and make predictions on an AutoML model based on a tabular dataset. Alternatively, you can train and make predictions on models by using the gcloud command-line tool or by using the online Cloud Console. The steps performed include the following:
1. Create a Vertex AI model training job.
2. Train an AutoML Tabular model.
3. Deploy the Model resource to a serving Endpoint resource.
4. Make a prediction by sending data.
5. Undeploy the Model resource.
Costs This tutorial uses billable components of Google Cloud: Vertex AI and Cloud Storage. Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Installation
import os

# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")

# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
    USER_FLAG = "--user"
notebooks/official/automl/automl-tabular-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Install the latest version of the Vertex AI client library. Run the following command in your virtual environment to install the Vertex SDK for Python:
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
Install the Cloud Storage library:
! pip install {USER_FLAG} --upgrade google-cloud-storage
Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
1. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. Make sure that billing is enabled for your project.
3. Enable the Vertex AI API and Compute Engine API.
4. If you are running this notebook locally, you will need to install the Cloud SDK.
5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get it using gcloud.
PROJECT_ID = ""

# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
    print("Project ID: ", PROJECT_ID)
Authenticate your Google Cloud account
If you are using Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
1. In the Cloud Console, go to the Create service account key page.
2. Click Create service account.
3. In the Service account name field, enter a name, and click Create.
4. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
5. Click Create. A JSON file that contains your key downloads to your local environment.
6. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys

# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")

# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()

    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
notebooks/official/automl/automl-tabular-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. This notebook demonstrates how to use Vertex AI SDK for Python to create an AutoML model based on a tabular dataset. You will need to provide a Cloud Storage bucket where the dataset will be stored. Set the name of your Cloud Storage bucket below. It must be unique across all of your Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI.
from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} REGION = "[your-region]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
notebooks/official/automl/automl-tabular-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Copy dataset into your Cloud Storage bucket
IMPORT_FILE = "petfinder-tabular-classification.csv" ! gsutil cp gs://cloud-samples-data/ai-platform-unified/datasets/tabular/{IMPORT_FILE} {BUCKET_NAME}/data/ gcs_source = f"{BUCKET_NAME}/data/{IMPORT_FILE}"
notebooks/official/automl/automl-tabular-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Import Vertex SDK for Python Import the Vertex SDK into your Python environment and initialize it.
import os from google.cloud import aiplatform aiplatform.init(project=PROJECT_ID, location=REGION)
notebooks/official/automl/automl-tabular-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Launch a Training Job to Create a Model Once the training job is defined, run it to create a model. The run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object. The ds variable passed to run is the Vertex AI tabular dataset created earlier from the imported CSV file.
job = aiplatform.AutoMLTabularTrainingJob( display_name="train-petfinder-automl-1", optimization_prediction_type="classification", column_transformations=[ {"categorical": {"column_name": "Type"}}, {"numeric": {"column_name": "Age"}}, {"categorical": {"column_name": "Breed1"}}, {"categorical": {"column_name": "Color1"}}, {"categorical": {"column_name": "Color2"}}, {"categorical": {"column_name": "MaturitySize"}}, {"categorical": {"column_name": "FurLength"}}, {"categorical": {"column_name": "Vaccinated"}}, {"categorical": {"column_name": "Sterilized"}}, {"categorical": {"column_name": "Health"}}, {"numeric": {"column_name": "Fee"}}, {"numeric": {"column_name": "PhotoAmt"}}, ], ) # This will take around an hour to run model = job.run( dataset=ds, target_column="Adopted", training_fraction_split=0.8, validation_fraction_split=0.1, test_fraction_split=0.1, model_display_name="adopted-prediction-model", disable_early_stopping=False, )
notebooks/official/automl/automl-tabular-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deploy your model Before you use your model to make predictions, you need to deploy it to an Endpoint. You can do this by calling the deploy function on the Model resource. This function does two things: Creates an Endpoint resource to which the Model resource will be deployed. Deploys the Model resource to the Endpoint resource. Deploy your model. NOTE: Wait until the model FINISHES deployment before proceeding to prediction.
endpoint = model.deploy( machine_type="n1-standard-4", )
notebooks/official/automl/automl-tabular-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Predict on the endpoint This sample instance is taken from an observation in which Adopted = Yes Note that the values are all strings. Since the original data was in CSV format, everything is treated as a string. The transformations you defined when creating your AutoMLTabularTrainingJob inform Vertex AI to transform the inputs to their defined types.
prediction = endpoint.predict( [ { "Type": "Cat", "Age": "3", "Breed1": "Tabby", "Gender": "Male", "Color1": "Black", "Color2": "White", "MaturitySize": "Small", "FurLength": "Short", "Vaccinated": "No", "Sterilized": "No", "Health": "Healthy", "Fee": "100", "PhotoAmt": "2", } ] ) print(prediction)
notebooks/official/automl/automl-tabular-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Undeploy the model To undeploy your Model resource from the serving Endpoint resource, use the endpoint's undeploy method with the following parameter: deployed_model_id: The model deployment identifier returned by the prediction service when the Model resource is deployed. You can retrieve the deployed_model_id using the prediction object's deployed_model_id property.
endpoint.undeploy(deployed_model_id=prediction.deployed_model_id)
notebooks/official/automl/automl-tabular-classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Overview: Deploying the Dress Recommender Let's say you want to make an app which can recommend dresses to you based on a photo you took. You need a way to deploy the model previously built. Turi Predictive Services helps do this in an easy and scalable way. In this notebook, we demonstrate how to do that for the dress recommender model. <img src='images/predictive_services_overview.png'></img> Deployment Steps The notebook has three sections: <a href='#cpo'>Create a model</a> <a href='#create'>Create a predictive service</a> <a href='#query'>Query the model</a> 1. Create a model <a id='cpo'></a> Let us try and deploy the dress recommender. First, we define a function dress_similar that takes a query and returns similar dresses. <img src="images/left.png"></img> We start by loading the already trained models and datasets.
import os import graphlab from IPython.display import Image if os.path.exists('dress_sf_processed.sf'): reference_sf = graphlab.SFrame('dress_sf_processed.sf') else: reference_sf = graphlab.SFrame('https://static.turi.com/datasets/dress_sf_processed.sf') reference_sf.save('dress_sf_processed.sf') if os.path.exists('dress_nn_model'): nn_model = graphlab.load_model('dress_nn_model') else: nn_model = graphlab.load_model('https://static.turi.com/models/dress_nn_model') nn_model.save('dress_nn_model') if os.path.exists('imagenet_model'): pretrained_model = graphlab.load_model('imagenet_model') else: pretrained_model = graphlab.load_model('https://static.turi.com/models/imagenet_model_iter45') pretrained_model.save('imagenet_model') pretrained_model reference_sf def dress_similar(url): img = graphlab.Image(url) image_sf = graphlab.SFrame() image_sf['image'] = [img] image_sf['features'] = pretrained_model.extract_features(image_sf) ans = nn_model.query(image_sf, k=5) return ans QUERY_URL = 'http://static.ddmcdn.com/gif/blue-dress.jpg' Image(QUERY_URL) def retrieve_image(nearest_neighbors_output, input_sframe): joined = input_sframe.join(nearest_neighbors_output, on={'_id':'reference_label'}) sorted_sf = joined.sort('rank') return sorted_sf['image'] images = retrieve_image(dress_similar(QUERY_URL), reference_sf) images.show()
strata-sj-2016/ml-in-production/deploy-dress-recommender.ipynb
turi-code/tutorials
apache-2.0
Load an already created service
import graphlab as gl ps = gl.deploy.predictive_service.load(TBD) ps #ps.add('dress_similar', dress_similar) #ps.update('dress_similar', dress_similar) ps.apply_changes()
strata-sj-2016/ml-in-production/deploy-dress-recommender.ipynb
turi-code/tutorials
apache-2.0
Query via REST Query from anywhere. Here, we issue a request via the requests library, and convert the returning JSON back into an SFrame. This could easily be done from outside of Python, though.
import json import requests from requests.auth import HTTPBasicAuth def restful_query(url): headers = {'content-type': 'application/json'} payload = {'data': {'url': url} } end_point = 'http://TBD/query/dress_similar' return requests.post( end_point, json.dumps(payload), headers=headers, auth=HTTPBasicAuth('api_key', TBD)).json() restful_query('http://static.ddmcdn.com/gif/blue-dress.jpg')
strata-sj-2016/ml-in-production/deploy-dress-recommender.ipynb
turi-code/tutorials
apache-2.0
We'll use 100 inducing points
from scipy.cluster.vq import kmeans2 M = 100 Z = kmeans2(X, M, minit='points')[0]
demos/demo_mnist.ipynb
ICL-SML/Doubly-Stochastic-DGP
apache-2.0
We'll compare three models: an ordinary sparse GP and DGPs with 2 and 3 layers. We'll use a batch size of 1000 for all models
m_sgp = SVGP(X, Y, RBF(784, lengthscales=2., variance=2.), MultiClass(10), Z=Z, num_latent=10, minibatch_size=1000, whiten=True) def make_dgp(L): kernels = [RBF(784, lengthscales=2., variance=2.)] for l in range(L-1): kernels.append(RBF(30, lengthscales=2., variance=2.)) model = DGP(X, Y, Z, kernels, MultiClass(10), minibatch_size=1000, num_outputs=10) # start things deterministic for layer in model.layers[:-1]: layer.q_sqrt = layer.q_sqrt.value * 1e-5 return model m_dgp2 = make_dgp(2) m_dgp3 = make_dgp(3)
demos/demo_mnist.ipynb
ICL-SML/Doubly-Stochastic-DGP
apache-2.0
For the SGP model we'll calculate accuracy by simply taking the max mean prediction:
def assess_model_sgp(model, X_batch, Y_batch): m, v = model.predict_y(X_batch) l = model.predict_density(X_batch, Y_batch) a = (np.argmax(m, 1).reshape(Y_batch.shape).astype(int)==Y_batch.astype(int)) return l, a
demos/demo_mnist.ipynb
ICL-SML/Doubly-Stochastic-DGP
apache-2.0
For the DGP models we have stochastic predictions. We need a single prediction for each datum, so to do this we take $S$ samples for the one-hot predictions ($(S, N, 10)$ matrices for mean and var), then we take the max over the class means (to give a $(S, N)$ matrix), and finally we take the modal class over the samples (to give a vector of length $N$): We'll use 100 samples
S = 100 def assess_model_dgp(model, X_batch, Y_batch): m, v = model.predict_y(X_batch, S) l = model.predict_density(X_batch, Y_batch, S) a = (mode(np.argmax(m, 2), 0)[0].reshape(Y_batch.shape).astype(int)==Y_batch.astype(int)) return l, a
demos/demo_mnist.ipynb
ICL-SML/Doubly-Stochastic-DGP
apache-2.0
We need batch predictions (we might run out of memory otherwise)
def batch_assess(model, assess_model, X, Y): n_batches = max(int(len(X)/1000), 1) lik, acc = [], [] for X_batch, Y_batch in zip(np.split(X, n_batches), np.split(Y, n_batches)): l, a = assess_model(model, X_batch, Y_batch) lik.append(l) acc.append(a) lik = np.concatenate(lik, 0) acc = np.array(np.concatenate(acc, 0), dtype=float) return np.average(lik), np.average(acc)
demos/demo_mnist.ipynb
ICL-SML/Doubly-Stochastic-DGP
apache-2.0
Now we're ready to go The sparse GP:
iterations = 20000 AdamOptimizer(0.01).minimize(m_sgp, maxiter=iterations) l, a = batch_assess(m_sgp, assess_model_sgp, Xs, Ys) print('sgp test lik: {:.4f}, test acc {:.4f}'.format(l, a))
demos/demo_mnist.ipynb
ICL-SML/Doubly-Stochastic-DGP
apache-2.0
Using more inducing points improves things, but at the expense of very slow computation (500 inducing points takes about a day) The two layer DGP:
AdamOptimizer(0.01).minimize(m_dgp2, maxiter=iterations) l, a = batch_assess(m_dgp2, assess_model_dgp, Xs, Ys) print('dgp2 test lik: {:.4f}, test acc {:.4f}'.format(l, a))
demos/demo_mnist.ipynb
ICL-SML/Doubly-Stochastic-DGP
apache-2.0
And the three layer:
AdamOptimizer(0.01).minimize(m_dgp3, maxiter=iterations) l, a = batch_assess(m_dgp3, assess_model_dgp, Xs, Ys) print('dgp3 test lik: {:.4f}, test acc {:.4f}'.format(l, a))
demos/demo_mnist.ipynb
ICL-SML/Doubly-Stochastic-DGP
apache-2.0
<p style="color:darkred;"> <b>Note: we indented the second line!</b> Indentation is Python's notation for grouping statements! </p> For the amount of indentation, the generally accepted convention is that syntactically lower-level program parts are pushed in by four spaces. So if we nest two if statements inside each other, the statements to be executed by the second one are indented by 8 spaces. In the Jupyter notebook code cells this indentation also happens automatically. After pressing ENTER following the :, the next line is indented by 4 spaces; pressing further ENTERs, we start the new line at the same level of the hierarchy. If we want to step up a level, the indentation must be deleted. The C, C++ and Java languages use curly braces {} to separate contiguous pieces of code, roughly like this: C if (i==10) { print i } In the FORTRAN language, writing the word END marks the end of the code block. In Python, the colon (":") followed by lines indented with spaces serves this purpose. If we copy snippets from other programs, watch out for whether they use TABs instead of the indentation. (TABs are allowed in some places but should be avoided. The modern Python coding style guide, as we also mentioned above, recommends 4 spaces for every indentation level.)
today='Monday'; time='12:00'; if today=='Monday': if time=='12:00': print("Let's do some Python!")
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
We can construct more complicated criterion structures with the help of the else and elif commands:
x = 1 if x < 0: x = 0 print('Negative, I changed it to zero') elif x == 0: print('Zero') elif x == 1: print('One') else: print('More than one.')
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
The elif part may be absent, or there may be one or even more than one; the else part may likewise be omitted. The elif keyword – which is short for 'else if' – is useful for avoiding unnecessary indentation. An if ... elif ... elif ... sequence replaces the switch and case statements found in other languages. The for statement Among the most important properties of computers is that they are very fast and "tireless". They are most effective at tasks that can be formulated with little work ("the creator rests") but whose execution requires very many repetitions, i.e. iterations ("the machine spins"). Iteration (English: iterate) means, for example, that the program walks through the elements of a list one by one and performs operations on them. In Python one of the statements usable for this is the for command (roughly "for each ...", e.g. for every day of the week, for every element of the list):
days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"] for day in days_of_the_week: print(day)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
This snippet walks through the days_of_the_week list and assigns the visited element to the day variable, which is also called the loop variable. It then executes everything we wrote in the indented block of statements (here just a single print statement), which may also use the loop variable. Once the indented region ends, it exits the loop. It has no significance whatsoever that in the example we gave the name day to the loop variable used in the iteration. The program knows nothing about human timekeeping, for example that a week consists of days and not kittens:
for macska in days_of_the_week: print(macska)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
The loop's block of statements can also consist of several statements:
for day in days_of_the_week: statement = "Today is " + day print(statement)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
The range() command is excellent when we want to perform a given number of operations in a for loop:
for i in range(20): print(i, "times", i, "is exactly", i*i)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
All this becomes even more interesting when we combine the iteration and the condition checking learned so far:
for day in days_of_the_week: statement = "Today is " + day print(statement) if day == "Sunday": print (" Sleep in") elif day == "Saturday": print (" Do chores") else: print (" Go to work")
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Observe in the example above how the for and the if nest under each other! An example program: the Fibonacci sequence The first two elements of the Fibonacci sequence are 0 and 1, and then each subsequent element is computed from the sum of the previous two: 0,1,1,2,3,5,8,13,21,34,55,89,... If we want to compute the sequence for larger $n$ values too, this can be an excellent task for a tireless and fast computer!
n = 10 # number of elements we want to compute sequence = [0,1] # the first two elements for i in range(2,n): # numbers from 2 to n; careful, n must not be smaller than 2!! sequence.append(sequence[i-1]+sequence[i-2]) print (sequence)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Let's walk through it step by step! First we set the value of $n$, i.e. the length of the sequence to compute, to 10. We named the list that will eventually hold the sequence sequence, and initialized it with the first two values. After the "manual work" comes the machine's automatic work, the iteration. We start the iteration at 2 (due to 0-based indexing this will be the 3rd element, since we already gave the first two) and count up to $n$, the given list length. In the loop body we append (a bit more on the append command here) the sum of the previous two terms to the end of the list computed so far. After the loop ends, we print the result. Functions In many programming languages it is customary to organize more complex user-created operations into functions. Using well-written functions generally makes programs shorter and easier to read. Declaring functions If we wanted a sequence of a different length, we could copy the code above into a new cell and change n=10 to, say, n=100. There is, however, a more effective method: we can define it as a new function with the help of the def statement:
def fibonacci(sequence_length): "The first *sequence_length* elements of the Fibonacci sequence" # this is only needed for 'help' sequence = [0,1] if 0 < sequence_length < 3: return sequence[:sequence_length] for i in range(2,sequence_length): sequence.append(sequence[i-1]+sequence[i-2]) return sequence
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Now we can call the fibonacci() function for various lengths:
fibonacci(5)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Let's analyze the code above! In the now familiar way, the colon and the indentation delimit the code block belonging to the function definition. On the 2nd line, between quotation marks, stands the "docstring", which briefly explains what the function does and can later be brought up with the help command:
help(fibonacci)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
In the notebook environment the docstring can also be reached with the help of ?:
?fibonacci
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
In the Jupyter environment we can also view the docstring by pressing SHIFT+TAB inside a function's "belly", i.e. between the parentheses. Try this on the cell below (without running it)!
fibonacci()
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
The function's output is determined by the return keyword. If no return statement was used while defining the function, the function returns the value None (nothing). If a function has finished running without executing a return statement, it also returns with None. The fibonacci function, for example, returns a list:
x=fibonacci(10) x
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
This function, on the other hand, does not return anywhere:
def ures_fuggveny(x): print('I am an empty function;\neven though I talk,\nI do not return with a value!!') y=x-2;
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
So no value whatsoever is stored in the z variable!
z=ures_fuggveny(3) z print(z)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
A function can have several input variables as well:
def osszead(a,b): return a+b
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
It can also happen that we would like to get back several values from a function. We can do this, for example, as follows:
def plusminus(a,b): return a+b,a-b p,m=plusminus(2,3) print (p) print (m)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Parameter lists and "unpacking" It may happen that a function has many input parameters, or that one function's input parameters are assembled into a list by another function in the first place. A typical example of this, as we will see later, is the case of curve fitting. In such cases, "unpacking" the list containing the parameters, which we can do with the * symbol, yields more compact code. Take, for example, a case where we fit a fifth-degree polynomial to a dataset. $$f(x)=a_0+a_1x+a_2x^2+a_3x^3+a_4x^4+a_5x^5$$ Let us define a function that evaluates the polynomial above using a "running" variable $x$ and, in accordance with the fifth-degree polynomial, six "parameter" variables $a_i$:
# this is the function to be fitted def poly5(x,a0,a1,a2,a3,a4,a5): return a0+a1*x+a2*x**2+a3*x**3+a4*x**4+a5*x**5
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
During the fit we determine six fitting parameters: $a_0,a_1,a_2,a_3,a_4,a_5$, but the fitting program hands these to us arranged in a list:
# these are the parameters determined during the fit # following the order below # params=[a0,a1,a2,a3,a4,a5] params=[ 2.27171539, -1.1368942 , 0.65380304, -0.25005187, -0.1751268 , -0.48828309];
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
If we would like to evaluate the fitted polynomial at the point $x=0.3$, we can do so in the following way:
poly5(0.3,params[0],params[1],params[2],params[3],params[4],params[5])
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
or with the much more compact method:
poly5(0.3,*params)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
This construct makes it possible, during the definition of a function, to prepare the function for the number of input parameters not being fixed. If during the function declaration we put a * before a parameter, that parameter can be of arbitrary length! Let us examine the example below:
def adok_mit_kapok(*argv): # this is how, with the *, we prepare a function # to accept a variable number of parameters print("I received", len(argv), "input parameters") for arg in argv: print ("This is a parameter:", arg) return argv[-1] # we refer to the elements of the input parameters # as ordinary list elements
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
The function defined in the block above accepts an arbitrary number of input parameters! During execution it reports how many parameters arrived, prints them out, and sets the last member of the parameter list as the function's return value.
adok_mit_kapok('Gáspár','Menyhért','Boldizsár')
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Naturally, instead of the arbitrary parameters we can also use an "unpacked" list of arbitrary length!
adok_mit_kapok(*params) # unpacking works here too, of course...
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Functions with keywords Besides dictionaries already being very useful data structures in themselves, we will see later that certain parameters of functions are also often collected into dictionaries. Such parameters are called keyword variables or keyword arguments. Using them makes the program more readable and much more interpretable to the human eye as well, and beyond this it also puts extra flexibility in the programmer's hands. Let's look at this in the example below:
# this is how we give default values def students(ido, allapot='eagerly watch the teacher', tevekenyseg='experimenting', ora='physics'): print("These students in "+ora+" class always "+allapot+" while "+tevekenyseg+"!"); print("Even when the clock shows", ido, "!");
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
The first parameter of the function must be given; if we give no more, the default values kick in:
students('17:00')
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
If the function receives a keyword argument, it uses it accordingly:
students('8:00',allapot='look bored')
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
We do not need to pay attention to the order of the keyword arguments:
students('8:00',ora='Ancient Greek',allapot='sweat in a panic',tevekenyseg='A BIG TEST')
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
If we give a keyword not used during the declaration, we get an error:
students('17:00',tanar='Mici néni')
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
We run into a similar problem if we use a keyword twice:
students('8:00',ora='Ancient Greek',ora='chemistry')
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Unpacking a dictionary of keyword arguments happens with the help of the ** symbol:
diak_hozzaallas={'ora':'music','allapot':'whimper'}; students(12,**diak_hozzaallas)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Just as with plain parameters, it can happen here too that we would like to process a dict of arbitrary length. An example of this is writing a function that perhaps calls several other functions, to which we want to pass on part of the incoming parameters. The function below expects an arbitrary dictionary on its input, checks how long it is, and, if the dictionary contains a keyword labelled 'hamburger', returns its value:
def kulcsot_adok_amit_kapok(**szotar): print('Length of the dictionary:',len(szotar)) for kulcs in list(szotar.keys()): if kulcs=='hamburger': print('There is a hamburger!') return szotar[kulcs] kulcsot_adok_amit_kapok(makaróni=1,torta='tasty') # it is no longer an error here to have keywords that were not declared in advance! kaja={'makaróni':1,'torta':'tasty','hamburger':137,'saláta':'none'} kulcsot_adok_amit_kapok(**kaja)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
General function declaration conventions As we saw above, we can give functions parameters in several ways: variables (these can be plain variables, lists of a given length, or even keyword variables), a variable list of length not determined in advance, an arbitrarily long list of keyword variables. It often happens that a function can have all four kinds of input, and also that we give some variables default values. When defining functions, use the order above! For example:
def bonyolult_fuggveny(valtozo1,valtozo2,valtozo3='ELZETT',*args,**kwargs): if ((len(args)==0 and len(kwargs)==0)): return valtozo3+str(valtozo2)+str(valtozo1) elif (len(args)!=0 and len(kwargs)==0): return 'There is something in args!' elif (len(args)==0 and len(kwargs)!=0): return 'There is something in kwargs!' else: return 'We have all kinds of variables:'+str(valtozo1)+str(valtozo2)+valtozo3+str(args)+str(kwargs)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
The first two variables of the function above are "plain" variables, the third is a keyword variable with the default value 'ELZETT', and beyond these we also allow a further arbitrarily long list of "plain" variables (args) and an arbitrarily long list of keyword variables (kwargs). Let's see, using variables defined earlier in the notebook, what behavior the function declared above exhibits:
bonyolult_fuggveny(1,2) bonyolult_fuggveny(1,2,valtozo3='MULTLOCK') bonyolult_fuggveny(1,2,*days_of_the_week) bonyolult_fuggveny(1,2,**kaja) bonyolult_fuggveny(1,2,*days_of_the_week,**kaja)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Lambda forms ☠ We can give a function not only variables but also other functions as input. For example, think of a function that plots a mathematical function! In such cases, when a function expects another function on its input, it often proves tedious to define the input function separately. It is then more compact to use the so-called lambda forms. Let's look at an example of this! Let us define a function that evaluates another function at a given point, prints the evaluation point, and returns the evaluated function value:
def funfun(g,x): print('This was the x variable: ',x) return g(x) def fx(x): return x**2-1/x; funfun(fx,0.1)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Then, with the help of a so-called lambda form, we can apply a somewhat more compact expression (sparing ourselves the function declaration):
funfun(lambda x:x**2-1/x,0.1)
notebooks/Package02/mintapelda02.ipynb
oroszl/szamprob
gpl-3.0
Data retrieval Files Remote files can be downloaded directly from Python. Depending on the file format they can also be opened and read directly from Python. To Python, files are either text or binary; binary files are opened with the 'b' mode flag. Python looks for line endings when reading text files: \n on Unix, \r\n on Windows. open(file, 'rb') open(file, 'wb') Text files may be structured as tables (CSV). For more complex data structures other formats may be more appropriate: * Images (binary) * Archives as data containers (binary) * HDF5 (binary) * pickle (binary) * JSON (text) * XML (text) Use appropriate Python packages for operating on data in these formats.
import json import pickle # An arbitrary collection of objects data1 = { 'a': [1, 2.0, 3, 4+6j], 'b': ("character string", b"byte string"), 'c': {None, True, False} } print(data1) # write pickled data with open('data.pickle', 'wb') as f: pickle.dump(data1, f) # reads the resulting pickled data with open('data.pickle', 'rb') as f: data2 = pickle.load(f) print(data2) data = { 'a': [1, 2.0, 3], 'b': ("character string", "string"), 'c': [None, True, False] } print(data) json_string = json.dumps(data) new_data = json.loads(json_string) print(new_data) # Not all objects are supported by JSON try: print(json.dumps(data1)) except TypeError as e: print(e)
Wk04-Data-retrieval-and-preprocessing.ipynb
streety/biof509
mit
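The formats list above also mentions text files structured as tables (CSV). A minimal sketch of reading such a table with the standard csv module follows; the tab-separated gene/expression content is made up for illustration and only stands in for a real file on disk:

```python
import csv
import io

# A small in-memory tab-separated table standing in for a real file on disk.
csv_text = "gene\texpression\nBRCA1\t0.91\nTP53\t1.42\n"

# DictReader maps each data row to a dict keyed by the header line
rows = list(csv.DictReader(io.StringIO(csv_text), delimiter="\t"))
print(rows[0]["gene"])               # BRCA1
print(float(rows[1]["expression"]))  # 1.42
```

For a real file, replace the StringIO wrapper with `open('data.tsv', newline='')`.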
Retrieving remote data
import gzip import urllib.request from pathlib import Path import numpy as np import pandas as pd import requests ICGC_API = 'https://dcc.icgc.org/api/v1/download?fn=/release_18/Projects/BRCA-US/' expression_fname = 'protein_expression.BRCA-US.tsv.gz' if not Path(expression_fname).is_file(): print("Downloading file", ICGC_API + expression_fname, "saving it as", expression_fname) urllib.request.urlretrieve(ICGC_API + expression_fname, expression_fname); else: print("Local file exists:", expression_fname) def get_genome_sequence_ensembl(chrom, start, end): """ API described here http://rest.ensembl.org/documentation/info/sequence_region """ url = 'https://rest.ensembl.org/sequence/region/human/{0}:{1}..{2}:1?content-type=application/json'.format(chrom, start, end) r = requests.get(url, headers={"Content-Type": "application/json"}, timeout=10.000) if not r.ok: print("REST Request FAILED") decoded = r.json() print(decoded['error']) return else: print("REST Request OK") decoded = r.json() return decoded['seq'] sequence = get_genome_sequence_ensembl(7, 200000,200100) print(sequence) # Reading data file with gzip.open(expression_fname) as f: expression = f.read().decode('UTF-8') for i, line in enumerate(expression.split("\n")): if i == 0: continue if i > 3: break fields = line.split("\t") print(fields) print() # Loading data into numpy E_np = np.genfromtxt(expression_fname, delimiter='\t', dtype=("|U10", "|U10", "|U10", float), skip_header=1, usecols=(0, 2, 7, 10)) print(E_np) # Loading data into pandas E_pd = pd.read_csv(expression_fname, delimiter='\t') E_pd.head() E_pd['normalized_expression_level'].hist()
Wk04-Data-retrieval-and-preprocessing.ipynb
streety/biof509
mit
Connecting to a remote database server SQL databases are convenient for storing and accessing data that requires concurrent access and control of integrity. Example: UCSC Genomes database http://genome.ucsc.edu/cgi-bin/hgTables We use the SQLAlchemy package with the pymysql MySQL driver; SQLAlchemy has the following major objects: * Engine * Connection * Metadata * Table The typical usage of create_engine() is once per particular database URL, held globally for the lifetime of a single application process. A single Engine manages many individual DBAPI connections. Here I am disabling connection pooling by the Engine in order to use it in a Jupyter notebook.
engine = sa.create_engine('mysql+pymysql://genome@genome-mysql.cse.ucsc.edu/hg38', poolclass=sa.pool.NullPool)
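Under the hood, the Engine hands out DBAPI connections that all follow the same connect/execute/iterate/close lifecycle. As a dependency-free illustration of that lifecycle, here is the raw DBAPI pattern using Python's built-in sqlite3 module with an in-memory database (a stand-in for the remote UCSC MySQL server; the table and row below are invented):

```python
import sqlite3

# In-memory database stands in for the remote server
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE refGene_demo (name TEXT, chrom TEXT)")
conn.execute("INSERT INTO refGene_demo VALUES ('NM_001', 'chr7')")

# Same connect / execute / iterate / close lifecycle SQLAlchemy manages for us
rows = conn.execute("SELECT name, chrom FROM refGene_demo").fetchall()
for row in rows:
    print("Gene:", row[0], "on", row[1])
conn.close()
```

SQLAlchemy's value is that it wraps exactly this pattern behind a backend-agnostic API, so the same code can target MySQL, PostgreSQL, or SQLite by changing only the engine URL.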
Wk04-Data-retrieval-and-preprocessing.ipynb
streety/biof509
mit
The connection is an instance of Connection, which is a proxy object for an actual DBAPI connection. The DBAPI connection is retrieved from the connection pool at the point at which Connection is created.
connection = engine.connect()
result = connection.execute("SHOW TABLES")
for row in result:
    print("Table:", row[0])
connection.close()

# Connection supports context manager
with engine.connect() as connection:
    result = connection.execute("DESCRIBE refGene")
    for row in result:
        print("Columns:", row)

# Selected columns and rows using SQL
with engine.connect() as connection:
    result = connection.execute("""
        SELECT name, name2, chrom, strand, cdsStart, cdsEnd
        FROM refGene
        WHERE name2='ZNF107'
    """)
    for i, row in enumerate(result):
        print("Record #", i)
        print("\tGene {} ({})".format(row['name'], row['name2']))
        print("\tCDS location {} {}-{} on strand {}".format(row['chrom'], row['cdsStart'], row['cdsEnd'], row['strand']))

meta = sa.MetaData(bind=engine)
meta.reflect(only=['refGene', 'snp147Common'])

# However, we need to modify metadata and add a primary key:
gene_table = sa.Table('refGene', meta, sa.PrimaryKeyConstraint('name'), extend_existing=True)
print(gene_table.columns.keys())
print(gene_table.c.strand.name, gene_table.c.strand.type)
Wk04-Data-retrieval-and-preprocessing.ipynb
streety/biof509
mit
Pandas can read data directly from the database.
snp_table = sa.Table('snp147Common', meta, sa.PrimaryKeyConstraint('name'), extend_existing=True)

# Getting data into pandas:
import pandas as pd

expr = sa.select([snp_table]).where(snp_table.c.chrom == 'chrY').limit(5)
pd.read_sql(expr, engine)
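`pd.read_sql` is not tied to SQLAlchemy expressions: it also accepts a plain SQL string together with any DBAPI connection. A self-contained sketch using an in-memory sqlite3 database (the SNP rows below are invented, not taken from the UCSC `snp147Common` table):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snp_demo (name TEXT, chrom TEXT, pos INTEGER)")
conn.executemany("INSERT INTO snp_demo VALUES (?, ?, ?)",
                 [("rs1", "chrY", 100), ("rs2", "chrY", 200), ("rs3", "chr1", 50)])

# Plain SQL string + DBAPI connection; no SQLAlchemy expression needed for sqlite
snps = pd.read_sql("SELECT * FROM snp_demo WHERE chrom = 'chrY'", conn)
print(snps)
conn.close()
```

Against the UCSC server you would pass the SQLAlchemy engine instead of the raw connection, exactly as in the cell above.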
Wk04-Data-retrieval-and-preprocessing.ipynb
streety/biof509
mit
Download the dataset

This next chunk of code downloads the face images dataset we're going to use for this tutorial. It then converts these images to numpy arrays, so dlib can understand them, and appends each one to a list. Finally, in the last line, this list is converted into a larger numpy array containing all images.
url_list = [
    'http://fei.edu.br/~cet/frontalimages_spatiallynormalized_part1.zip',
    'http://fei.edu.br/~cet/frontalimages_spatiallynormalized_part2.zip',
]

archive = [ZipFile(urlretrieve(url)[0], 'r') for url in url_list]
images = [image for zipfile in archive for image in zipfile.namelist()]

face_db = []
for image in images:
    try:
        face = Image.open(BytesIO(archive[0].read(image)))
    except:
        face = Image.open(BytesIO(archive[1].read(image)))
    face_db.append(np.array(face))
face_db = np.array(face_db)
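The try/except above probes the first archive and falls back to the second. An alternative design is to build an explicit name-to-archive mapping up front, so each member is read from the archive that actually contains it. A sketch of that idea, using two tiny in-memory zip files (invented member names and bytes, not the FEI downloads):

```python
import io
from zipfile import ZipFile

# Two small in-memory archives stand in for the two downloaded zip parts
buf1, buf2 = io.BytesIO(), io.BytesIO()
with ZipFile(buf1, "w") as z:
    z.writestr("face_a.raw", b"\x01\x02")
with ZipFile(buf2, "w") as z:
    z.writestr("face_b.raw", b"\x03\x04")

archives = [ZipFile(buf1, "r"), ZipFile(buf2, "r")]

# Map each member name to the archive object that contains it
name_to_archive = {name: z for z in archives for name in z.namelist()}

face_bytes = [name_to_archive[name].read(name) for name in sorted(name_to_archive)]
print(len(face_bytes))
```

Compared to the try/except, this makes the lookup explicit and avoids masking unrelated errors with a bare `except`.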
my_notebooks/facial_landmarks.ipynb
ddfabbro/ipython_tutorial
mit
Download and extract the landmarks predictor

Next, we need to download the trained model that is able to predict the location of each of the 68 landmarks in a face image. Since I don't want this tutorial to have any additional steps other than the code available here, the following script automatically downloads the file and stores it in your temporary files.
url = 'http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2'
filepath = urlretrieve(url)[0]
data = bz2.BZ2File(filepath).read()
with open(filepath, 'wb') as f:
    f.write(data)
print(filepath)
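The cell above decompresses the `.bz2` download in memory and then overwrites the file in place with the raw bytes. The same decompress-and-overwrite pattern, sketched with a tiny temp file instead of the real dlib model (the payload bytes are a placeholder):

```python
import bz2
import os
import tempfile

payload = b"pretend this is the landmark model"

# Write a bz2-compressed file, mimicking the downloaded .dat.bz2
path = os.path.join(tempfile.mkdtemp(), "model.dat.bz2")
with open(path, "wb") as f:
    f.write(bz2.compress(payload))

# Decompress in memory and overwrite the file with the raw bytes
data = bz2.BZ2File(path).read()
with open(path, "wb") as f:
    f.write(data)

with open(path, "rb") as f:
    print(f.read() == payload)
```

Note the file keeps its `.bz2` name even though it now holds uncompressed data; that is harmless here because dlib is given the path directly.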
my_notebooks/facial_landmarks.ipynb
ddfabbro/ipython_tutorial
mit
Create the landmarks dataset

With the trained model in hand ~~(hope it didn't take long to download)~~, we can build our landmarks dataset for each face image. First, we define the face detector using dlib.get_frontal_face_detector(), and then we specify the landmarks predictor using dlib.shape_predictor(path_to_trained_model_file). Next, for each face in our dataset, we first detect it and then predict the coordinates of the 68 landmarks. Note that the landmark coordinates are not returned as a numpy array, so we also need to convert them to an array before appending to our landmarks dataset.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(filepath)

landmarks_db = []
for face in face_db:
    rect = detector(face)[0]
    shape = predictor(face, rect)
    landmarks = np.array([[p.x, p.y] for p in shape.parts()])
    landmarks_db.append(landmarks)
landmarks_db = np.array(landmarks_db)
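The key conversion here is turning the predictor's point objects into an N×2 numpy array via `[[p.x, p.y] for p in shape.parts()]`. A dependency-free sketch of just that conversion, using a namedtuple as a stand-in for dlib's point type (dlib itself is not imported, and the coordinates below are made up):

```python
from collections import namedtuple
import numpy as np

Point = namedtuple("Point", ["x", "y"])  # stand-in for dlib's point objects

# Pretend these came from predictor(face, rect).parts()
parts = [Point(10, 20), Point(30, 40), Point(50, 60)]

landmarks = np.array([[p.x, p.y] for p in parts])
print(landmarks.shape)
```

With the real predictor, `parts` would have 68 entries and `landmarks` would therefore be a (68, 2) array, one row per landmark.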
my_notebooks/facial_landmarks.ipynb
ddfabbro/ipython_tutorial
mit
Results

So let's recap. We have a dataset containing face images and another dataset containing the 68 landmark coordinates for each face. This last chunk of code shows how to plot a sample of 15 faces with their landmarks annotated.
def plot_landmarks(image, vtk):
    plt.imshow(image, cmap='gray', origin="lower")
    plt.scatter(vtk[:, 0], vtk[:, 1], marker='+', color='w')
    plt.xlim([0, image.shape[1]])
    plt.ylim([0, image.shape[0]])
    plt.gca().invert_yaxis()
    plt.axis('off')

np.random.seed(1)
fig = plt.figure(figsize=(20., 14.7))
fig.subplots_adjust(hspace=0, wspace=0)
for i in range(15):
    k = np.random.randint(0, face_db.shape[0])
    plt.subplot(3, 5, i + 1)
    plot_landmarks(face_db[k], landmarks_db[k])
my_notebooks/facial_landmarks.ipynb
ddfabbro/ipython_tutorial
mit