Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
```python
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)

# Produce the value for 'max_depth'
print("Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']))
```
*Source: boston_housing.ipynb — rodrigomas/boston_housing (mit)*
Answer: The optimal max_depth is 4, which falls within the range we guessed in Question 6. This is not surprising: we are optimizing a single parameter (so the search is not difficult), and the complexity graphs show exactly how the estimator behaves as that parameter changes.
Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
```python
# Produce a matrix for client data
client_data = [[5, 17, 15],  # Client 1
               [4, 32, 22],  # Client 2
               [8, 3, 12]]   # Client 3

# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i + 1, price))

features.describe()
```
Answer:
Client 1: $403,025.00. This price looks reasonable: it is close to the median, and the client's feature values are each close to the median of the corresponding feature, so the predictor settles near the median price.
Client 2: $237,478.72. Also reasonable: RM is close to the minimum (which lowers the price), LSTAT is close to the maximum (which also lowers it), and PTRATIO is at the maximum. All of these push the estimator toward a lower price.
Client 3: $931,636.36. This client is the opposite of Client 2: RM is close to the maximum, LSTAT is close to the minimum, and PTRATIO is close to the minimum, so the estimator reaches for a higher value.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
```python
vs.PredictTrials(features, prices, fit_model, client_data)
```
Doc.sents is a generator
It is important to note that doc.sents is a generator. That is, a Doc is not segmented until doc.sents is called. This means that, where you could print the second Doc token with print(doc[1]), you can't call the "second Doc sentence" with print(doc.sents[1]):
```python
print(doc[1])
print(doc.sents[1])
```
*Source: nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb — rishuatgithub/MLPy (apache-2.0)*
However, you can build a sentence collection by running doc.sents and saving the result to a list:
```python
doc_sents = [sent for sent in doc.sents]
doc_sents
```
<font color=green>NOTE: list(doc.sents) also works. We show a list comprehension as it allows you to pass in conditionals.</font>
```python
# Now you can access individual sentences:
print(doc_sents[1])
```
sents are Spans
At first glance it looks like each sent contains text from the original Doc object. In fact they're just Spans with start and end token pointers.
```python
type(doc_sents[1])
print(doc_sents[1].start, doc_sents[1].end)
```
Adding Rules
spaCy's built-in sentencizer relies on the dependency parse and end-of-sentence punctuation to determine segmentation rules. We can add rules of our own, but they have to be added before the creation of the Doc object, as that is where the parsing of segment start tokens happens:
```python
# Parsing the segmentation start tokens happens during the nlp pipeline
doc2 = nlp(u'This is a sentence. This is a sentence. This is a sentence.')

for token in doc2:
    print(token.is_sent_start, ' ' + token.text)
```
<font color=green>Notice we haven't run doc2.sents, and yet token.is_sent_start was set to True on two tokens in the Doc.</font>
Let's add a semicolon to our existing segmentation rules. That is, whenever the sentencizer encounters a semicolon, the next token should start a new segment.
```python
# SPACY'S DEFAULT BEHAVIOR
doc3 = nlp(u'"Management is doing things right; leadership is doing the right things." -Peter Drucker')

for sent in doc3.sents:
    print(sent)

# ADD A NEW RULE TO THE PIPELINE
def set_custom_boundaries(doc):
    for token in doc[:-1]:
        if token.text == ';':
            doc[token.i + 1].is_sent_start = True
    return doc

nlp.add_pipe(set_custom_boundaries, before='parser')
nlp.pipe_names
```
<font color=green>The new rule has to run before the document is parsed. Here we can either pass the argument before='parser' or first=True.</font>
```python
# Re-run the Doc object creation:
doc4 = nlp(u'"Management is doing things right; leadership is doing the right things." -Peter Drucker')

for sent in doc4.sents:
    print(sent)

# And yet the new rule doesn't apply to the older Doc object:
for sent in doc3.sents:
    print(sent)
```
Why not change the token directly?
Why not simply set the .is_sent_start value to True on existing tokens?
```python
# Find the token we want to change:
doc3[7]

# Try to change the .is_sent_start attribute:
doc3[7].is_sent_start = True
```
<font color=green>spaCy refuses to change the tag after the document is parsed to prevent inconsistencies in the data.</font>
Changing the Rules
In some cases we want to replace spaCy's default sentencizer with our own set of rules. In this section we'll see how the default sentencizer breaks on periods. We'll then replace this behavior with a sentencizer that breaks on linebreaks.
```python
nlp = spacy.load('en_core_web_sm')  # reset to the original

mystring = u"This is a sentence. This is another.\n\nThis is a \nthird sentence."

# SPACY DEFAULT BEHAVIOR:
doc = nlp(mystring)
for sent in doc.sents:
    print([token.text for token in sent])

# CHANGING THE RULES
from spacy.pipeline import SentenceSegmenter

def split_on_newlines(doc):
    start = 0
    seen_newline = False
    for word in doc:
        if seen_newline:
            yield doc[start:word.i]
            start = word.i
            seen_newline = False
        elif word.text.startswith('\n'):  # handles multiple occurrences
            seen_newline = True
    yield doc[start:]  # handles the last group of tokens

sbd = SentenceSegmenter(nlp.vocab, strategy=split_on_newlines)
nlp.add_pipe(sbd)
```
<font color=green>While the function split_on_newlines can be named anything we want, it's important to use the name sbd for the SentenceSegmenter.</font>
```python
doc = nlp(mystring)
for sent in doc.sents:
    print([token.text for token in sent])
```
Iterators, Generators
iterator
The notion of an iterator is unavoidable in this kind of functional approach. An iterator walks through the elements of a collection; the range function is one example.
```python
it = iter([0,1,2,3,4,5,6,7,8])
print(it, type(it))
```
*Source: _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb — sdpython/actuariat_python (mit)*
It must be distinguished from a list, which is a container.
```python
[0,1,2,3,4,5,6,7,8]
```
To convince ourselves, we compare the size of an iterator with that of a list: the size of the iterator stays the same regardless of the list, while the size of the list grows with the number of elements it contains.
```python
import sys
print(sys.getsizeof(iter([0,1,2,3,4,5,6,7,8])))
print(sys.getsizeof(iter([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14])))
print(sys.getsizeof([0,1,2,3,4,5,6,7,8]))
print(sys.getsizeof([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14]))
```
An iterator can do only one thing: move to the next element, raising a StopIteration exception when it reaches the end.
```python
it = iter([0,1,2,3,4,5,6,7,8])
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))  # the list has 9 elements: this tenth call raises StopIteration
```
generator
A generator behaves like an iterator: it returns elements one after another, whether or not those elements live in a container.
```python
def genere_nombre_pair(n):
    for i in range(0, n):
        yield 2 * i

genere_nombre_pair(5)
```
Called like this, a generator does nothing. We can convince ourselves of that by inserting a print statement inside the function:
```python
def genere_nombre_pair(n):
    for i in range(0, n):
        print("je passe par là", i, n)
        yield 2 * i

genere_nombre_pair(5)
```
But if we build a list with all these numbers, we can check that the function genere_nombre_pair really is executed:
```python
list(genere_nombre_pair(5))
```
The next function works the same way:
```python
def genere_nombre_pair(n):
    for i in range(0, n):
        yield 2 * i

it = genere_nombre_pair(5)
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))  # the generator yields 5 values: this sixth call raises StopIteration
```
The simplest way to walk through the elements returned by an iterator or a generator is a for loop:
```python
it = genere_nombre_pair(5)
for nombre in it:
    print(nombre)
```
Generators can be combined:
```python
def genere_nombre_pair(n):
    for i in range(0, n):
        print("pair", i)
        yield 2 * i

def genere_multiple_six(n):
    for pair in genere_nombre_pair(n):
        print("six", pair)
        yield 3 * pair

print(genere_multiple_six)
for i in genere_multiple_six(3):
    print(i)
```
benefits
Iterators and generators are functions that walk through collections of elements, or give that illusion.
They serve only to move to the next element.
They do so only when explicitly asked, for example by a for loop. This is why we speak of lazy evaluation.
Iterators and generators can be combined.
Think of iterators and generators as streams: one or more inputs of elements and one output of elements, and nothing happens until you send water to turn the wheel.
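As a small illustration of this stream picture (a sketch, not part of the original notebook), chain two generators and note that nothing runs until a loop or list() pulls elements through:

```python
def evens(n):
    # yields even numbers one at a time, on demand
    for i in range(n):
        yield 2 * i

def squares(iterable):
    # consumes one element at a time, emits one element at a time
    for x in iterable:
        yield x * x

pipeline = squares(evens(4))   # nothing has executed yet: lazy evaluation
result = list(pipeline)        # consuming the output drives the whole chain
print(result)                  # [0, 4, 16, 36]
```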
lambda functions
A lambda function is a shorter way of writing very simple functions.
```python
def addition(x, y):
    return x + y

addition(1, 3)

additionl = lambda x, y: x + y
additionl(1, 3)
```
Exercise 1: application to large databases
Imagine we have a database of 10 billion rows and must apply two treatments to it, f1 and f2. Two options are possible:
Apply the function f1 to all the elements, then apply f2 to all the elements transformed by f1.
Apply the combination of the generators f1, f2 to each row of the database.
What happens if there is an implementation error in f2?
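A sketch of the two options with hypothetical treatments f1 and f2 (toy functions, not from the exercise): in the streamed version only one row is in flight at a time, so an error in f2 would surface on the very first row instead of after a full pass of f1 over 10 billion rows:

```python
def f1(rows):
    for row in rows:
        yield row * 10        # first treatment, applied lazily

def f2(rows):
    for row in rows:
        yield row + 1         # second treatment, applied lazily

rows = range(5)               # tiny stand-in for the 10-billion-row database

# Option 1: materialize the whole intermediate result of f1
step1 = list(f1(rows))        # needs memory for every transformed row
option1 = [r + 1 for r in step1]

# Option 2: stream each row through the combined generators
option2 = list(f2(f1(rows)))  # one row in flight at a time

print(option1 == option2)     # both give the same result
```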
Map/Reduce, a functional approach with cytoolz
We have seen the functions iter and next, but we almost never use them directly. Functional programming most often consists of combining iterators and generators and only consuming them inside a loop; it is that loop that implicitly calls iter and next.
Combining iterators keeps reusing the same logical patterns. Python implements some of these patterns, complemented by a module such as cytoolz. The two modules toolz and cytoolz are two implementations of the same set of functions described in the pytoolz documentation: toolz is a pure-Python implementation, while cytoolz is implemented in Cython and is faster.
By default, elements come out in the same order they come in. The list below is not exhaustive (see itertoolz).
simple patterns:
filter: select elements; $n$ go in, $\leq n$ come out.
map: transform the elements; $n$ in, $n$ out.
take: keep the first $k$ elements; $n$ in, $k \leq n$ out.
drop: skip the first $k$ elements; $n$ in, $n-k$ out.
sorted: sort the elements; $n$ in, $n$ out in a different order.
reduce: aggregate (in the sense of summing) the elements; $n$ in, 1 out.
concat: merge two sequences of elements defined by two iterators; $n$ and $m$ in, $n+m$ out.
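The simple patterns above map onto Python built-ins and itertools (a sketch to make the in/out counts concrete; here islice plays the role of take/drop and chain the role of concat):

```python
from functools import reduce
from itertools import islice, chain

data = [5, 3, 8, 1, 9, 2]

filtered = list(filter(lambda x: x > 2, data))   # filter: n in, <= n out
mapped   = list(map(lambda x: x * 10, data))     # map: n in, n out
taken    = list(islice(data, 3))                 # take: first k elements
dropped  = list(islice(data, 3, None))           # drop: skip the first k
ordered  = sorted(data)                          # sorted: n in, n out reordered
total    = reduce(lambda a, b: a + b, data)      # reduce: n in, 1 out
merged   = list(chain(data, [7, 7]))             # concat: n + m out

print(filtered, taken, dropped, total)
```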
complex patterns
Some patterns are combinations of simple ones, but it is more efficient to use the combined version.
join: joins two sequences; $n$ and $m$ in, at worst $nm$ out.
groupby: groups the elements; $n$ in, $p \leq n$ groups of elements out.
reduceby: combination of (groupby, reduce); $n$ in, $p \leq n$ out.
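A small sketch of the groupby / reduceby semantics written with plain dictionaries (cytoolz provides both directly; the dictionary version just makes the element counts visible — note reduceby never has to store whole groups):

```python
from collections import defaultdict

data = [("fruit", 3), ("veg", 1), ("fruit", 4), ("veg", 2)]

# groupby: n elements in, p <= n groups out
groups = defaultdict(list)
for key, value in data:
    groups[key].append(value)

# reduceby: group and reduce in a single pass, keeping only one
# running value per group instead of the whole group
totals = defaultdict(int)
for key, value in data:
    totals[key] += value

print(dict(groups))  # {'fruit': [3, 4], 'veg': [1, 2]}
print(dict(totals))  # {'fruit': 7, 'veg': 3}
```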
patterns that return a single element
all: true if all the elements are true.
any: true if at least one element is true.
first: the first element that comes in.
last: the last element that comes out.
min, max, sum, len...
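Most of these single-result patterns are Python built-ins already (a small sketch; for first on an arbitrary iterator, next(iter(...)) does the job, while last on a bare iterator would require consuming it):

```python
data = [4, 7, 1, 9]

print(all(x > 0 for x in data))   # True: every element is positive
print(any(x > 8 for x in data))   # True: at least one element exceeds 8
print(next(iter(data)))           # first: 4
print(data[-1])                   # last: 9 (works on a list, not a bare iterator)
print(min(data), max(data), sum(data), len(data))
```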
aggregation pattern
add: used with the reduce function to aggregate the elements and return only one.
The PyToolz API reference describes the full set of available functions.
Exercise 2: cytoolz
A candidate's score in a figure-skating competition is the mean of three of the five judges' marks, the two extreme marks being discarded. Compute this score for a set of candidates with cytoolz.
```python
notes = [dict(nom="A", juge=1, note=8),
         dict(nom="A", juge=2, note=9),
         dict(nom="A", juge=3, note=7),
         dict(nom="A", juge=4, note=4),
         dict(nom="A", juge=5, note=5),
         dict(nom="B", juge=1, note=7),
         dict(nom="B", juge=2, note=4),
         dict(nom="B", juge=3, note=7),
         dict(nom="B", juge=4, note=9),
         dict(nom="B", juge=1, note=10),
         dict(nom="C", juge=2, note=0),
         dict(nom="C", juge=3, note=10),
         dict(nom="C", juge=4, note=8),
         dict(nom="C", juge=5, note=8),
         dict(nom="C", juge=5, note=8),
         ]

import pandas
pandas.DataFrame(notes)

import cytoolz.itertoolz as itz
import cytoolz.dicttoolz as dtz
from functools import reduce
from operator import add
```
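One possible sketch of a solution (not from the original notebook; a plain dictionary plays the role that cytoolz.itertoolz.groupby would, so the snippet stands alone): group each candidate's marks, sort them, drop the two extremes, and average the middle three.

```python
from collections import defaultdict

notes = [("A", 8), ("A", 9), ("A", 7), ("A", 4), ("A", 5),
         ("B", 7), ("B", 4), ("B", 7), ("B", 9), ("B", 10),
         ("C", 0), ("C", 10), ("C", 8), ("C", 8), ("C", 8)]

# group the marks per candidate (the role groupby plays in cytoolz)
par_candidat = defaultdict(list)
for nom, note in notes:
    par_candidat[nom].append(note)

# sort, drop the two extreme marks, average the middle three
scores = {nom: sum(sorted(ns)[1:-1]) / 3 for nom, ns in par_candidat.items()}
print(scores)
```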
Blaze, odo: common interfaces
Blaze provides a common interface, close to that of DataFrames, for many modules such as bcolz... odo provides conversion tools between many formats.
Pandas to Blaze
They are presented in another notebook. Here we reproduce what can be done in one line with odo.
```python
df.to_csv("mortalite_compresse.csv", index=False)

from pyquickhelper.filehelper import gzip_files
gzip_files("mortalite_compresse.csv.gz", ["mortalite_compresse.csv"], encoding="utf-8")
```
Parallelization with dask
dask
dask parallelizes the usual operations applied to a dataframe.
The following operation is very fast, which means that dask waits until it knows what to do before loading the data:
```python
import dask.dataframe as dd
fd = dd.read_csv('mortalite_compresse*.csv.gz', compression='gzip', blocksize=None)
#fd = dd.read_csv('mortalite_compresse.csv', blocksize=None)
```
Extracting the first rows takes very little time, because dask only decompresses the beginning:
```python
fd.head()
fd.npartitions
fd.divisions

s = fd.sample(frac=0.01)
s.head()

life = fd[fd.indicateur == 'LIFEXP']
life
life.head()
```
Here is an example of how to insert an image.
<img src='00_figures/bla.jpg' width=70%>
<div align="center">**Figure 0.A1**: Spherical triangle $STZ$
</div>
```python
from wand.image import Image as WImage
img = WImage(filename='00_figures/bla.eps')
img
```
*Source: chapter_00_preface/00_appendix.ipynb — gigjozsa/HI_analysis_course (gpl-2.0)*
First we create a signal by smoothing some random noise and look at its DFT evaluated at 256 points, as computed by the FFT. Computing the N-point DFT of an N-point signal via the FFT has a complexity of $O(N \log N)$.
```python
# a basic signal
N = 256
np.random.seed(0)
x = np.convolve(np.random.normal(0, 1, N), np.ones(20)/20.0)[:N]

omegas = np.linspace(-np.pi, np.pi, N+1)[:N]
dft = np.fft.fftshift(np.fft.fft(x))

fig = pylab.figure(figsize=(12, 3))
ax = fig.add_subplot(1, 1, 1)
pylab.plot(omegas, abs(dft))
ax.set_xlim(-np.pi, np.pi)
ax.set_ylim(0, 40)
```
*Source: examples/basic example.ipynb — ericmjonas/pychirpz (mit)*
Then we explicitly evaluate the discrete-time Fourier transform (DTFT) of the signal $x[n]$. Remember that the DTFT of a discrete-time signal is a continuous function of $\omega$. Naively evaluating the M-point DTFT of an N-point signal is $O(M \cdot N)$. Here we evaluate at $M = 16 \times 256$ points and zoom in on $[-\frac{\pi}{4}, \frac{\pi}{4}]$.
```python
fig = pylab.figure(figsize=(12, 3))
ax = fig.add_subplot(1, 1, 1)

zoom_factor = 16
omegas_zoom = np.linspace(-np.pi, np.pi, zoom_factor*N+1)[:zoom_factor*N]
dtft = chirpz.pychirpz.dtft(x, omegas_zoom)

ax.plot(omegas_zoom, np.abs(dtft), label='dtft')
ax.scatter(omegas, np.abs(dft), c='r', label='fft dft')
ax.set_xlim(-np.pi/4, np.pi/4.0)
ax.set_ylim(0, 40)
pylab.legend()
```
Note from the above plot that there are various sampling artifacts. One way of resolving this is to oversample the DFT via zero-padding. This is what happens when you ask for an FFT evaluated at M points of an N-point signal. The results are below. This is an $O(M \log M)$ operation.
```python
dtft_zoom = chirpz.pychirpz.dtft(x, omegas_zoom)
fft_zoom = np.fft.fftshift(np.fft.fft(x, N*zoom_factor))

fig = pylab.figure(figsize=(12, 3))
ax = fig.add_subplot(1, 1, 1)
ax.plot(omegas_zoom, np.abs(dtft_zoom), label='dtft')
ax.scatter(omegas, np.abs(dft), c='r', s=40, edgecolor='none', label='fft dft')
ax.scatter(omegas_zoom, np.abs(fft_zoom), c='g', edgecolor='none', label='zero-padded M-point fft dft')
ax.set_xlim(-np.pi/4, np.pi/4.0)
ax.set_ylim(0, 40)
pylab.legend()
```
But what if we just care about a subset of the DTFT? That is, what if we want to evaluate the DTFT on the region $[-\frac{\pi}{4}, \frac{\pi}{4}]$ and ignore everything else? This is where the chirp-z transform comes in. We can specify that we only wish to evaluate the DTFT starting at a particular angular frequency, with a certain angular spacing, for a specific number of points. If we wish to evaluate the DTFT of an $N$-length signal at $M$ evenly-spaced points, it will take roughly $O((M+N)\log(M+N))$. We can see the result below.
```python
# now try chirp-z transform
start = -np.pi / 4.0
omega_delta = omegas_zoom[1] - omegas_zoom[0]
M = N * zoom_factor // 4   # number of output points (integer division, since M must be an int)

zoom_cz = chirpz.pychirpz.zoom_fft(x, start, omega_delta, M)

fig = pylab.figure(figsize=(12, 3))
ax = fig.add_subplot(1, 1, 1)
omegas_cz = np.arange(M) * omega_delta + start
ax.plot(omegas_zoom, np.abs(dtft_zoom), label='dtft')
ax.scatter(omegas_cz, np.abs(zoom_cz), c='r', s=20, edgecolor='none', label='chirp-z')
ax.set_xlim(-np.pi/4, np.pi/4.0)
ax.set_ylim(0, 40)
pylab.legend()
```
Visualizing the Data
A good first-step for many problems is to visualize the data using one of the
Dimensionality Reduction techniques we saw earlier. We'll start with the
most straightforward one, Principal Component Analysis (PCA).
PCA seeks orthogonal linear combinations of the features which show the greatest
variance, and as such, can help give you a good idea of the structure of the
data set. Here we'll use RandomizedPCA, because it's faster for large N.
```python
from sklearn.decomposition import RandomizedPCA
pca = RandomizedPCA(n_components=2)
proj = pca.fit_transform(digits.data)

plt.scatter(proj[:, 0], proj[:, 1], c=digits.target)
plt.colorbar()
```
*Source: notebooks/03.1 Case Study - Supervised Classification of Handwritten Digits.ipynb — samstav/scipy_2015_sklearn_tutorial (cc0-1.0)*
It can be fun to explore the various manifold learning methods available,
and how the output depends on the various parameters used to tune the
projection.
In any case, these visualizations show us that there is hope: even a simple
classifier should be able to adequately identify the members of the various
classes.
Question: Given these projections of the data, which numbers do you think
a classifier might have trouble distinguishing?
Gaussian Naive Bayes Classification
For most classification problems, it's nice to have a simple, fast, go-to
method to provide a quick baseline classification. If the simple and fast
method is sufficient, then we don't have to waste CPU cycles on more complex
models. If not, we can use the results of the simple method to give us
clues about our data.
One good method to keep in mind is Gaussian Naive Bayes. It is a generative
classifier which fits an axis-aligned multi-dimensional Gaussian distribution to
each training label, and uses this to quickly give a rough classification. It
is generally not sufficiently accurate for real-world data, but can perform surprisingly well.
```python
from sklearn.naive_bayes import GaussianNB
from sklearn.cross_validation import train_test_split

# split the data into training and validation sets
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)

# train the model
clf = GaussianNB()
clf.fit(X_train, y_train)

# use the model to predict the labels of the test data
predicted = clf.predict(X_test)
expected = y_test
```
We see that nearly 1500 of the 1800 predictions match the input. But there are other
more sophisticated metrics that can be used to judge the performance of a classifier:
several are available in the sklearn.metrics submodule.
One of the most useful metrics is the classification_report, which combines several
measures and prints a table with the results:
```python
from sklearn import metrics
print(metrics.classification_report(expected, predicted))
```
As a result we obtain the control-point file that relates the position of each pixel of the image to its geographic position, ready to use in QGIS.
```text
mapX,mapY,pixelX,pixelY,enable
-6.29923900000000003,36.53782000000000352,1451,2331,1
-6.27469199999999994,36.53312999999999988,1408,2206,1
-6.30627699999999969,36.52857900000000058,1513,2324,1
-6.26748200000000022,36.48946300000000065,1609,2033,1
-6.16431299999999993,36.5218190000000007,1173,1700,1
-6.22362199999999977,36.56829499999999911,1093,2098,1
-6.11581900000000012,36.66419299999999737,297,1941,1
-6.35837699999999995,36.6163969999999992,1201,2837,1
-6.42721300000000006,36.74621700000000146,706,3584,1
-6.44231900000000035,36.73810100000000034,789,3622,1
-6.08933000000000035,36.2724410000000006,2226,577,1
-6.06687400000000032,36.28812099999999674,2101,522,1
-6.20306599999999975,36.38529900000000339,1965,1422,1
-6.14375699999999991,36.42949999999999733,1590,1323,1
-6.12581799999999976,36.69798999999999722,147,2104,1
-6.13118300000000005,36.70036400000000043,153,2129,1
-6.28443299999999994,36.53378599999999921,1419,2242,1
```
*Source: jupyter/Georreferenciadas.ipynb — jgcasta/CitiesAtNightPythonMadrid (mit)*
We also produce the same result in a format suitable for GlobalMapper.
```text
1451,2331,-6.299239,36.537820,"punto1",0
1408,2206,-6.274692,36.533130,"punto2",0
1513,2324,-6.306277,36.528579,"punto3",0
1609,2033,-6.267482,36.489463,"punto4",0
1173,1700,-6.164313,36.521819,"punto5",0
1093,2098,-6.223622,36.568295,"punto6",0
297,1941,-6.115819,36.664193,"punto7",0
1201,2837,-6.358377,36.616397,"punto8",0
706,3584,-6.427213,36.746217,"punto9",0
789,3622,-6.442319,36.738101,"punto10",0
2226,577,-6.089330,36.272441,"punto11",0
2101,522,-6.066874,36.288121,"punto12",0
1965,1422,-6.203066,36.385299,"punto13",0
1590,1323,-6.143757,36.429500,"punto14",0
147,2104,-6.125818,36.697990,"punto15",0
153,2129,-6.131183,36.700364,"punto16",0
1419,2242,-6.284433,36.533786,"punto17",0
1954,2952,10.235532,36.804439,"punto18",0
```
The shell script would look like this:
```shell
gdal_translate -of GTiff "ISS030-E-209446.jpg" "tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff -gcp 1590 1323 -6.143757 36.429500 origentmpDestino
"ISS030-E-209446.jpg" "tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff -gcp 1419 2242 -6.284433 36.533786 origentmpDestino
"ISS030-E-209446.jpg" "tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff "ISS030-E-209446.jpg" "tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff -gcp 1954 2952 10.235532 36.804439 origentmpDestino
"ISS030-E-209446.jpg" "tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff "ISS030-E-209446.jpg" "tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
```
Importing the rather heavy DataFrame with the voting fields and the results. We drop the columns we don't need and create a Name field containing both the first and the last name of each person, so that we can later build a model for each unique deputy in the parliament.
```python
path = '../datas/nlp_results/'
voting_df = pd.read_csv(path + 'voting_with_topics.csv')
print('Entries in the DataFrame', voting_df.shape)

num_cols = ['Unnamed: 0', 'BusinessNumber', 'BillTitle', 'BusinessTitle', 'FirstName', 'LastName',
            'BusinessShortNumber', 'Canton', 'CantonID', 'CantonName', 'DecisionText', 'ID',
            'IdLegislativePeriod', 'IdSession', 'IdVote', 'Language', 'MeaningNo', 'MeaningYes',
            'ParlGroupColour', 'ParlGroupNameAbbreviation', 'PersonNumber', 'RegistrationNumber']
cols_number = ['Decision', ' armée', ' asile / immigration', ' assurances',
               ' budget', ' dunno', ' entreprise/ finance', ' environnement',
               ' famille / enfants', ' imposition', ' politique internationale',
               ' retraite ']

voting = voting_df.drop(num_cols, axis=1)
# Putting numerical values into the columns that should have numerical values
voting[cols_number] = voting[cols_number].apply(pd.to_numeric)

# Inserting the full name at the second position
voting.insert(1, 'Name', voting_df['FirstName'] + ' ' + voting_df['LastName'])
voting.head(3)
```
*Source: 04-VotingProfile/PartyAnalysis.ipynb — thom056/ada-parliament-ML (gpl-2.0)*
Visualising all the different parties, along with their names. Note that the same party can have several group codes, so we will use the ParlGroupName field rather than ParlGroupCode.
```python
voting[['ParlGroupCode', 'ParlGroupName']].drop_duplicates()
```
We also drop the duplicates in the votes, keeping only the last one. First, plot an example of one deputy's votes on a subject (Guy Parmelin).
```python
gp = voting.loc[voting.Name == 'Guy Parmelin']
gpt = gp.loc[gp.text == "Arrêté fédéral concernant la contribution de la Suisse en faveur de la Bulgarie et de la Roumanie au titre de la réduction des disparités économiques et sociales dans l'Union européenne élargie Réduction des disparités économiques et sociales dans l'UE. Contribution de la Suisse en faveur de la Roumanie et de la Bulgarie"]
gpt[['Decision', 'VoteEnd']]
```
Drop duplicates here
```python
# Note: the Name column exists on `voting`, not on `voting_df`
voting_unique = voting.drop_duplicates(['text', 'Name'], keep='last')
voting_unique.head()
```
Only one element is left below for a given person and entry. We will work with the voting_unique DataFrame.
```python
gp = voting_unique.loc[voting_unique.Name == 'Guy Parmelin']
gpt = gp.loc[gp.text == "Arrêté fédéral concernant la contribution de la Suisse en faveur de la Bulgarie et de la Roumanie au titre de la réduction des disparités économiques et sociales dans l'Union européenne élargie Réduction des disparités économiques et sociales dans l'UE. Contribution de la Suisse en faveur de la Roumanie et de la Bulgarie"]
gpt[['Decision', 'VoteEnd']]
```
Merging our DataFrame with the one containing the information on each text
|
# Additional infos : whether the text was accepted and most proeminent topic
text_votes = pd.read_csv('topic_accepted.csv')
# Merging both DataFrames on the text field
voting_unique = pd.merge(voting_unique, text_votes, on='text')
def format_party_voting_profile(voting_unique):
# Setting the desired multiIndex
voting_party = voting_unique.set_index(['ParlGroupName','Topic'])[['Decision']]
#2. Counting yes/no/abstention
# Splitting the df by each party and topic, and then aggregating by yes/no/abstention
# Normalising every entry
count_yes = lambda x: np.sum(x==1)/(len(x))
count_no = lambda x: np.sum(x==2)/(len(x))
count_abstention = lambda x: np.sum(x==3)/(len(x))
voting_party = voting_party.groupby(level=['ParlGroupName','Topic']).agg({'Decision':
{'Yes': count_yes, 'No': count_no,'Abstention': count_abstention}})
voting_party.columns = voting_party.columns.droplevel(0)
return voting_party
|
04-VotingProfile/PartyAnalysis.ipynb
|
thom056/ada-parliament-ML
|
gpl-2.0
|
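The groupby/agg pattern above uses the nested-dict form of `agg`, which is deprecated in recent pandas. The same normalised yes/no/abstention profile can be sketched with named aggregation on toy data; party names, topics and decisions below are invented.

```python
import numpy as np
import pandas as pd

# Toy data: Decision codes are 1 = yes, 2 = no, 3 = abstention.
df = pd.DataFrame({
    "ParlGroupName": ["A", "A", "A", "A", "B", "B"],
    "Topic": ["tax", "tax", "tax", "tax", "tax", "tax"],
    "Decision": [1, 1, 2, 3, 2, 2],
})

# frac(v) builds a function computing the fraction of rows equal to v.
frac = lambda v: lambda x: np.sum(x == v) / len(x)

# Named aggregation: one normalised column per decision type.
profile = df.groupby(["ParlGroupName", "Topic"])["Decision"].agg(
    Yes=frac(1), No=frac(2), Abstention=frac(3)
)
print(profile)
```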
Formatting the voting_party DF
|
#voting_party = voting_party.unstack()
#voting_party.columns = voting_party.columns.swaplevel(1, 2)
#voting_party.sortlevel(0,axis=1,inplace=True)
voting_party = format_party_voting_profile(voting_unique)
voting_party.head()
|
04-VotingProfile/PartyAnalysis.ipynb
|
thom056/ada-parliament-ML
|
gpl-2.0
|
Some statistics about the whole DF.
Retrieve all topics and parties
|
parties = voting_party.index.get_level_values('ParlGroupName').unique()
topics = voting_party.index.get_level_values('Topic').unique()
topics
|
04-VotingProfile/PartyAnalysis.ipynb
|
thom056/ada-parliament-ML
|
gpl-2.0
|
Extremal percentages (Yes/No)
|
for topic in topics:
topic_voting_party = voting_party.xs(topic, level='Topic', drop_level=True)
max_yes = topic_voting_party.Yes.max().round(2); idx_max_yes = topic_voting_party.Yes.idxmax();
max_no = topic_voting_party.No.max().round(2); idx_max_no = topic_voting_party.No.idxmax()
min_yes = topic_voting_party.Yes.min().round(2); idx_min_yes = topic_voting_party.Yes.idxmin()
min_no = topic_voting_party.No.min().round(2); idx_min_no = topic_voting_party.No.idxmin()
print('Topic :',topic,'\n\t MAX YES :',idx_max_yes,'(',max_yes,')','\t MAX NO :',idx_max_no,'(',max_no,')')
print('\t MIN YES :',idx_min_yes,'(',min_yes,')','\t MIN NO :',idx_min_no,'(',min_no,')')
|
04-VotingProfile/PartyAnalysis.ipynb
|
thom056/ada-parliament-ML
|
gpl-2.0
|
Extracting a party
The xs call below selects one party and extracts its sub-DataFrame from the whole DataFrame
|
party_vote = voting_party.xs('Groupe des Paysans, Artisans et Bourgeois', level='ParlGroupName', drop_level=True)
party_vote.Yes
|
04-VotingProfile/PartyAnalysis.ipynb
|
thom056/ada-parliament-ML
|
gpl-2.0
|
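A minimal, self-contained sketch of what `xs` does on a two-level index (toy party and topic names): the selected level is dropped, leaving a frame indexed by the remaining level.

```python
import pandas as pd

# Build a small (ParlGroupName, Topic) MultiIndex frame.
idx = pd.MultiIndex.from_product(
    [["Party A", "Party B"], ["tax", "energy"]],
    names=["ParlGroupName", "Topic"],
)
voting = pd.DataFrame({"Yes": [0.9, 0.4, 0.1, 0.6]}, index=idx)

# Cross-section: keep only Party A, drop the ParlGroupName level.
party_a = voting.xs("Party A", level="ParlGroupName", drop_level=True)
print(party_a)
```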
2.0 Dealing with a single individual at a time.
2.1 Formatting the DF
|
def format_individual_voting_profile(voting_unique):
# Setting Name, Party and Topic as indices
voting_deputee = voting_unique.set_index(['Name','ParlGroupName','Topic'])[['Decision']]
# Functions to count number of yes/no/absentions, same principle as before
count_yes = lambda x: np.sum(x==1)/(len(x))
count_no = lambda x: np.sum(x==2)/(len(x))
count_abstention = lambda x: np.sum(x==3)/(len(x))
voting_deputee = voting_deputee.groupby(level=['ParlGroupName','Name','Topic']).agg({'Decision':
{'Yes': count_yes, 'No': count_no,'Abstention': count_abstention}})
voting_deputee.columns = voting_deputee.columns.droplevel(0)
return voting_deputee
voting_deputee = format_individual_voting_profile(voting_unique)
voting_deputee.head()
|
04-VotingProfile/PartyAnalysis.ipynb
|
thom056/ada-parliament-ML
|
gpl-2.0
|
2.2 Retrieving the stats w.r.t. the party for each individual
|
def compute_party_distance(voting_deputee,voting_party):
# Retrieve the unique parties and topics
parties = voting_party.index.get_level_values('ParlGroupName').unique()
topics = voting_party.index.get_level_values('Topic').unique()
# Extract the features of each party in a more convenient and easily accessible fashion
party_vote_dict = {}
for party in parties:
party_vote_dict[party] = voting_party.xs(party, level='ParlGroupName', drop_level=True)
for topic in topics:
split_df = voting_deputee.loc[(party,slice(None),topic),:]
            split_df.loc[(party,slice(None),topic),'Yes'] = split_df['Yes'] - party_vote_dict[party].Yes[topic]
            split_df.loc[(party,slice(None),topic),'No'] = split_df['No'] - party_vote_dict[party].No[topic]
            split_df.loc[(party,slice(None),topic),'Abstention'] = split_df['Abstention'] - party_vote_dict[party].Abstention[topic]
voting_deputee.loc[(party,slice(None),topic),:] = split_df
return voting_deputee
voting_deputee_party = compute_party_distance(voting_deputee,voting_party)
|
04-VotingProfile/PartyAnalysis.ipynb
|
thom056/ada-parliament-ML
|
gpl-2.0
|
2.3 Plotting the stats w.r.t. the party for each individual
|
def plot_df(df,item,topic):
#Setting the size of the plots
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 25
fig_size[1] = 6
plt.rcParams["figure.figsize"] = fig_size
df_item = df.sort_values(item,ascending=False)
y = np.array(df_item[item])
plt.bar(range(df_item.shape[0]), df_item[item], align='center', tick_label=df_item.index)
    plt.ylabel("Deviation from "+item+" mean of party\n item : "+topic)
plt.xticks(rotation=90, ha='left')
plt.xlabel("Deputee")
plt.show()
party = 'Groupe des Paysans, Artisans et Bourgeois'
test = voting_deputee.xs((party,topics[5]),level=('ParlGroupName','Topic'),drop_level=True).sort_values('Yes',ascending=False)
plot_df(test,'Abstention',topics[5])
|
04-VotingProfile/PartyAnalysis.ipynb
|
thom056/ada-parliament-ML
|
gpl-2.0
|
2.4 Computing the stats w.r.t. the party for each individual while aggregating all topics
|
voting_deputee_party.head()
def format_individual_global_voting_profile(voting_unique):
# Setting the desired multiIndex
voting_indiv = voting_unique.set_index(['ParlGroupName','Name'])[['Decision']]
#2. Counting yes/no/abstention
# Splitting the df by each party and topic, and then aggregating by yes/no/abstention
# Normalising every entry
count_yes = lambda x: np.sum(x==1)/(len(x))
count_no = lambda x: np.sum(x==2)/(len(x))
count_abstention = lambda x: np.sum(x==3)/(len(x))
voting_indiv = voting_indiv.groupby(level=['ParlGroupName','Name']).agg({'Decision':
{'Yes': count_yes, 'No': count_no,'Abstention': count_abstention}})
voting_indiv.columns = voting_indiv.columns.droplevel(0)
return voting_indiv
def format_party_global_profile(voting_unique):
#2. Counting yes/no/abstention
# Splitting the df by each party and topic, and then aggregating by yes/no/abstention
# Normalising every entry
count_yes = lambda x: np.sum(x==1)/(len(x))
count_no = lambda x: np.sum(x==2)/(len(x))
count_abstention = lambda x: np.sum(x==3)/(len(x))
overall_party = voting_unique.groupby('ParlGroupName').agg({'Decision':
{'Yes': count_yes, 'No': count_no,'Abstention': count_abstention}})
overall_party.columns = overall_party.columns.droplevel(0)
return overall_party
voting_indiv = format_individual_global_voting_profile(voting_unique)
overall_party = format_party_global_profile(voting_unique)
voting_indiv.head()
overall_party.loc[overall_party.index == party]
def compute_overall_party_distance(voting_indiv,overall_party):
# Retrieve the unique parties and topics
parties = voting_party.index.get_level_values('ParlGroupName').unique()
# Extract the features of each party in a more convenient and easily accessible fashion
party_vote_dict = {}
for party in parties:
party_vote = overall_party.loc[overall_party.index == party]
split_df = voting_indiv.loc[(party,slice(None)),:]
        split_df.loc[(party,slice(None)),'Yes'] = split_df['Yes'] - party_vote.Yes.iloc[0]
        split_df.loc[(party,slice(None)),'No'] = split_df['No'] - party_vote.No.iloc[0]
        split_df.loc[(party,slice(None)),'Abstention'] = split_df['Abstention'] - party_vote.Abstention.iloc[0]
voting_indiv.loc[(party,slice(None)),:] = split_df
return voting_indiv
voting_indiv_party = compute_overall_party_distance(voting_indiv,overall_party)
def plot_df_overall(df,item,party):
#Setting the size of the plots
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 25
fig_size[1] = 6
plt.rcParams["figure.figsize"] = fig_size
df_item = df.sort_values(item,ascending=False)
y = np.array(df_item[item])
plt.bar(range(df_item.shape[0]), df_item[item], align='center', tick_label=df_item.index)
plt.ylabel("Deviation from "+item+"-mean of party")
plt.xticks(rotation=90, ha='left')
plt.xlabel("Deputees of " +party)
plt.show()
voting_indiv_party.xs(('Groupe BD'),level=('ParlGroupName'),drop_level=True).head()
map_parties = {'-':'-','BD':'PBD','C':'PDC','CE':'PDC','CEg':'PDC','G':'Verts','GL':'Verts-Lib','S':'PS','V':'UDC'}
for party in parties:
test = voting_indiv_party.xs((party),level=('ParlGroupName'),drop_level=True)
plot_df_overall(test,'Yes',party)
|
04-VotingProfile/PartyAnalysis.ipynb
|
thom056/ada-parliament-ML
|
gpl-2.0
|
Since the trailing if in a comprehension implements a filtering mechanism, it has no else clause.
An if-else conditional expression can be used as the first expression though:
|
l = [1, 0, -2, 3, -1, -5, 0]
signum_l = [int(n / abs(n)) if n != 0 else 0 for n in l]
signum_l
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
Generator expressions
Generator expressions are a generalization of list comprehension. They were introduced in PEP 289 in 2002.
Check out the memory consumption of these cells.
|
%%time
N = 8
s = sum([i*2 for i in range(int(10**N))])
print(s)
%%time
s = sum(i*2 for i in range(int(10**N)))
print(s)
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
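Besides the timing cells above, the difference can also be seen statically with `sys.getsizeof`: the list comprehension materialises every element up front, while the generator object stays constant-size regardless of how many items it will yield (a sketch, not from the lecture).

```python
import sys

n = 10**5
lst = [i * 2 for i in range(n)]   # allocates all n elements
gen = (i * 2 for i in range(n))   # a small, lazy generator object
print(sys.getsizeof(lst), sys.getsizeof(gen))
```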
calling next() raises a StopIteration exception
|
# next(even_numbers) # raises StopIteration
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
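A short sketch of the behaviour (the `even_numbers` generator here is re-created, since the lecture's original is not shown): after exhaustion, `next()` raises StopIteration, unless a default second argument is supplied, which avoids the exception entirely.

```python
even_numbers = (n for n in range(4) if n % 2 == 0)  # yields 0, then 2
assert next(even_numbers) == 0
assert next(even_numbers) == 2

# Exhausted: catch StopIteration explicitly...
try:
    next(even_numbers)
    exhausted = False
except StopIteration:
    exhausted = True

# ...or sidestep the exception with a default value.
sentinel = next(even_numbers, None)
print(exhausted, sentinel)
```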
if the expression in the generator is a key-value pair separated by a colon, it instantiates a dictionary:
|
word_list = ["apple", "plum", "pear"]
word_length = {word: len(word) for word in word_list}
type(word_length), len(word_length), word_length
word_list = ["apple", "plum", "pear", "avocado"]
first_letters = {word[0]: word for word in word_list}
first_letters
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
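A side effect worth noting in the `first_letters` cell above: dict keys are unique, so when two words share a first letter, the later one silently overwrites the earlier entry.

```python
word_list = ["apple", "plum", "pear", "avocado"]
# "avocado" overwrites "apple", "pear" overwrites "plum".
first_letters = {word[0]: word for word in word_list}
print(first_letters)
```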
Exercises
Generator expressions can be particularly useful for formatted output. We will demonstrate this through a few examples.
|
numbers = [1, -2, 3, 1]
# print(", ".join(numbers)) # raises TypeError
print(", ".join(str(number) for number in numbers))
shopping_list = ["apple", "plum", "pear"]
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
~~~
The shopping list is:
item 1: apple
item 2: plum
item 3: pear
~~~
|
shopping_list = ["apple", "plum", "pear"]
shopping_list = ["apple"]
print("The shopping list is:\n{0}".format(
"\n".join(
"item {0}: {1}".format(i+1, item)
for i, item in enumerate(shopping_list)
)
))
shopping_list = ["apple", "plum", "pear"]
for i, item in enumerate(shopping_list):
print("item {} {}".format(i+1, item))
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
Q. Print the following shopping list with quantities.
For example:
~~~
item 1: apple, quantity: 2
item 2: pear, quantity: 1
~~~
|
shopping_list = {
"apple": 2,
"pear": 1,
"plum": 5,
}
print("\n".join(
    "item {0}: {1}, quantity: {2}".format(i+1, item, quantity)
    for i, (item, quantity) in enumerate(shopping_list.items())
))
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
Q. Print the same format in alphabetical order.
Decreasing order by quantity
|
shopping_list = {
"apple": 2,
"pear": 1,
"plum": 5,
}
print("\n".join(
"item {0}: {1}, quantity: {2}".format(i+1, item, quantity)
for i, (item, quantity) in
enumerate(
sorted(shopping_list.items(),
key=lambda x: x[1], reverse=True)
)))
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
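The alphabetical-order variant asked for above can be sketched as follows; `sorted()` on `dict.items()` orders by the first tuple element (the key) by default, so no key function is needed.

```python
shopping_list = {"apple": 2, "pear": 1, "plum": 5}
lines = [
    "item {0}: {1}, quantity: {2}".format(i + 1, item, quantity)
    for i, (item, quantity) in enumerate(sorted(shopping_list.items()))
]
print("\n".join(lines))
```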
else
try-except blocks may have an else clause that only runs if no exception was raised
|
try:
age = int(input())
except ValueError as e:
print("Exception", e)
else:
print("No exception was raised")
finally:
print("this always runs")
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
raise keyword
raise throws/raises an exception
an empty raise in an except
|
try:
int("not a number")
except Exception:
# raise
pass
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
Using exception for trial-and-error is considered Pythonic:
|
try:
int(input())
except ValueError:
print("not an int")
else:
print("looks like an int")
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
__exit__ takes 3 extra arguments that describe the exception: exc_type, exc_value, traceback
|
class DummyContextManager:
def __init__(self, value):
self.value = value
def __enter__(self):
print("Dummy resource acquired")
return self.value
def __exit__(self, exc_type, exc_value, traceback):
if exc_type is not None:
print("{0} with value {1} caught\nTraceback: {2}".format(exc_type, exc_value, traceback))
print("Dummy resource released")
with DummyContextManager(42) as d:
print(d)
# raise ValueError("just because I can") # __exit__ will be called anyway
|
course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb
|
bmeaut/python_nlp_2017_fall
|
mit
|
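As a side note, the same behaviour can be sketched with `contextlib.contextmanager` instead of a full class: code before the `yield` plays the role of `__enter__`, and the `finally` block that of `__exit__` (a sketch, not part of the original lecture).

```python
from contextlib import contextmanager

@contextmanager
def dummy_resource(value):
    print("Dummy resource acquired")   # __enter__ equivalent
    try:
        yield value                    # value bound by "as"
    finally:
        print("Dummy resource released")  # __exit__ equivalent, runs even on error

with dummy_resource(42) as d:
    result = d
print(result)
```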
Small Body Database (SBDB)
|
eros = Orbit.from_sbdb("Eros")
eros.plot(label="Eros");
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
You can also search by IAU number or SPK-ID (in that case there is a faster neows.orbit_from_spk_id() function, though):
|
ganymed = Orbit.from_sbdb("1036") # Ganymed IAU number
amor = Orbit.from_sbdb("2001221") # Amor SPK-ID
eros = Orbit.from_sbdb("2000433") # Eros SPK-ID
frame = StaticOrbitPlotter(plane=Planes.EARTH_ECLIPTIC)
frame.plot(ganymed, label="Ganymed")
frame.plot(amor, label="Amor")
frame.plot(eros, label="Eros");
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
You can use the wildcards from that browser: * and ?.
<div class="alert alert-info">Keep in mind that `from_sbdb()` can only return one Orbit, so if several objects are found with that name, it will raise an error listing the different bodies found.</div>
|
try:
Orbit.from_sbdb("*alley")
except ValueError as err:
print(err)
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
<div class="alert alert-info">Note that epoch is provided by the service itself, so if you need orbit on another epoch, you have to propagate it:</div>
|
eros.epoch.iso
epoch = time.Time(2458000.0, scale="tdb", format="jd")
eros_november = eros.propagate(epoch)
eros_november.epoch.iso
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
DASTCOM5 module
This module can also be used to get NEO orbits, in the same way as neows, but it has some advantages (and some disadvantages).
It relies on DASTCOM5, a NASA/JPL-maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to its README, it is typically updated a couple of times per day, but
potentially as frequently as once per hour, so you can re-download it whenever you want the most recently discovered bodies. This also means that, after downloading the file, you can use the database offline.
The file is a ~230 MB zip that you can manually download and unzip in ~/.poliastro or, more easily, you can use
~~~python
dastcom5.download_dastcom5()
~~~
The main DASTCOM5 advantage over NeoWs is that you can use it to search not only NEOs, but any asteroid or comet. The easiest function is orbit_from_name():
|
from poliastro.neos import dastcom5
atira = dastcom5.orbit_from_name("atira")[0] # NEO
wikipedia = dastcom5.orbit_from_name("wikipedia")[0] # Asteroid, but not NEO.
frame = StaticOrbitPlotter()
frame.plot(atira, label="Atira (NEO)")
frame.plot(wikipedia, label="Wikipedia (asteroid)");
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
Keep in mind that this function returns a list of orbits matching your string. This is intentional, since some comets have several records in the database (one for each orbit determination in history), which allows plots like this one:
|
halleys = dastcom5.orbit_from_name("1P")
frame = StaticOrbitPlotter()
frame.plot(halleys[0], label="Halley")
frame.plot(halleys[5], label="Halley")
frame.plot(halleys[10], label="Halley")
frame.plot(halleys[20], label="Halley")
frame.plot(halleys[-1], label="Halley");
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
<div class="alert alert-info">Asteroid and comet parameters are not exactly the same (although they are very close)</div>
With these ndarrays you can classify asteroids and comets, sort them, get all their parameters, and whatever comes to your mind.
For example, NEOs can be grouped in several ways. One of the NEO groups is called Atiras, formed by NEOs whose orbits are contained entirely within the orbit of the Earth. They are a really small group, and we can try to plot all of these NEOs using asteroid_db():
In orbital terms, Atiras have an aphelion distance Q < 0.983 au and a semi-major axis a < 1.0 au.
Visiting the documentation API Reference, you can see that DASTCOM5 provides the semi-major axis, but doesn't provide the aphelion distance. You can compute it easily from the perihelion distance (q, QR in DASTCOM5) and the semi-major axis: Q = 2*a - q, but there are probably many other ways.
|
aphelion_condition = 2 * ast_db["A"] - ast_db["QR"] < 0.983
axis_condition = ast_db["A"] < 1.3
atiras = ast_db[aphelion_condition & axis_condition]
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
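A quick numeric sanity check of the Q = 2*a - q relation, with made-up orbital elements for a hypothetical Atira-class object (all values in au; these are illustration values, not real elements):

```python
a = 0.90   # semi-major axis [au], assumed
q = 0.85   # perihelion distance [au], assumed
Q = 2 * a - q   # aphelion distance from the relation above

# The Atira condition from the text: Q < 0.983 au and a < 1.0 au.
is_atira = (Q < 0.983) and (a < 1.0)
print(Q, is_atira)
```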
Which is consistent with the stats published by CNEOS
Now we're gonna plot all of their orbits, with corresponding labels, just because we love plots :)
We only need to get the 16 orbits from these 16 ndarrays.
There are two ways:
Gather all their orbital elements manually and use the Orbit.from_classical() function.
Use the NO property (logical record number in DASTCOM5 database) and the dastcom5.orbit_from_record() function.
The second one seems easier and it is related to the current notebook, so we are going to use that one, using the ASTNAM property of DASTCOM5 database:
|
from poliastro.bodies import Earth
frame = StaticOrbitPlotter()
frame.plot_body_orbit(Earth, time.Time.now().tdb)
for record in atiras["NO"]:
ss = dastcom5.orbit_from_record(record)
if ss.ecc < 1:
frame.plot(ss, color="#666666")
else:
print(f"Skipping hyperbolic orbit: {record}")
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
If we needed also the names of each asteroid, we could do:
|
frame = StaticOrbitPlotter()
frame.plot_body_orbit(Earth, time.Time.now().tdb)
for i in range(len(atiras)):
record = atiras["NO"][i]
label = atiras["ASTNAM"][i].decode().strip() # DASTCOM5 strings are binary
ss = dastcom5.orbit_from_record(record)
if ss.ecc < 1:
frame.plot(ss, label=label)
else:
print(f"Skipping hyperbolic orbit: {label}")
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
Also, in this function, DASTCOM5 data (especially strings) is ready to use (decoded and cleaned-up strings, etc.):
|
db[
db.NAME == "Halley"
] # As you can see, Halley is the name of an asteroid too, did you know that?
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
pandas offers many functionalities, and the db DataFrame can be used in the same way as the ast_db and comet_db arrays:
|
aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983
axis_condition = db["A"] < 1.3
atiras = db[aphelion_condition & axis_condition]
len(atiras)
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
So, rewriting our condition:
|
axis_condition = (db["A"] < 1.3) & (db["A"] > 0)
atiras = db[aphelion_condition & axis_condition]
len(atiras)
|
docs/source/examples/Using NEOS package.ipynb
|
Juanlu001/poliastro
|
mit
|
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_numpy_data() from the second notebook of Week 2.
|
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
    print(features)
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
|
ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb
|
zomansud/coursera
|
mit
|
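For readers without GraphLab Create, a hedged pandas equivalent of `get_numpy_data()` can be sketched like this; the function name `get_numpy_data_pd` and the toy columns are invented here, but the logic mirrors the SFrame version above.

```python
import pandas as pd

def get_numpy_data_pd(df, features, output):
    df = df.copy()
    df["constant"] = 1                     # intercept column, as in the SFrame version
    features = ["constant"] + features     # constant goes first
    feature_matrix = df[features].to_numpy()
    output_array = df[output].to_numpy()
    return feature_matrix, output_array

# Tiny invented example: one feature, two rows.
df = pd.DataFrame({"sqft": [1000.0, 2000.0], "price": [300.0, 500.0]})
X, y = get_numpy_data_pd(df, ["sqft"], "price")
print(X, y)
```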
Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set to 100 by default. (Use default parameter values in Python.)
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
|
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):
weights = np.array(initial_weights) # make sure it's a numpy array
#while not reached maximum number of iterations:
    for i in range(max_iterations):
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights=weights)
# compute the errors as predictions - output
errors = predictions - output
        for j in range(len(weights)): # loop over each weight
            # Recall that feature_matrix[:,j] is the feature column associated with weights[j]
            # compute the derivative for weight[j].
            # (Remember: when j=0, you are computing the derivative of the constant!)
            derivative = feature_derivative_ridge(errors, feature_matrix[:,j], weights[j], l2_penalty, j == 0)
            # subtract the step size times the derivative from the current weight
            weights[j] = weights[j] - step_size * derivative
return weights
|
ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb
|
zomansud/coursera
|
mit
|
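The cell above relies on `feature_derivative_ridge()` and `predict_output()`, which are defined earlier in the assignment notebook and not shown in this excerpt. A plausible sketch, assuming the standard ridge cost RSS(w) + l2_penalty * ||w||^2 with the intercept excluded from the penalty (names and signatures inferred, not confirmed by this excerpt):

```python
import numpy as np

def predict_output(feature_matrix, weights):
    # Predictions are the dot product of features and weights.
    return np.dot(feature_matrix, weights)

def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
    # d/dw [ sum(errors^2) ] = 2 * errors . feature
    derivative = 2 * np.dot(errors, feature)
    if not feature_is_constant:
        # Ridge term 2 * l2_penalty * w, skipped for the intercept.
        derivative += 2 * l2_penalty * weight
    return derivative

# Tiny check: with zero weights the errors are just -output.
X = np.array([[1.0, 2.0], [1.0, 4.0]])
y = np.array([3.0, 5.0])
w = np.array([0.0, 0.0])
errors = predict_output(X, w) - y          # [-3, -5]
d1 = feature_derivative_ridge(errors, X[:, 1], w[1], 10.0, False)
print(d1)  # 2*((-3)*2 + (-5)*4) + 0 = -52
```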
First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_0_penalty
we'll use them later.
|
l2_penalty = 0.0
simple_weights_0_penalty = ridge_regression_gradient_descent(
simple_feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
round(simple_weights_0_penalty[1], 1)
|
ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb
|
zomansud/coursera
|
mit
|
Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_high_penalty
we'll use them later.
|
l2_penalty = 1e11
simple_weights_high_penalty = ridge_regression_gradient_descent(
simple_feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
round(simple_weights_high_penalty[1], 1)
|
ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb
|
zomansud/coursera
|
mit
|
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
|
def compute_rss(feature, output, weights):
error = output - np.dot(feature, weights)
rss = np.dot(error, np.transpose(error))
return rss
print("initial weight test rss =", compute_rss(simple_test_feature_matrix, test_output, initial_weights))
print("no regularization test rss =", compute_rss(simple_test_feature_matrix, test_output, simple_weights_0_penalty))
print("high regularization test rss =", compute_rss(simple_test_feature_matrix, test_output, simple_weights_high_penalty))
|
ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb
|
zomansud/coursera
|
mit
|
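`compute_rss()` above computes `np.dot(error, np.transpose(error))`; for a 1-D error array the transpose is a no-op, so the dot product is exactly the sum of squared errors. A toy check with invented numbers:

```python
import numpy as np

feature = np.array([[1.0, 2.0], [1.0, 3.0]])
output = np.array([5.0, 7.0])
weights = np.array([1.0, 1.0])

error = output - np.dot(feature, weights)  # predictions [3, 4] -> errors [2, 3]
rss = np.dot(error, np.transpose(error))   # 2^2 + 3^2 = 13
print(rss)
```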
First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_0_penalty
|
l2_penalty = 0.0
multi_weights_0_penalty = ridge_regression_gradient_descent(
feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
round(multi_weights_0_penalty[1], 1)
|
ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb
|
zomansud/coursera
|
mit
|
Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_high_penalty
|
l2_penalty = 1e11
multi_weights_high_penalty = ridge_regression_gradient_descent(
feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
multi_weights_high_penalty
|
ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb
|
zomansud/coursera
|
mit
|
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
|
print("initial weight test rss =", compute_rss(test_feature_matrix, test_output, initial_weights))
print("no regularization test rss =", compute_rss(test_feature_matrix, test_output, multi_weights_0_penalty))
print("high regularization test rss =", compute_rss(test_feature_matrix, test_output, multi_weights_high_penalty))
|
ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb
|
zomansud/coursera
|
mit
|
Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
|
predict_output(test_feature_matrix[0], multi_weights_0_penalty)
predict_output(test_feature_matrix[0], multi_weights_high_penalty)
print(abs(test_output[0] - predict_output(test_feature_matrix[0], multi_weights_0_penalty)))
print(abs(test_output[0] - predict_output(test_feature_matrix[0], multi_weights_high_penalty)))
|
ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb
|
zomansud/coursera
|
mit
|
concordance is a view that shows every occurrence of a word alongside some context
|
text1.concordance("monstrous")
text2.concordance("affection")
text3.concordance("lived")
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
similar shows other words that appear in a similar context to the entered word
|
text1.similar("monstrous")
text2.similar("monstrous")
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
text 1 (Melville) uses monstrous very differently from text 2 (Austen)
Text 2: monstrous has positive connotations, sometimes functions as an intensifier like very
common_contexts shows contexts that are shared by two or more words
|
text2.common_contexts(["monstrous", "very"])
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
trying out other words...
|
text2.similar("affection")
text2.common_contexts(["affection", "regard"])
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
Lexical Dispersion Plot
Determining the location of words in text (how many words from beginning does this word appear?) -- using dispersion_plot
|
plt.figure(figsize=(18,10))
text4.dispersion_plot(["citizens", "democracy", "freedom", "duties", "America", "liberty", "constitution"])
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
Generating some random text in the style of text3 -- using generate()
not yet supported in NLTK 3.0
|
# (not available in NLTK 3.0)
# text3.generate()
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
1.4 Counting Vocabulary
Count the number of tokens using len
|
len(text3)
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
View/count vocabulary using set(text_obj)
|
len(set(text3))
# first 50
sorted(set(text3))[:50]
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
Calculating lexical richness of the text
|
len(set(text3)) / len(text3)
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
Count how often a word occurs in the text
|
text3.count("smote")
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
Compute what percentage of the text is taken up by a specific word
|
100 * text4.count('a') / len(text4)
text5.count('lol')
100 * text5.count('lol') / len(text5)
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
Define some simple functions to calculate these values
|
def lexical_diversity(text):
return len(set(text)) / len(text)
def percentage(count, total):
return 100 * count / total
lexical_diversity(text3), lexical_diversity(text5)
percentage(text4.count('a'), len(text4))
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
A Closer Look at Python: Texts as Lists of Words
skipping some basic python parts of this section...
|
sent1
sent2
lexical_diversity(sent1)
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|
List Concatenation
|
['Monty', 'Python'] + ['and', 'the', 'Holy', 'Grail']
|
01_language_processing_and_python.ipynb
|
sandipchatterjee/nltk_book_notes
|
mit
|