Making Predictions
Once a model has been trained on a given set of data, it can be used to make predictions on new input data. In the case of a decision tree regressor, the model has learned which questions to ask about the input data, and responds with a prediction for the target variable. Y... | # Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print(("Parameter 'max_depth' is {} for the optimal model.").format(reg.get_params()['max_depth'])) | boston_housing.ipynb | rodrigomas/boston_housing | mit |
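The "learned questions" idea can be illustrated without scikit-learn; below is a toy one-split regression "stump" in plain NumPy (hypothetical data, not the notebook's fit_model or its grid search):

```python
import numpy as np

def fit_stump(X, y):
    # A depth-1 "tree": pick the single threshold minimizing squared error
    best = None
    for t in np.unique(X):
        left, right = y[X <= t], y[X > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lo, hi = best
    # Predicting = answering the learned question "is x <= t?"
    return lambda x: np.where(x <= t, lo, hi)

X = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([5.0, 5.0, 5.0, 20.0, 20.0, 20.0])
predict = fit_stump(X, y)
assert predict(np.array([2.5]))[0] == 5.0
assert predict(np.array([11.5]))[0] == 20.0
```

A real decision tree simply stacks such questions up to max_depth levels.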
Answer: The answer is 4, which falls within the range we guessed in Question 6. This is reasonable: we are optimizing a single parameter (so the problem is not too difficult), and the graphs show the estimator's exact behavior as the parameter changes.
Question 10 - Predicting Selling Prices
Imagine that you were a real est... | # Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print(("Predicted selling price for Client {}'s home: ${:,.2f}").format(i+1, price))
featu... | boston_housing.ipynb | rodrigomas/boston_housing | mit |
Answer:
Client 1: $403,025.00
The predicted value looks reasonable: it is close to the median, and this client's feature values are close to the median of each feature, so the predictor returns a price close to the median.
Client 2: $237,478.72
This also looks reasonable: the client chose values c... | vs.PredictTrials(features, prices, fit_model, client_data) | boston_housing.ipynb | rodrigomas/boston_housing | mit |
Doc.sents is a generator
It is important to note that doc.sents is a generator. That is, a Doc is not segmented until doc.sents is called. This means that, where you could print the second Doc token with print(doc[1]), you can't call the "second Doc sentence" with print(doc.sents[1]): | print(doc[1])
print(doc.sents[1]) | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
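The same restriction applies to any Python generator, not just doc.sents; a spaCy-free sketch of the failure and the fix:

```python
gen = (i * i for i in range(5))
try:
    second = gen[1]          # generators do not support indexing
except TypeError:
    second = None
assert second is None

# materializing the generator into a list restores indexing:
squares = list(i * i for i in range(5))
assert squares[1] == 1
```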
However, you can build a sentence collection by running doc.sents and saving the result to a list: | doc_sents = [sent for sent in doc.sents]
doc_sents | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
<font color=green>NOTE: list(doc.sents) also works. We show a list comprehension as it allows you to pass in conditionals.</font> | # Now you can access individual sentences:
print(doc_sents[1]) | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
sents are Spans
At first glance it looks like each sent contains text from the original Doc object. In fact they're just Spans with start and end token pointers. | type(doc_sents[1])
print(doc_sents[1].start, doc_sents[1].end) | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
Adding Rules
spaCy's built-in sentencizer relies on the dependency parse and end-of-sentence punctuation to determine segmentation rules. We can add rules of our own, but they have to be added before the creation of the Doc object, as that is where the parsing of segment start tokens happens: | # Parsing the segmentation start tokens happens during the nlp pipeline
doc2 = nlp(u'This is a sentence. This is a sentence. This is a sentence.')
for token in doc2:
print(token.is_sent_start, ' '+token.text) | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
<font color=green>Notice we haven't run doc2.sents, and yet token.is_sent_start was set to True on two tokens in the Doc.</font>
Let's add a semicolon to our existing segmentation rules. That is, whenever the sentencizer encounters a semicolon, the next token should start a new segment. | # SPACY'S DEFAULT BEHAVIOR
doc3 = nlp(u'"Management is doing things right; leadership is doing the right things." -Peter Drucker')
for sent in doc3.sents:
print(sent)
# ADD A NEW RULE TO THE PIPELINE
def set_custom_boundaries(doc):
for token in doc[:-1]:
if token.text == ';':
doc[token.i+1... | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
<font color=green>The new rule has to run before the document is parsed. Here we can either pass the argument before='parser' or first=True.</font> | # Re-run the Doc object creation:
doc4 = nlp(u'"Management is doing things right; leadership is doing the right things." -Peter Drucker')
for sent in doc4.sents:
print(sent)
# And yet the new rule doesn't apply to the older Doc object:
for sent in doc3.sents:
print(sent) | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
Why not change the token directly?
Why not simply set the .is_sent_start value to True on existing tokens? | # Find the token we want to change:
doc3[7]
# Try to change the .is_sent_start attribute:
doc3[7].is_sent_start = True | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
<font color=green>spaCy refuses to change the tag after the document is parsed to prevent inconsistencies in the data.</font>
Changing the Rules
In some cases we want to replace spaCy's default sentencizer with our own set of rules. In this section we'll see how the default sentencizer breaks on periods. We'll then rep... | nlp = spacy.load('en_core_web_sm') # reset to the original
mystring = u"This is a sentence. This is another.\n\nThis is a \nthird sentence."
# SPACY DEFAULT BEHAVIOR:
doc = nlp(mystring)
for sent in doc.sents:
print([token.text for token in sent])
# CHANGING THE RULES
from spacy.pipeline import SentenceSegment... | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
<font color=green>While the function split_on_newlines can be named anything we want, it's important to use the name sbd for the SentenceSegmenter.</font> | doc = nlp(mystring)
for sent in doc.sents:
print([token.text for token in sent]) | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/04-Sentence-Segmentation.ipynb | rishuatgithub/MLPy | apache-2.0 |
Iterator, Generator
iterator
The notion of an iterator is central to this kind of functional approach. An iterator traverses the elements of a collection; this is the case for the range function. | it = iter([0,1,2,3,4,5,6,7,8])
print(it, type(it)) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
It must be distinguished from a list, which is a container. | [0,1,2,3,4,5,6,7,8] | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
To convince ourselves, we compare the size of an iterator with that of a list: the iterator's size does not change whatever the list, while the list's size grows with the number of elements it contains. | import sys
print(sys.getsizeof(iter([0,1,2,3,4,5,6,7,8])))
print(sys.getsizeof(iter([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14])))
print(sys.getsizeof([0,1,2,3,4,5,6,7,8]))
print(sys.getsizeof([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14])) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
An iterator can do only one thing: move on to the next element, and raise a StopIteration exception when it reaches the end. | it = iter([0,1,2,3,4,5,6,7,8])
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it)) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
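The final StopIteration can be caught explicitly, or avoided with next()'s optional default argument; a minimal sketch:

```python
it = iter([0, 1])
assert next(it) == 0
assert next(it) == 1
try:
    next(it)                 # the iterator is exhausted
    exhausted = False
except StopIteration:
    exhausted = True
assert exhausted

# passing a default value avoids the exception entirely:
assert next(iter([]), "done") == "done"
```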
generator
A generator behaves like an iterator: it returns elements one after the other, whether or not those elements live in a container. | def genere_nombre_pair(n):
for i in range(0,n):
yield 2*i
genere_nombre_pair(5) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
Called as follows, a generator does nothing. We can convince ourselves of this by inserting a print statement into the function: | def genere_nombre_pair(n):
for i in range(0,n):
print("je passe par là", i, n)
yield 2*i
genere_nombre_pair(5) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
But if we build a list with all these numbers, we can check that the genere_nombre_pair function is indeed executed: | list(genere_nombre_pair(5))
The next function works the same way: | def genere_nombre_pair(n):
for i in range(0,n):
yield 2*i
it = genere_nombre_pair(5)
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it)) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
The simplest way to iterate over the elements returned by an iterator or a generator is a for loop: | it = genere_nombre_pair(5)
for nombre in it:
print(nombre) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
Generators can be combined: | def genere_nombre_pair(n):
for i in range(0,n):
print("pair", i)
yield 2*i
def genere_multiple_six(n):
for pair in genere_nombre_pair(n):
print("six", pair)
yield 3*pair
print(genere_multiple_six)
for i in genere_multiple_six(3):
print(i) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
benefits
Iterators and generators are functions that traverse collections of elements, or give that illusion.
Their only purpose is to move on to the next element.
They only do so when explicitly asked, with a for loop for example. This is why we speak of lazy ev... | def addition(x, y):
return x + y
addition(1, 3)
additionl = lambda x,y : x+y
additionl(1, 3) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
Exercise 1: application to large databases
Imagine we have a database of 10 billion rows, to which we must apply two treatments: f1 and f2. Two options are possible:
Apply the function f1 to every element, then apply f2 to every element transformed by f1.
Application l... | notes = [dict(nom="A", juge=1, note=8),
dict(nom="A", juge=2, note=9),
dict(nom="A", juge=3, note=7),
dict(nom="A", juge=4, note=4),
dict(nom="A", juge=5, note=5),
dict(nom="B", juge=1, note=7),
dict(nom="B", juge=2, note=4),
dict(nom="B", juge=3, note=7),
... | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
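The second option above can be sketched with chained generators: each element flows through f1 and then f2 on demand, without ever materializing the full intermediate result (f1 and f2 are hypothetical stand-ins):

```python
def f1(x):
    return x + 1

def f2(x):
    return x * 2

data = range(5)                   # stands in for the huge table
step1 = (f1(x) for x in data)     # nothing computed yet
step2 = (f2(x) for x in step1)    # still lazy
assert list(step2) == [2, 4, 6, 8, 10]
```

Only the final list() call triggers the computation, one element at a time.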
Blaze, odo: common interfaces
Blaze provides a common interface, close to that of DataFrames, for many modules such as bcolz... odo provides conversion tools between many formats.
Pandas to Blaze
They are presented in another notebook. Here we reproduce what odo does in a single line. | df.to_csv("mortalite_compresse.csv", index=False)
from pyquickhelper.filehelper import gzip_files
gzip_files("mortalite_compresse.csv.gz", ["mortalite_compresse.csv"], encoding="utf-8") | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
Parallelization with dask
dask
dask parallelizes the usual operations applied to a dataframe.
The following operation is very fast, meaning that dask waits to know what to do before loading the data: | import dask.dataframe as dd
fd = dd.read_csv('mortalite_compresse*.csv.gz', compression='gzip', blocksize=None)
#fd = dd.read_csv('mortalite_compresse.csv', blocksize=None) | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
Extracting the first rows takes very little time, because dask only decompresses the beginning: | fd.head()
fd.npartitions
fd.divisions
s = fd.sample(frac=0.01)
s.head()
life = fd[fd.indicateur=='LIFEXP']
life
life.head() | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | sdpython/actuariat_python | mit |
Here is an example of how to insert an image.
<img src='00_figures/bla.jpg' width=70%>
<div align="center">**Figure 0.A1**: Spherical triangle $STZ$
</div> | from wand.image import Image as WImage
img = WImage(filename='00_figures/bla.eps')
img | chapter_00_preface/00_appendix.ipynb | gigjozsa/HI_analysis_course | gpl-2.0 |
First we create a signal by smoothing some random noise and look at its DFT evaluated at 256 points, as computed by the FFT. Computing the N-point DFT of an N-point signal via the FFT has a complexity of $O(N \log N)$. | # a basic signal
N = 256
np.random.seed(0)
x = np.convolve(np.random.normal(0, 1, N), np.ones(20)/20.0)[:N]
omegas = np.linspace(-np.pi, np.pi, N+1)[:N]
dft = np.fft.fftshift(np.fft.fft(x))
fig = pylab.figure(figsize=(12, 3))
ax = fig.add_subplot(1, 1, 1)
pylab.plot(omegas, abs(dft))
ax.set_xlim(-np.pi, np.pi)
ax.se... | examples/basic example.ipynb | ericmjonas/pychirpz | mit |
Then we explicitly evaluate the discrete-time Fourier transform (DTFT) of the signal $x[n]$. Remember that the DTFT of a discrete-time signal is a continuous function of omega. Naively evaluating the M-point DTFT of an N-point signal is $O(M\cdot N)$. Here we evaluate at M = 16x256 points, and zoom in on $[-\frac{\pi}{... | fig = pylab.figure(figsize=(12, 3))
ax = fig.add_subplot(1, 1, 1)
zoom_factor = 16
omegas_zoom = np.linspace(-np.pi, np.pi, zoom_factor*N+1)[:zoom_factor*N]
dtft = chirpz.pychirpz.dtft(x, omegas_zoom)
ax.plot(omegas_zoom, np.abs(dtft), label='dtft')
ax.scatter(omegas, np.abs(dft), c='r', label='fft dft')
ax.set_xlim(... | examples/basic example.ipynb | ericmjonas/pychirpz | mit |
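The naive $O(M \cdot N)$ DTFT evaluation can be written in a few lines of NumPy (a sketch of the same computation, not pychirpz's implementation); at the DFT bin frequencies it matches the FFT:

```python
import numpy as np

def naive_dtft(x, omegas):
    # X(w) = sum_n x[n] * exp(-1j * w * n), evaluated point by point: O(M*N)
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * w * n)) for w in omegas])

x = np.random.RandomState(0).normal(size=8)
bins = 2 * np.pi * np.arange(8) / 8   # the 8 DFT bin frequencies
assert np.allclose(naive_dtft(x, bins), np.fft.fft(x))
```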
Note from the above plot that there are various sampling artifacts. One way of resolving this is to oversample the DFT via zero-padding. This is what happens when you ask for an FFT evaluated at M points of an N-point signal. The results are below. This is an $O(M \log M)$ operation |
dtft_zoom = chirpz.pychirpz.dtft(x, omegas_zoom)
fft_zoom = np.fft.fftshift(np.fft.fft(x, N*zoom_factor))
fig = pylab.figure(figsize=(12, 3))
ax = fig.add_subplot(1, 1, 1)
ax.plot(omegas_zoom, np.abs(dtft_zoom), label='dtft')
ax.scatter(omegas, np.abs(dft), c='r', s=40, edgecolor='none', label='fft dft')
ax.scatter... | examples/basic example.ipynb | ericmjonas/pychirpz | mit |
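Zero-padding only samples the same DTFT on a denser grid; a quick check that every other bin of the 2x-padded FFT reproduces the original DFT:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
X8 = np.fft.fft(x, 8)                    # zero-padded to 8 points
assert np.allclose(X8[::2], np.fft.fft(x))  # even bins = the 4-point DFT
```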
But what if we just care about a subset of the DTFT? That is, what if we want to evaluate the DTFT on the region $[-\frac{\pi}{4}, \frac{\pi}{4} ]$ and ignore everything else? This is where the chirp-z transform comes in. We can specify that we only wish to evaluate the DTFT starting at a particular angular frequency, ... | # now try chirp-z transform
start = -np.pi / 4.0
omega_delta = omegas_zoom[1] - omegas_zoom[0]
M = N * zoom_factor // 4  # integer number of output points
zoom_cz = chirpz.pychirpz.zoom_fft(x, start, omega_delta , M)
fig = pylab.figure(figsize=(12, 3))
ax = fig.add_subplot(1, 1, 1)
omegas_cz = np.arange(M) * omega_delta + start
ax.plot(omegas_zoom, ... | examples/basic example.ipynb | ericmjonas/pychirpz | mit |
Visualizing the Data
A good first-step for many problems is to visualize the data using one of the
Dimensionality Reduction techniques we saw earlier. We'll start with the
most straightforward one, Principal Component Analysis (PCA).
PCA seeks orthogonal linear combinations of the features which show the greatest
vari... | from sklearn.decomposition import RandomizedPCA
pca = RandomizedPCA(n_components=2)
proj = pca.fit_transform(digits.data)
plt.scatter(proj[:, 0], proj[:, 1], c=digits.target)
plt.colorbar() | notebooks/03.1 Case Study - Supervised Classification of Handwritten Digits.ipynb | samstav/scipy_2015_sklearn_tutorial | cc0-1.0 |
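PCA's projection onto the directions of greatest variance can be computed directly with an SVD; a minimal NumPy sketch (not RandomizedPCA's randomized algorithm):

```python
import numpy as np

def pca_project(X, n_components=2):
    # center the data, then project onto the top right-singular vectors
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

X = np.random.RandomState(0).normal(size=(100, 5))
proj = pca_project(X)
assert proj.shape == (100, 2)
var = proj.var(axis=0)
assert var[0] >= var[1]   # components come out in decreasing variance order
```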
It can be fun to explore the various manifold learning methods available,
and how the output depends on the various parameters used to tune the
projection.
In any case, these visualizations show us that there is hope: even a simple
classifier should be able to adequately identify the members of the various
classes.
Que... | from sklearn.naive_bayes import GaussianNB
from sklearn.cross_validation import train_test_split
# split the data into training and validation sets
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
# train the model
clf = GaussianNB()
clf.fit(X_train, y_train)
# use the model to predict... | notebooks/03.1 Case Study - Supervised Classification of Handwritten Digits.ipynb | samstav/scipy_2015_sklearn_tutorial | cc0-1.0 |
We see that nearly 1500 of the 1800 predictions match the input. But there are other
more sophisticated metrics that can be used to judge the performance of a classifier:
several are available in the sklearn.metrics submodule.
One of the most useful metrics is the classification_report, which combines several
measures... | from sklearn import metrics
print(metrics.classification_report(expected, predicted)) | notebooks/03.1 Case Study - Supervised Classification of Handwritten Digits.ipynb | samstav/scipy_2015_sklearn_tutorial | cc0-1.0 |
As a result, we obtain the points file relating the position of each pixel in the image to its geographic position, ready to use in QGIS | mapX,mapY,pixelX,pixelY,enable
-6.29923900000000003,36.53782000000000352,1451,2331,1
-6.27469199999999994,36.53312999999999988,1408,2206,1
-6.30627699999999969,36.52857900000000058,1513,2324,1
-6.26748200000000022,36.48946300000000065,1609,2033,1
-6.16431299999999993,36.5218190000000007,1173,1700,1
-6.22362199999999977... | jupyter/Georreferenciadas.ipynb | jgcasta/CitiesAtNightPythonMadrid | mit |
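The points file above is plain CSV, so it can be produced with the standard library; a sketch using the column layout shown (two sample rows from the file):

```python
import csv
import io

points = [(-6.299239, 36.537820, 1451, 2331, 1),
          (-6.274692, 36.533130, 1408, 2206, 1)]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["mapX", "mapY", "pixelX", "pixelY", "enable"])
writer.writerows(points)
lines = buf.getvalue().splitlines()
assert lines[0] == "mapX,mapY,pixelX,pixelY,enable"
assert lines[1].startswith("-6.299239,")
```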
We also obtain the same result in a format suitable for GlobalMapper | 1451,2331,-6.299239,36.537820,"punto1",0
1408,2206,-6.274692,36.533130,"punto2",0
1513,2324,-6.306277,36.528579,"punto3",0
1609,2033,-6.267482,36.489463,"punto4",0
1173,1700,-6.164313,36.521819,"punto5",0
1093,2098,-6.223622,36.568295,"punto6",0
297,1941,-6.115819,36.664193,"punto7",0
1201,2837,-6.358377,36.616397,"pun... | jupyter/Georreferenciadas.ipynb | jgcasta/CitiesAtNightPythonMadrid | mit |
The shell script would look like this: | gdal_translate -of GTiff "ISS030-E-209446.jpg" "tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co COMPRESS=NONE "/tmp/ISS030-E-209446.jpg" "geotiff/ISS030-E-209446.tif"
gdal_translate -of GTiff -gcp 1590 1323 -6.143757 36.429500 "ISS030-E-209446.jpg" "tmp/ISS030-E-209446.jpg"
gdalwarp -r near -tps -co ... | jupyter/Georreferenciadas.ipynb | jgcasta/CitiesAtNightPythonMadrid | mit |
Importing the quite heavy DataFrame with the voting fields and the results. We drop a useless column and create a Name field, which will contain both the first and last name of a person, so we can then create a model for each unique deputy in the parliament. | path = '../datas/nlp_results/'
voting_df = pd.read_csv(path+'voting_with_topics.csv')
print('Entries in the DataFrame',voting_df.shape)
#Putting numerical values into the columns that should have numerical values
num_cols = ['Unnamed: 0','BusinessNumber','BillTitle', 'BusinessTitle','FirstName','LastName', 'Business... | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
Visualising all the different parties along with their names. Note that the same party can appear under several group codes, so we will use the ParlGroupName field rather than ParlGroupCode. | voting[['ParlGroupCode','ParlGroupName']].drop_duplicates() | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
We also drop duplicate votes, keeping only the last one. First, we plot an example of the votes of one deputy (Guy Parmelin). | gp = voting.loc[voting.Name=='Guy Parmelin']
gpt = gp.loc[gp.text == "Arrêté fédéral concernant la contribution de la Suisse en faveur de la Bulgarie et de la Roumanie au titre de la réduction des disparités économiques et sociales dans l'Union européenne élargie Réduction des disparités économiques et sociales dans l'... | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
Drop duplicates here | voting_unique = voting_df.drop_duplicates(['text','Name'], keep = 'last')
voting_unique.head() | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
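drop_duplicates(['text','Name'], keep='last') keeps only the most recent vote per (text, person) pair; the same semantics in plain Python:

```python
rows = [
    {"Name": "A", "text": "t1", "Decision": 1},
    {"Name": "A", "text": "t1", "Decision": 2},   # later duplicate wins with keep='last'
    {"Name": "B", "text": "t1", "Decision": 1},
]
latest = {}
for row in rows:
    latest[(row["text"], row["Name"])] = row      # later assignments overwrite earlier ones
assert len(latest) == 2
assert latest[("t1", "A")]["Decision"] == 2
```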
Only one element remains below for a given person and entry. We will work with the voting_unique df. | gp = voting_unique.loc[voting.Name=='Guy Parmelin']
gpt = gp.loc[gp.text == "Arrêté fédéral concernant la contribution de la Suisse en faveur de la Bulgarie et de la Roumanie au titre de la réduction des disparités économiques et sociales dans l'Union européenne élargie Réduction des disparités économiques et sociales ... | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
Merging our DataFrame with the one containing the informations on each text |
# Additional infos : whether the text was accepted and most proeminent topic
text_votes = pd.read_csv('topic_accepted.csv')
# Merging both DataFrames on the text field
voting_unique = pd.merge(voting_unique, text_votes, on=['text', 'text'])
def format_party_voting_profile(voting_unique):
# Setting the desir... | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
Formatting the voting_party DF | #voting_party = voting_party.unstack()
#voting_party.columns = voting_party.columns.swaplevel(1, 2)
#voting_party.sortlevel(0,axis=1,inplace=True)
voting_party = format_party_voting_profile(voting_unique)
voting_party.head() | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
Some statistics about the whole DF.
Retrieve all topics and parties | parties = voting_party.index.get_level_values('ParlGroupName').unique()
topics = voting_party.index.get_level_values('Topic').unique()
topics | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
Extremal percentages (Yes/No) | for topic in topics:
topic_voting_party = voting_party.xs(topic, level='Topic', drop_level=True)
max_yes = topic_voting_party.Yes.max().round(2); idx_max_yes = topic_voting_party.Yes.idxmax();
max_no = topic_voting_party.No.max().round(2); idx_max_no = topic_voting_party.No.idxmax()
min_yes = top... | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
Extracting a party
The xs call below selects one party and extracts its sub-DataFrame from the whole DataFrame | party_vote = voting_party.xs('Groupe des Paysans, Artisans et Bourgeois', level='ParlGroupName', drop_level=True)
party_vote.Yes | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
2.0 Dealing with a single individual at a time.
2.1 Formatting the DF | def format_individual_voting_profile(voting_unique):
# Setting Name, Party and Topic as indices
voting_deputee = voting_unique.set_index(['Name','ParlGroupName','Topic'])[['Decision']]
# Functions to count number of yes/no/absentions, same principle as before
count_yes = lambda x: np.sum(x==1)/(le... | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
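The counting lambdas above compute the fraction of votes taking each decision value; a sketch with an assumed coding (1 = yes, 2 = no — the actual codes come from the dataset):

```python
import numpy as np

decisions = np.array([1, 1, 2, 3, 1])   # assumed coding: 1 = yes, 2 = no, 3 = abstention
count_yes = lambda x: np.sum(x == 1) / len(x)
count_no = lambda x: np.sum(x == 2) / len(x)
assert count_yes(decisions) == 0.6
assert count_no(decisions) == 0.2
```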
2.2 Retrieving the stats w.r.t to the party for each individual | def compute_party_distance(voting_deputee,voting_party):
# Retrieve the unique parties and topics
parties = voting_party.index.get_level_values('ParlGroupName').unique()
topics = voting_party.index.get_level_values('Topic').unique()
# Extract the features of each party in a more convenient and... | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
2.3 Plotting the stats w.r.t to the party for each individual | def plot_df(df,item,topic):
#Setting the size of the plots
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 25
fig_size[1] = 6
plt.rcParams["figure.figsize"] = fig_size
df_item = df.sort_values(item,ascending=False)
y = np.array(df_item[item])
plt.bar(range(df_item.shap... | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
2.4 Computing the stats w.r.t to the party for each individual while aggregating all topics | voting_deputee_party.head()
def format_individual_global_voting_profile(voting_unique):
# Setting the desired multiIndex
voting_indiv = voting_unique.set_index(['ParlGroupName','Name'])[['Decision']]
#2. Counting yes/no/abstention
# Splitting the df by each party and topic, and then agg... | 04-VotingProfile/PartyAnalysis.ipynb | thom056/ada-parliament-ML | gpl-2.0 |
Since this expression implements a filtering mechanism, there is no else clause.
An if-else expression can be used in the value position, though: | l = [1, 0, -2, 3, -1, -5, 0]
signum_l = [int(n / abs(n)) if n != 0 else 0 for n in l]
signum_l | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
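For contrast, the plain filtering form drops elements instead of mapping them, and no else is allowed:

```python
l = [1, 0, -2, 3, -1, -5, 0]
positives = [n for n in l if n > 0]   # filter only, no else clause
assert positives == [1, 3]
```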
Generator expressions
Generator expressions are a generalization of list comprehension. They were introduced in PEP 289 in 2002.
Check out the memory consumption of these cells. | %%time
N = 8
s = sum([i*2 for i in range(int(10**N))])
print(s)
%%time
s = sum(i*2 for i in range(int(10**N)))
print(s) | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
calling next() on an exhausted generator raises a StopIteration exception | # next(even_numbers) # raises StopIteration
if the expression in the generator is a key-value pair separated by a colon, it instantiates a dictionary: | word_list = ["apple", "plum", "pear"]
word_length = {word: len(word) for word in word_list}
type(word_length), len(word_length), word_length
word_list = ["apple", "plum", "pear", "avocado"]
first_letters = {word[0]: word for word in word_list}
first_letters | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
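Note that duplicate keys are silently overwritten in a dict comprehension: only the last word per first letter survives:

```python
word_list = ["apple", "plum", "pear", "avocado"]
first_letters = {word[0]: word for word in word_list}
# "avocado" overwrites "apple" under key "a"; "pear" overwrites "plum" under "p"
assert first_letters == {"a": "avocado", "p": "pear"}
```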
Exercises
Generator expressions can be particularly useful for formatted output. We will demonstrate this through a few examples. | numbers = [1, -2, 3, 1]
# print(", ".join(numbers)) # raises TypeError
print(", ".join(str(number) for number in numbers))
shopping_list = ["apple", "plum", "pear"] | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
~~~
The shopping list is:
item 1: apple
item 2: plum
item 3: pear
~~~ | shopping_list = ["apple", "plum", "pear"]
shopping_list = ["apple"]
print("The shopping list is:\n{0}".format(
"\n".join(
"item {0}: {1}".format(i+1, item)
for i, item in enumerate(shopping_list)
)
))
shopping_list = ["apple", "plum", "pear"]
for i, item in enumerate(shopping_list):
print... | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
Q. Print the following shopping list with quantities.
For example:
~~~
item 1: apple, quantity: 2
item 2: pear, quantity: 1
~~~ | shopping_list = {
"apple": 2,
"pear": 1,
"plum": 5,
}
print("\n".join(
"item {0}: {1}, quantity: {2}".format(i+1, item, quantity)
for i, (item, quantity) in enumerate(shopping_list.items()
))) | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
Q. Print the same format in alphabetical order,
then in decreasing order by quantity. | shopping_list = {
"apple": 2,
"pear": 1,
"plum": 5,
}
print("\n".join(
"item {0}: {1}, quantity: {2}".format(i+1, item, quantity)
for i, (item, quantity) in
enumerate(
sorted(shopping_list.items(),
key=lambda x: x[1], reverse=True)
))) | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
else
try-except blocks may have an else clause that only runs if no exception was raised | try:
age = int(input())
except ValueError as e:
print("Exception", e)
else:
print("No exception was raised")
finally:
print("this always runs") | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
raise keyword
raise throws/raises an exception
an empty raise inside an except block re-raises the exception currently being handled: | try:
int("not a number")
except Exception:
# raise
pass | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
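A bare raise re-raises the exception currently being handled, preserving its traceback; a minimal sketch:

```python
def parse_int(s):
    try:
        return int(s)
    except ValueError:
        # log or clean up here, then propagate the same exception
        raise

caught = False
try:
    parse_int("not a number")
except ValueError:
    caught = True
assert caught
```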
Using exception for trial-and-error is considered Pythonic: | try:
int(input())
except ValueError:
print("not an int")
else:
print("looks like an int") | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
__exit__ takes 3 extra arguments that describe the exception: exc_type, exc_value, traceback | class DummyContextManager:
def __init__(self, value):
self.value = value
def __enter__(self):
print("Dummy resource acquired")
return self.value
def __exit__(self, exc_type, exc_value, traceback):
if exc_type is not None:
print("{0} with value {1} ca... | course_material/05_Generator_expressions_list_comprehension/05_Generator_expressions_list_comprehension_lecture.ipynb | bmeaut/python_nlp_2017_fall | mit |
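Returning a truthy value from __exit__ suppresses the exception; a minimal sketch:

```python
class Suppress:
    def __init__(self, exc_type):
        self.exc_type = exc_type
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        # True -> swallow the exception; False/None -> let it propagate
        return exc_type is not None and issubclass(exc_type, self.exc_type)

with Suppress(ValueError):
    raise ValueError("swallowed")
reached = True
assert reached
```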
Small Body Database (SBDB) | eros = Orbit.from_sbdb("Eros")
eros.plot(label="Eros"); | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
You can also search by IAU number or SPK-ID (in that case there is a faster neows.orbit_from_spk_id() function, though): | ganymed = Orbit.from_sbdb("1036") # Ganymed IAU number
amor = Orbit.from_sbdb("2001221") # Amor SPK-ID
eros = Orbit.from_sbdb("2000433") # Eros SPK-ID
frame = StaticOrbitPlotter(plane=Planes.EARTH_ECLIPTIC)
frame.plot(ganymed, label="Ganymed")
frame.plot(amor, label="Amor")
frame.plot(eros, label="Eros"); | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
You can use the wildcards from that browser: * and ?.
<div class="alert alert-info">Keep in mind that `from_sbdb()` can only return one Orbit, so if several objects are found with that name, it will raise an error listing the different bodies.</div> | try:
Orbit.from_sbdb("*alley")
except ValueError as err:
print(err) | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
<div class="alert alert-info">Note that epoch is provided by the service itself, so if you need orbit on another epoch, you have to propagate it:</div> | eros.epoch.iso
epoch = time.Time(2458000.0, scale="tdb", format="jd")
eros_november = eros.propagate(epoch)
eros_november.epoch.iso | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
DASTCOM5 module
This module can also be used to get NEO orbits, in the same way as neows, but it has some advantages (and some disadvantages).
It relies on DASTCOM5, a NASA/JPL-maintained asteroid and comet database. This database has to be downloaded at least once in order to use this module. According to ... | from poliastro.neos import dastcom5
atira = dastcom5.orbit_from_name("atira")[0] # NEO
wikipedia = dastcom5.orbit_from_name("wikipedia")[0] # Asteroid, but not NEO.
frame = StaticOrbitPlotter()
frame.plot(atira, label="Atira (NEO)")
frame.plot(wikipedia, label="Wikipedia (asteroid)"); | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
Keep in mind that this function returns a list of orbits matching your string. This is made on purpose given that there are comets which have several records in the database (one for each orbit determination in history), which allows plots like this one: | halleys = dastcom5.orbit_from_name("1P")
frame = StaticOrbitPlotter()
frame.plot(halleys[0], label="Halley")
frame.plot(halleys[5], label="Halley")
frame.plot(halleys[10], label="Halley")
frame.plot(halleys[20], label="Halley")
frame.plot(halleys[-1], label="Halley"); | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
<div class="alert alert-info">Asteroid and comet parameters are not exactly the same (although they are very close)</div>
With these ndarrays you can classify asteroids and comets, sort them, get all their parameters, and do whatever else comes to mind.
For example, NEOs can be grouped in several ways. One of the NEOs gr... | aphelion_condition = 2 * ast_db["A"] - ast_db["QR"] < 0.983
axis_condition = ast_db["A"] < 1.3
atiras = ast_db[aphelion_condition & axis_condition] | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
Which is consistent with the stats published by CNEOS
Now we're gonna plot all of their orbits, with corresponding labels, just because we love plots :)
We only need to get the 16 orbits from these 16 ndarrays.
There are two ways:
Gather all their orbital elements manually and use the Orbit.from_classical() function.
Use the dastcom5.orbit_from_record() function, as in the loop below. | from poliastro.bodies import Earth
frame = StaticOrbitPlotter()
frame.plot_body_orbit(Earth, time.Time.now().tdb)
for record in atiras["NO"]:
ss = dastcom5.orbit_from_record(record)
if ss.ecc < 1:
frame.plot(ss, color="#666666")
else:
print(f"Skipping hyperbolic orbit: {record}") | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
If we needed also the names of each asteroid, we could do: | frame = StaticOrbitPlotter()
frame.plot_body_orbit(Earth, time.Time.now().tdb)
for i in range(len(atiras)):
record = atiras["NO"][i]
label = atiras["ASTNAM"][i].decode().strip() # DASTCOM5 strings are binary
ss = dastcom5.orbit_from_record(record)
if ss.ecc < 1:
frame.plot(ss, label=label... | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
Also, in this function, DASTCOM5 data (specially strings) is ready to use (decoded and improved strings, etc): | db[
db.NAME == "Halley"
] # As you can see, Halley is the name of an asteroid too, did you know that? | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
Pandas offers many functionalities, and can also be used in the same way as the ast_db and comet_db functions: | aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983
axis_condition = db["A"] < 1.3
atiras = db[aphelion_condition & axis_condition]
len(atiras) | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
So, rewriting our condition: | axis_condition = (db["A"] < 1.3) & (db["A"] > 0)
atiras = db[aphelion_condition & axis_condition]
len(atiras) | docs/source/examples/Using NEOS package.ipynb | Juanlu001/poliastro | mit |
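The same boolean-mask filtering works on any DataFrame. A minimal self-contained sketch with made-up values (the column names A and QR mirror the database's semi-major axis and perihelion distance in au; the numbers are illustrative, not real asteroid data):

```python
import pandas as pd

# Toy table with the two columns used above (values are illustrative only).
db = pd.DataFrame({
    "NAME": ["Atira", "MainBelt", "BadRecord"],
    "A":    [0.74,    2.77,       -1.0],   # semi-major axis (au); -1 marks a bad record
    "QR":   [0.50,    2.08,        0.1],   # perihelion distance (au)
})

aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983  # aphelion Q = 2A - q
axis_condition = (db["A"] < 1.3) & (db["A"] > 0)       # also excludes bad records
atiras = db[aphelion_condition & axis_condition]
print(atiras["NAME"].tolist())  # → ['Atira']
```

Each condition is a boolean Series; `&` combines them element-wise, and indexing with the mask keeps only the rows where both are True.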
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Import useful functions from previous notebook
As in Week 2, we conv... | import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along w... | ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb | zomansud/coursera | mit |
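The cell above is truncated in this extract; a minimal self-contained version of the same idea follows, written against a pandas DataFrame instead of an SFrame (the column handling is an assumption, not the course's exact code):

```python
import numpy as np
import pandas as pd

def get_numpy_data(df, features, output):
    """Return (feature_matrix, output_array) with a leading intercept column."""
    df = df.copy()
    df["constant"] = 1                      # intercept term
    cols = ["constant"] + features          # constant first, then the features
    feature_matrix = df[cols].to_numpy(dtype=float)
    output_array = df[output].to_numpy(dtype=float)
    return feature_matrix, output_array

def predict_output(feature_matrix, weights):
    # Predictions are the dot product of the feature matrix and the weights.
    return np.dot(feature_matrix, weights)

sales = pd.DataFrame({"sqft_living": [1180.0, 2570.0],
                      "price": [221900.0, 538000.0]})
X, y = get_numpy_data(sales, ["sqft_living"], "price")
print(X.shape)  # → (2, 2): intercept column plus one feature
```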
Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of de... | def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):
weights = np.array(initial_weights) # make sure it's a numpy array
#while not reached maximum number of iterations:
for i in xrange(max_iterations):
# compute... | ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb | zomansud/coursera | mit |
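The body of the loop is elided above; one plausible completion of the update it describes follows (a sketch, not necessarily the assignment's reference solution). The intercept, taken here to be feature 0, is conventionally left unregularized:

```python
import numpy as np

def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
    # Cost = sum (prediction - output)^2 + l2_penalty * ||w||^2 (excluding intercept),
    # so d/dw_j = 2 * errors . feature_j (+ 2 * l2_penalty * w_j when regularized).
    derivative = 2 * np.dot(errors, feature)
    if not feature_is_constant:
        derivative += 2 * l2_penalty * weight
    return derivative

def ridge_regression_gradient_descent(feature_matrix, output, initial_weights,
                                      step_size, l2_penalty, max_iterations=100):
    weights = np.array(initial_weights, dtype=float)
    for _ in range(max_iterations):
        # Full-batch gradient: errors are computed once per iteration and
        # reused for every coordinate of the gradient.
        errors = feature_matrix.dot(weights) - output
        for j in range(len(weights)):
            d = feature_derivative_ridge(errors, feature_matrix[:, j],
                                         weights[j], l2_penalty, j == 0)
            weights[j] -= step_size * d
    return weights
```

With l2_penalty = 0 this reduces to plain least-squares gradient descent; a large penalty shrinks every non-intercept weight toward zero.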
First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_0_penalty
we'll use them later. | l2_penalty = 0.0
simple_weights_0_penalty = ridge_regression_gradient_descent(
simple_feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
round(simple_weights_0_penalty[1], 1) | ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb | zomansud/coursera | mit |
Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_high_penalty
we'll use them later. | l2_penalty = 1e11
simple_weights_high_penalty = ridge_regression_gradient_descent(
simple_feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
round(simple_weights_high_penalty[1], 1) | ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb | zomansud/coursera | mit |
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best? | def compute_rss(feature, output, weights):
error = output - np.dot(feature, weights)
rss = np.dot(error, np.transpose(error))
return rss
print "initial weight test rss = " + str(compute_rss(simple_test_feature_matrix, test_output, initial_weights))
print "no regularization test rss = " + str(com... | ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb | zomansud/coursera | mit |
First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_0_penalty | l2_penalty = 0.0
multi_weights_0_penalty = ridge_regression_gradient_descent(
feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
round(multi_weights_0_penalty[1], 1) | ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb | zomansud/coursera | mit |
Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_high_penalty | l2_penalty = 1e11
multi_weights_high_penalty = ridge_regression_gradient_descent(
feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
multi_weights_high_penalty | ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb | zomansud/coursera | mit |
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best? | print "initial weight test rss = " + str(compute_rss(test_feature_matrix, test_output, initial_weights))
print "no regularization test rss = " + str(compute_rss(test_feature_matrix, test_output, multi_weights_0_penalty))
print "high regularization test rss = " + str(compute_rss(test_feature_matrix, test_output, multi... | ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb | zomansud/coursera | mit |
Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house? | predict_output(test_feature_matrix[0], multi_weights_0_penalty)
predict_output(test_feature_matrix[0], multi_weights_high_penalty)
print abs(test_output[0] - predict_output(test_feature_matrix[0], multi_weights_0_penalty))
print abs(test_output[0] - predict_output(test_feature_matrix[0], multi_weights_high_penalty)) | ml-regression/week-4/week-4-ridge-regression-assignment-2-blank.ipynb | zomansud/coursera | mit |
concordance is a view that shows every occurrence of a word alongside some context | text1.concordance("monstrous")
text2.concordance("affection")
text3.concordance("lived") | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
similar shows other words that appear in a similar context to the entered word | text1.similar("monstrous")
text2.similar("monstrous") | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
text 1 (Melville) uses monstrous very differently from text 2 (Austen)
In text 2, monstrous has positive connotations and sometimes functions as an intensifier, like very
common_contexts shows contexts that are shared by two or more words | text2.common_contexts(["monstrous", "very"]) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
trying out other words... | text2.similar("affection")
text2.common_contexts(["affection", "regard"]) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Lexical Dispersion Plot
Determining the location of words in text (how many words from beginning does this word appear?) -- using dispersion_plot | plt.figure(figsize=(18,10))
text4.dispersion_plot(["citizens", "democracy", "freedom", "duties", "America", "liberty", "constitution"]) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Generating some random text in the style of text3 -- using generate()
not yet supported in NLTK 3.0 | # (not available in NLTK 3.0)
# text3.generate() | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
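Text.generate() was dropped in NLTK 3.0. As a rough stand-in for the idea, a simple bigram chain over a token list can produce similar random text; this is a hypothetical sketch, not NLTK's implementation:

```python
import random
from collections import defaultdict

def generate_text(words, length=20, seed=0):
    # Bigram table: each word maps to the list of words observed after it.
    table = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        table[w1].append(w2)
    rng = random.Random(seed)
    out = [rng.choice(words)]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        # Fall back to a random word when the last word has no observed follower.
        out.append(rng.choice(followers) if followers else rng.choice(words))
    return " ".join(out)

tokens = "the cat sat on the mat and the cat ran".split()
print(generate_text(tokens, length=12))
```

In practice you would pass the tokens of an NLTK text object (e.g. `list(text3)`) instead of the toy list above.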
1.4 Counting Vocabulary
Count the number of tokens using len | len(text3) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
View/count vocabulary using set(text_obj) | len(set(text3))
# first 50
sorted(set(text3))[:50] | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Calculating lexical richness of the text | len(set(text3)) / len(text3) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Count how often a word occurs in the text | text3.count("smote") | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Compute what percentage of the text is taken up by a specific word | 100 * text4.count('a') / len(text4)
text5.count('lol')
100 * text5.count('lol') / len(text5) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
Define some simple functions to calculate these values | def lexical_diversity(text):
return len(set(text)) / len(text)
def percentage(count, total):
return 100 * count / total
lexical_diversity(text3), lexical_diversity(text5)
percentage(text4.count('a'), len(text4)) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
A Closer Look at Python: Texts as Lists of Words
skipping some basic python parts of this section... | sent1
sent2
lexical_diversity(sent1) | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |
List Concatenation | ['Monty', 'Python'] + ['and', 'the', 'Holy', 'Grail'] | 01_language_processing_and_python.ipynb | sandipchatterjee/nltk_book_notes | mit |