(b)
The real, reactive, and apparent power consumed by Load 1 and by Load 2 respectively are:
|
S1o = V*conj(I1)
S1o
S2o = V*conj(I2)
S2o
|
Chapman/Ch1-Problem_1-19.ipynb
|
dietmarw/EK5312_ElectricalMachines
|
unlicense
|
Let's pretty-print that:
|
print('''
S1o = {:>6.1f} VA
P1o = {:>6.1f} W
Q1o = {:>6.1f} var
----------------
S2o = {:>6.1f} VA
P2o = {:>6.1f} W
Q2o = {:>6.1f} var
================'''.format(abs(S1o), S1o.real, S1o.imag,
abs(S2o), S2o.real, S2o.imag))
|
As expected, the real and reactive power supplied by the source are equal to the sum of the
real and reactive powers consumed by the loads.
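This balance can be checked numerically. A minimal sketch with hypothetical values (the `V`, `Z1`, `Z2` below are illustrative, not the problem's actual values, which are not shown in this excerpt):

```python
import numpy as np

# Hypothetical source and load values.
V = 120 + 0j              # source voltage phasor [V]
Z1, Z2 = 8 + 6j, 5 - 2j   # load impedances [ohm]

I1, I2 = V / Z1, V / Z2                    # load currents
S1, S2 = V * np.conj(I1), V * np.conj(I2)  # complex powers consumed

# The source supplies the total current, so its complex power
# must equal the sum of the loads' complex powers.
S_source = V * np.conj(I1 + I2)
print(np.isclose(S_source, S1 + S2))  # True
```

The check holds by linearity: conjugation distributes over the sum of the currents.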
(c)
With the switch closed, all three loads are connected to the source. The current in Loads 1 and 2 is the same as before. The current $\vec{I}_3$ in Load 3 is:
|
I3 = V/Z3
I3_angle = arctan(I3.imag/I3.real)
print('I3 = {:.1f} A ∠{:.1f}°'.format(abs(I3), I3_angle/pi*180))
|
Therefore the total current from the source is $\vec{I} = \vec{I}_1 + \vec{I}_2 + \vec{I}_3$:
|
I = I1 + I2 + I3
I_angle = arctan(I.imag/I.real)
print('I = {:.1f} A ∠{:.1f}°'.format(abs(I), I_angle/pi*180))
print('=================')
|
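A side note on the angle computation above: `arctan(imag/real)` only recovers angles in quadrants I and IV, which happens to be fine for these load currents; numpy's `arctan2` is the quadrant-safe variant. A small illustration (not from the original notebook):

```python
import numpy as np

z = -3 - 4j  # a phasor in the third quadrant
naive = np.degrees(np.arctan(z.imag / z.real))  # quadrant information is lost
safe = np.degrees(np.arctan2(z.imag, z.real))   # quadrant is preserved

print(round(naive, 2))  # 53.13
print(round(safe, 2))   # -126.87
```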
The power factor is lagging (because the current lags behind the voltage).
The real, reactive, and apparent power supplied by the source are
$$S = \vec{V}\,\vec{I}^{\,*} \qquad P = VI\cos\theta = \operatorname{Re}(S) \qquad Q = VI\sin\theta = \operatorname{Im}(S)$$
|
Sc = V*conj(I) # I use index "c" for closed switch
Sc
|
Let's pretty-print that:
|
print('''
Sc = {:.1f} VA
Pc = {:.1f} W
Qc = {:.1f} var
==============='''.format(abs(Sc), Sc.real, Sc.imag))
|
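The power factor follows directly from the complex power; a small helper function (hypothetical, not part of the notebook) makes the relationship explicit:

```python
def power_factor(S):
    """Power factor and lead/lag sense from a complex power S = P + jQ."""
    pf = S.real / abs(S)
    sense = 'lagging' if S.imag > 0 else 'leading'
    return pf, sense

# Inductive example: Q > 0, so the current lags the voltage.
pf, sense = power_factor(1000 + 500j)
print('PF = {:.3f} ({})'.format(pf, sense))  # PF = 0.894 (lagging)
```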
(d)
The real, reactive, and apparent power consumed by Load 1, Load 2 and by Load 3 respectively are:
|
S1c = V*conj(I1)
S1c
S2c = V*conj(I2)
S2c
S3c = V*conj(I3)
S3c
print('''
S1c = {:>7.1f} VA
P1c = {:>7.1f} W
Q1c = {:>7.1f} var
-----------------
S2c = {:>7.1f} VA
P2c = {:>7.1f} W
Q2c = {:>7.1f} var
-----------------
S3c = {:>7.1f} VA
P3c = {:>7.1f} W
Q3c = {:>7.1f} var
================='''.format(abs(S1c), S1c.real, S1c.imag,
abs(S2c), S2c.real, S2c.imag,
abs(S3c), S3c.real, S3c.imag))
|
Building Path
If any argument to join begins with os.sep, all of the previous arguments are discarded and the new one becomes the beginning of the return value.
|
import os.path

PATHS = [
    ('one', 'two', 'three'),
    ('/', 'one', 'two', 'three'),
    ('/one', '/two', '/three'),
]
for parts in PATHS:
    print('{} : {!r}'.format(parts, os.path.join(*parts)))

for user in ['', 'gaufung', 'nosuchuser']:
    lookup = '~' + user
    print('{!r:>15} : {!r}'.format(
        lookup, os.path.expanduser(lookup)))

import os
os.environ['MYVAR'] = 'VALUE'
print(os.path.expandvars('/path/to/$MYVAR'))
|
FileSystem/Path.ipynb
|
gaufung/PythonStandardLibrary
|
mit
|
Normal Path
|
import os.path

PATHS = [
    'one//two//three',
    'one/./two/./three',
    'one/../alt/two/three',
]
for path in PATHS:
    print('{!r:>22} : {!r}'.format(path, os.path.normpath(path)))

import os
os.chdir('/usr')
PATHS = [
    '.',
    '..',
    './one/two/three',
    '../one/two/three',
]
for path in PATHS:
    print('{!r:>21} : {!r}'.format(path, os.path.abspath(path)))
|
File Time
|
import os.path
import time

fname = '/Users/gaufung/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb'
print('File :', '~/WorkSpace/PythonStandardLibrary/FileSystem/Path.ipynb')
print('Access time :', time.ctime(os.path.getatime(fname)))
print('Modified time:', time.ctime(os.path.getmtime(fname)))
print('Change time :', time.ctime(os.path.getctime(fname)))
print('Size :', os.path.getsize(fname))
|
You can write Python files and call their functions inside your notebooks to keep the notebooks simple.
|
import helpers
print('You can use and tweak the python code in the helpers.py file (example: "{}")'.format(helpers.foobar()))
|
son-scikit/src/son_scikit/resources/tutorials/Basic_anomalies_detection.ipynb
|
cgeoffroy/son-analyze
|
apache-2.0
|
Fetching metrics
To keep this example self-contained, the data is read from a static file. The metrics are held in an all_dataframes variable.
|
import reprlib
import json
import arrow
import requests
from son_analyze.core.prometheus import PrometheusData
from son_scikit.hl_prometheus import build_sonata_df_by_id

all_dataframes = None
with open('empty_vnf1_sonemu_rx_count_packets_180.json') as raw:
    x = PrometheusData(raw.read())
    all_dataframes = build_sonata_df_by_id(x)
|
Each VNF has its own dataframe, where each metric has a corresponding column. Here, the empty_vnf1 VNF has a sonemu_rx_count_packets column for the monitored received packets on the network.
|
print('The dictionary of all dataframes by VNF names: {}'.format(reprlib.repr(all_dataframes)))
print(all_dataframes['empty_vnf1'].head())
|
Basic plotting
From there we use the df and ddf variables as shortcuts, before plotting them.
* df is the main dataframe we are going to work with
* ddf contains the discrete difference of df
|
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (20.0, 5.0)
df = all_dataframes['empty_vnf1']
ddf = df.diff().dropna()
df.plot();
ddf.plot();
|
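ddf above is built with DataFrame.diff, which takes row-to-row discrete differences (the first row becomes NaN, hence the dropna). A tiny standalone illustration, assuming only pandas:

```python
import pandas as pd

df = pd.DataFrame({'rx_packets': [10, 13, 19, 20]})
ddf = df.diff().dropna()  # drop the NaN produced for the first row

print(ddf['rx_packets'].tolist())  # [3.0, 6.0, 1.0]
```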
Injecting errors in the metrics
For this tutorial, we inject two errors in the metrics. This is done inside the error_ddf dataframe.
|
error_ddf = ddf.copy()
error_ddf.sonemu_rx_count_packets[1111] *= 2.6
error_ddf.sonemu_rx_count_packets[3333] *= 2.7
|
Detecting anomalies
We use the pyculiarity package to detect anomalies in a dataframe using the detect_ts function.
|
from pyculiarity import detect_ts
import pandas as pd
import time

def f(x):
    dt = x.to_datetime()
    return time.mktime(dt.timetuple())

target = error_ddf
u = pd.DataFrame({'one': list(target.index.map(f)), 'two': target.sonemu_rx_count_packets})
results = detect_ts(u, max_anoms=0.004, alpha=0.01, direction='both')  # , threshold='med_max'
|
The resulting plot clearly shows the two injected anomalies.
|
# make a nice plot
matplotlib.rcParams['figure.figsize'] = (20.0, 10.0)
f, ax = plt.subplots(2, 1, sharex=True)
ax[0].plot(target.index, target.sonemu_rx_count_packets, 'b')
ax[0].plot(results['anoms'].index, results['anoms']['anoms'], 'ro')
ax[0].set_title('Detected Anomalies')
ax[1].set_xlabel('Time Stamp')
ax[0].set_ylabel('Count')
ax[1].plot(results['anoms'].index, results['anoms']['anoms'], 'b')
ax[1].set_ylabel('Anomaly Magnitude')
plt.show()
|
Product recommendation, content-based
Below we will walk step by step through building a content-based recommendation system in Python.
http://www.p.valienteverde.com/sistemas-de-recomendacion-basados-en-el-contenido-content-based/
Content-based (ContendBased)
Works from the product description, i.e. from relations between tags:
Vectorize the content, that is, generate the tags
Weight the tags
Build the search engine for similar products
<img src='imagenes/contect_based.png'>
|
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.neighbors import NearestNeighbors
import ContendBased as CB
datos = pd.read_csv("./BD/people_wiki.tar.gz", compression='gzip')
datos.head(3)
entrada_elton_john=datos.query('name == "Elton John"')
entrada_elton_john
|
ElCuadernillo/20160725_SistemasDeRecomendacionContentBased/Content-Based paso a paso.ipynb
|
pvalienteverde/ElCuadernillo
|
mit
|
Normalization
|
tf_transformer = TfidfTransformer(use_idf=False).fit(bag_of_words)
matrix_elton_john_tf=tf_transformer.transform(bag_of_words[entrada_elton_john.index.values])
pesos_tf=CB.mostrar_pesos_tags(matrix_elton_john_tf,vectorizacion,descripcion='tf')
pesos_tf
|
Prediction
Once we have extracted the products' tags, we search by similarity for the most alike products.
|
vecinos = NearestNeighbors(n_neighbors=5,metric='cosine',algorithm='brute')
datos_por_tags = tfidf_vectorizer.transform(datos.text)
vecinos.fit(datos_por_tags)
|
With our recommendation engine built, we can use it as an article search engine. As you will see, at least 4 of the 5 actors returned have won an Oscar!
|
buscador = tfidf_vectorizer.transform(['Award Actor Oscar'])
distancia,indices = vecinos.kneighbors(buscador)
datos.iloc[indices[0],:]
|
Let's see which celebrities it associates with Al Pacino...
|
al_pacino_vectorizado = tfidf_vectorizer.transform(datos.query('name == "Al Pacino"').text)
distancia,indices = vecinos.kneighbors(al_pacino_vectorizado)
datos.iloc[indices[0],:]
|
Benchmarking
We can benchmark our learner's efficiency by running a couple of experiments on the Iris dataset.
Our classifier will be L1-regularized logistic regression.
|
%%time
ss = sklearn.model_selection.ShuffleSplit(n_splits=2, test_size=0.2)
for train, test in ss.split(np.arange(len(X))):
    # Make an SGD learner, nothing fancy here
    classifier = sklearn.linear_model.SGDClassifier(verbose=0,
                                                    loss='log',
                                                    penalty='l1',
                                                    n_iter=1)
    # Again, build a streamer object
    batches = pescador.Streamer(batch_sampler, X[train], Y[train])
    # And train the model on the stream.
    n_steps = 0
    for batch in batches(max_iter=5e3):
        classifier.partial_fit(batch['X'], batch['Y'], classes=classes)
        n_steps += 1
    # How's it do on the test set?
    print('Test-set accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(Y[test], classifier.predict(X[test]))))
    print('# Steps: ', n_steps)
|
examples/Pescador demo.ipynb
|
bmcfee/pescador
|
isc
|
Parallelism
It's possible that the learner is more or less efficient than the data generator. If the data generator has higher latency than the learner (SGDClassifier), then this will slow down the learning.
Pescador uses zeromq to parallelize data stream generation, effectively decoupling it from the learner.
|
%%time
ss = sklearn.model_selection.ShuffleSplit(n_splits=2, test_size=0.2)
for train, test in ss.split(np.arange(len(X))):
    # Make an SGD learner, nothing fancy here
    classifier = sklearn.linear_model.SGDClassifier(verbose=0,
                                                    loss='log',
                                                    penalty='l1',
                                                    n_iter=1)
    # First, turn the data_generator function into a Streamer object
    batches = pescador.Streamer(batch_sampler, X[train], Y[train])
    # Then, send this stream to a second process
    zmq_stream = pescador.ZMQStreamer(batches, 5156)
    # And train the model on the stream.
    n_steps = 0
    for batch in zmq_stream(max_iter=5e3):
        classifier.partial_fit(batch['X'], batch['Y'], classes=classes)
        n_steps += 1
    # How's it do on the test set?
    print('Test-set accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(Y[test], classifier.predict(X[test]))))
    print('# Steps: ', n_steps)
|
Set up and run non-dithered metric bundles. Use a lower value of nside to make the notebook run faster, although at lower spatial resolution.
|
nside = 16
# Set up metrics, slicer and summaryMetrics.
m1 = kConsecutiveGapMetric(k=2)
m2 = metrics.AveGapMetric()
slicer = slicers.HealpixSlicer(nside=nside)
summaryMetrics = [metrics.MinMetric(), metrics.MeanMetric(), metrics.MaxMetric(),
                  metrics.MedianMetric(), metrics.RmsMetric(),
                  metrics.PercentileMetric(percentile=25), metrics.PercentileMetric(percentile=75)]
# And I'll set a plotDict for the nvisits and coadded depth, because otherwise the DD fields throw the
# scale in the plots into too wide a range.
# (we could also generate plots, see this effect, then set the dict and regenerate the plots)
#nvisitsPlotRanges = {'xMin':0, 'xMax':300, 'colorMin':0, 'colorMax':300, 'binsize':5}
#coaddPlotRanges = {'xMin':24, 'xMax':28, 'colorMin':24, 'colorMax':28, 'binsize':0.02}
filterlist = ['u', 'g', 'r', 'i', 'z', 'y']
filterorder = {'u':0, 'g':1, 'r':2, 'i':3, 'z':4, 'y':5}
# Create metricBundles for each filter.
# For ease of access later, I want to make a dictionary with 'kgap[filter]' first.
kgap = {}
avegap = {}
for f in filterlist:
    sqlconstraint = 'filter = "%s"' % (f)
    # Add displayDict stuff that's useful for showMaf to put things in "nice" order.
    displayDict = {'subgroup':'Undithered', 'order':filterorder[f], 'group':'kgap'}
    kgap[f] = MetricBundle(m1, slicer, sqlconstraint=sqlconstraint, runName=runName,
                           summaryMetrics=summaryMetrics, #plotDict=nvisitsPlotRanges,
                           displayDict=displayDict)
    displayDict['group'] = 'AveGap'
    avegap[f] = MetricBundle(m2, slicer, sqlconstraint=sqlconstraint, runName=runName,
                             summaryMetrics=summaryMetrics, #plotDict=nvisitsPlotRanges,
                             displayDict=displayDict)
blistAll = []
for f in filterlist:
    blistAll.append(kgap[f])
    blistAll.append(avegap[f])
bdict = makeBundlesDictFromList(blistAll)
# Set the metricBundleGroup up with all metricBundles, in all filters.
bgroup = MetricBundleGroup(bdict, opsdb, outDir=outDir, resultsDb=resultsDb)
bgroup.runAll()
bgroup.writeAll()
bgroup.plotAll()
print('Kgap --')
for f in filterlist:
    print(kgap[f].summaryValues)
print('Avegap --')
for f in filterlist:
    print(avegap[f].summaryValues)
|
notebooks/k_consecutive_visits.ipynb
|
LSSTTVS/WhitepaperNotebooks
|
bsd-3-clause
|
Now let's try to combine the histograms.
|
# Set more complicated plot labels directly in the bundles.
for f in filterlist:
    kgap[f].setPlotDict({'label': '%s %.1f/%.1f/%.1f' % (f, kgap[f].summaryValues['25th%ile'],
                                                         kgap[f].summaryValues['Median'],
                                                         kgap[f].summaryValues['75th%ile'])})
# Set up the plotHandler.
ph = plots.PlotHandler(outDir=outDir, resultsDb=resultsDb)
# Instantiate the healpix histogram plotter, since we'll use it a lot.
healpixhist = plots.HealpixHistogram()
ph.setMetricBundles(kgap)
# Add min/max values to the plots, which will be used for the combo histogram for nvisits.
#ph.setPlotDicts(nvisitsPlotRanges)
ph.plot(plotFunc=healpixhist)
|
1. PCFG
In this lab we will show you a way to represent a PCFG using python objects. We will introduce the following classes:
Symbol
Terminal
Nonterminal
Rule
At first glance, this might seem like a lot of work. But, hopefully, by the time you get to implementing CKY you will be convinced of the benefits of these constructions.
Symbol
Recall that:
* Terminal symbols are the words of the sentence: I, ate, salad, the etc.
* Nonterminal symbols are the syntactic categories of the various constituents: S, NP, VP, Det etc.
In our representation, Symbol is going to be a container class. The classes Terminal and Nonterminal will inherit from the Symbol class and will hence both become a type of symbol. The classes themselves are effectively a container for the underlying python strings.
|
class Symbol:
    """
    A symbol in a grammar.
    This class will be used as parent class for Terminal and Nonterminal.
    This way both will be a type of Symbol.
    """
    def __init__(self):
        pass


class Terminal(Symbol):
    """
    Terminal symbols are words in a vocabulary.
    E.g. 'I', 'ate', 'salad', 'the'
    """
    def __init__(self, symbol: str):
        assert type(symbol) is str, 'A Terminal takes a python string, got %s' % type(symbol)
        self._symbol = symbol

    def is_terminal(self):
        return True

    def is_nonterminal(self):
        return False

    def __str__(self):
        return "'%s'" % self._symbol

    def __repr__(self):
        return 'Terminal(%r)' % self._symbol

    def __hash__(self):
        return hash(self._symbol)

    def __len__(self):
        """The length of the underlying python string"""
        return len(self._symbol)

    def __eq__(self, other):
        return type(self) == type(other) and self._symbol == other._symbol

    def __ne__(self, other):
        return not (self == other)

    @property
    def obj(self):
        """Returns the underlying python string"""
        return self._symbol


class Nonterminal(Symbol):
    """
    Nonterminal symbols are the grammatical classes in a grammar.
    E.g. S, NP, VP, N, Det, etc.
    """
    def __init__(self, symbol: str):
        assert type(symbol) is str, 'A Nonterminal takes a python string, got %s' % type(symbol)
        self._symbol = symbol

    def is_terminal(self):
        return False

    def is_nonterminal(self):
        return True

    def __str__(self):
        return "[%s]" % self._symbol

    def __repr__(self):
        return 'Nonterminal(%r)' % self._symbol

    def __hash__(self):
        return hash(self._symbol)

    def __len__(self):
        """The length of the underlying python string"""
        return len(self._symbol)

    def __eq__(self, other):
        return type(self) == type(other) and self._symbol == other._symbol

    def __ne__(self, other):
        return not (self == other)

    @property
    def obj(self):
        """Returns the underlying python string"""
        return self._symbol
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
Let's try out the classes by initializing some terminal and nonterminal symbols:
|
dog = Terminal('dog')
the = Terminal('the')
walks = Terminal('walks')
S = Nonterminal('S')
NP = Nonterminal('NP')
NP_prime = Nonterminal('NP')
VP = Nonterminal('VP')
V = Nonterminal('V')
N = Nonterminal('N')
Det = Nonterminal('Det')
|
The methods __eq__ and __ne__ make it possible to compare our objects using standard Python syntax. More importantly, they compare in the way we are interested in: whether the underlying representation is the same.
To see the difference, try commenting out the method __eq__ in the class above, and notice the different result of the equality test NP==NP_prime.
|
print(dog)
print(NP)
print()
print(NP==Det)
print(NP!=Det)
print(NP==NP)
print(NP==NP_prime)
|
Note the difference between calling print(NP) and simply calling NP. The first is taken care of by the method __str__ and the second by the method __repr__.
|
dog
|
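The division of labour between the two methods can be shown with a minimal stand-in class (hypothetical, mirroring Terminal above):

```python
class Tiny:
    def __init__(self, s):
        self._s = s

    def __str__(self):
        return "'%s'" % self._s       # what print() uses

    def __repr__(self):
        return 'Tiny(%r)' % self._s   # what the bare expression in a cell uses

t = Tiny('dog')
print(str(t))   # 'dog'
print(repr(t))  # Tiny('dog')
```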
We can also easily check if our symbol is a terminal or not:
|
dog.is_terminal()
NP.is_terminal()
|
Finally, the method __hash__ makes our objects hashable, and hence usable in a data structure like a dictionary.
Try commenting out this method in the class above and then retry constructing the dictionary: notice the error.
|
d = {NP: 1, S: 2}
d
|
Rules
In a PCFG a rule looks something like this
$$NP \to Det\;N,$$
with a corresponding probability, for example $1.0$ (if we lived in a world where all noun phrases had this grammatical structure).
In our representation, Rule will be an object made of a left-hand side (lhs) symbol, a sequence of right-hand side symbols (rhs) and a probability prob.
If we use the above defined symbols, we can call
rule = Rule(NP, [Det, N], 1.0).
This will construct an instance called rule which represents the rule above
[NP] -> [Det] [N] (1.0).
|
class Rule:
    def __init__(self, lhs, rhs, prob):
        """
        Constructs a Rule.
        A Rule takes a LHS symbol and a list/tuple of RHS symbols.
        :param lhs: the LHS nonterminal
        :param rhs: a sequence of RHS symbols (terminal or nonterminal)
        :param prob: probability of the rule
        """
        assert isinstance(lhs, Symbol), 'LHS must be an instance of Symbol'
        assert len(rhs) > 0, 'If you want an empty RHS, use an epsilon Terminal EPS'
        assert all(isinstance(s, Symbol) for s in rhs), 'RHS must be a sequence of Symbol objects'
        self._lhs = lhs
        self._rhs = tuple(rhs)
        self._prob = prob

    def __eq__(self, other):
        return self._lhs == other._lhs and self._rhs == other._rhs and self._prob == other._prob

    def __ne__(self, other):
        return not (self == other)

    def __hash__(self):
        return hash((self._lhs, self._rhs, self._prob))

    def __repr__(self):
        return '%s -> %s (%s)' % (self._lhs,
                                  ' '.join(str(sym) for sym in self._rhs),
                                  self.prob)

    def is_binary(self):
        """True if Rule is binary: A -> B C"""
        return len(self._rhs) == 2

    def is_unary(self):
        """True if Rule is unary: A -> w"""
        return len(self._rhs) == 1

    @property
    def lhs(self):
        """Returns the lhs of the rule"""
        return self._lhs

    @property
    def rhs(self):
        """Returns the rhs of the rule"""
        return self._rhs

    @property
    def prob(self):
        """Returns the probability of the rule"""
        return self._prob
|
Just as with Terminal and Nonterminal, you can print an instance of Rule, access its attributes, and use rules in hashed containers such as dict and set.
|
r1 = Rule(S, [NP, VP], 1.0)
r2 = Rule(NP, [Det, N], 1.0)
r3 = Rule(N, [dog], 1.0)
r4 = Rule(Det, [the], 1.0)
r5 = Rule(VP, [walks], 1.0)
print(r1)
print(r2)
print(r3)
print(r4)
print(r1.prob)
r1 in set([r1])
d = {r1: 1, r2: 2}
d
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
Grammar
A PCFG is a container for Rules. The Rules are stored in the PCFG in such a way that they can be accessed easily in different ways.
|
class PCFG(object):
    """
    Constructs a PCFG.
    A PCFG stores a list of rules that can be accessed in various ways.
    :param rules: an optional list of rules to initialize the grammar with
    """
    def __init__(self, rules=[]):
        self._rules = []
        self._rules_by_lhs = defaultdict(list)
        self._terminals = set()
        self._nonterminals = set()
        for rule in rules:
            self.add(rule)

    def add(self, rule):
        """Adds a rule to the grammar"""
        if not rule in self._rules:
            self._rules.append(rule)
            self._rules_by_lhs[rule.lhs].append(rule)
            self._nonterminals.add(rule.lhs)
            for s in rule.rhs:
                if s.is_terminal():
                    self._terminals.add(s)
                else:
                    self._nonterminals.add(s)

    def update(self, rules):
        """Add a list of rules to the grammar"""
        for rule in rules:
            self.add(rule)

    @property
    def nonterminals(self):
        """The set of nonterminal symbols in the grammar"""
        return self._nonterminals

    @property
    def terminals(self):
        """The set of terminal symbols in the grammar"""
        return self._terminals

    @property
    def rules(self):
        """The list of rules in the grammar"""
        return self._rules

    @property
    def binary_rules(self):
        """The list of binary rules in the grammar"""
        return [rule for rule in self._rules if rule.is_binary()]

    @property
    def unary_rules(self):
        """The list of unary rules in the grammar"""
        return [rule for rule in self._rules if rule.is_unary()]

    def __len__(self):
        return len(self._rules)

    def __getitem__(self, lhs):
        return self._rules_by_lhs.get(lhs, frozenset())

    def get(self, lhs, default=frozenset()):
        """The list of rules whose LHS is the given symbol lhs"""
        return self._rules_by_lhs.get(lhs, default)

    def __iter__(self):
        """Iterator over rules (in arbitrary order)"""
        return iter(self._rules)

    def iteritems(self):
        """Iterator over pairs of the kind (LHS, rules rewriting LHS)"""
        return self._rules_by_lhs.items()

    def __str__(self):
        """Prints the grammar line by line"""
        lines = []
        for lhs, rules in self.iteritems():
            for rule in rules:
                lines.append(str(rule))
        return '\n'.join(lines)
|
Initialize a grammar
|
G = PCFG()
|
We can add rules individually with add, or as a list with update:
|
G.add(r1)
G.update([r2,r3,r4,r5])
|
We can print the grammar
|
print(G)
|
We can get the set of rewrite rules for a certain LHS symbol.
|
G.get(S)
G.get(NP)
|
We can also iterate through rules in the grammar.
Note that the following is basically counting how many rules we have in the grammar.
|
sum(1 for r in G)
|
We can access the set of terminals and nonterminals of the grammar:
|
print(G.nonterminals)
print(G.terminals)
S in G.nonterminals
dog in G.terminals
|
Finally we can easily access all the binary rules and all the unary rules in the grammar:
|
G.unary_rules
G.binary_rules
|
For the following sections you will need to have the Natural Language Toolkit (NLTK) installed. We will use a feature of the NLTK toolkit that lets you draw constituency parses. Details for download can be found here: http://www.nltk.org/install.html.
Visualizing a tree
In keeping with tradition, let's reiterate an age-old NLP schtick, the well-known example of structural ambiguity from the Groucho Marx movie, Animal Crackers (1930):
One morning I shot an elephant in my pajamas. How he got into my pajamas, I don't know.
Let's take a closer look at the ambiguity in the phrase: I shot an elephant in my pajamas. The ambiguity is caused by the fact that the sentence has two competing parses represented in:
(S (NP I) (VP (VP (V shot) (NP (Det an) (N elephant))) (PP (P in) (NP (Det my) (N pajamas)))))
and
(S (NP I) (VP (V shot) (NP (Det an) (NP (N elephant) (PP (P in) (NP (Det my) (N pajamas)))))))
We can write these parses down as strings and then let NLTK turn them into trees using the NLTK Tree class. (See http://www.nltk.org/api/nltk.html#nltk.tree.Tree as reference for this class, if you want to know more.)
|
parse1 = "(S (NP I) (VP (VP (V shot) (NP (Det an) (N elephant))) (PP (P in) (NP (Det my) (N pajamas)))))"
parse2 = "(S (NP I) (VP (V shot) (NP (Det an) (NP (N elephant) (PP (P in) (NP (Det my) (N pajamas)))))))"
pajamas1 = Tree.fromstring(parse1)
pajamas2 = Tree.fromstring(parse2)
|
We can then pretty-print these trees:
|
pajamas1.pretty_print()
pajamas2.pretty_print()
|
Parsing with CKY
Let's stick with this sentence for the rest of this lab. We will use CKY to find the 'best' parse for this sentence.
|
# Turn the sentence into a list
sentence = "I shot an elephant in my pajamas".split()
# The length of the sentence
num_words = len(sentence)
|
A PCFG for this sentence can be found in the file groucho-grammar.txt. We read this in with the function read_grammar_rules.
|
def read_grammar_rules(istream):
    """Reads grammar rules formatted as 'LHS ||| RHS ||| PROB'."""
    for line in istream:
        line = line.strip()
        if not line:
            continue
        fields = line.split('|||')
        if len(fields) != 3:
            raise ValueError('Expected 3 fields, got: %s' % fields)
        lhs = fields[0].strip()
        if lhs[0] == '[':
            lhs = Nonterminal(lhs[1:-1])
        else:
            lhs = Terminal(lhs)
        rhs = fields[1].strip().split()
        new_rhs = []
        for r in rhs:
            if r[0] == '[':
                r = Nonterminal(r[1:-1])
            else:
                r = Terminal(r)
            new_rhs.append(r)
        prob = float(fields[2].strip())
        yield Rule(lhs, new_rhs, prob)

# Read in the grammar
istream = open('groucho-grammar-1.txt')
grammar = PCFG(read_grammar_rules(istream))
print("The grammar:\n", grammar, "\n")
|
We will also need the following two dictionaries: a nonterminal2index dictionary mapping from nonterminals to integers (indices), and its inverse, an index2nonterminal dictionary.
|
num_nonterminals = len(grammar.nonterminals)

# Make a nonterminal2index and an index2nonterminal dictionary
n2i = defaultdict(lambda: len(n2i))
i2n = dict()
for A in grammar.nonterminals:
    i2n[n2i[A]] = A
# Stop defaultdict behavior of n2i
n2i = dict(n2i)
n2i
|
The charts
Now we are ready to introduce the chart datastructures. We need a chart to store the scores and a chart to store the backpointers.
Both of these will be 3-dimensional numpy arrays: one named score (also named table in J&M) holding the probabilities of intermediate results; one named back to store the backpointers in. We will use the following indexing convention for these charts:
Format for the chart holding the scores:
score[A][begin][end] = probability (naming as in slides)
table[A][i][j] = probability (naming as in J&M)
Format for the chart holding the backpointers:
back[A][begin][end] = (split,B,C) (naming as in slides)
back[A][i][j] = (k,B,C) (naming as in J&M)
This indexing convention is convenient for printing. See what happens when we print back below: we get num_nonterminal slices, each a numpy array of shape [n_words+1, n_words+1]. This is easier to read than the format table[i][j][A].
[Note] Here we pretended A is both the nonterminal as well as the index. In actual fact, in our implementation A will be the nonterminal and the index for A will be n2i[A].
Let's show you what we mean:
|
# A numpy array of zeros
score = np.zeros((num_nonterminals,
                  num_words + 1,
                  num_words + 1))
# A numpy array that can store arbitrary data (we set dtype to object)
back = np.zeros((num_nonterminals,
                 num_words + 1,
                 num_words + 1), dtype=object)
|
The following illustrates the way you will use the back chart. In this example, your parser recognized that the words between 0 and 2 form an NP and the words between 2 and the end of the sentence form a VP (and nothing else yet):
|
# Illustration of the backpointer array
back[n2i[S]][0][-1] = (2,NP,VP)
back
|
Exercise 1. (80 points)
Implement the CKY algorithm. Follow the pseudo-code given in the lecture slides (or alternatively in J&M). The code must comply with the following:
The function cky takes a sentence (a list of words), a grammar (an instance of PCFG), and an n2i nonterminals2index dictionary.
The function cky returns the filled-in score chart and backpointer chart, following the format established above.
[Hint] This is the moment to make good use of the methods of the classes PCFG, Rule, Nonterminal, and Terminal!
|
def cky(sentence, grammar, n2i):
    """
    The CKY algorithm.
    Follow the pseudocode from the slides (or J&M).
    :param sentence: a list of words
    :param grammar: an instance of the class PCFG
    :param n2i: a dictionary mapping from Nonterminals to indices
    :return score: the filled-in scores chart
    :return back: the filled-in backpointers chart
    """
    num_words = len(sentence)
    num_nonterminals = len(grammar.nonterminals)
    # A numpy array to store the scores of intermediate parses
    score = np.zeros((num_nonterminals,
                      num_words + 1,
                      num_words + 1))
    # A numpy array to store the backpointers
    back = np.zeros((num_nonterminals,
                     num_words + 1,
                     num_words + 1), dtype=object)
    # YOUR CODE HERE
    raise NotImplementedError
    return score, back

# Run CKY
score, back = cky(sentence, grammar, n2i)
|
Check your CKY
Use the code in the following cell to check your cky implementation.
Take the Nonterminal S to inspect your filled-in score and backpointer charts. Leave the code in this cell unchanged. We will use this to evaluate the correctness of your cky function.
|
### Don't change the code in this cell. ###
S = Nonterminal('S')
print('The whole slice for nonterminal S:')
print(score[n2i[S]], "\n")
print('The score in cell (S, 0, num_words), which is the probability of the best parse:')
print(score[n2i[S]][0][num_words], "\n")
print('The backpointer in cell (S, 0, num_words):')
print(back[n2i[S]][0][num_words], "\n")
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
Exercise 2. (20 points)
Write the function build_tree that reconstructs the parse from the backpointer table. This is the function that is called in the return statement of the pseudo-code in Jurafsky and Martin.
[Note] This is a challenging exercise! And we have no pseudocode for you here: you must come up with your own implementation. On the other hand, it constitutes just the last 20 points of your grade, so don't worry too much if you can't finish it. If you finished exercise 1 you already have an 8 for this lab!
Here is some additional advice:
Use recursion - that is write your function in a recursive way.
What is the base case? Hint: $A \to w$.
What is the recursive case? Hint: $A \to B\; C$.
Use the additional class Span that we introduce below for the symbols in your recovered rules.
Read the documentation in this class for its usage.
If you want to use the function make_nltk_tree that we provide (and that turns a derivation into an NLTK tree so that you can draw it) your function must return the list of rules in derivation ordered depth-first.
If you write your function recursively this should happen automatically.
The following class will be very useful in your solution for the function build_tree.
|
class Span(Symbol):
"""
A Span indicates that symbol was recognized between begin and end.
Example:
Span(Terminal('the'), 0, 1)
This means: we found 'the' in the sentence between 0 and 1
Span(Nonterminal('NP'), 4, 8) represents NP:4-8
This means: we found an NP that covers the part of the sentence between 4 and 8
Thus, Span holds a Terminal or a Nonterminal and wraps it between two integers.
This makes it possible to distinguish between two instances of the same rule in the derivation.
Example:
We can find that the rule NP -> Det N is used twice in the parse derivation. But in the first
case it spans "an elephant" and in the second case it spans "my pajamas". We want to distinguish these.
So: "an elephant" is covered by [NP]:2-4 -> [Det]:2-3 [N]:3-4
"my pajamas" is covered by [NP]:5-7 -> [Det]:5-6 [N]:6-7
Internally, we represent spans with tuples of the kind (symbol, start, end).
"""
def __init__(self, symbol, start, end):
assert isinstance(symbol, Symbol), 'A span takes an instance of Symbol, got %s' % type(symbol)
self._symbol = symbol
self._start = start
self._end = end
def is_terminal(self):
# a span delegates this to an underlying symbol
return self._symbol.is_terminal()
def root(self):
# Spans are hierarchical symbols, thus we delegate
return self._symbol.root()
def obj(self):
"""The underlying python tuple (Symbol, start, end)"""
return (self._symbol, self._start, self._end)
def translate(self, target):
return Span(self._symbol.translate(target), self._start, self._end)
def __str__(self):
"""Prints Symbol with span if Symbol is Nonterminal else without (purely aesthetic distinction)"""
if self.is_terminal():
return "%s" % (self._symbol)
else:
return "%s:%s-%s" % (self._symbol, self._start, self._end)
def __repr__(self):
return 'Span(%r, %r, %r)' % (self._symbol, self._start, self._end)
def __hash__(self):
return hash((self._symbol, self._start, self._end))
def __eq__(self, other):
return type(self) == type(other) and self._symbol == other._symbol and self._start == other._start and self._end == other._end
def __ne__(self, other):
return not (self == other)
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
Example usage of Span:
|
span_S = Span(S, 0, 10)
print(span_S)
span_S = Span(dog, 4, 5)
print(span_S)
spanned_rule = Rule(Span(NP, 2, 4), [Span(Det, 2, 3), Span(NP, 3, 4)], prob=None)
print(spanned_rule)
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
Your final derivation should look like this:
(Note that the rule probabilities are set to None. These are not saved in the backpointer chart so cannot be retrieved at the recovering stage. They also don't matter at this point, so you can set them to None.)
If you give this derivation to the function make_nltk_tree and then let NLTK draw it, then you get this tree:
The exercise
|
def build_tree(back, sentence, root, n2i):
"""
Reconstruct the viterbi parse from a filled-in backpointer chart.
It returns a list called derivation which holds the rules of the parse. If you
want to use the function make_nltk_tree you must make sure that this list is ordered depth-first.
:param back: a backpointer chart of shape [num_nonterminals, num_words+1, num_words+1]
:param sentence: a list of words
:param root: the root symbol of the tree: Nonterminal('S')
:param n2i: the dictionary mapping from Nonterminals to indices
:return derivation: a derivation: a list of Rules with Span symbols that generate the Viterbi tree.
If you want to draw them with the function that we provide, then this list
should be ordered depth first!
"""
derivation = []
num_words = len(sentence)
# YOUR CODE HERE
raise NotImplementedError
return derivation
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
Get your derivation:
|
derivation = build_tree(back, sentence, S, n2i)
derivation
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
Turn the derivation into an NLTK tree:
|
def make_nltk_tree(derivation):
"""
Return a NLTK Tree object based on the derivation
(list or tuple of Rules)
"""
d = defaultdict(None, ((r.lhs, r.rhs) for r in derivation))
def make_tree(lhs):
return Tree(str(lhs), (str(child) if child not in d else make_tree(child) for child in d[lhs]))
return make_tree(derivation[0].lhs)
tree = make_nltk_tree(derivation)
tree.pretty_print()
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
That's it!
Congratulations, you have made it to the end of the lab.
Make sure all your cells are executed so that all your answers are there. Then, continue if you're interested!
Optional
If you managed to get your entire CKY-parser working and have an appetite for more, it might be fun to try it on some more sentences and grammars. Give the grammars below a try!
Alternative Groucho-grammar
If you change the probabilities in the grammar, you'll get a different parse as most likely one. Compare groucho-grammar-1.txt with groucho-grammar-2.txt and spot the difference in probabilities.
|
# YOUR CODE HERE
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
The man with the telescope
Another ambiguous sentence:
I saw the man on the hill with the telescope.
A grammar for this sentence is specified in the file telescope-grammar.txt.
|
# YOUR CODE HERE
|
lab3/lab3.ipynb
|
tdeoskar/NLP1-2017
|
gpl-3.0
|
so that is the benchmark to beat.
|
from numba import njit
watrad_numba = mk_rsys(ODEsys, **watrad_data, lambdify=lambda *args: njit(sym.lambdify(*args, modules="numpy")))
watrad_numba.integrate_odeint(tout, y0)
%timeit watrad_numba.integrate_odeint(tout, y0)
import matplotlib.pyplot as plt
%matplotlib inline
|
notebooks/_37-chemical-kinetics-numba.ipynb
|
sympy/scipy-2017-codegen-tutorial
|
bsd-3-clause
|
Just to see that everything looks alright:
|
fig, ax = plt.subplots(1, 1, figsize=(14, 6))
watrad_numba.plot_result(tout, *watrad_numba.integrate_odeint(tout, y0), ax=ax)
ax.set_xscale('log')
ax.set_yscale('log')
|
notebooks/_37-chemical-kinetics-numba.ipynb
|
sympy/scipy-2017-codegen-tutorial
|
bsd-3-clause
|
Use least_squares to compute w, and visualize the results.
|
from least_squares import least_squares
from plots import visualization
def least_square_classification_demo(y, x):
# ***************************************************
# INSERT YOUR CODE HERE
# classify the data by linear regression: TODO
# ***************************************************
tx = np.c_[np.ones((y.shape[0], 1)), x]
# w = least squares with respect to tx
err, w = least_squares(y, tx)
print(f"MSE: {err}")
visualization(y, x, mean_x, std_x, w, "classification_by_least_square")
least_square_classification_demo(y, x)
|
ml/ex05/template/ex05.ipynb
|
rusucosmin/courses
|
mit
|
Logistic Regression
Compute your cost by negative log likelihood.
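For reference, the cost implemented in calculate_loss below is the negative log-likelihood, with the sigmoid $\sigma(t) = 1/(1+e^{-t})$:

$$L(w) = -\sum_{n=1}^{N}\Big[y_n\log\sigma(x_n^\top w) + (1-y_n)\log\big(1-\sigma(x_n^\top w)\big)\Big]$$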
|
def sigmoid(t):
"""apply sigmoid function on t."""
return 1 / (1 + np.exp(-t))
# sanity checks
assert(sigmoid(0) == .5)
assert(np.all(sigmoid(np.array([0, 0, 0])) == np.array([.5, .5, .5])))
def calculate_loss(y, tx, w):
"""compute the cost by negative log likelihood."""
pred = tx @ w
return -(y * np.log(sigmoid(pred)) + (1 - y) * np.log(1 - sigmoid(pred))).sum()
def calculate_gradient(y, tx, w):
"""compute the gradient of loss."""
return tx.T @ (sigmoid(tx @ w) - y)
|
ml/ex05/template/ex05.ipynb
|
rusucosmin/courses
|
mit
|
Using Gradient Descent
Implement your function to calculate the gradient for logistic regression.
|
def learning_by_gradient_descent(y, tx, w, gamma):
"""
Do one step of gradient descent using logistic regression.
Return the loss and the updated w.
"""
# ***************************************************
# INSERT YOUR CODE HERE
# compute the cost: TODO
# ***************************************************
loss = calculate_loss(y, tx, w)
# ***************************************************
# INSERT YOUR CODE HERE
# compute the gradient: TODO
# ***************************************************
grad = calculate_gradient(y, tx, w)
# ***************************************************
# INSERT YOUR CODE HERE
# update w: TODO
# ***************************************************
w = w - gamma * grad
return loss, w
|
ml/ex05/template/ex05.ipynb
|
rusucosmin/courses
|
mit
|
Calculate your hessian below
|
def calculate_hessian(y, tx, w):
"""return the hessian of the loss function."""
S = np.diag((sigmoid(tx @ w) * (1 - sigmoid(tx @ w))).flatten())
return (tx.T @ S) @ tx
|
ml/ex05/template/ex05.ipynb
|
rusucosmin/courses
|
mit
|
Using Newton's method
Use Newton's method for logistic regression.
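Each Newton step uses the gradient and Hessian of the loss to update the weights:

$$w^{(t+1)} = w^{(t)} - H^{-1}\nabla L\big(w^{(t)}\big)$$

which corresponds to w = w - np.linalg.inv(hessian) @ gradient in the code below.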
|
def learning_by_newton_method(y, tx, w):
"""
Do one step on Newton's method.
return the loss and updated w.
"""
loss, gradient, hessian = logistic_regression(y, tx, w)
w = w - np.linalg.inv(hessian) @ gradient
return loss, w
|
ml/ex05/template/ex05.ipynb
|
rusucosmin/courses
|
mit
|
Using penalized logistic regression
Fill in the function below.
|
def penalized_logistic_regression(y, tx, w, lambda_):
"""return the loss, gradient, and hessian."""
loss, gradient, hessian = logistic_regression(y, tx, w)
penalised_loss = loss + lambda_ * ((w ** 2).sum())
return penalised_loss, gradient + 2 * lambda_ * w, hessian
def learning_by_penalized_gradient(y, tx, w, gamma, lambda_):
"""
Do one step of gradient descent, using the penalized logistic regression.
Return the loss and updated w.
"""
# ***************************************************
# INSERT YOUR CODE HERE
# return loss, gradient: TODO
# ***************************************************
loss, gradient, hessian = penalized_logistic_regression(y, tx, w, lambda_)
# ***************************************************
# INSERT YOUR CODE HERE
# update w: TODO
# ***************************************************
w = w - gamma * gradient
return loss, w
def logistic_regression_penalized_gradient_descent_demo(y, x):
# init parameters
max_iter = 10000
gamma = 0.01
lambda_ = 0.01
threshold = 1e-8
losses = []
# build tx
tx = np.c_[np.ones((y.shape[0], 1)), x]
w = np.zeros((tx.shape[1], 1))
# start the logistic regression
for iter in range(max_iter):
# get loss and update w.
loss, w = learning_by_penalized_gradient(y, tx, w, gamma, lambda_)
# log info
if iter % 100 == 0:
print("Current iteration={i}, loss={l}, wnorm={wnorm}".format(i=iter, l=loss, wnorm=(w ** 2).sum()))
# converge criterion
losses.append(loss)
if len(losses) > 1 and np.abs(losses[-1] - losses[-2]) < threshold:
break
# visualization
visualization(y, x, mean_x, std_x, w, "classification_by_logistic_regression_penalized_gradient_descent")
print("loss={l} wnorm={wnorm}".format(l=calculate_loss(y, tx, w), wnorm=(w ** 2).sum()))
logistic_regression_penalized_gradient_descent_demo(y, x)
|
ml/ex05/template/ex05.ipynb
|
rusucosmin/courses
|
mit
|
Now we will start with normalization of the features, because the size of the house is on a different scale than the number of bedrooms.
|
def featureNormalize(X):
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_norm = (X - mu)/sigma
return (X_norm, mu, sigma)
|
linear_regression/linear_regression_gradient_descent_with_multiple_variables.ipynb
|
aryarohit07/machine-learning-with-python
|
mit
|
Data Preparation
|
X_norm, mu, sigma = featureNormalize(X)
# now lets add ones to the input feature X for theta0
ones = np.ones((X_norm.shape[0], 1), float)
X = np.concatenate((ones,X_norm), axis=1)
print(X[:1])
#Cost function
def computeCostMulti(X, y, theta):
m = X.shape[0]
hypothesis = X.dot(theta) # h_theta = theta.T * x = theta0*x0 + theta1*x1 + ... + thetan*xn
J = (1/(2*m)) * (np.sum(np.square(hypothesis-y)))
return J
theta = np.zeros(X.shape[1])
J_cost = computeCostMulti(X, y, theta)
print('J_Cost', J_cost)
def gradientDescentMulti(X, y, theta, alpha, num_iters):
m = X.shape[0]
J_history = np.zeros(num_iters)
for iter in np.arange(num_iters):
h = X.dot(theta)
theta = theta - alpha * (1/m) * X.T.dot(h-y)
J_history[iter] = computeCostMulti(X, y, theta)
return theta, J_history
alpha = 0.01
num_iters = 1000
theta, J_history = gradientDescentMulti(X, y, theta, alpha, num_iters)
#Lets plot something
plt.xlim(0,num_iters)
plt.plot(J_history)
plt.ylabel('Cost J')
plt.xlabel('Iterations')
plt.show()
print(theta)
|
linear_regression/linear_regression_gradient_descent_with_multiple_variables.ipynb
|
aryarohit07/machine-learning-with-python
|
mit
|
Now lets predict prices of some houses and compare the result with scikit-learn prediction.
|
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(X, y)
inputXs = np.array([[1, 100, 3], [1, 200, 3]])
sklearnPrediction = clf.predict(inputXs)
gradientDescentPrediction = inputXs.dot(theta)
print(sklearnPrediction, gradientDescentPrediction)
print("Looks Good :D")
|
linear_regression/linear_regression_gradient_descent_with_multiple_variables.ipynb
|
aryarohit07/machine-learning-with-python
|
mit
|
Using TT-Matrices we can compactly represent densely connected layers in neural networks, which allows us to greatly reduce number of parameters. Matrix multiplication can be handled by the t3f.matmul method which allows for multiplying dense (ordinary) matrices and TT-Matrices. Very simple neural network could look as following (for initialization several options such as t3f.glorot_initializer, t3f.he_initializer or t3f.random_matrix are available):
|
class Learner:
def __init__(self):
initializer = t3f.glorot_initializer([[4, 7, 4, 7], [5, 5, 5, 5]], tt_rank=2)
self.W1 = t3f.get_variable('W1', initializer=initializer)
self.b1 = tf.Variable(tf.zeros([625]))
self.W2 = tf.Variable(tf.random.normal([625, 10]))
self.b2 = tf.Variable(tf.random.normal([10]))
def predict(self, x):
h1 = t3f.matmul(x, self.W1) + self.b1
h1 = tf.nn.relu(h1)
return tf.matmul(h1, self.W2) + self.b2
def loss(self, x, y):
y_ = tf.one_hot(y, 10)
logits = self.predict(x)
return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
|
docs/tutorials/tensor_nets.ipynb
|
Bihaqo/t3f
|
mit
|
For convenience we have implemented a layer analogous to Keras Dense layer but with a TT-Matrix instead of an ordinary matrix. An example of fully trainable net is provided below.
|
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, Flatten
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import optimizers
(x_train, y_train), (x_test, y_test) = mnist.load_data()
|
docs/tutorials/tensor_nets.ipynb
|
Bihaqo/t3f
|
mit
|
Some preprocessing...
|
x_train = x_train / 127.5 - 1.0
x_test = x_test / 127.5 - 1.0
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
tt_layer = t3f.nn.KerasDense(input_dims=[7, 4, 7, 4], output_dims=[5, 5, 5, 5],
tt_rank=4, activation='relu',
bias_initializer=1e-3)
model.add(tt_layer)
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
|
docs/tutorials/tensor_nets.ipynb
|
Bihaqo/t3f
|
mit
|
Note that in the dense layer we only have $1725$ parameters instead of $784 * 625 = 490000$.
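As a sanity check, the parameter count can be reproduced by summing the core sizes (a back-of-the-envelope sketch; the dims, ranks, and bias size are taken from the KerasDense call above):

```python
# Each TT-core of a TT-matrix has shape (r_{k-1}, m_k, n_k, r_k);
# the boundary ranks r_0 and r_4 are 1. The layer also has a 625-dim bias.
m = [7, 4, 7, 4]      # input_dims of the KerasDense layer above
n = [5, 5, 5, 5]      # output_dims
r = [1, 4, 4, 4, 1]   # TT-ranks (tt_rank=4, boundary ranks 1)

core_params = sum(r[k] * m[k] * n[k] * r[k + 1] for k in range(4))
bias_params = 5 * 5 * 5 * 5  # the 625-dimensional bias
print(core_params + bias_params)  # → 1725
```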
|
optimizer = optimizers.Adam(lr=1e-2)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, batch_size=64, validation_data=(x_test, y_test))
|
docs/tutorials/tensor_nets.ipynb
|
Bihaqo/t3f
|
mit
|
Compression of Dense layers
Let us now train an ordinary DNN (without TT-Matrices) and show how we can compress it using the TT decomposition. (In contrast to directly training a TT-layer from scratch in the example above.)
|
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(625, activation='relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
optimizer = optimizers.Adam(lr=1e-3)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_data=(x_test, y_test))
|
docs/tutorials/tensor_nets.ipynb
|
Bihaqo/t3f
|
mit
|
Let us convert the matrix used in the Dense layer to the TT-Matrix with tt-ranks equal to 16 (since we trained the network without the low-rank structure assumption, we may wish to start with high rank values).
|
W = model.trainable_weights[0]
print(W)
Wtt = t3f.to_tt_matrix(W, shape=[[7, 4, 7, 4], [5, 5, 5, 5]], max_tt_rank=16)
print(Wtt)
|
docs/tutorials/tensor_nets.ipynb
|
Bihaqo/t3f
|
mit
|
We need to evaluate the tt-cores of Wtt. We also need to store other parameters for later (biases and the second dense layer).
|
cores = Wtt.tt_cores
other_params = model.get_weights()[1:]
|
docs/tutorials/tensor_nets.ipynb
|
Bihaqo/t3f
|
mit
|
Now we can construct a tensor network with the first Dense layer replaced by Wtt
initialized using the previously computed cores.
|
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
tt_layer = t3f.nn.KerasDense(input_dims=[7, 4, 7, 4], output_dims=[5, 5, 5, 5],
tt_rank=16, activation='relu')
model.add(tt_layer)
model.add(Dense(10))
model.add(Activation('softmax'))
optimizer = optimizers.Adam(lr=1e-3)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
model.set_weights(list(cores) + other_params)
print("new accuracy: ", model.evaluate(x_test, y_test)[1])
model.summary()
|
docs/tutorials/tensor_nets.ipynb
|
Bihaqo/t3f
|
mit
|
We see that even though we now have about 5% of the original number of parameters we still achieve a relatively high accuracy.
Finetuning the model
We can now finetune this tensor network.
|
model.fit(x_train, y_train, epochs=2, batch_size=64, validation_data=(x_test, y_test))
|
docs/tutorials/tensor_nets.ipynb
|
Bihaqo/t3f
|
mit
|
Now that we have our training data we need to create the overall pipeline for the tokenizer
|
# For the user's convenience `tokenizers` provides some very high-level classes encapsulating
# the overall pipeline for various well-known tokenization algorithm.
# Everything described below can be replaced by the ByteLevelBPETokenizer class.
from tokenizers import Tokenizer
from tokenizers.decoders import ByteLevel as ByteLevelDecoder
from tokenizers.models import BPE
from tokenizers.normalizers import Lowercase, NFKC, Sequence
from tokenizers.pre_tokenizers import ByteLevel
# First we create an empty Byte-Pair Encoding model (i.e. not trained model)
tokenizer = Tokenizer(BPE())
# Then we enable lower-casing and unicode-normalization
# The Sequence normalizer allows us to combine multiple Normalizer that will be
# executed in order.
tokenizer.normalizer = Sequence([
NFKC(),
Lowercase()
])
# Our tokenizer also needs a pre-tokenizer responsible for converting the input to a ByteLevel representation.
tokenizer.pre_tokenizer = ByteLevel()
# And finally, let's plug a decoder so we can recover from a tokenized input to the original one
tokenizer.decoder = ByteLevelDecoder()
|
notebooks/01-training-tokenizers.ipynb
|
huggingface/pytorch-transformers
|
apache-2.0
|
The overall pipeline is now ready to be trained on the corpus we downloaded earlier in this notebook.
|
from tokenizers.trainers import BpeTrainer
# We initialize our trainer, giving it the details about the vocabulary we want to generate
trainer = BpeTrainer(vocab_size=25000, show_progress=True, initial_alphabet=ByteLevel.alphabet())
tokenizer.train(files=["big.txt"], trainer=trainer)
print("Trained vocab size: {}".format(tokenizer.get_vocab_size()))
|
notebooks/01-training-tokenizers.ipynb
|
huggingface/pytorch-transformers
|
apache-2.0
|
Et voilà ! You trained your very first tokenizer from scratch using tokenizers. Of course, this
covers only the basics, and you may want to have a look at the add_special_tokens or special_tokens parameters
on the Trainer class, but the overall process should be very similar.
We can save the content of the model to reuse it later.
|
# You will see the generated files in the output.
tokenizer.model.save('.')
|
notebooks/01-training-tokenizers.ipynb
|
huggingface/pytorch-transformers
|
apache-2.0
|
Now, let's load the trained model and start using our newly trained tokenizer
|
# Let's tokenize a simple input
tokenizer.model = BPE('vocab.json', 'merges.txt')
encoding = tokenizer.encode("This is a simple input to be tokenized")
print("Encoded string: {}".format(encoding.tokens))
decoded = tokenizer.decode(encoding.ids)
print("Decoded string: {}".format(decoded))
|
notebooks/01-training-tokenizers.ipynb
|
huggingface/pytorch-transformers
|
apache-2.0
|
Comparing Bodies of Text
The Differ class works on sequences of text lines and produces human-readable deltas, or change instructions, including differences within individual lines. The default output produced by Differ is similar to the diff command-line tool under Unix. It includes the original input values from both lists, including common values, and markup data to indicate which changes were made.
Lines prefixed with - were in the first sequence, but not the second.
Lines prefixed with + were in the second sequence, but not the first.
If a line has an incremental difference between versions, an extra line prefixed with ? is used to highlight the change within the new version.
If a line has not changed, it is printed with an extra blank space on the left column so that it is aligned with the other output that may have differences.
|
d = difflib.Differ()
diff = d.compare(text1_lines,text2_lines)
print('\n'.join(diff))
|
text/difflib.ipynb
|
scotthuang1989/Python-3-Module-of-the-Week
|
apache-2.0
|
Other Output Formats
While the Differ class shows all of the input lines, a unified diff includes only the modified lines and a bit of context. The unified_diff() function produces this sort of output.
|
diff = difflib.unified_diff(
text1_lines,
text2_lines,
lineterm='',
)
print('\n'.join(list(diff)))
|
text/difflib.ipynb
|
scotthuang1989/Python-3-Module-of-the-Week
|
apache-2.0
|
SequenceMatcher
|
from difflib import SequenceMatcher
def show_results(match):
print(' a = {}'.format(match.a))
print(' b = {}'.format(match.b))
print(' size = {}'.format(match.size))
i, j, k = match
print(' A[a:a+size] = {!r}'.format(A[i:i + k]))
print(' B[b:b+size] = {!r}'.format(B[j:j + k]))
A = " abcd"
B = "abcd abcd"
print('A = {!r}'.format(A))
print('B = {!r}'.format(B))
print('\nWithout junk detection:')
s1 = SequenceMatcher(None, A, B)
match1 = s1.find_longest_match(0, len(A), 0, len(B))
show_results(match1)
print('\nTreat spaces as junk:')
s2 = SequenceMatcher(lambda x: x == " ", A, B)
match2 = s2.find_longest_match(0, len(A), 0, len(B))
show_results(match2)
match1
|
text/difflib.ipynb
|
scotthuang1989/Python-3-Module-of-the-Week
|
apache-2.0
|
Modify first text to second
|
modify_instruction = s2.get_opcodes()
modify_instruction
s1 = [1, 2, 3, 5, 6, 4]
s2 = [2, 3, 5, 4, 6, 1]
print('Initial data:')
print('s1 =', s1)
print('s2 =', s2)
print('s1 == s2:', s1 == s2)
print()
matcher = difflib.SequenceMatcher(None, s1, s2)
for tag, i1, i2, j1, j2 in reversed(matcher.get_opcodes()):
if tag == 'delete':
print('Remove {} from positions [{}:{}]'.format(
s1[i1:i2], i1, i2))
print(' before =', s1)
del s1[i1:i2]
elif tag == 'equal':
print('s1[{}:{}] and s2[{}:{}] are the same'.format(
i1, i2, j1, j2))
elif tag == 'insert':
print('Insert {} from s2[{}:{}] into s1 at {}'.format(
s2[j1:j2], j1, j2, i1))
print(' before =', s1)
s1[i1:i2] = s2[j1:j2]
elif tag == 'replace':
print(('Replace {} from s1[{}:{}] '
'with {} from s2[{}:{}]').format(
s1[i1:i2], i1, i2, s2[j1:j2], j1, j2))
print(' before =', s1)
s1[i1:i2] = s2[j1:j2]
print(' after =', s1, '\n')
print('s1 == s2:', s1 == s2)
|
text/difflib.ipynb
|
scotthuang1989/Python-3-Module-of-the-Week
|
apache-2.0
|
Record and play
Record a 3-second sample and save it into a file.
|
pAudio.record(3)
pAudio.save("Recording_1.pdm")
|
Pynq-Z1/notebooks/examples/audio_playback.ipynb
|
VectorBlox/PYNQ
|
bsd-3-clause
|
Load and play
Load a sample and play the loaded sample.
|
pAudio.load("/home/xilinx/pynq/drivers/tests/pynq_welcome.pdm")
pAudio.play()
|
Pynq-Z1/notebooks/examples/audio_playback.ipynb
|
VectorBlox/PYNQ
|
bsd-3-clause
|
Quick aside on Wireframe plots in matplotlib
cf. mplot3d tutorial, matplotlib
|
from mpl_toolkits.mplot3d import axes3d
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.show()
fig
print(type(X), type(Y), type(Z)); print(len(X), len(Y), len(Z)); print(X.shape, Y.shape, Z.shape)
X
Y
Z
X[0][0:10]
|
moreCUDA/samples02/sinsin2dtex.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
EY : As far as I can surmise, the 2-dimensional Python arrays X, Y, Z of the wireframe plot work like this: imagine a 2-dimensional grid; on top of each grid point sits the x-coordinate, then the y-coordinate, and then the z-coordinate. Thus each of X, Y, Z is a 2-dimensional array.
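A minimal sketch of this layout (illustrative only, not taken from the original notebook):

```python
import numpy as np

# For a wireframe, X, Y, Z are 2-D arrays of identical shape: entry (j, i)
# holds the x-, y-, and z-coordinate sitting on grid row j, column i.
x = np.linspace(0.0, 1.0, 4)
y = np.linspace(0.0, 2.0, 3)
X, Y = np.meshgrid(x, y)    # both of shape (3, 4): one coordinate per grid point
Z = np.sin(X) * np.sin(Y)   # the z-value on top of each grid point
print(X.shape, Y.shape, Z.shape)  # → (3, 4) (3, 4) (3, 4)
```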
Making X,Y,Z axes for mplot3d from the .csv files
|
X_sinsin = np.array( [[i*hd[0] for i in range(WIDTH)] for j in range(HEIGHT)] )
Y_sinsin = np.array( [[j*hd[1] for i in range(WIDTH)] for j in range(HEIGHT)] )
Z_sinsinresult = np.array( [[result_list[i][j] for i in range(WIDTH)] for j in range(HEIGHT)] )
Z_sinsinogref = np.array( [[ogref_list[i][j] for i in range(WIDTH)] for j in range(HEIGHT)] )
fig02 = plt.figure()
ax02 = fig02.add_subplot(111,projection='3d')
ax02.plot_wireframe(X_sinsin, Y_sinsin, Z_sinsinresult )
plt.show()
fig02
fig03 = plt.figure()
ax03 = fig03.add_subplot(111,projection='3d')
ax03.plot_wireframe(X_sinsin, Y_sinsin, Z_sinsinogref )
plt.show()
fig03
|
moreCUDA/samples02/sinsin2dtex.ipynb
|
ernestyalumni/CompPhys
|
apache-2.0
|
As of the latest version, IndexSelector is only supported for interaction along the x-axis.
MultiSelector <a class="anchor" id="multiselector"></a>
This 1-D selector is equivalent to multiple brush selectors.
Usage:
The first brush works like a regular brush.
Ctrl + click creates a new brush, which works like the regular brush.
The active brush has a Green border while all the inactive brushes have a Red border.
Shift + click deactivates the current active brush. Now, click on any inactive brush to make it active.
Ctrl + Shift + click clears and resets all the brushes.
Each brush has a name (0, 1, 2, ... by default), and the selected attribute is a dict {brush_name: brush_extent}
|
create_figure(MultiSelector, scale=scales['x'])
|
examples/Interactions/Selectors.ipynb
|
SylvainCorlay/bqplot
|
apache-2.0
|
Load NumPy data with tf.data
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/numpy"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/numpy.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/numpy.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/numpy.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
This tutorial provides an example of loading data from NumPy arrays into a tf.data.Dataset.
This example loads the MNIST dataset from a .npz file. However, the source of the NumPy arrays does not matter.
Setup
|
import numpy as np
import tensorflow as tf
|
site/zh-cn/tutorials/load_data/numpy.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Load from a .npz file
|
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
train_examples = data['x_train']
train_labels = data['y_train']
test_examples = data['x_test']
test_labels = data['y_test']
|
site/zh-cn/tutorials/load_data/numpy.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Load NumPy arrays with tf.data.Dataset
Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset.
|
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
|
site/zh-cn/tutorials/load_data/numpy.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Use the datasets
Shuffle and batch the datasets
|
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
|
site/zh-cn/tutorials/load_data/numpy.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Build and train a model
|
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset, epochs=10)
model.evaluate(test_dataset)
|
site/zh-cn/tutorials/load_data/numpy.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Data pre-processing
First we separate the target variable
|
#
# Copy the 'wheat_type' series slice out of X, and into a series
# called 'y'. Then drop the original 'wheat_type' column from the X
#
y = X.wheat_type.copy()
X.drop(['wheat_type'], axis=1, inplace=True)
y_original = y
# Do a quick, "ordinal" conversion of 'y'.
#
y = y.astype("category").cat.codes
|
02-Classification/knn.ipynb
|
Mashimo/datascience
|
apache-2.0
|
Fix the invalid values
|
#
# Basic nan munging. Fill each row's nans with the mean of the feature
#
X.fillna(X.mean(), inplace=True)
|
02-Classification/knn.ipynb
|
Mashimo/datascience
|
apache-2.0
|
Split the data into training and testing datasets
|
from sklearn.model_selection import train_test_split
#
# Split X into training and testing data sets
#
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
random_state=1)
|
02-Classification/knn.ipynb
|
Mashimo/datascience
|
apache-2.0
|
Data normalisation
|
from sklearn import preprocessing
#
# Create an instance of SKLearn's Normalizer class and then train it
# using its .fit() method against the *training* data.
#
#
normaliser = preprocessing.Normalizer().fit(X_train)
#
# With the trained pre-processor, transform both training AND
# testing data.
#
# NOTE: Any testing data has to be transformed with the preprocessor
# that has been fit against the training data, so that it exists in the same
# feature-space as the original data used to train the models.
#
X_train_normalised = normaliser.transform(X_train)
X_train = pd.DataFrame(X_train_normalised)
X_test_normalised = normaliser.transform(X_test)
X_test = pd.DataFrame(X_test_normalised)
|
02-Classification/knn.ipynb
|
Mashimo/datascience
|
apache-2.0
|