What issues did you have? The first issue I had was that I was trying to output a single scalar whose value could be thresholded to determine whether the network should return TRUE or FALSE. It turns out loss functions for this are much more complicated than if I had instead treated the XOR problem as a classifica...
batch_size = 100
num_steps = 10000
num_hidden = 7
num_hidden_layers = 2
learning_rate = 0.2
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'sigmoid')
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'tanh')
xor_ne...
homeworks/XOR/HW1_report.ipynb
daphnei/nn_chatbot
mit
What architectures did you try? What were the different results? How long did it take? The results for several different architectures can be seen by running the code below. Since there is no reading from disk, each iteration takes almost exactly the same amount of time. Therefore, I will report "how long it takes" in ...
# Network with 2 hidden layers of 5 nodes
xor_network.run_network(batch_size, num_steps, 5, 2, learning_rate, False, 'relu')

# Network with 5 hidden layers of 2 nodes each
num_steps = 3000  # (so it doesn't go on forever)
xor_network.run_network(batch_size, num_steps, 2, 5, learning_rate, False, 'relu')
homeworks/XOR/HW1_report.ipynb
daphnei/nn_chatbot
mit
Conclusion from the above: With the number of parameters held constant, a deeper network does not necessarily perform better than a shallower one. My guess is that fewer nodes per layer means the network can carry less information from layer to layer.
xor_network.run_network(batch_size, num_steps, 3, 5, learning_rate, False, 'relu')
homeworks/XOR/HW1_report.ipynb
daphnei/nn_chatbot
mit
Conclusion from the above: Indeed, the problem is not the number of layers, but the number of nodes in each layer.
# This is the minimum number of nodes I can use to consistently get convergence with Gradient Descent.
xor_network.run_network(batch_size, num_steps, 5, 1, learning_rate, False, 'relu')

# If I switch to using Adam Optimizer, I can get down to 2 hidden nodes and consistently have convergence.
xor_network.run_network(ba...
homeworks/XOR/HW1_report.ipynb
daphnei/nn_chatbot
mit
What is the result of each of the following operations? 18/4 18//4 18%4
18%4
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/03. Numeros y jerarquía de operaciones.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
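The three operators above can be checked directly in Python; true division always returns a float, while floor division and modulo return the integer quotient and remainder:

```python
# True division, floor division, and modulo on the same operands
print(18 / 4)    # 4.5 (true division always gives a float)
print(18 // 4)   # 4   (floor division: the quotient, rounded down)
print(18 % 4)    # 2   (modulo: the remainder)
```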
Order of operations: parentheses; exponentiation; multiplication and division; addition and subtraction (left to right)
2 * (3-1)
(1+1)**(5-2)
2**1+1
3*1**3
2*3-1
5-2*2
6-3+2
6-(3+2)
100/100/2
100/100*2
100/(100*2)
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/03. Numeros y jerarquía de operaciones.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
What is the value of the following expression? 16 - 2 * 5 // 3 + 1 (a) 14 (b) 24 (c) 3 (d) 13.667 Variable assignment
x = 15
y = x
x == y
x = 22
x == y
x = x+1
x
x += 1
x
x -= 20
x
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/03. Numeros y jerarquía de operaciones.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
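The quiz expression can be verified step by step: `*` and `//` bind tighter than binary `+` and `-`, and operators of equal precedence evaluate left to right:

```python
step1 = 2 * 5            # multiplication first: 10
step2 = step1 // 3       # then floor division: 10 // 3 == 3
result = 16 - step2 + 1  # finally, subtraction and addition left to right
print(result)            # 14, so the answer is (a)
assert result == 16 - 2 * 5 // 3 + 1
```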
Constant Constant simply returns the same constant value every time.
g = Constant('quux')
print_generated_sequence(g, num=10, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Boolean Boolean returns either True or False, optionally with different probabilities.
g1 = Boolean()
g2 = Boolean(p=0.8)
print_generated_sequence(g1, num=20, seed=12345)
print_generated_sequence(g2, num=20, seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Integer Integer returns a random integer between low and high (both inclusive).
g = Integer(low=100, high=200)
print_generated_sequence(g, num=10, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Float Float returns a random float between low and high (both inclusive).
g = Float(low=2.3, high=4.2)
print_generated_sequence(g, num=10, sep='\n', fmt='.12f', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
HashDigest HashDigest returns hex strings representing hash digest values (or alternatively raw bytes). HashDigest hex strings (uppercase)
g = HashDigest(length=6)
print_generated_sequence(g, num=10, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
HashDigest hex strings (lowercase)
g = HashDigest(length=6, uppercase=False)
print_generated_sequence(g, num=10, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
HashDigest byte strings
g = HashDigest(length=10, as_bytes=True)
print_generated_sequence(g, num=5, seed=12345, sep='\n')
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
NumpyRandomGenerator This generator can produce random numbers using any of the random number generators supported by numpy.
g1 = NumpyRandomGenerator(method="normal", loc=3.0, scale=5.0)
g2 = NumpyRandomGenerator(method="poisson", lam=30)
g3 = NumpyRandomGenerator(method="exponential", scale=0.3)
g1.reset(seed=12345); print_generated_sequence(g1, num=4)
g2.reset(seed=12345); print_generated_sequence(g2, num=15)
g3.reset(seed=12345); print_...
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
FakerGenerator FakerGenerator gives access to any of the methods supported by the faker module. Here are a couple of examples. Example: random names
g = FakerGenerator(method='name')
print_generated_sequence(g, num=8, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Example: random addresses
g = FakerGenerator(method='address')
print_generated_sequence(g, num=8, seed=12345, sep='\n---\n')
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
IterateOver IterateOver is a generator which simply iterates over a given sequence. Note that once the generator has been exhausted (by iterating over all its elements), it needs to be reset before it can produce elements again.
seq = ['a', 'b', 'c', 'd', 'e']
g = IterateOver(seq)
g.reset()
print([x for x in g])
print([x for x in g])
g.reset()
print([x for x in g])
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
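The exhaust-then-reset behaviour described above is easy to picture as a plain Python iterator with an explicit reset. The following is a minimal sketch of the idea (not tohu's actual implementation):

```python
class IterateOverSketch:
    """Iterates over a fixed sequence; must be reset once exhausted."""

    def __init__(self, seq):
        self.seq = seq
        self._it = iter(seq)

    def reset(self):
        # Restart iteration from the beginning of the sequence
        self._it = iter(self.seq)
        return self

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._it)

g = IterateOverSketch(['a', 'b', 'c'])
g.reset()
print([x for x in g])  # ['a', 'b', 'c']
print([x for x in g])  # [] -- exhausted until reset() is called again
g.reset()
print([x for x in g])  # ['a', 'b', 'c']
```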
SelectOne SelectOne randomly selects one item from a given collection of values.
some_items = ['aa', 'bb', 'cc', 'dd', 'ee']
g = SelectOne(some_items)
print_generated_sequence(g, num=30, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
By default, all possible values are chosen with equal probability, but this can be changed by passing a distribution as the parameter p.
g = SelectOne(some_items, p=[0.1, 0.05, 0.7, 0.03, 0.12])
print_generated_sequence(g, num=30, seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
We can see that the item 'cc' has the highest chance of being selected (70%), followed by 'ee' and 'aa' (12% and 10%, respectively). Timestamp Timestamp produces random timestamps between a start and end time (both inclusive).
g = Timestamp(start='1998-03-01 00:02:00', end='1998-03-01 00:02:15')
print_generated_sequence(g, num=10, sep='\n', seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
If start or end are dates of the form YYYY-MM-DD (without the exact HH:MM:SS timestamp), they are interpreted as start='YYYY-MM-DD 00:00:00' and end='YYYY-MM-DD 23:59:59', respectively - i.e., as the beginning and the end of the day.
g = Timestamp(start='2018-02-14', end='2018-02-18')
print_generated_sequence(g, num=5, sep='\n', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
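Under the hood, a generator like this only needs a seeded RNG plus the start and end of the interval. A stdlib-only sketch of the idea (not tohu's implementation; the function name `random_timestamps` is made up for illustration):

```python
import random
from datetime import datetime, timedelta

def random_timestamps(start, end, num, seed):
    """Return `num` random datetimes in [start, end], at second resolution."""
    rng = random.Random(seed)
    span = int((end - start).total_seconds())
    # randint is inclusive on both ends, matching the "both inclusive" behaviour
    return [start + timedelta(seconds=rng.randint(0, span)) for _ in range(num)]

ts = random_timestamps(datetime(2018, 2, 14), datetime(2018, 2, 18, 23, 59, 59),
                       num=5, seed=12345)
for t in ts:
    print(t)
```

Seeding the RNG explicitly is what makes the sequence reproducible across calls.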
For convenience, one can also pass a single date, which will produce timestamps during that particular day.
g = Timestamp(date='2018-01-01')
print_generated_sequence(g, num=5, sep='\n', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Note that the generated items are datetime objects (even though they appear as strings when printed above).
g.reset(seed=12345)
[next(g), next(g), next(g)]
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
We can use the .strftime() method to create another generator which returns timestamps as strings instead of datetime objects.
h = Timestamp(date='2018-01-01').strftime('%-d %b %Y, %H:%M (%a)')
h.reset(seed=12345)
[next(h), next(h), next(h)]
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
CharString CharString generates random strings of characters of a given length.
g = CharString(length=15)
print_generated_sequence(g, num=5, seed=12345)
print_generated_sequence(g, num=5, seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
It is possible to explicitly specify the character set.
g = CharString(length=12, charset="ABCDEFG")
print_generated_sequence(g, num=5, sep='\n', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
There are also a few pre-defined character sets.
g1 = CharString(length=12, charset="<lowercase>")
g2 = CharString(length=12, charset="<alphanumeric_uppercase>")
print_generated_sequence(g1, num=5, sep='\n', seed=12345); print()
print_generated_sequence(g2, num=5, sep='\n', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
DigitString DigitString is the same as CharString with charset='0123456789'.
g = DigitString(length=15)
print_generated_sequence(g, num=5, seed=12345)
print_generated_sequence(g, num=5, seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Sequential Generates a sequence of sequentially numbered strings with a given prefix.
g = Sequential(prefix='Foo_', digits=3)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Calling reset() on the generator makes the numbering start from 1 again.
g.reset()
print_generated_sequence(g, num=5)
print_generated_sequence(g, num=5)
print()
g.reset()
print_generated_sequence(g, num=5)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Note that the method Sequential.reset() supports the seed argument for consistency with other generators, but its value is ignored - the generator is simply reset to its initial value. This is illustrated here:
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=99999); print_generated_sequence(g, num=5)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
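The reset-ignores-seed behaviour described above can be sketched with a small counter-based class (a hypothetical stand-in, not tohu's implementation):

```python
import itertools

class SequentialSketch:
    """Numbered strings with a prefix; reset() restarts at 1 and ignores `seed`."""

    def __init__(self, prefix, digits):
        self.prefix, self.digits = prefix, digits
        self.reset()

    def reset(self, seed=None):
        # `seed` is accepted for API consistency with other generators, but ignored
        self._counter = itertools.count(1)
        return self

    def __next__(self):
        return f"{self.prefix}{next(self._counter):0{self.digits}d}"

g = SequentialSketch(prefix='Foo_', digits=3)
print([next(g) for _ in range(3)])   # ['Foo_001', 'Foo_002', 'Foo_003']
g.reset(seed=99999)                   # the seed makes no difference
print(next(g))                        # 'Foo_001'
```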
In part 2 we learned that 'Orçamento de/do Estado' (State Budget) was not used before 1984, and that decree-laws were discussed more before 1983. But frankly we found nothing interesting. Let's speed up the process and look at more words:
# returns the number of occurrences of palavra in texto
def conta_palavra(texto, palavra):
    return texto.count(palavra)

# returns a vector with one item per session: True if the year equals i, False otherwise
def selecciona_ano(data, i):
    return data.map(lambda d: d.year == i)

# plots the histogram of the number of oc...
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
As we had seen before, 2000 was a very busy year for Paulo Portas. It seems his contributions come in waves.
histograma_palavra('Crise')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
There was always a crisis, but 2010 was a super-crisis.
histograma_palavra('aborto')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
The abortion debates seem to be well localized, in 1982, 1984, 1997/8 and 2005.
histograma_palavra('Euro')
histograma_palavra('Europa')
histograma_palavra('geringonça')
histograma_palavra('corrupção')
histograma_palavra('calúnia')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
It went out of fashion.
histograma_palavra('iraque')
histograma_palavra('china')
histograma_palavra('alemanha')
histograma_palavra('brasil')
histograma_palavra('internet')
histograma_palavra('telemóvel')
histograma_palavra('redes sociais')
histograma_palavra('sócrates')
histograma_palavra('droga')
histograma_palavra('aeroporto')
his...
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
And what if we want to accumulate several words in the same histogram?
def conta_palavras(texto, palavras):
    l = [texto.count(palavra.lower()) for palavra in palavras]
    return sum(l)

def selecciona_ano(data, i):
    return data.map(lambda d: d.year == i)

def histograma_palavras(palavras):
    dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto, palavras))
    ocorrencia...
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
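The counting step above reduces to mapping a word counter over the session texts and grouping the counts by year. A self-contained sketch with a toy stand-in for the `sessoes` DataFrame (its real contents are not shown here; only the `data` and `sessao` columns are assumed from the code above):

```python
import pandas as pd

# Toy stand-in for the `sessoes` DataFrame (hypothetical data)
sessoes = pd.DataFrame({
    'data': pd.to_datetime(['1989-03-01', '1989-06-01', '2011-05-10']),
    'sessao': ['crise e mais crise', 'sem nada', 'crise outra vez'],
})

def conta_palavras(texto, palavras):
    # total occurrences of all the (lowercased) words in the text
    return sum(texto.count(p.lower()) for p in palavras)

contagens = sessoes['sessao'].map(lambda t: conta_palavras(t, ['Crise']))
por_ano = contagens.groupby(sessoes['data'].dt.year).sum()
print(por_ano)   # 1989 -> 2 occurrences, 2011 -> 1
```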
The European Union was founded around '93 and the EEC was integrated into it (according to Wikipedia), so the graph makes sense. Let's create a function to combine the 2 graphs, to let us compare the evolution:
def conta_palavras(texto, palavras):
    l = [texto.count(palavra) for palavra in palavras]
    return sum(l)

def selecciona_ano(data, i):
    return data.map(lambda d: d.year == i)

# computes the data for the 2 histograms and draws them on the same chart
def grafico_palavras_vs_palavras(palavras1, palavras2):
    pal...
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
Nice, one basically replaces the other.
grafico_palavras_vs_palavras(['contos','escudo'],['euro.','euro ','euros'])
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
Again, one replaces the other.
histograma_palavra('Troika')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
OK, this looks like a mystery. The troika was mentioned much more in 1989 than in 2011. Let's investigate by finding and showing the sentences where the words appear. We want to know what was said when 'Troika' was mentioned in parliament. Let's try to find and print the sentences containing the >70 occurrences of tro...
sessoes_1989 = sessoes[selecciona_ano(sessoes['data'], 1989)]
sessoes_2011 = sessoes[selecciona_ano(sessoes['data'], 2011)]

def divide_em_frases(texto):
    return texto.replace('!', '.').replace('?', '.').split('.')

def acumula_lista_de_lista(l):
    return [j for x in l for j in x]

def selecciona_frases_com_palav...
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
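The sentence-selection helper is truncated above; a plausible completion of the same idea (the name `selecciona_frases_com_palavra` continues the truncated definition, so treat this as a sketch rather than the notebook's exact code):

```python
def divide_em_frases(texto):
    # Treat '!', '?' and '.' as sentence boundaries, as in the notebook
    return texto.replace('!', '.').replace('?', '.').split('.')

def selecciona_frases_com_palavra(texto, palavra):
    """Return the sentences of `texto` containing `palavra` (case-insensitive)."""
    return [f.strip() for f in divide_em_frases(texto)
            if palavra.lower() in f.lower()]

texto = "A troika chegou. Nada mudou! A Troica voltou?"
print(selecciona_frases_com_palavra(texto, 'troica'))  # ['A Troica voltou']
```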
As we see in the last sentence, the truth is that in parliament the term 'Troica' is used more than 'Troika'! In the media, 'Troika' is used a lot. And for those who don't know what the perestroika was: https://pt.wikipedia.org/wiki/Perestroika OK, now it makes sense:
def conta_palavras(texto, palavras):
    l = [texto.count(palavra) for palavra in palavras]
    return sum(l)

def selecciona_ano(data, i):
    return data.map(lambda d: d.year == i)

# computes the data for the 2 histograms and draws them on the same chart
def grafico_palavras_vs_palavras(palavras1, palavras2):
    pal...
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
Set the operating parameters to the default values:
def set_fpe_defaults(fpe):
    "Set the FPE to the default operating parameters, and outputs a table of the default values"
    defaults = {}
    for k in range(len(fpe.ops.address)):
        if fpe.ops.address[k] is None:
            continue
        fpe.ops.address[k].value = fpe.ops.address[k].default
        defaul...
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Get, sort, and print the default operating parameters:
from tessfpe.data.operating_parameters import operating_parameters

for k in sorted(operating_parameters.keys()):
    v = operating_parameters[k]
    print k, ":", v["default"], v["unit"]
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Take a number of sets of housekeeping data, with one operating parameter varying across its control range, then repeat for every operating parameter:
def get_base_name(name):
    import re
    if '_offset' not in name:
        return None
    offset_name = name
    derived_parameter_name = name.replace('_offset', '')
    base_name = None
    if 'low' in derived_parameter_name:
        base_name = derived_parameter_name.replace('low', 'high')
    if 'high...
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Set up to plot:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import numpy as np
import matplotlib.pyplot as plt
import pylab
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Plot selected data:
def get_range_square(X, Y):
    return [min(X + Y) - 1, max(X + Y) + 1]

# Plot the set vs. measured values of selected channels:
for nom in sorted(data.keys()):
    print nom
    for base_value in sorted(data[nom].keys()):
        print base_value
        X = data[nom][base_value]["X"]
        Y = data[nom][base_value]["Y"...
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Sensitivity map of SSP projections This example shows the sources that have a forward field similar to the first SSP vector correcting for ECG.
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause

import matplotlib.pyplot as plt

from mne import read_forward_solution, read_proj, sensitivity_map
from mne.datasets import sample

print(__doc__)

data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data...
stable/_downloads/82d9c13e00105df6fd0ebed67b862464/ssp_projs_sensitivity_map.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show sensitivity map
plt.hist(ssp_ecg_map.data.ravel())
plt.show()

args = dict(clim=dict(kind='value', lims=(0.2, 0.6, 1.)),
            smoothing_steps=7, hemi='rh', subjects_dir=subjects_dir)
ssp_ecg_map.plot(subject='sample', time_label='ECG SSP sensitivity', **args)
stable/_downloads/82d9c13e00105df6fd0ebed67b862464/ssp_projs_sensitivity_map.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
1. Represent Read Article in terms of Topic Vector
article_topic_distribution = pd.read_csv(PATH_ARTICLE_TOPIC_DISTRIBUTION)
article_topic_distribution.shape
article_topic_distribution.head()
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
Generate Article-Topic Distribution matrix
# Pivot the dataframe
article_topic_pivot = article_topic_distribution.pivot(index='Article_Id', columns='Topic_Id', values='Topic_Weight')
# Fill NaN with 0
article_topic_pivot.fillna(value=0, inplace=True)
# Get the values in the dataframe as a matrix
articles_topic_matrix = article_topic_pivot.values
articles_topic_matrix.sh...
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
2. Represent user in terms of Topic Vector of read articles A user vector is represented as the average of the topic vectors of the articles they have read.
# Select user in terms of read article topic distribution
row_idx = np.array(ARTICLES_READ)
read_articles_topic_matrix = articles_topic_matrix[row_idx[:, None]]
# Calculate the average of read articles topic vector
user_vector = np.mean(read_articles_topic_matrix, axis=0)
user_vector.shape
user_vector
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
3. Calculate cosine similarity between read and unread articles
def calculate_cosine_similarity(articles_topic_matrix, user_vector):
    articles_similarity_score = cosine_similarity(articles_topic_matrix, user_vector)
    recommended_articles_id = articles_similarity_score.flatten().argsort()[::-1]
    # Remove read articles from recommendations
    final_recommended_articles_id = [a...
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
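The ranking step above is just cosine similarity of each article row against the user vector, followed by a descending argsort. A self-contained numpy sketch (the helper name `cosine_similarity_matrix` is hypothetical; the notebook uses sklearn's `cosine_similarity`):

```python
import numpy as np

def cosine_similarity_matrix(M, v):
    """Cosine similarity of each row of M against the vector v."""
    v = v.ravel()
    return (M @ v) / (np.linalg.norm(M, axis=1) * np.linalg.norm(v))

# Three toy articles in a 2-topic space, and a user who likes both topics equally
articles = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
user = np.array([1.0, 1.0])

scores = cosine_similarity_matrix(articles, user)
ranked = scores.argsort()[::-1]   # indices of articles, best match first
print(ranked)                     # article 2 ([1, 1]) ranks first
```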
4. Recommendation Using Topic Model:
# Recommended Articles and their titles
news_articles = pd.read_csv(PATH_NEWS_ARTICLES)
print 'Articles Read'
print news_articles.loc[news_articles['Article_Id'].isin(ARTICLES_READ)]['Title']
print '\n'
print 'Recommender '
print news_articles.loc[news_articles['Article_Id'].isin(recommended_articles_id)]['Title']
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
Topics + NER Recommender Represent the user as (Alpha * [Topic Vector]) + ((1 - Alpha) * [NER Vector]), where Alpha is in [0,1], [Topic Vector] is the topic vector representation of the concatenated read articles, and [NER Vector] is the topic vector represent...
ALPHA = 0.5
DICTIONARY_PATH = "/home/phoenix/Documents/HandsOn/Final/python/Topic Model/model/dictionary_of_words.p"
LDA_MODEL_PATH = "/home/phoenix/Documents/HandsOn/Final/python/Topic Model/model/lda.model"

from nltk import word_tokenize, pos_tag, ne_chunk
from nltk.chunk import tree2conlltags
import re
from nltk.co...
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
1. Represent User in terms of Topic Distribution and NER 1.1 Represent user in terms of read article topic distribution 1.2 Represent user in terms of NERs associated with read articles (get NERs of read articles, load the LDA model, get the topic distribution for the concatenated NERs) 1.3 Generate user vecto...
row_idx = np.array(ARTICLES_READ)
read_articles_topic_matrix = articles_topic_matrix[row_idx[:, None]]
# Calculate the average of read articles topic vector
user_topic_vector = np.mean(read_articles_topic_matrix, axis=0)
user_topic_vector.shape
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
1.2. Represent user in terms of NERs associated with read articles
# Get NERs of read articles
def get_ner(article):
    ne_tree = ne_chunk(pos_tag(word_tokenize(article)))
    iob_tagged = tree2conlltags(ne_tree)
    # Discard tokens with the 'O' (Other) tag
    ner_token = ' '.join([token for token, pos, ner_tag in iob_tagged if not ner_tag == u'O'])
    return ner_token

articles = news_articles...
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
1.3. Generate user vector
alpha_topic_vector = ALPHA * user_topic_vector
alpha_ner_vector = (1 - ALPHA) * user_ner_vector
user_vector = np.add(alpha_topic_vector, alpha_ner_vector)
user_vector
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
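The blend above is a plain convex combination of the two representations. A tiny numpy check with hypothetical vectors (the values are made up for illustration):

```python
import numpy as np

ALPHA = 0.5                                # weight on the topic representation
user_topic_vector = np.array([0.2, 0.8])   # hypothetical topic vector
user_ner_vector = np.array([0.6, 0.4])     # hypothetical NER-based vector

# Convex combination: weights sum to 1, so the result stays in the same scale
user_vector = ALPHA * user_topic_vector + (1 - ALPHA) * user_ner_vector
print(user_vector)   # [0.4 0.6]
```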
2. Calculate cosine similarity between user vector and articles Topic matrix
recommended_articles_id = calculate_cosine_similarity(articles_topic_matrix, user_vector)
recommended_articles_id
# [array([ 0.75807146]), array([ 0.74644157]), array([ 0.74440326]), array([ 0.7420562]), array([ 0.73966259])]
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
3. Get recommended articles
# Recommended Articles and their titles
news_articles = pd.read_csv(PATH_NEWS_ARTICLES)
print 'Articles Read'
print news_articles.loc[news_articles['Article_Id'].isin(ARTICLES_READ)]['Title']
print '\n'
print 'Recommender '
print news_articles.loc[news_articles['Article_Id'].isin(recommended_articles_id)]['Title']
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
Then, load the data (takes a few moments):
# Load data
uda = pd.read_csv("./aws-data/user_dist.txt", sep="\t")       # User distribution, all
udf = pd.read_csv("./aws-data/user_dist_fl.txt", sep="\t")    # User distribution, Florence
dra = pd.read_csv("./aws-data/user_duration.txt", sep="\t")   # Duration, all
drf = pd.read_csv("./aws-data/user_duration_fl.txt", sep="\t"...
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
The code below creates a calls-per-person frequency distribution, which is the first thing we want to see.
fr.plot(x='days', y='frequency', style='o-', logy=True, figsize=(10, 10))
plt.ylabel('Number of people')
plt.axvline(14, ls='dotted')
plt.title('Foreign SIM days between first and last instances in Florence')

cvd = udf.merge(drf, left_on='cust_id', right_on='cust_id', how='outer')  # Count versus days
cvd.plot.scatter...
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
Plot this distribution. It shows that, over the 4 months, 19344 people made 1 call, 36466 people made 2 calls, 41900 people made 3 calls, etc.
fr = udf['count'].value_counts().to_frame()
fr.columns = ['frequency']
fr.index.name = 'calls'
fr.reset_index(inplace=True)
fr = fr.sort_values('calls')
fr['cumulative'] = fr['frequency'].cumsum()/fr['frequency'].sum()
fr.head()
fr.plot(x='calls', y='frequency', style='o-', logx=True, figsize=(10, 10))
# plt.axvline...
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
It might be more helpful to look at a cumulative distribution curve, from which we can read off quantiles (e.g., the percentage of people in the data set who had x or more calls, or x or fewer calls). Specifically, 10% of people have 3 or fewer calls over the entire period, 25% have 7 or fewer, 33% have 10 or fewer, 50% ...
fr.plot(x='calls', y='cumulative', style='o-', logx=True, figsize=(10, 10))
plt.axhline(1.0, ls='dotted', lw=.5)
plt.axhline(.90, ls='dotted', lw=.5)
plt.axhline(.75, ls='dotted', lw=.5)
plt.axhline(.67, ls='dotted', lw=.5)
plt.axhline(.50, ls='dotted', lw=.5)
plt.axhline(.33, ls='dotted', lw=.5)
plt.axhline(.25, ls='dotted', lw=....
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
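The quantile-reading trick above boils down to a cumulative-frequency column. A self-contained sketch with hypothetical call counts (not the notebook's data):

```python
import pandas as pd

# Hypothetical number of calls made by each of 10 people
counts = pd.Series([1, 1, 2, 3, 3, 3, 7, 10, 20, 50])

fr = counts.value_counts().sort_index().to_frame('frequency')
fr['cumulative'] = fr['frequency'].cumsum() / fr['frequency'].sum()
print(fr)

# Read off a quantile: the smallest call count covering at least 50% of people
print(fr.index[fr['cumulative'] >= 0.5][0])   # 3
```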
We also want to look at the number of unique lat-long pairs, which will (roughly) correspond to where the cell phone towers are and/or to the level of coordinate truncation. This takes too long in pandas, so we use postgres, piping the results of the query: \o towers_with_counts.txt select lat, lon, count(*) as calls, count(...
df2 = pd.read_table("./aws-data/towers_with_counts2.txt") df2.head()
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
Do the same thing as above.
fr2 = df2['count'].value_counts().to_frame()
fr2.columns = ['frequency']
fr2.index.name = 'count'
fr2.reset_index(inplace=True)
fr2 = fr2.sort_values('count')
fr2['cumulative'] = fr2['frequency'].cumsum()/fr2['frequency'].sum()
fr2.head()
fr2.plot(x='count', y='frequency', style='o-', logx=True, figsize=(10, 10))
# ...
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
Unlike the previous plot, this is not very clean at all, making the cumulative distribution plot critical.
fr2.plot(x='count', y='cumulative', style='o-', logx=True, figsize=(10, 10))
plt.axhline(0.1, ls='dotted', lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative'] < .10]), ls='dotted', lw=.5)
plt.axhline(0.5, ls='dotted', lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative'] < .50]), ls='dotted', lw=.5)
plt.axhline(0.9, ls='dotted'...
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
Now, we want to look at temporal data. First, convert the categorical date_time_m to a datetime object; then, extract the date component.
df['datetime'] = pd.to_datetime(df['date_time_m'], format='%Y-%m-%d %H:%M:%S')
df['date'] = df['datetime'].dt.floor('d')  # Faster than df['datetime'].dt.date

df2 = df.groupby(['cust_id', 'date']).size().to_frame()
df2.columns = ['count']
df2.index.name = 'date'
df2.reset_index(inplace=True)
df2.head(20)

df3 = (df2.gro...
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
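The convert-floor-group pipeline above can be demonstrated end to end on a tiny hypothetical frame (the column names mirror the ones used in the notebook):

```python
import pandas as pd

# Hypothetical call records: customer id plus a timestamp string
df = pd.DataFrame({
    'cust_id': [1, 1, 2],
    'date_time_m': ['2016-06-01 10:00:00', '2016-06-01 23:30:00',
                    '2016-06-02 08:00:00'],
})

df['datetime'] = pd.to_datetime(df['date_time_m'], format='%Y-%m-%d %H:%M:%S')
df['date'] = df['datetime'].dt.floor('d')      # strip the time-of-day component
per_day = df.groupby(['cust_id', 'date']).size()
print(per_day)   # customer 1 has 2 calls on 2016-06-01, customer 2 has 1 on 06-02
```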
3. Read tables from websites pandas is cool - Use pd.read_html(url) - It returns a list of all tables on the website - It tries to guess the encoding of the website, but without much success.
df = pd.read_html("https://piie.com/summary-economic-sanctions-episodes-1914-2006", encoding="UTF-8")
print(type(df), len(df))
df
df[0].head(10)
df[0].columns
df = pd.read_html("https://piie.com/summary-economic-sanctions-episodes-1914-2006", encoding="UTF-8")
df = df[0]
print(df.columns)
df.columns = ['Year imposed', ...
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
4. Parse dates pandas is cool - Use parse_dates=[columns] when reading the file - It parses the dates automatically 4.1. Use parse_dates when reading the file
df = pd.read_csv("data/exchange-rate-twi-may-1970-aug-1.tsv", sep="\t", parse_dates=["Month"], skipfooter=2)
df.head()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
4.2. You can now filter by date
# filter by time
df_after1980 = df.loc[df["Month"] > "1980-05-02"]  # year-month-day
df_after1980.columns = ["Date", "Rate"]
df_after1980.head()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
4.3. And still extract columns of year and month
# make columns with year and month (useful for models)
df_after1980["Year"] = df_after1980["Date"].apply(lambda x: x.year)
df_after1980["Month"] = df_after1980["Date"].apply(lambda x: x.month)
df_after1980.head()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
4.4. You can resample the data with a specific frequency Very similar to groupby: it groups the data with a specific frequency. "A" = end of year, "B" = business day; others: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases Then you tell pandas to apply a function to the group (mean/max/median...
# resample
df_after1980_resampled = df_after1980.resample("A", on="Date").mean()
display(df_after1980_resampled.head())
df_after1980_resampled = df_after1980_resampled.reset_index()
df_after1980_resampled.head()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
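The year-end resample can be checked on synthetic monthly data; each calendar year collapses to the mean of its twelve values. (Note: newer pandas renames the "A" alias to "YE", so this sketch picks whichever the installed version supports.)

```python
import pandas as pd

# Two years of monthly observations with values 0..23
rng = pd.date_range('1980-01-01', periods=24, freq='MS')
df = pd.DataFrame({'Date': rng, 'Rate': range(24)})

# Year-end alias: "A" historically, "YE" in pandas >= 2.2
freq = "A" if pd.__version__ < "2.2" else "YE"
yearly = df.resample(freq, on='Date').mean(numeric_only=True)
print(yearly['Rate'].tolist())   # [5.5, 17.5] -- mean of 0..11, then of 12..23
```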
4.5 And of course plot it with a line plot
# Let's visualize it
plt.figure(figsize=(6, 4))
plt.plot(df_after1980["Date"], df_after1980["Rate"], label="Before resampling")
plt.plot(df_after1980_resampled["Date"], df_after1980_resampled["Rate"], label="After resampling")
plt.xlabel("Time")
plt.ylabel("Rate")
plt.legend()
plt.show()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
We can also implement this test with a while loop instead of a for loop. This doesn't make much of a difference in Python 3.x. (In Python 2.x, it would save memory.)
def is_prime(n):
    '''
    Checks whether the argument n is a prime number.
    Uses a brute force search for factors between 1 and n.
    '''
    j = 2
    while j < n:  # j will proceed through the list of numbers 2,3,...,n-1.
        if n % j == 0:  # is n divisible by j?
            print("{} is a factor of {}.".fo...
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
If $n$ is a prime number, then the is_prime(n) function will iterate through all the numbers between $2$ and $n-1$. But this is overkill! Indeed, if $n$ is not prime, it will have a factor between $2$ and the square root of $n$. This is because factors come in pairs: if $ab = n$, then one of the factors, $a$ or $b$...
from math import sqrt
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
This command imports the square root function (sqrt) from the package called math. Now you can find square roots.
sqrt(1000)
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
There are a few different ways to import functions from packages. The above syntax is a good starting point, but sometimes problems can arise if different packages have functions with the same name. Here are a few methods of importing the sqrt function and how they differ. from math import sqrt: After this command, ...
import math
math.sqrt(1000)
factorial(10)  # This will cause an error!
math.factorial(10)  # This is ok, since the math package comes with a function called factorial.
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Now let's improve our is_prime(n) function by searching for factors only up to the square root of the number n. We consider two options.
def is_prime_slow(n):
    '''
    Checks whether the argument n is a prime number.
    Uses a brute force search for factors between 1 and n.
    '''
    j = 2
    while j <= sqrt(n):  # j will proceed through the list of numbers 2,3,... up to sqrt(n).
        if n % j == 0:  # is n divisible by j?
            print("{} ...
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
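The notebook's definition is truncated above; a complete sketch of the sqrt-bounded trial-division idea, written to compute the bound once before the loop (the exact body in the notebook may differ):

```python
from math import sqrt

def is_prime_fast(n):
    """Trial division up to sqrt(n), computing the bound once before the loop."""
    if n < 2:
        return False
    j = 2
    bound = int(sqrt(n))   # computed once, not on every loop iteration
    while j <= bound:
        if n % j == 0:     # j divides n, so n is composite
            return False
        j += 1
    return True

print(is_prime_fast(1000003))   # True
print(is_prime_fast(1000001))   # False (1000001 = 101 * 9901)
```

Hoisting `sqrt(n)` out of the loop condition is the point of the "fast" variant: the slow one re-evaluates the square root on every iteration.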
I've chosen function names with "fast" and "slow" in them. But what makes them faster or slower? Are they faster than the original? And how can we tell? Python comes with a great set of tools for these questions. The simplest (for the user) are the time utilities. By placing the magic %timeit before a command, Pyt...
%timeit is_prime_fast(1000003)
%timeit is_prime_slow(1000003)
%timeit is_prime(1000003)
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Time is measured in seconds, milliseconds (1 ms = 1/1000 second), microseconds (1 µs = 1/1,000,000 second), and nanoseconds (1 ns = 1/1,000,000,000 second). So it might appear at first that is_prime is the fastest, or about the same speed. But check the units! The other two approaches are about a thousand times fast...
is_prime_fast(10000000000037) # Don't try this with `is_prime` unless you want to wait for a long time!
Indeed, the is_prime_fast(n) function will go through a loop of length about sqrt(n) when n is prime. But is_prime(n) will go through a loop of length about n. Since sqrt(n) is much less than n, especially when n is large, the is_prime_fast(n) function is much faster. Between is_prime_fast and is_prime_slow, the diff...
is_prime_fast(10**14 + 37) # This might get a bit of delay.
Now we have a function is_prime_fast(n) that is speedy for numbers n in the trillions! You'll probably start to hit a delay around $10^{15}$ or so, and the delays will become intolerable if you add too many more digits. In a future lesson, we will see a different primality test that will be essentially instant even f...
L = [0,'one',2,'three',4,'five',6,'seven',8,'nine',10]
List terms and indices
Notice that the entries in a list can be of any type. The above list L has some integer entries and some string entries. Lists in Python are ordered, with indices starting at zero. One can access the $n^{th}$ entry in a list with a command like L[n].
L[3]
print(L[3])  # Note that Python has slightly different approaches to the print-function, and the output above.
print(L[4])  # We will use the print function, because it makes our printing intentions clear.
print(L[0])
The location of an entry is called its index. So at index 3, the list L stores the entry 'three'. Note that the same entry can occur in many places in a list. E.g., [7,7,7] is a list with 7 at the zeroth, first, and second index.
print(L[-1])
print(L[-2])
The last bit of code demonstrates a cool Python trick. The "-1st" entry in a list refers to the last entry. The "-2nd entry" refers to the second-to-last entry, and so on. It gives a convenient way to access both sides of the list, even if you don't know how long it is. Of course, you can use Python to find out how l...
len(L)
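Negative indices and len fit together neatly: L[-k] is shorthand for L[len(L) - k], which can be checked directly.

```python
L = [0, 'one', 2, 'three', 4, 'five', 6, 'seven', 8, 'nine', 10]

# L[-k] is shorthand for L[len(L) - k]:
print(L[-1] == L[len(L) - 1])
print(L[-2] == L[len(L) - 2])
```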
You can also use Python to find the sum of a list of numbers.
sum([1,2,3,4,5])
sum(range(100))  # Be careful. This is the sum of which numbers?
                 # The sum function can take lists or ranges.
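To answer the question in the comment above: range(100) runs from 0 through 99, so its sum is 4950, not the sum of 1 through 100.

```python
print(sum(range(100)))     # 0 + 1 + ... + 99  = 4950
print(sum(range(1, 101)))  # 1 + 2 + ... + 100 = 5050
```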
List slicing
Slicing lists allows us to create new lists (or ranges) from old lists (or ranges), by chopping off one end or the other, or even by slicing out entries at a fixed interval. The simplest syntax has the form L[a:b], where a is the index of the first entry and the index of the final entry is one less than b...
L[0:5]
L[5:11]  # Notice that L[0:5] and L[5:11] together recover the whole list.
L[3:7]
This continues the strange (for beginners) Python convention of starting at the first number and ending just before the last number. Compare to range(3,7), for example. The command L[0:5] can be replaced by L[:5] to abbreviate. The empty opening index tells Python to start at the beginning. Similarly, the command ...
L[:5]
L[3:]
Just like the range command, list slicing can take an optional third argument to give a step size. To understand this, try the command below.
L[2:10]
L[2:10:3]
If, in this three-argument syntax, the first or second argument is absent, then the slice starts at the beginning of the list or ends at the end of the list accordingly.
L        # Just a reminder. We haven't modified the original list!
L[:9:3]  # Start at zero, go up to (but not including) 9, by steps of 3.
L[2::3]  # Start at two, go up through the end of the list, by steps of 3.
L[::3]   # Start at zero, go up through the end of the list, by steps of 3.
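One more step-size trick, not shown above: a negative step walks the list backwards, so L[::-1] is a reversed copy of the whole list.

```python
L = [0, 'one', 2, 'three', 4, 'five', 6, 'seven', 8, 'nine', 10]

print(L[::-1])     # the whole list, reversed
print(L[10:4:-2])  # from index 10 down to (but not including) 4, by steps of -2
```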
Changing list slices
Not only can we extract and study terms or slices of a list, we can also change them by assignment. The simplest case is changing a single term of a list.
print(L)        # Start with the list L.
L[5] = 'Bacon!'
print(L)        # What do you think L is now?
print(L[2::3])  # What do you think this will do?
We can change an entire slice of a list with a single assignment. Let's change the first two terms of L in one line.
L[:2] = ['Pancakes', 'Ham']  # What was L[:2] before?
print(L)  # Oh... what have we done!
L[0]
L[1]
L[2]
We can change a slice of a list with a single assignment, even when that slice does not consist of consecutive terms. Try to predict what the following commands will do.
print(L)  # Let's see what the list looks like before.
L[::2] = ['A','B','C','D','E','F']  # What was L[::2] before this assignment?
print(L)  # What do you predict?
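One caveat worth knowing: assigning to an extended slice (one with a step) requires the right-hand list to have exactly the same length as the slice. This is why the sieve cells below build lists of Nones with [None] * len(...).

```python
M = list(range(10))
M[::2] = ['a', 'b', 'c', 'd', 'e']  # 5 slots, 5 values: fine
print(M)

try:
    M[::2] = ['x', 'y']  # only 2 values for 5 slots
except ValueError as err:
    print(err)  # extended slices demand an exact length match
```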
Exercises
Create a list L with L = [1,2,3,...,100] (all the numbers from 1 to 100). What is L[50]? Take the same list L, and extract a slice of the form [5,10,15,...,95] with a command of the form L[a:b:c]. Take the same list L, and change all the even numbers to zeros, so that L looks like [1,0,3,0,5,0,...,99,0...
primes = list(range(100)) # Let's start with the numbers 0...99.
Now, to "filter", i.e., to say that a number is not prime, let's just change the number to the value None.
primes[0] = None  # Zero is not prime.
primes[1] = None  # One is not prime.
print(primes)  # What have we done?
Now let's filter out the multiples of 2, starting at 4. This is the slice primes[4::2].
primes[4::2] = [None] * len(primes[4::2])  # The right side is a list of Nones, of the necessary length.
print(primes)  # What have we done?
Now we filter out the multiples of 3, starting at 9.
primes[9::3] = [None] * len(primes[9::3])  # The right side is a list of Nones, of the necessary length.
print(primes)  # What have we done?
Next, the multiples of 5, starting at 25 (the first multiple of 5 greater than 5 that's left!).
primes[25::5] = [None] * len(primes[25::5])  # The right side is a list of Nones, of the necessary length.
print(primes)  # What have we done?