2.2 Analysis of the Cycle
# Part (a)
eta = ((h1-h2)-(h4-h3))/(h1-h4)  # thermal efficiency

# Result for part (a)
print('Thermal efficiency is: {:>.2f}%'.format(100*eta))

# Part (b)
Wcycledot = 100  # given, a net power output of 100 MW

# Calculations
mdot = (Wcycledot*(10**3)*3600)/((h1-h2)-(h4-h3))

# Result...
notebook/RankineCycle81-82.ipynb
PySEE/PyRankine
mit
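Since the cell above is truncated, here is a minimal, self-contained sketch of the same analysis. The enthalpy values h1..h4 (kJ/kg) are hypothetical placeholders, not the notebook's data; the formulas match the cell above.

h1, h2, h3, h4 = 3348.4, 2136.0, 173.9, 181.1  # hypothetical state enthalpies, kJ/kg

# Part (a): thermal efficiency = (turbine work - pump work) / heat added
eta = ((h1 - h2) - (h4 - h3)) / (h1 - h4)
print('Thermal efficiency is: {:>.2f}%'.format(100 * eta))

# Part (b): mass flow rate needed for a 100 MW net power output
Wcycledot = 100  # MW
mdot = (Wcycledot * (10**3) * 3600) / ((h1 - h2) - (h4 - h3))  # kg/h
print('Mass flow rate is: {:.2f} kg/h'.format(mdot))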
1.2.3 T-S Diagram
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

plt.figure(figsize=(10.0, 5.0))

# saturated vapor and liquid entropy lines
npt = np.linspace(10, 647.096-273.15, 200)  # range of temperatures
svap = [s for s in [tx2s(t, 1) for t in npt]]
sliq = [s for s in [tx2s(t, 0) for t in npt]]
plt.plot(svap, ...
notebook/RankineCycle81-82.ipynb
PySEE/PyRankine
mit
Python is an excellent general-purpose language, with a clear and readable syntax, useful data types (strings, lists, sets, dictionaries, etc.), and a vast standard library. However, it is not a language designed specifically for mathematics and scientific computing. There is no easy way t...
import numpy as np
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
NumPy, at its core, provides just an array object. <img height="300" src="files/anatomyarray.png" >
lst = [10, 20, 30, 40]
arr = np.array([10, 20, 30, 40])
print(lst)
print(arr)
print(lst[0], arr[0])
print(lst[-1], arr[-1])
print(lst[2:], arr[2:])
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
The difference between a list and an array is that arrays are homogeneous!
lst[-1] = 'Um string'
lst
arr[-1] = 'Um string'  # raises ValueError: arrays are homogeneous
arr
arr.dtype
arr[-1] = 1.234  # the float is truncated to the array's integer dtype
arr
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
Returning to our lists a and b
a = [0.1, 0.25, 0.03]
b = [400, 5000, 6e4]
a = np.array(a)
b = np.array(b)
c = a + b
c
np.tanh([a, b])
a * b
np.dot(a, b)
np.matrix(a) * np.matrix(b).T
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
Data types:
- bool
- uint8
- int (in Python 2 it is machine dependent)
- int8
- int32
- int64
- float (always machine dependent; Matlab's double)
- float32
- float64

(http://docs.scipy.org/doc/numpy/user/basics.types.html)

Curiosities...
np.array(255, dtype=np.uint8)

float_info = '{finfo.dtype}: max={finfo.max:<18}, approx decimal precision={finfo.precision};'
print(float_info.format(finfo=np.finfo(np.float32)))
print(float_info.format(finfo=np.finfo(np.float64)))
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
https://en.wikipedia.org/wiki/Floating_point Creating arrays:
np.zeros(3, dtype=int)
np.zeros(5, dtype=float)
np.ones(5, dtype=complex)
a = np.empty([3, 3])
a
a.fill(np.NaN)
a
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
Array methods
a = np.array([[1, 2, 3], [1, 2, 3]])
a
print('Data type                : {}'.format(a.dtype))
print('Total number of elements : {}'.format(a.size))
print('Number of dimensions     : {}'.format(a.ndim))
print('Shape                    : {}'.format(a.shape))
print('Memory in bytes          : {}'.format(a.nbytes))
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
Other useful mathematical/statistical methods:
print('Minimum and maximum             : {} and {}'.format(a.min(), a.max()))
print('Sum and product of all elements : {} and {}'.format(a.sum(), a.prod()))
print('Mean and standard deviation     : {} and {}'.format(a.mean(), a.std()))
a.mean(axis=0)
a.mean(axis=1)
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
Methods that help with array creation.
np.zeros(a.shape) == np.zeros_like(a)
np.arange(1, 2, 0.2)
a = np.linspace(1, 10, 5)  # See also `np.logspace`
a
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
5 random samples drawn from the standard normal distribution (mean 0, variance 1).
np.random.randn(5)
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
5 random samples drawn from the normal distribution with mean 10 and standard deviation 3.
np.random.normal(10, 3, 5)
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
Conditional mask
mask = np.where(a <= 5)  # For those still living in the Matlab(TM) world.
mask
mask = a <= 5  # Better, no?
mask
a[mask]
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
We also have masked_arrays
import numpy.ma as ma
ma.masked_array(a, mask)
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
Saving and reloading data: np.save, np.savez, np.load
a = np.random.rand(10)
b = np.linspace(0, 10, 10)
np.save('arquivo_a', a)
np.save('arquivo_b', b)
np.savez('arquivo_ab', a=a, b=b)

%%bash
ls *.np*

c = np.load('arquivo_ab.npz')
c.files
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
Operations: +, -, *, /, //, **, %
c['b'] // c['a']
a = np.array([1, 2, 3])
a **= 2
a
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
Working with real data. We will use data from the PIRATA ocean observation program: http://www.goosbrasil.org/pirata/dados/
np.loadtxt("./data/dados_pirata.csv", delimiter=',')

!head -3 ./data/dados_pirata.csv

data = np.loadtxt("./data/dados_pirata.csv", skiprows=1, usecols=range(2, 16), delimiter=',')
data.shape, data.dtype
data[data == -99999.] = np.NaN
data
data.max(), data.min()
np.nanmax(data), np.nanmin(data)
np.nanargmax(data)...
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
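The cell above is truncated, so here is a small, self-contained sketch of the sentinel-to-NaN pattern it uses, on synthetic data instead of the PIRATA file (the -99999. sentinel comes from the cell above):

import numpy as np

data = np.array([[25.1, 24.8, -99999.],
                 [26.0, -99999., 24.2]])
data[data == -99999.] = np.nan  # replace the missing-value sentinel with NaN
print(data.max(), data.min())            # plain reductions propagate NaN
print(np.nanmax(data), np.nanmin(data))  # nan-aware reductions ignore it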
Masked data (masked arrays)
plt.pcolormesh(data)

import numpy.ma as ma
data = ma.masked_invalid(data)
plt.pcolormesh(np.flipud(data.T))
plt.colorbar()
data.max(), data.min(), data.mean()

z = [1, 10, 100, 120, 13, 140, 180, 20, 300, 40, 5, 500, 60, 80]
fig, ax = plt.subplots()
ax.plot(data[42, :], z, 'ko')
ax.invert_yaxis()
02-NumPy.ipynb
ocefpaf/intro_python_notebooks
mit
First we create a SparkContext, the main object in the Spark API. This call may take a few seconds to return as it fires up a JVM under the covers.
sc = pyspark.SparkContext()
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
Sample the data. We point the context at a CSV file on disk. The result is an RDD, not the content of the file. This is a Spark transformation.
raw_rdd = sc.textFile("datasets/COUNT/titanic.csv")
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
We query the RDD for the number of lines in the file. The call here causes the file to be read and the result computed. This is a Spark action.
raw_rdd.count()
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
We query for the first five rows of the RDD. Even though the data is small, we shouldn't get into the habit of pulling the entire dataset into the notebook. Many datasets that we might want to work with using Spark will be much too large to fit in the memory of a single machine.
raw_rdd.take(5)
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
We see a header row followed by a set of data rows. We filter out the header to define a new RDD containing only the data rows.
header = raw_rdd.first()
data_rdd = raw_rdd.filter(lambda line: line != header)
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
We take a random sample of the data rows to better understand the possible values.
data_rdd.takeSample(False, 5, 0)
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
We see that the first value in every row is a passenger number. The next three values are the passenger attributes we might use to predict passenger survival: ticket class, age group, and gender. The final value is the survival ground truth. Create labeled points (i.e., feature vectors and ground truth) Now we define a...
def row_to_labeled_point(line):
    '''
    Builds a LabeledPoint consisting of:
    survival (truth): 0=no, 1=yes
    ticket class: 0=1st class, 1=2nd class, 2=3rd class
    age group: 0=child, 1=adults
    gender: 0=man, 1=woman
    '''
    passenger_id, klass, age, sex, survived = [segs.strip('"') for segs in lin...
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
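A plausible completion of the truncated function above. The field order comes from the docstring; the literal field values ('1st class', 'adults', 'women', 'yes', and so on) are assumptions about the CSV, not confirmed by the notebook:

from pyspark.mllib.regression import LabeledPoint

def row_to_labeled_point(line):
    passenger_id, klass, age, sex, survived = [segs.strip('"') for segs in line.split(',')]
    klass = int(klass[0]) - 1  # '1st class' -> 0, '2nd class' -> 1, ...
    features = [klass, int(age == 'adults'), int(sex == 'women')]
    return LabeledPoint(int(survived == 'yes'), features)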
We apply the function to all rows.
labeled_points_rdd = data_rdd.map(row_to_labeled_point)
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
We take a random sample of the resulting points to inspect them.
labeled_points_rdd.takeSample(False, 5, 0)
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
Split for training and test. We split the transformed data into a training set (70%) and a test set (30%), and print the total number of items in each segment.
training_rdd, test_rdd = labeled_points_rdd.randomSplit([0.7, 0.3], seed=0)
training_count = training_rdd.count()
test_count = test_rdd.count()
training_count, test_count
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
Train and test a decision tree classifier Now we train a DecisionTree model. We specify that we're training a boolean classifier (i.e., there are two outcomes). We also specify that all of our features are categorical and the number of possible categories for each.
model = DecisionTree.trainClassifier(training_rdd,
                                     numClasses=2,
                                     categoricalFeaturesInfo={
                                         0: 3,
                                         1: 2,
                                         2: 2
                                     ...
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
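A plausible completion of the truncated call above, using the category counts described in the text (3 ticket classes, 2 age groups, 2 genders):

from pyspark.mllib.tree import DecisionTree

model = DecisionTree.trainClassifier(training_rdd,
                                     numClasses=2,
                                     categoricalFeaturesInfo={0: 3, 1: 2, 2: 2})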
We now apply the trained model to the feature values in the test set to get the list of predicted outcomes.
predictions_rdd = model.predict(test_rdd.map(lambda x: x.features))
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
We bundle our predictions with the ground truth outcome for each passenger in the test set.
truth_and_predictions_rdd = test_rdd.map(lambda lp: lp.label).zip(predictions_rdd)
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
Now we compute the accuracy (the fraction of predicted survival outcomes that match the actual outcomes) and display the decision tree for good measure.
accuracy = truth_and_predictions_rdd.filter(lambda v_p: v_p[0] == v_p[1]).count() / float(test_count)
print('Accuracy =', accuracy)
print(model.toDebugString())
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
Train and test a logistic regression classifier For a simple comparison, we also train and test a LogisticRegressionWithSGD model.
model = LogisticRegressionWithSGD.train(training_rdd)
predictions_rdd = model.predict(test_rdd.map(lambda x: x.features))
labels_and_predictions_rdd = test_rdd.map(lambda lp: lp.label).zip(predictions_rdd)
accuracy = labels_and_predictions_rdd.filter(lambda v_p: v_p[0] == v_p[1]).count() / float(test_count)
print('A...
notebooks/Welcome to Spark with Python.ipynb
jupyter/docker-demo-images
bsd-3-clause
Two flips. Suppose we have a coin that comes up heads with probability p; for example, p might be 0.5. You flip the coin twice, and I want to compute the probability that this coin comes up heads in both flips: obviously that's 0.5 times 0.5.
def f(p):
    return p * p

print(f(0.3))
intro_to_statistics/.ipynb_checkpoints/Lesson 7 - Programming Bayes-checkpoint.ipynb
tuanvu216/udacity-course
mit
Three Flips. Just like before, p will be an input to the function f. Now I'm going to flip the coin 3 times, and I want you to calculate the probability that heads comes up exactly once. Three is not a variable, so this only works for 3 flips, not for 2 or 4; the only input variable is going to be the coin p...
def f(p):
    return 3 * p * (1-p) * (1-p)

print(f(0.5))
print(f(0.8))
intro_to_statistics/.ipynb_checkpoints/Lesson 7 - Programming Bayes-checkpoint.ipynb
tuanvu216/udacity-course
mit
Flip Two Coins. So coin 1 has a probability of heads equal to P₁, and coin 2 has a probability of heads equal to P₂, and these may be different probabilities. In my programming environment, I can accommodate this by making 2 arguments separated by a comma, for example, 0.5 and 0.8, and then the function takes as an input, 2...
def f(p1, p2):
    return p1 * p2

print(f(0.5, 0.8))
intro_to_statistics/.ipynb_checkpoints/Lesson 7 - Programming Bayes-checkpoint.ipynb
tuanvu216/udacity-course
mit
Flip One Of Two. So, two coins again, C1 and C2, and let's say each coin has its own probability of coming up heads: for the first coin we're going to call it P1, and for the second, P2. And for reasons that should be clear later, we write it as a conditional. So that means, if the coin you're flipping is C1, then the p...
def f(p0, p1, p2):
    return p0 * p1 + (1-p0) * p2

print(f(0.3, 0.5, 0.9))
intro_to_statistics/.ipynb_checkpoints/Lesson 7 - Programming Bayes-checkpoint.ipynb
tuanvu216/udacity-course
mit
Answer. And the answer is 0.78. The way I got this: you might have picked coin C1, which happens with 0.3 probability, and then you have a 0.5 chance of seeing heads. Or you might have picked coin 2, with probability 1 minus 0.3, which is 0.7, and then the chance of seeing heads is 0.9. Working this all out, we get 0.78. ...
# Calculate the probability of a positive result given that
# p0 = P(C)
# p1 = P(Positive|C)
# p2 = P(Negative|Not C)
def f(p0, p1, p2):
    return p0 * p1 + (1-p0) * (1-p2)

print(f(0.1, 0.9, 0.8))
intro_to_statistics/.ipynb_checkpoints/Lesson 7 - Programming Bayes-checkpoint.ipynb
tuanvu216/udacity-course
mit
Cancer Example 2. Let's look at the posterior probability of cancer given that we received the positive test result, and let's first do this manually for the example given up here. - P(C, Pos) = P(C) x P(Pos|C) = 0.1 x 0.9 = 0.09 - P(not C, Pos) = P(not C) x P(Pos|not C) = 0.9 x 0.2 = 0.18 - P(Pos) = P(C, Pos) + P(not C, Po...
# Return the probability of A conditioned on B given that
# P(A)=p0, P(B|A)=p1, and P(Not B|Not A)=p2
def f(p0, p1, p2):
    return p0 * p1 / (p0 * p1 + (1-p0) * (1-p2))

print(f(0.1, 0.9, 0.8))
print(f(0.01, 0.7, 0.9))
intro_to_statistics/.ipynb_checkpoints/Lesson 7 - Programming Bayes-checkpoint.ipynb
tuanvu216/udacity-course
mit
Program Bayes Rule 2. Now, let's make one last modification and write this procedure assuming you observed a negative test result. This means the posterior of having cancer under a negative result is 0.0137 for the first set of numbers and about 0.00336 for the second. In both cases, the posterior is s...
# Return the probability of A conditioned on Not B given that
# P(A)=p0, P(B|A)=p1, and P(Not B|Not A)=p2
def f(p0, p1, p2):
    return p0 * (1-p1) / (p0 * (1-p1) + (1-p0) * p2)

print(f(0.1, 0.9, 0.8))
print(f(0.01, 0.7, 0.9))
intro_to_statistics/.ipynb_checkpoints/Lesson 7 - Programming Bayes-checkpoint.ipynb
tuanvu216/udacity-course
mit
These two vectors give us the utilities to the row/column player when they play either of their pure strategies:
- $(A\sigma_c^T)_i$ is the utility of the row player when playing strategy $i$ against $\sigma_c=(y, 1-y)$
- $(\sigma_rB)_j$ is the utility of the column player when playing strategy $j$ against $\sigma_r=(x, ...
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rc("savefig", dpi=100)  # Increase the quality of the images (not needed)

ys = [0, 1]
row_us = [[(A * sigma_c)[i].subs({y: val}) for val in ys] for i in range(2)]
plt.plot(ys, row_us[0], label="$(A\sigma_c^T)_1$")
plt.plot(ys, row_us[1], l...
nbs/chapters/04-Nash-equilibria.ipynb
drvinceknight/gt
mit
Neural Voxel Renderer. This notebook illustrates how to train Neural Voxel Renderer (CVPR 2020) in TensorFlow 2. Run in Google Colab: https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/projects/neural_voxel_rendere...
!pip install tensorflow_graphics

import numpy as np
import tensorflow as tf
from tensorflow_graphics.projects.neural_voxel_renderer import helpers
from tensorflow_graphics.projects.neural_voxel_renderer import models
import datetime
import matplotlib.pyplot as plt
import os
import re
import time

VOXEL_SIZE = (128, 1...
tensorflow_graphics/projects/neural_voxel_renderer/train.ipynb
tensorflow/graphics
apache-2.0
Dataset loading. We store our data in TFRecords with custom protobuf messages. Each training element contains the input voxels, the voxel rendering, the light position, and the target image. The data is preprocessed (e.g., the colored voxels have been rendered and placed accordingly). See this colab on how to generate the t...
# Functions for dataset generation from a set of TFRecords.
decode_proto = tf.compat.v1.io.decode_proto

def tf_image_normalize(image):
    """Normalizes the image to [-1, 1]."""
    return (2 * tf.cast(image, tf.float32) / 255.) - 1

def neural_voxel_plus_proto_get(element):
    """Extracts the contents from a VoxelSample pr...
tensorflow_graphics/projects/neural_voxel_renderer/train.ipynb
tensorflow/graphics
apache-2.0
Train the model. NVR+ is trained with the Adam optimizer and an L1 plus perceptual VGG loss.
# ==============================================================================
# Defining model and optimizer
LEARNING_RATE = 0.002
nvr_plus_model = models.neural_voxel_renderer_plus_tf2()
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)

# Saving and logging directories
checkpoint_dir = '/tmp/check...
tensorflow_graphics/projects/neural_voxel_renderer/train.ipynb
tensorflow/graphics
apache-2.0
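Since the training cell is truncated, here is a minimal sketch of what one training step could look like under the setup above: Adam plus an L1 reconstruction term. The perceptual VGG term is omitted for brevity, and the model's exact input signature here is an assumption, not the notebook's code.

@tf.function
def train_step(input_voxels, rendering, light, target):
    # NOTE: the [voxels, rendering, light] input list is an assumed signature.
    with tf.GradientTape() as tape:
        prediction = nvr_plus_model([input_voxels, rendering, light], training=True)
        loss = tf.reduce_mean(tf.abs(prediction - target))  # L1 reconstruction loss
    grads = tape.gradient(loss, nvr_plus_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, nvr_plus_model.trainable_variables))
    return loss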
Y Py? Chris Hausler, Data Engineer @ Zendesk. Anaconda: http://continuum.io/downloads
import pandas as pd
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Data Frames
ages = pd.DataFrame([['John', 25], ['Mary', 9], ['Radek', 16],
                     ['Mia', 64], ['Geroge', 4], ['Katrin', 21]])
ages
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Column Names and Index
ages.columns = ['Name', 'Age']
ages.set_index('Name', inplace=True)
ages
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Using the Index
ages.ix[['John', 'Mary']]
ages.ix[['John', 'Mary', 'Thomas']]
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Basic Arithmetic
ages * 2
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Another DataFrame
genders = pd.DataFrame(['male', 'female', 'male', 'female'],
                       index=['John', 'Mary', 'Alberto', 'Karyn'],
                       columns=['Gender'])
genders
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Joins
people = genders.join(ages, how='left')
people
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Dealing with Missing Values
people
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
1. Get rid of them
people.dropna()
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
2. Fill them with something
people.fillna(people.Age.mean())
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
3. Use a function like forward fill
people.fillna(method='ffill')
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Loading Data: read_csv, read_excel, read_hdf, read_sql, read_json, read_clipboard, .... Get some real data.
# get the data from here
# https://data.melbourne.vic.gov.au/api/views/b2ak-trbp/rows.csv?accessType=DOWNLOAD
data = pd.read_csv('pedestrian_count.csv', index_col=0, parse_dates=[0])
data.head()
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Basic Info
data.info()
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Descriptive stats
data.describe()
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Plotting And advanced Pandas operations
import pylab as plt
import seaborn as sns
sns.set_context('poster')
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Bridge pedestrian counts
bridges = data[['Webb Bridge', 'Princes Bridge', 'Sandridge Bridge']]
ax = bridges.plot(figsize=(10, 6))
_ = ax.set_ylabel('# Pedestrians')
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Daily totals for pedestrians
ax = bridges.resample('D', how='sum').plot(figsize=(10, 6))
_ = ax.set_ylabel('# Pedestrians')
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Monthly Pedestrian Volume
axs = bridges.resample('M', how='sum').plot(subplots=True, figsize=(12, 7.5))
_ = axs[1].set_ylabel('# Pedestrians over the Month')
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Median Pedestrians per hour, Flagstaff July 2014
dt = data.ix['2014-07']['Flagstaff Station']
ax = dt.groupby(dt.index.hour).median().plot(kind='bar', figsize=(10, 6))
ax.set_ylabel('# Pedestrians')
_ = ax.set_xlabel('Hour of Day')
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Daily Pedestrian Variation
fig, ax = plt.subplots(1, figsize=(8, 4))
data.resample('D', how='sum').boxplot(ax=ax)
ax.set_xticklabels(data.columns, rotation=90, size=16)
ax.set_ylim(0, 40000)
_ = ax.set_ylabel('# Pedestrians')
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Scatter Plots
with sns.axes_style("white"):
    sns.jointplot('Princes Bridge', 'Flinders St Underpass', data, size=7);
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
(Machine) Learning. An introduction to scikit-learn. Load some Titanic data.
# get data here https://www.kaggle.com/c/titanic/download/train.csv
titanic = pd.read_csv('train.csv').drop(['Name', 'Ticket', 'PassengerId'], axis=1)
titanic.head()
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Make some new features / cleanup
titanic["Alone"] = ((titanic.Parch + titanic.SibSp) == 0) * 1 titanic['Cabin'] = titanic.Cabin.fillna('NA') titanic['Embarked'] = titanic.Embarked.fillna('NA') titanic['Age'] = titanic.Age.fillna(titanic.Age.median()) titanic.head()
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Survival Probability
g = sns.factorplot("Pclass", "Survived", "Sex", data=titanic,
                   kind="bar", size=6, palette="muted")
g.despine(left=True)
_ = g.set_ylabels("Survival Probability")
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Building Categorical Features
from sklearn.preprocessing import LabelBinarizer

lbl = LabelBinarizer().fit(titanic.Embarked)
print(lbl.classes_)
print(lbl.transform(titanic.Embarked))
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
We can do them all at once
import numpy as np

lbl = LabelBinarizer()
X_categorical = np.hstack([lbl.fit_transform(titanic[c])
                           for c in ['Cabin', 'Embarked', 'Sex']])
print('Array shape:', X_categorical.shape)
X_categorical
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
The Dataset
y = titanic.pop('Survived').values
X_numeric = titanic._get_numeric_data().values
X = np.hstack([X_numeric, X_categorical])
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
First Pass Cross-Validation
from sklearn.cross_validation import cross_val_score
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss='log')
scores = cross_val_score(clf, X, y, cv=3, scoring='accuracy')
print("Accuracy: {:.2f}".format(scores.mean()))
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Can we do better?
for a in [0.0001, 0.001, 0.01, 0.1, 1., 10, 100]:
    clf = SGDClassifier(loss='log', alpha=a)
    scores = cross_val_score(clf, X, y, cv=3, scoring='accuracy')
    print("Alpha: {:.4f}\tAccuracy: {:.2f}".format(a, scores.mean()))
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Can we do even better??
from sklearn.grid_search import RandomizedSearchCV

params = {'alpha': np.logspace(-4, 4, 50),
          'loss': ['log', 'modified_huber', 'perceptron'],
          'penalty': ['l1', 'l2'],
          'n_iter': [50, 100, 200]}
clf = SGDClassifier()
random_search = RandomizedSearchCV(clf, params, n_iter=100, ...
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
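A plausible completion of the truncated search: fit it and report the best score and parameters. The keyword names follow the 2015-era sklearn.grid_search API used in the talk; in modern sklearn this class lives in sklearn.model_selection.

random_search = RandomizedSearchCV(clf, params, n_iter=100, scoring='accuracy', cv=3)
random_search.fit(X, y)
print("Best accuracy: {:.2f}".format(random_search.best_score_))
print("Best parameters:", random_search.best_params_)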
What about different classifiers?
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB

classifiers = [
    SGDCl...
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
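A plausible completion of the truncated list, with one default-ish instance per classifier family imported above (the exact constructor arguments used in the talk are unknown):

classifiers = [
    SGDClassifier(loss='log'),
    LogisticRegression(),
    KNeighborsClassifier(),
    SVC(),
    DecisionTreeClassifier(),
    RandomForestClassifier(),
    AdaBoostClassifier(),
    GaussianNB(),
]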
Train them...
res = []
names = []
for clf in classifiers:
    scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
    names.append(clf.__class__.__name__)
    res.append(scores.mean())
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
Compare them
fig, ax = plt.subplots(1, figsize=(14, 6))
sns.barplot(np.array(names), np.array(res), ci=None, palette="muted", ax=ax)
ax.set_ylabel("Accuracy")
_ = plt.xticks(rotation=50, ha='right', size=12)
melb_data_science/y_py_2015_04_23.ipynb
chausler/talks
apache-2.0
1.2.4. Comparing word use between corpora In previous notebooks we examined changes in word use over time using several different statistical approaches. In this notebook, we will examine differences in word use between two different corpora. Web of Science dataset In this notebook we will use data retrieved from the ...
from tethne.readers import wos

pj_corpus = wos.read('../data/Baldwin/PlantJournal/')
pp_corpus = wos.read('../data/Baldwin/PlantPhysiology/')
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
Conditional frequency distribution This next step should look familiar. We will create a conditional frequency distribution for words in these two corpora. We have two conditions: the journal is Plant Physiology and the journal is Plant Journal.
word_counts = nltk.ConditionalFreqDist([
    (paper.journal, normalize_token(token))
    for paper in chain(pj_corpus, pp_corpus)  # chain() strings the two corpora together.
    for token in nltk.word_tokenize(getattr(paper, 'abstract', ''))
    if filter_token(token)
])
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
Now we can use tabulate to generate a contingency table showing the number of times each word is used within each journal.
# Don't run this without setting ``samples``!
word_counts.tabulate(samples=['photosynthesis', 'growth', 'stomatal'])
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
Is there a difference? As a first step, we may wish to establish whether or not there is a difference between the two corpora. In this simplistic example, we will compare the rate at which a specific word is used in the two journals. In practice, your comparisons will probably be more sophisticated -- but this is a sta...
plant_jour_photosynthesis = word_counts['PLANT JOURNAL']['photosynthesis']
plant_jour_notphotosynthesis = word_counts['PLANT JOURNAL'].N() - plant_jour_photosynthesis
plant_phys_photosynthesis = word_counts['PLANT PHYSIOLOGY']['photosynthesis']
plant_phys_notphotosynthesis = word_counts['PLANT PHYSIOLOGY'].N() - plant...
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
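The later cells operate on a contingency_table that the truncated cell above presumably builds. A plausible reconstruction is a 2x2 array with one row per journal and one column per outcome (photosynthesis, not photosynthesis), which matches how the axis-0 and axis-1 sums are used below:

import numpy as np

contingency_table = np.array([
    [plant_jour_photosynthesis, plant_jour_notphotosynthesis],
    [plant_phys_photosynthesis, plant_phys_notphotosynthesis],
])
contingency_table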
To calculate the expected values, we first calculate the expected probabilities of each word under the null hypothesis. The probability of "photosynthesis" occurring is the total number of occurrences of "photosynthesis" (sum of the first column) divided by the total number of tokens (sum of the whole table). The proba...
# We multiply the values in the contingency table by 1. to coerce the
# integers to floating-point numbers, so that we can divide without
# losing precision.
expected_probabilities = 1.*contingency_table.sum(axis=0)/contingency_table.sum()
expected_probabilities
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
Now we calculate the expected counts from those probabilities. The expected counts can be found by multiplying the probabilities of the word occurring and not occurring by the total number of tokens in each corpus.
# We multiply each 2-element array by a square matrix containing ones, and then
# transpose one of the resulting matrices so that the product gives the expected
# counts.
expected_counts = np.floor((np.ones((2, 2))*expected_probabilities)*\
                           (np.ones((2, 2))*contingency_table.sum(axis=1)).T)...
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
Now we obtain the log likelihood using the equation above:
loglikelihood = np.sum(1.*contingency_table*np.log(1.*contingency_table/expected_counts))
loglikelihood
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
So, do the two corpora differ in terms of their use of the word "photosynthesis"? In other words, can we reject the null hypothesis (that they do not)? Per Dunning (1993), under the null hypothesis the distribution of the test statistic (log likelihood) should follow a $\chi^2$ distribution. So we can obtain the probab...
distribution = stats.chi2(df=1) # df: degrees of freedom.
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
Here's the PDF of $\chi^2$ with one degree of freedom.
X = np.arange(1, 100, 0.1)
plt.plot(X, distribution.pdf(X), lw=2)
plt.ylabel('Probability')
plt.xlabel('Value of $\chi^2$')
plt.show()
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
We can calculate the probability of observing a log-likelihood at least this large (the p-value) from the survival function, i.e. one minus the CDF. If it is less than 0.05, then we can reject the null hypothesis.
distribution.sf(loglikelihood), distribution.sf(loglikelihood) < 0.05  # p-value from the survival function (1 - CDF)
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
Money. A Bayesian approach We have shown that these two corpora differ significantly in their usage of the term "photosynthesis". In many cases, we may want to go one step further, and actually quantify that difference. We can use a similar approach to the one that we used when comparing word use between years: use an ...
count_data = pd.DataFrame(columns=['Journal', 'Year', 'Count'])
chunk_size = 400  # This shouldn't be too large.
i = 0

# The slice() function automagically divides each corpus up into
# sequential years. We can use chain() to combine the two iterators
# so that we only have to write this code once.
for year, paper...
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
diging/methods
gpl-3.0
Introduction ...or, a very brief overview of object-oriented programming and type systems. Definition: objects are "a location in memory having a value and possibly referenced by an identifier." Objects have a type, which is a classification scheme to reduce the probability of errors. In the programming language context: ...
class Animal:
    """
    the Animal class can be used to describe something that
    has a well-defined number of legs
    """
    n_legs = -1

a = Animal()
a.n_legs
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
At this point, I'm not too worried about the mechanism by which object attributes are set. However, if the thing represented by an Animal instance truly always has a well-defined number of legs, and that number doesn't change (no starfish, no amputation), then we should set this at object creation time.
class Animal:
    def __init__(self, n_legs=-1):
        """use the constructor's kw arg 'n_legs' to set the number of legs"""
        self.n_legs = n_legs

cow = Animal(4)
cow.n_legs

snake = Animal(0)
snake.n_legs
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
To represent a more nuanced set of identities, we need to provide more classes. For example, pets have names, but are also animals.
class Pet(Animal):
    def __init__(self, name=None, n_legs=-1):
        self.name = name
        super().__init__(n_legs)

fido = Pet(name="Fido", n_legs=4)
print("The pet's name is " + fido.name + '.')
print("It has " + str(fido.n_legs) + " legs.")
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
We're interested in more than simple, variable attributes. What about interaction? Remember that encapsulation of data provides a more robust framework for abstracting operations.
class Cat(Pet):
    def make_a_sound(self):
        return "Meow"

class Dog(Pet):
    def make_a_sound(self):
        """return a random sound"""
        sounds = ['Arf', 'Grrrrrr']
        return sounds[round(random.random())]

pets = []
pets.append(Cat(name="Kitty"))
pets.append(Dog(name="Buddy"))
for pet in pets:
    ...
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
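A plausible completion of the truncated loop above, plus the random import that Dog.make_a_sound needs:

import random

pets = [Cat(name="Kitty"), Dog(name="Buddy")]
for pet in pets:
    # each subclass supplies its own make_a_sound implementation
    print(pet.name + ' says: ' + pet.make_a_sound())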
A more realistic example Add functionality via subclassing.
class ListOfThings:
    def __init__(self, x):
        self.things = x

    def get_the_things(self):
        return self.things

class OrderedListOfThings(ListOfThings):
    def get_the_things(self):
        return sorted(self.things)

a_list = ListOfThings([1, 3, 4, 2])
a_list.get_the_things()

an_ordered_list = Order...
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
Problems. Problems with defining use solely by inheritance: * Multiple inheritance is hard. What is the method resolution order? * Rigid/brittle structure... what if a base class definition changes? * There are Python-specific issues with inheriting from builtin classes. Duck Typing: "Don't check whether it is-a duck: che...
# simple example: make two objects

# this object has a clear sense of length
x = [4, 3, 2, 1]

# what would the length of an integer be?
y = 3

len(x)
len(y)
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
In the previous example, we see that some objects follow the length protocol and some don't. Specifically, the length protocol defines a global function len, and the method by which it interfaces with objects... namely, their __len__ method. With reference to the duck metaphor, the length protocol says (two different v...
# can we _force_ something to follow a protocol?
def my_identity_function(x):
    return x

my_identity_function('three')

# now explicitly set the value of the __len__ attribute
setattr(my_identity_function, '__len__', 'my length!')
dir(my_identity_function)
len(my_identity_function)
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
[sad trombone]... function types are defined in C and can't be truly modified, despite our modification of the object's namespace dictionary. NOTE: Be careful with modifying or subclassing Python's builtin types: str, int, float, list, dict, and the like, as well as functions, class definition objects, and other such ...
# make a dict that replaces the value with a pair of the value
class DoppelDict(dict):
    def __setitem__(self, key, value):
        """__setitem__ is called by the [] operator"""
        super().__setitem__(key, [value] * 2)

# set one k,v pair via the constructor
dd = DoppelDict(one=1)
# set another k,v pair with t...
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
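A plausible completion of the truncated demo. In CPython, the [] operator goes through our overridden __setitem__, but the dict constructor and update() bypass it:

dd = DoppelDict(one=1)  # set via the constructor: __setitem__ is bypassed
dd['two'] = 2           # set via []: we get the doubled value [2, 2]
dd.update(three=3)      # update() also bypasses __setitem__
print(dd)               # {'one': 1, 'two': [2, 2], 'three': 3}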
Because dict is a builtin class, it ignores attribute modifications applied via namespace changes. Let's now try to define a length-y decimal object. To define a modifiable class, let's use Python's decimal package, which is designed to represent decimal numbers.
import decimal  # needed if not already imported above

y = decimal.Decimal(5)
y
len(y)
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
Yay! It didn't work!
import math  # needed if not already imported above

# now for our decimal with length
# arbitrarily define length as the number of digits to the left of the decimal point
class LengthyDecimal(decimal.Decimal):
    def __len__(self):
        return math.floor(math.log10(self)) + 1

y = LengthyDecimal(5)
y

# length is the integer representation of log10
len(y)

y = Lengthy...
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
Yay! Now let's try an integer that follows the length protocol.
# let the Decimal class manage construction and all the other attributes
# enforce integer qualities only when __len__ is called
# arbitrarily define length as the log10 of the integer representation of the Decimal
class LengthyInteger(decimal.Decimal):
    def __len__(self):
        return int(math.log(int(self), 10))...
python-interfaces/python-interfaces.ipynb
fionapigott/Data-Science-45min-Intros
unlicense