Exponentiation
2^2 # note: Python writes this as 2 ** 2
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Remainder
4 % 3
Negation
!true # note: NumPy uses ~ for elementwise negation
Equality
true == true
Inequality
true != true
Elementwise operation
[1 2; 3 3] .* [9 9; 9 9] # elementwise product
[1 2; 3 3] * [9 9; 9 9]  # matrix product
Check for nan
isnan(9)
Ternary operator The syntax is `cond ? do_true : do_false`
1 != 1 ? println(3) : println(999)
999
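For comparison, Python has no `? :` operator; its counterpart is the conditional expression. A minimal sketch (not from the notebook):

```python
# Python's conditional expression mirrors Julia's `cond ? a : b`
x = 1
result = "equal" if x == 1 else "different"
print(result)  # equal
```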
And/or
true && true
false || true
Sources
- [ ] https://juliadocs.github.io/Julia-Cheat-Sheet/
- [ ] https://github.com/JuliaLang/julia
- [ ] https://arxiv.org/pdf/2003.10146.pdf
- [ ] https://github.com/h-Klok/StatsWithJuliaBook
- [ ] juliahub
- [ ] juliaacademy
- [ ] https://www.sas.upenn.edu/~jesusfv/Chapter_HPC_8_Julia.pdf
- [ ] https://www.packtpub.com/...
We remove previous downloads
!rm *pdf
rm: *pdf: No such file or directory
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
We obtain the URLs of the reports (EPI and General)
response = subprocess.check_output(shlex.split('curl --request GET https://www.gob.cl/coronavirus/cifrasoficiales/'))
url_reporte = []
url_informe_epi = []
for line in response.decode().splitlines():
    if "Reporte_Covid19.pdf" in line:
        url = line.strip().split('https://')[1].split("\"")[0]
        ...
Double Check
url_reporte
url_informe_epi
Download the reports
#for url in set(url_reporte):
#    subprocess.check_output(shlex.split("wget " + url))
for url in set(url_informe_epi):
    subprocess.check_output(shlex.split("wget " + url))
!ls
Preprocessing We use tabula-py, a wrapper around the Tabula app (written in Java): a library for extracting tables from PDF files. https://github.com/chezou/tabula-py
import tabula

dfs_files = {}
for url in url_informe_epi:
    pdf_file = url.split('/')[-1]
    df = tabula.read_pdf(pdf_file, pages='all', multiple_tables=True)
    fecha = pdf_file.split('_')[-1].split('.')[0]
    print(fecha)
    dfs_files['tablas_' + fecha] = df
We inspect some tables
tablas_20200401 = dfs_files['tablas_20200401v2']
tablas_20200330 = dfs_files['tablas_20200330']

df_comunas_20200401 = {}
unnamed_primeraCol = {}
for idx, df in enumerate(tablas_20200401):
    if 'Comuna' in df.columns:
        key = 'tabla_' + str(idx + 1)
        print(key)
        df_comunas_20200401[key] = ...
The table starts with an *Unnamed: 0* column
df_comunas_20200401['tabla_22'].head()
The table does **not** start with an *Unnamed: 0* column
df_comunas_20200330 = {}
unnamed_primeraCol = {}
for idx, df in enumerate(tablas_20200330):
    if 'Comuna' in df.columns:
        key = 'tabla_' + str(idx + 1)
        print(key)
        df_comunas_20200330[key] = df

df_comunas_20200330['tabla_7'].head()
The same table starts with an *Unnamed: 0* column
df_comunas_20200330['tabla_23'].head()
The same table does **not** start with an *Unnamed: 0* column. We separate these two categories:
df_comunas_20200401 = {}
unnamed_primeraCol_20200401 = {}
for idx, df in enumerate(tablas_20200401):
    if 'Comuna' in df.columns:
        key = 'tabla_' + str(idx + 1)
        df_comunas_20200401[key] = df
        if 'Unnamed' in df.columns[0]:
            print(key)
            unnamed_primeraCol...
tabla_7 tabla_13 tabla_18 tabla_19 tabla_22
Summary * The 20200330 report apparently has one extra table (in fact this is not the case; a change in a chart seems to have scrambled the extraction).* The table extraction appears to produce the same errors in the same tables.
%%capture
"""
for tup_1, tup_2 in zip(df_comunas.items(), df_comunas_2.items()):
    key_1, df_1 = tup_1
    key_2, df_2 = tup_2
    if (key_1 or key_2) in unnamed_primeraCol:
        if (df_1.columns == df_2.columns).all:
            print("LAS COLUMNAS DE LAS TABLAS *diferentes* coinciden!", key_1, key_2)
            ...
We standardize the tables
for key in df_comunas_20200401.keys():
    df = df_comunas_20200401[key]
    if key in unnamed_primeraCol_20200401.keys():
        df['Comuna'] = df['Unnamed: 0']
        df['N°'] = df['Unnamed: 1']
        df['Tasa'] = df['Unnamed: 2']
        df_comunas_20200401[key] = df.drop(labels='Unnamed: 0', ...
tabla_7 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_8 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_9 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_10 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_11 Index(['Comuna', 'N°', 'Población'...
The last table has an *Unnamed: 2* column
df_comunas_20200401['tabla_21']
df_comunas_20200330['tabla_23']
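The `Unnamed → Comuna/N°/Tasa` renaming above can be reproduced on a toy table; the values below are invented purely for illustration, mirroring tabula output whose headers land in positional `Unnamed: N` columns:

```python
import pandas as pd

# Toy table mimicking tabula output where the real headers were lost
df = pd.DataFrame({'Unnamed: 0': ['Santiago', 'Providencia'],
                   'Unnamed: 1': [10, 5],
                   'Población': [500000, 150000],
                   'Unnamed: 2': [2.0, 3.3]})

# Rename the positional columns to their intended headers, as the notebook does
df = df.rename(columns={'Unnamed: 0': 'Comuna', 'Unnamed: 1': 'N°', 'Unnamed: 2': 'Tasa'})
print(df.columns.tolist())  # ['Comuna', 'N°', 'Población', 'Tasa']
```

`rename` with a mapping is a non-destructive alternative to the notebook's copy-then-drop approach.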
Class with Multiple Objects
class Birds:
    def __init__(self, bird_name):
        self.bird_name = bird_name

    def flying_birds(self):
        print(f"{self.bird_name} flies above clouds")

    def non_flying_birds(self):
        print(f"{self.bird_name} is the national bird of the Philippines")

vulture = Birds("Griffon Vulture")
crane = Birds("Common Crane")
...
Griffon Vulture flies above clouds Common Crane flies above clouds Emu is the national bird of the Philippines
Apache-2.0
OOP_58001_OOP_Concepts_2.ipynb
AndreiBenavidez/OOP-58001
Encapsulation with Private Attributes
class foo:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def add(self):
        return self.a + self.b

foo_object = foo(3, 4)
foo_object.add()
foo_object.a = 6
foo_object.add()

class foo:
    def __init__(self, a, b):
        self._a = a
        self._b = b

    def add(self):
        return self._a + self._b

foo_object = foo(3, 4...
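Note that a single leading underscore is only a naming convention; Python does not enforce privacy. A minimal sketch (class and values are illustrative):

```python
class Foo:
    def __init__(self, a):
        self._a = a  # "_" marks this as private by convention only

    def add(self, b):
        return self._a + b

f = Foo(3)
print(f.add(4))  # 7
print(f._a)      # 3 -- still freely accessible from outside the class
```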
Encapsulation by mangling with double underscores
class Counter:
    def __init__(self):
        self.__current = 0  # double underscore: mangled to _Counter__current

    def increment(self):
        self.__current += 1  # current = current + 1

    def value(self):
        return self.__current

    def reset(self):
        self.__current = 0

counter = Counter()
counter.increment()
counter.increment()
counter.increment()
print(counter.value())
3
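What name mangling actually does can be seen from outside the class: the attribute is stored under `_ClassName__attr`, so the plain double-underscore name is not reachable. A sketch:

```python
class Counter:
    def __init__(self):
        self.__current = 0  # stored as _Counter__current

    def increment(self):
        self.__current += 1

    def value(self):
        return self.__current

c = Counter()
c.increment()
print(c.value())                 # 1
print(hasattr(c, '__current'))   # False -- the plain name does not exist on the instance
print(c._Counter__current)       # 1 -- the mangled name does
```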
Inheritance
class Person:
    def __init__(self, fname, sname):
        self.fname = fname
        self.sname = sname

    def printname(self):
        print(self.fname, self.sname)

x = Person("Andrei", "Benavidez")
x.printname()

class Teacher(Person):
    pass

x = Teacher("Drei", "Benavidez")
x.printname()
Andrei Benavidez Drei Benavidez
Polymorphism
class RegularPolygon:
    def __init__(self, side):
        self._side = side

class Square(RegularPolygon):
    def area(self):
        return self._side * self._side

class EquilateralTriangle(RegularPolygon):
    def area(self):
        return self._side * self._side * 0.433

obj1 = Square(4)
obj2 = EquilateralTriangle(3)
obj1.area()
o...
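The point of the polymorphism example is that different subclasses answer the same method name with their own implementation, so they can be processed uniformly. A sketch reusing the same classes (0.433 approximates sqrt(3)/4 for an equilateral triangle):

```python
class RegularPolygon:
    def __init__(self, side):
        self._side = side

class Square(RegularPolygon):
    def area(self):
        return self._side * self._side

class EquilateralTriangle(RegularPolygon):
    def area(self):
        # 0.433 approximates sqrt(3)/4
        return self._side * self._side * 0.433

# Both objects respond to the same .area() call
shapes = [Square(4), EquilateralTriangle(3)]
areas = [s.area() for s in shapes]
print(areas)
```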
* Create a Python program that displays the names of 3 students (Student 1, Student 2, Student 3) and their grades* Create a class named "Person" with attributes - std1, std2, std3, pre, mid, fin* Compute the average grade of each term using the Grade() method* Information about student's grades must be hidden fro...
import random

class Person:
    def __init__(self, student, pre, mid, fin):
        self.student = student
        self.pre = pre * 0.30
        self.mid = mid * 0.30
        self.fin = fin * 0.40

    def Grade(self):
        print(self.student, "has an average grade of", self.pre, "in Prelims")
        print(self.student, "has an average grad...
Andrei has an average grade of 21.9 in Prelims Andrei has an average grade of 23.4 in Midterms Andrei has an average grade of 33.2 in Finals Ady has an average grade of 26.4 in Prelims Ady has an average grade of 27.599999999999998 in Midterms Ady has an average grade of 28.0 in Finals Drei has an average grade of 28.2...
Part 1.1 Building a Chinese word-segmentation tool based on enumeration (new). Data needed for this project: 1. 综合类中文词库.xlsx: contains Chinese words, used as the dictionary 2. Part of the unigram probabilities are provided as the variable word_prob. An example: given the dictionary = [我们 学习 人工 智能 人工智能 未来 是], and the unigram probabilities p(我们)=0.25, p(学习)=0.15, p(人工)=0.05, p(智能)=0.1, p(人工智能)=0.2, p(未来)=0.1, p(是)=0.15. Step 1: for the given string "我们学习人工智能,人工智能是未来", find all possible segmentations - [我们,学习,人工智能,人工智能,是,未来]- [我们,学习,人工,智能,...
import pandas as pd
import numpy as np

path = "./data/综合类中文词库.xlsx"
data_frame = pd.read_excel(path, header=None)
dic_word_list = data_frame[data_frame.columns[0]].tolist()
dic_words = dic_word_list  # words read from the dictionary

# Below are the occurrence probabilities of individual words. To simplify the problem
# we only list the probabilities of a small subset; words that appear in the
# dictionary but not here all get probability 0.00001.
# e.g. p("学院") = p("概率") = ...
['北京', '的', '天气', '真好啊'] ['今天', '的', '课程', '内容', '很有', '意思'] ['经常', '有意见', '分歧']
Apache-2.0
enumerate.ipynb
chmoe/NLPLearning-CNWordSegmentation
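Step 1's "find all possible segmentations" can be sketched as a recursive enumeration over dictionary prefixes. This is a minimal sketch, not the notebook's implementation; the toy dictionary is the one from the text above:

```python
def all_segmentations(sentence, dic):
    """Enumerate every way to split `sentence` into words from `dic`."""
    if not sentence:
        return [[]]  # one segmentation of the empty string: the empty list
    results = []
    for i in range(1, len(sentence) + 1):
        word = sentence[:i]
        if word in dic:
            # segment the remainder recursively and prepend the matched word
            for rest in all_segmentations(sentence[i:], dic):
                results.append([word] + rest)
    return results

dic = {"我们", "学习", "人工", "智能", "人工智能", "未来", "是"}
segs = all_segmentations("我们学习人工智能", dic)
print(len(segs))  # 2
```

The two results are [我们, 学习, 人工智能] and [我们, 学习, 人工, 智能], matching the enumeration described in the text.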
1 Logistic Regression 1.1 Visualizing the data
import matplotlib.pyplot as plt
import numpy as np
from numpy import genfromtxt

data = genfromtxt('data/ex2data1.txt', delimiter=',')

# Print first five rows to see what it looks like
print(data[:5, :])

X = data[:, 0:2]  # scores on test1, test2
Y = data[:, 2]    # admitted yes/no
print(X[:5])
print(Y[:5])

plt.figure(figs...
MIT
ex2/2_1_Logistic_regression.ipynb
surajsimon/Andrew-ng-machine-learning-course-python-implementation
1.2 Implementation 1.2.1 Warmup exercise: sigmoid function
import math

def sigmoid(z):
    g = 1. / (1. + math.exp(-z))
    return g

# Vectorize sigmoid function so it works on all elements of a numpy array
sigmoid = np.vectorize(sigmoid)

# Test sigmoid function
test = np.array([[0]])
sigmoid(test)
sigmoid(0)
test = np.array([[-10, -1], [0, 0], [1, 10]])
sigmoid(test)
1.2.2 Cost function and gradient
# Setup the data matrix appropriately, and add ones for the intercept term
[m, n] = X.shape

# Add intercept term to X
X = np.column_stack((np.ones(m), X))

# Initialize fitting parameters
initial_theta = np.zeros([n + 1, 1])

def costFunction(theta, X, y):
    # Cost
    J = 0
    m = len(y)
    for i in range(...
Cost at test theta (zeros): 0.218330193827 Expected cost (approx): 0.218 Gradient at test theta (zeros): [ 0.04290299 2.56623412 2.64679737] Expected gradients (approx): 0.043 2.566 2.647
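The loop-based cost above can also be written in vectorized form. This is a sketch with my own variable names, not the notebook's code; the toy data below is invented to check the θ = 0 case:

```python
import numpy as np

def cost_vectorized(theta, X, y):
    """Logistic-regression cost J(theta) and gradient, fully vectorized."""
    m = len(y)
    h = 1. / (1. + np.exp(-X @ theta))                     # predictions, shape (m,)
    J = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m     # cross-entropy cost
    grad = X.T @ (h - y) / m                               # gradient, shape (n,)
    return J, grad

# With theta = 0 every prediction is 0.5, so the cost is log(2) ≈ 0.693
X_toy = np.array([[1., 2.], [1., 3.]])
y_toy = np.array([0., 1.])
J, grad = cost_vectorized(np.zeros(2), X_toy, y_toy)
print(round(J, 3))  # 0.693
```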
1.2.3 Learning parameters using fminunc We're supposed to use Octave's ```fminunc``` function for this. I can't find a python implementation of this, so let's use ```scipy.optimize.minimize(method='TNC')``` instead.
from scipy.optimize import minimize

res = minimize(fun=costFunction, x0=initial_theta, args=(X, Y), method='TNC', jac=True, options={'maxiter': 400})
res
theta = res.x
print('Cost at theta found by fmin_tnc:\n', res.fun)
print('Expected cost (approx):\n 0.203\n')
print('Theta:\n', res.x)
print('Expected theta (approx):...
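With `jac=True`, `scipy.optimize.minimize` expects `fun` to return the pair `(cost, gradient)`. A minimal sketch of that calling convention on a toy quadratic (the function is mine, chosen only to keep the example self-contained):

```python
import numpy as np
from scipy.optimize import minimize

def quad(x):
    # cost and gradient of f(x) = (x - 3)^2, returned together as jac=True requires
    return (x[0] - 3.) ** 2, np.array([2. * (x[0] - 3.)])

res = minimize(fun=quad, x0=np.zeros(1), method='TNC', jac=True)
print(round(res.x[0], 3))  # 3.0
```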
1.2.4 Evaluating logistic regression
# Predict probability of admission for a student with score 45 on exam 1 and score 85 on exam 2
prob = sigmoid(np.dot([1, 45, 85], theta))
print('For a student with scores 45 and 85, we predict an admission probability of:\n', prob)
print('Expected value:\n 0.775 +/- 0.002\n\n')

# Compute accuracy on our training set
...
Training accuracy: 89.0 % Expected accuracy (approx): 89.0 %
zip(df2, axes.flatten())
fig, axes = plt.subplots(8, 2)
x = groups['q-value']
fig, axes = plt.subplots(6, 6, figsize=(12, 12), sharex=True)
#axr = axes.ravel()
#zip(groups, axes.flatten())
for ax, x in zip(axes.flat, x):
    sb.distplot(x[1], ax=ax)
    ax.set_title(x[0])
    ax.axvline(0.05, color='r', ls=':')
#axes.flat[-1].set_visible(Fal...
MIT
notebook/distribution_qvals_dmmpmm.ipynb
isabelleberger/isabelle-
Example notebook that does stuff with the output files from xspec, namely:
* the .txt from wdata that saves the data/model,
* the *.fits from writefits that save out the fit parameters.

IGH 14 Feb 2020 - Started
IGH 20 Feb 2020 - Better latex font, and fancier error label
from astropy.io import fits
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import warnings
warnings.simplefilter('ignore')

# Some useful parameters
# norm = 1e-14/(4piD_A^2)*\int n_e n_p dV
# The norm factor from the XSPEC APEC model is defined here: https://heasarc.gsfc.nasa.gov/xanadu/xspec/man...
MIT
xspec/example_xspec.ipynb
ianan/nsigh
Inverse Kinematics Optimization

The previous doc explained features and how they define objectives of a constrained optimization problem. Here we show how to use this to solve IK optimization problems. At the bottom there is more general text explaining the basic concepts.

Demo of features in Inverse Kinematics

Let's se...
import sys
sys.path.append('../build')  # rai/lib
import numpy as np
import libry as ry

C = ry.Config()
C.addFile('../rai-robotModels/pr2/pr2.g')
C.addFile('../rai-robotModels/objects/kitchen.g')
C.view()
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
For simplicity, let's add a frame that represents goals
goal = C.addFrame("goal")
goal.setShape(ry.ST.sphere, [.05])
goal.setColor([.5, 1, 1])
goal.setPosition([1, .5, 1])
X0 = C.getFrameState()  # store the initial configuration
We create an IK engine. The only objective is that the `positionDiff` (position difference in world coordinates) between `pr2L` (the yellow blob in the left hand) and `goal` is equal to zero:
IK = C.komo_IK(False)
IK.addObjective(type=ry.OT.eq, times=[1, 2], feature=ry.FS.positionDiff, frames=['pr2L', 'goal'])
We now call the optimizer (True means with random initialization/restart).
IK.optimize()
IK.getReport()
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.00914103 (kin:0.000131 coll:0.000132 feat:0 newton: 0.00105) setJointStateCount:35 sos:0.0808073 ineq:0 eq:0.238354
The best way to retrieve the result is to copy the optimized IK configuration back into your working configuration C, which is now also displayed
#IK.getFrameState(1)
C.setFrameState(IK.getFrameState(0))
We can redo the optimization, but for a different configuration, namely a configuration where the goal is in another location. For this we move the goal in our working configuration C, then copy C back into the IK engine's configurations:
## (iterate executing this cell for different goal locations!)

# move goal
goal.setPosition([.8, .2, .5])

# copy C into the IK's internal configuration(s)
IK.setConfigurations(C)

# reoptimize
IK.optimize(0.)  # 0: no adding of noise for a random restart
#print(IK.getReport())
print(np.shape(IK.getFrameState(0)))
print(...
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.000305789 (kin:0.000238 coll:0.000149 feat:0 newton: 0.001415) setJointStateCount:3 sos:0.000285026 ineq:0 eq:0.0270084 (179, 7) (7,)
Let's solve some other problems, always creating a novel IK engine:The relative position of `goal` in `pr2R` coordinates equals [0,0,-.2] (which is 20cm straight in front of the yellow blob)
C.setFrameState(X0)
IK = C.komo_IK(False)
IK.addObjective(type=ry.OT.eq, times=[1], feature=ry.FS.positionRel, frames=['goal', 'pr2R'], target=[0, 0, -.2])
IK.optimize()
C.setFrameState(IK.getFrameState(0))
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.00105824 (kin:5.2e-05 coll:1.1e-05 feat:0 newton: 0.000124) setJointStateCount:12 sos:0.00848536 ineq:0 eq:0.0341739
The distance between `pr2R` and `pr2L` is zero:
C.setFrameState(X0)
IK = C.komo_IK(False)
IK.addObjective(type=ry.OT.eq, times=[1], feature=ry.FS.distance, frames=['pr2L', 'pr2R'])
IK.optimize()
C.setFrameState(IK.getFrameState(0))
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.00069327 (kin:3.3e-05 coll:5e-06 feat:0 newton: 5.9e-05) setJointStateCount:6 sos:0.00209253 ineq:0 eq:0.0149894
The 3D difference between the z-vector of `pr2R` and the z-vector of `goal`:
C.setFrameState(X0)
IK = C.komo_IK(False)
IK.addObjective(type=ry.OT.eq, times=[1], feature=ry.FS.vectorZDiff, frames=['pr2R', 'goal'])
IK.optimize()
C.setFrameState(IK.getFrameState(0))
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.00144349 (kin:0.000111 coll:2.9e-05 feat:0 newton: 0.000115) setJointStateCount:12 sos:0.0163838 ineq:0 eq:0.0143332
The scalar product between the z-vector of `pr2R` and the z-vector of `goal` is zero:
C.setFrameState(X0)
IK = C.komo_IK(False)
IK.addObjective(type=ry.OT.eq, times=[1], feature=ry.FS.scalarProductZZ, frames=['pr2R', 'goal'])
IK.optimize()
C.setFrameState(IK.getFrameState(0))
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.000686185 (kin:7.1e-05 coll:3e-06 feat:0 newton: 4.2e-05) setJointStateCount:4 sos:0.000248896 ineq:0 eq:0.00308733
etc etc More explanationsAll methods to compute paths or configurations solve constrained optimization problems. To use them, you need to learn to define constrained optimization problems. Some definitions:* An objective defines either a sum-of-square cost term, or an equality constraint, or an inequality constraint i...
# Designing a cylinder grasp
D = 0
C = 0
import sys
sys.path.append('../build')  # rai/lib
import numpy as np
import libry as ry

C = ry.Config()
C.addFile('../rai-robotModels/pr2/pr2.g')
C.addFile('../rai-robotModels/objects/kitchen.g')
C.view()
C.setJointState([.7], ["l_gripper_l_finger_joint"])
C.setJointState( C.getJoin...
Welcome to Python Fundamentals
In this module, we are going to establish or review our skills in Python programming. In this notebook we are going to cover:
* Variables and Data Types
* Operations
* Input and Output Operations
* Logic Control
* Iterables
* Functions

Variable and Data Types
x = 1
a, b = 0, -1
type(x)
y = 1, 0
type(y)
x = float(x)
type(x)
s, t, u = "0", "1", "one"
type(s)
s_int = int(s)
s_int
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Operations Arithmetic
a, b, c, d = 2.0, -0.5, 0, -32

### Addition
S = a + b
S
### Subtraction
D = b - d
D
### Multiplication
P = a * d
P
### Division
Q = c / d
Q
### Floor Division
Fq = a // b
Fq
### Exponentiation
E = a ** b
E
### Modulo
mod = d % a
mod
Assignment Operations
G, H, J, K = 0, 100, 2, 2
G += a
G
H -= d
H
J *= 2
J
K **= 3
K
Comparators
res_1, res_2, res_3 = 1, 2.0, "1"
true_val = 1.0

## Equality
res_1 == true_val
## Non-equality
res_2 != true_val
## Inequality
t1 = res_1 > res_2
t2 = res_1 < res_2/2
t3 = res_1 >= res_2/2
t4 = res_1 <= res_2
t1
Logical
res_1 == true_val
res_1 is true_val
res_1 is not true_val

p, q = True, False
conj = p and q
conj

p, q = True, False
disj = p or q
disj

p, q = True, False
nand = not(p and q)
nand

p, q = True, False
xor = (not p and q) or (p and not q)
xor
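The hand-built `xor` above is equivalent to inequality on booleans; a quick check over every truth assignment confirms it:

```python
from itertools import product

# (not p and q) or (p and not q) is exactly p != q for booleans
for p, q in product([True, False], repeat=2):
    xor = (not p and q) or (p and not q)
    assert xor == (p != q)
print("xor equals != for all inputs")
```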
I/O
print("Hello World")

cnt = 1
string = "Hello World"
print(string, ", Current run count is:", cnt)
cnt += 1

print(f"{string}, Current count is {cnt}")

sem_grade = 82.24356457461234
name = "cath"
print("Hello {}, your semestral grade is: {}".format(name, sem_grade))

w_pg, w_mg, w_fg = 0.3, 0.3, 0.4
print("The weights of ...
kimi no nawa: Cath Enter prelim grade: 1.00 Enter midterm grade: 1.00 Enter finals grade: 1.00 Hello Cath, your semestral grade is: None
Looping Statements While
## while loops
i, j = 0, 10
while(i <= j):
    print(f"{i}\t|\t{j}")
    i += 1
0 | 10 1 | 10 2 | 10 3 | 10 4 | 10 5 | 10 6 | 10 7 | 10 8 | 10 9 | 10 10 | 10
For
# for(int i=0; i<10; i++){
#     printf(i)
# }
i = 0
for i in range(11):
    print(i)

playlist = ["Crazier", "Bahay-Kubo", "Happier"]
print('Now Playing:\n')
for song in playlist:
    print(song)
Now Playing: Crazier Bahay-Kubo Happier
Flow Control Conditional Statements
numeral1, numeral2 = 12, 12
if(numeral1 == numeral2):
    print("Yey")
elif(numeral1 > numeral2):
    print("Hoho")
else:
    print("AWW")
print("Hip hip")
Yey Hip hip
Functions
# void DeleteUser(int userid){
#     delete(userid);
# }
def delete_user(userid):
    print("Successfully deleted user: {}".format(userid))

def delete_all_users():
    print("Successfully deleted all users")

userid = 202011844
delete_user(202011844)
delete_all_users()

def add(addend1, addend2):
    pr...
Grade Calculator Create a grade calculator that computes the semestral grade of a course. Students could type their names, the name of the course, then their prelim, midterm, and final grades. The program should print the semestral grade in 2 decimal points and should display the following emojis depending on the si...
w_pg, w_mg, w_fg = 0.3, 0.3, 0.4
name = input("Enter your name: ")
course = input("Enter your course: ")
pg = float(input("Enter prelim grade: "))
mg = float(input("Enter midterm grade: "))
fg = float(input("Enter final grade: "))
sem_grade = (pg * w_pg) + (mg * w_mg) + (fg * w_fg)
print("Hello {} from {}, your semestral grade i...
Enter your name: Catherine Enter your course: BS Chemical Engineering Enter prelim grade: 97 Enter midterm grade: 98 Enter final grade: 99 Hello Catherine from BS Chemical Engineering, your semestral grade is: 98.1 😀
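The calculator's weighted average can be factored into a small function. The weights are the notebook's (0.3/0.3/0.4); the emoji thresholds are not shown in the excerpt, so they are omitted here:

```python
def semestral_grade(pg, mg, fg, w_pg=0.3, w_mg=0.3, w_fg=0.4):
    """Weighted semestral grade, rounded to 2 decimal points."""
    return round(pg * w_pg + mg * w_mg + fg * w_fg, 2)

# Matches the sample run above: 97, 98, 99 -> 98.1
print(semestral_grade(97, 98, 99))  # 98.1
```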
Nifti Read ExampleThe purpose of this notebook is to illustrate reading Nifti files and iterating over patches of the volumes loaded from them.
%matplotlib inline

import os
import sys
from glob import glob
import tempfile

import numpy as np
import matplotlib.pyplot as plt
import nibabel as nib
import torch
from torch.utils.data import DataLoader

import monai
from monai.data import NiftiDataset, GridPatchDataset, create_test_image_3d
from monai.transforms imp...
MONAI version: 0.1a1.dev8+6.gb3c5761.dirty Python version: 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0] Numpy version: 1.18.1 Pytorch version: 1.4.0 Ignite version: 0.3.0
Apache-2.0
examples/notebooks/nifti_read_example.ipynb
gml16/MONAI
Create a number of test Nifti files:
tempdir = tempfile.mkdtemp()

for i in range(5):
    im, seg = create_test_image_3d(128, 128, 128)

    n = nib.Nifti1Image(im, np.eye(4))
    nib.save(n, os.path.join(tempdir, 'im%i.nii.gz' % i))

    n = nib.Nifti1Image(seg, np.eye(4))
    nib.save(n, os.path.join(tempdir, 'seg%i.nii.gz' % i))
Create a data loader which yields uniform random patches from loaded Nifti files:
images = sorted(glob(os.path.join(tempdir, 'im*.nii.gz')))
segs = sorted(glob(os.path.join(tempdir, 'seg*.nii.gz')))

imtrans = Compose([
    ScaleIntensity(),
    AddChannel(),
    RandSpatialCrop((64, 64, 64), random_size=False),
    ToTensor()
])

segtrans = Compose([
    AddChannel(),
    RandSpatialCrop((64, 6...
torch.Size([5, 1, 64, 64, 64]) torch.Size([5, 1, 64, 64, 64])
Alternatively create a data loader which yields patches in regular grid order from loaded images:
imtrans = Compose([
    ScaleIntensity(),
    AddChannel(),
    ToTensor()
])

segtrans = Compose([
    AddChannel(),
    ToTensor()
])

ds = NiftiDataset(images, segs, transform=imtrans, seg_transform=segtrans)
ds = GridPatchDataset(ds, (64, 64, 64))
loader = DataLoader(ds, batch_size=10, num_workers=2, p...
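Regular-grid patching of the kind `GridPatchDataset` performs can be sketched in plain NumPy. This is a simplification of my own, covering only non-overlapping cubes; MONAI additionally handles channels, padding, and lazy loading:

```python
import numpy as np

def grid_patches(vol, size):
    """Yield non-overlapping cubic patches of edge `size` in regular grid order."""
    sx, sy, sz = vol.shape
    for x in range(0, sx, size):
        for y in range(0, sy, size):
            for z in range(0, sz, size):
                yield vol[x:x + size, y:y + size, z:z + size]

# A 128^3 volume split into 64^3 patches gives 2*2*2 = 8 patches
vol = np.zeros((128, 128, 128))
patches = list(grid_patches(vol, 64))
print(len(patches), patches[0].shape)  # 8 (64, 64, 64)
```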
Network Communities Detection In this notebook, we will explore some methods to perform community detection using several algorithms. Before testing the algorithms, let us create a simple benchmark graph.
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import networkx as nx

G = nx.barbell_graph(m1=10, m2=4)
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
Matrix Factorization We start by using a matrix factorization technique to extract the embeddings, which are visualized and then clustered using traditional clustering algorithms.
from gem.embedding.hope import HOPE

gf = HOPE(d=4, beta=0.01)
gf.learn_embedding(G)
embeddings = gf.get_embedding()

from sklearn.manifold import TSNE
tsne = TSNE(n_components=2)
emb2d = tsne.fit_transform(embeddings)

plt.plot(embeddings[:, 0], embeddings[:, 1], 'o', linewidth=0)
We start by using a GaussianMixture model to perform the clustering
from sklearn.mixture import GaussianMixture

gm = GaussianMixture(n_components=3, random_state=0)
labels = gm.fit_predict(embeddings)
colors = ["blue", "green", "red"]
nx.draw_spring(G, node_color=[colors[label] for label in labels])
Spectral Clustering We now perform a spectral clustering based on the adjacency matrix of the graph. It is worth noting that this clustering is not a mutually exclusive clustering and nodes may belong to more than one community
adj = np.array(nx.adjacency_matrix(G).todense())

from communities.algorithms import spectral_clustering
communities = spectral_clustering(adj, k=3)
In the next plot we highlight the nodes that belong to a community using the red color. The blue nodes do not belong to the given community
plt.figure(figsize=(20, 5))
for ith, community in enumerate(communities):
    cols = ["red" if node in community else "blue" for node in G.nodes]
    plt.subplot(1, 3, ith + 1)
    plt.title(f"Community {ith}")
    nx.draw_spring(G, node_color=cols)
The next command shows the node ids belonging to the different communities
communities
Non Negative Matrix Factorization Here, we again use matrix factorization, but now using the Non-Negative Matrix Factorization, and associating the clusters with the latent dimensions.
from sklearn.decomposition import NMF

nmf = NMF(n_components=2)
emb = nmf.fit_transform(adj)
plt.plot(emb[:, 0], emb[:, 1], 'o', linewidth=0)
By setting a threshold value of 0.01, we determine which nodes belong to the given community.
communities = [set(np.where(emb[:, ith] > 0.01)[0]) for ith in range(2)]

plt.figure(figsize=(20, 5))
for ith, community in enumerate(communities):
    cols = ["red" if node in community else "blue" for node in G.nodes]
    plt.subplot(1, 3, ith + 1)
    plt.title(f"Community {ith}")
    nx.draw_spring(G, node_color=cols)
Although the example above does not show it, in general this clustering method may also be non-mutually exclusive, and nodes may belong to more than one community. Louvain and Modularity Optimization Here, we use the Louvain method, which is one of the most popular methods for performing community detection, even on ...
from communities.algorithms import louvain_method

communities = louvain_method(adj)
c = pd.Series({node: colors[ith] for ith, nodes in enumerate(communities) for node in nodes}).values
nx.draw_spring(G, node_color=c)
communities
Girvan Newman The Girvan–Newman algorithm detects communities by progressively removing edges from the original graph. The algorithm removes the “most valuable” edge, traditionally the edge with the highest betweenness centrality, at each step. As the graph breaks down into pieces, the tightly knit community structure...
from communities.algorithms import girvan_newman

communities = girvan_newman(adj, n=2)
c = pd.Series({node: colors[ith] for ith, nodes in enumerate(communities) for node in nodes}).values
nx.draw_spring(G, node_color=c)
communities
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
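The edge-removal procedure described above can also be run with networkx's own Girvan–Newman implementation, without the `communities` package. The karate-club graph here is an illustrative stand-in, not the notebook's `G`.

```python
# Sketch: Girvan-Newman via networkx. Each step removes the current
# highest-betweenness edge; the generator yields the partition after
# each time the graph splits into more components.
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()
first_split = next(girvan_newman(G))  # partition after the first split
communities = [sorted(c) for c in first_split]
print(len(communities))
```

Iterating the generator further keeps splitting the communities, so you can stop at whatever granularity fits your problem.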
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and...
# import libraries
import pandas as pd
from sqlalchemy import create_engine
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.metric...
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
2. Write a tokenization function to process your text data
def tokenize(text):
    tokens = word_tokenize(text)
    lemmatizer = WordNetLemmatizer()

    clean_tokens = []
    for tok in tokens:
        clean_tok = lemmatizer.lemmatize(tok).lower().strip()
        clean_tokens.append(clean_tok)

    return clean_tokens
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) h...
# create the NLP ML Pipeline
pipeline = Pipeline([
    ('vect', CountVectorizer(tokenizer=tokenize)),
    ('tfidf', TfidfTransformer()),
    ('clf', MultiOutputClassifier(RandomForestClassifier()))
])
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
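The first two pipeline stages can be seen in isolation on toy data. This sketch uses sklearn's default tokenizer instead of the notebook's `tokenize()`, so it runs without the NLTK resources; the example documents are made up.

```python
# Sketch of the vectorize -> tf-idf steps that feed the classifier.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

docs = ["water needed in the flooded area",
        "food and water supplies needed"]

counts = CountVectorizer().fit_transform(docs)   # token counts, one row per message
tfidf = TfidfTransformer().fit_transform(counts) # reweighted by inverse document frequency
print(tfidf.shape)  # (2, 9): 2 messages, 9 vocabulary terms
```

The `MultiOutputClassifier` wrapper then fits one classifier per target column on these features, which is what makes the 36-way multi-label setup work.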
4. Train pipeline- Split data into train and test sets- Train pipeline
# Split the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)

# Fit the pipeline
pipeline.fit(X_train, Y_train)
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
# Predict using the test data
Y_pred = pipeline.predict(X_test)

# Print the classification report for each column
for i, column in enumerate(Y_train.columns):
    print("Columns: ", column)
    print(classification_report(Y_test.values[:, i], Y_pred[:, i]))
    print()
Columns:  related
              precision    recall  f1-score   support

           0       0.76      0.26      0.39      1539
           1       0.80      0.98      0.88      4967
           2       1.00      0.04      0.08        48

    accuracy                           0.80      6554
   macro avg       0.86      0...
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
6. Improve your modelUse grid search to find better parameters.
# Define GridSearch parameters
parameters = {'clf__estimator__n_estimators': range(100, 200, 100),
              'clf__estimator__min_samples_split': range(2, 3)}

# Instantiate GridSearch object
cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs=4)

# Use GridSearch to find the best parameters
cv.fit(X_train, Y_tr...
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - esp...
# Predict using the trained model with the best parameters
Y_pred = cv.predict(X_test)

# Print the classification report for each column
for i, column in enumerate(Y_train.columns):
    print("Columns: ", column)
    print(classification_report(Y_test.values[:, i], Y_pred[:, i]))
    print()
Columns:  related
              precision    recall  f1-score   support

           0       0.77      0.25      0.38      1539
           1       0.80      0.98      0.88      4967
           2       1.00      0.04      0.08        48

    accuracy                           0.80      6554
   macro avg       0.86      0...
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
# TODO: Model is taking too long to fit
# I have to find a better engine to process it,
# before testing new ideas
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
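One hypothetical direction for the "improve further" step: replace the random forest with a linear classifier, which usually fits much faster on sparse TF-IDF features and would ease the fitting-time problem noted in the TODO. This is a sketch, not a tested configuration for this dataset.

```python
# Sketch: a faster pipeline variant using logistic regression per output column.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

fast_pipeline = Pipeline([
    # TfidfVectorizer combines the CountVectorizer + TfidfTransformer pair in one step.
    ('tfidf', TfidfVectorizer()),
    ('clf', MultiOutputClassifier(LogisticRegression(max_iter=1000))),
])
print(fast_pipeline.named_steps)
```

It drops into the same `fit`/`predict`/`GridSearchCV` workflow used above, so comparing the two models needs no other code changes.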
9. Export your model as a pickle file
# Save the model to a pickle file
import pickle

with open("DISASSTER_MODEL.pkl", 'wb') as file:
    file.write(pickle.dumps(cv))
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
Examples and Exercises from Think Stats, 2nd Editionhttp://thinkstats2.comCopyright 2016 Allen B. DowneyMIT License: https://opensource.org/licenses/MIT
from __future__ import print_function, division

%matplotlib inline

import numpy as np

import brfss
import thinkstats2
import thinkplot
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
I'll start with the data from the BRFSS again.
df = brfss.ReadBrfss(nrows=None)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Here are the mean and standard deviation of female height in cm.
female = df[df.sex == 2]
female_heights = female.htm3.dropna()
mean, std = female_heights.mean(), female_heights.std()
mean, std
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
`NormalPdf` returns a Pdf object that represents the normal distribution with the given parameters.`Density` returns a probability density, which doesn't mean much by itself.
pdf = thinkstats2.NormalPdf(mean, std)
pdf.Density(mean + std)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
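The density value can be cross-checked against `scipy.stats.norm`. The mean/std below are illustrative stand-ins, not the BRFSS estimates computed above.

```python
# Cross-check: normal density one standard deviation from the mean.
import numpy as np
from scipy.stats import norm

mean, std = 163.0, 7.0  # assumed illustrative values
density = norm(mean, std).pdf(mean + std)

# Analytically, the density at mean + std is exp(-1/2) / (std * sqrt(2*pi)).
expected = np.exp(-0.5) / (std * np.sqrt(2 * np.pi))
print(density, expected)
```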
`thinkplot` provides `Pdf`, which plots the probability density with a smooth curve.
thinkplot.Pdf(pdf, label='normal')
thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
`Pdf` provides `MakePmf`, which returns a `Pmf` object that approximates the `Pdf`.
pmf = pdf.MakePmf()
thinkplot.Pmf(pmf, label='normal')
thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
If you have a `Pmf`, you can also plot it using `Pdf`, if you have reason to think it should be represented as a smooth curve.
thinkplot.Pdf(pmf, label='normal')
thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Using a sample from the actual distribution, we can estimate the PDF using Kernel Density Estimation (KDE).If you run this a few times, you'll see how much variation there is in the estimate.
thinkplot.Pdf(pdf, label='normal')

sample = np.random.normal(mean, std, 500)
sample_pdf = thinkstats2.EstimatedPdf(sample, label='sample')
thinkplot.Pdf(sample_pdf, label='sample KDE')
thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
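As I recall, `thinkstats2.EstimatedPdf` is a thin wrapper around SciPy's Gaussian KDE (treat that equivalence as an assumption); the estimator can be used directly as below.

```python
# Sketch: kernel density estimation with scipy.stats.gaussian_kde.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.normal(163, 7, 500)   # synthetic stand-in for the height sample

kde = gaussian_kde(sample)
# The estimated density should be much higher near the true mean (163)
# than far out in the tail (200).
print(kde(163)[0], kde(200)[0])
```

Re-running with a fresh seed shows the run-to-run variation the text mentions.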
MomentsRaw moments are just sums of powers.
def RawMoment(xs, k):
    return sum(x**k for x in xs) / len(xs)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
The first raw moment is the mean. The other raw moments don't mean much.
RawMoment(female_heights, 1), RawMoment(female_heights, 2), RawMoment(female_heights, 3)

def Mean(xs):
    return RawMoment(xs, 1)

Mean(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
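A toy cross-check makes the "first raw moment is the mean" claim concrete; the function is restated so the sketch is self-contained.

```python
# Raw moments on a tiny sample, checked against NumPy's mean.
import numpy as np

def RawMoment(xs, k):
    return sum(x**k for x in xs) / len(xs)

xs = [1, 2, 3, 4]
print(RawMoment(xs, 1), np.mean(xs))  # first raw moment equals the mean: 2.5
print(RawMoment(xs, 2))               # (1 + 4 + 9 + 16) / 4 = 7.5
```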
The central moments are powers of distances from the mean.
def CentralMoment(xs, k):
    mean = RawMoment(xs, 1)
    return sum((x - mean)**k for x in xs) / len(xs)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
The first central moment is approximately 0. The second central moment is the variance.
CentralMoment(female_heights, 1), CentralMoment(female_heights, 2), CentralMoment(female_heights, 3)

def Var(xs):
    return CentralMoment(xs, 2)

Var(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
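The second central moment can be cross-checked against NumPy's population variance; both functions are restated so the sketch is self-contained.

```python
# Second central moment vs. np.var (which also divides by n, not n-1, by default).
import numpy as np

def RawMoment(xs, k):
    return sum(x**k for x in xs) / len(xs)

def CentralMoment(xs, k):
    mean = RawMoment(xs, 1)
    return sum((x - mean)**k for x in xs) / len(xs)

xs = [1, 2, 3, 4]
print(CentralMoment(xs, 2), np.var(xs))  # both 1.25
print(CentralMoment(xs, 1))              # first central moment is ~0
```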
The standardized moments are ratios of central moments, with powers chosen to make the dimensions cancel.
def StandardizedMoment(xs, k):
    var = CentralMoment(xs, 2)
    std = np.sqrt(var)
    return CentralMoment(xs, k) / std**k
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
The third standardized moment is skewness.
StandardizedMoment(female_heights, 1), StandardizedMoment(female_heights, 2), StandardizedMoment(female_heights, 3)

def Skewness(xs):
    return StandardizedMoment(xs, 3)

Skewness(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
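A toy sample with a long right tail shows the sign behaviour of the third standardized moment; the computation is restated with NumPy so the sketch is self-contained.

```python
# Skewness sign check: a right-tailed sample gives a positive third standardized moment.
import numpy as np

def StandardizedMoment(xs, k):
    xs = np.asarray(xs, dtype=float)
    mean = xs.mean()
    std = np.sqrt(((xs - mean)**2).mean())
    return ((xs - mean)**k).mean() / std**k

xs = [1, 1, 1, 10]  # long right tail
print(StandardizedMoment(xs, 3))  # positive
```

The second standardized moment is always 1 by construction, which is a handy sanity check on any implementation.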
Normally a negative skewness indicates that the distribution has a longer tail on the left. In that case, the mean is usually less than the median.
def Median(xs):
    cdf = thinkstats2.Cdf(xs)
    return cdf.Value(0.5)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
But in this case the mean is greater than the median, which indicates skew to the right.
Mean(female_heights), Median(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Because the skewness is based on the third moment, it is not robust; that is, it depends strongly on a few outliers. Pearson's median skewness is more robust.
def PearsonMedianSkewness(xs):
    median = Median(xs)
    mean = RawMoment(xs, 1)
    var = CentralMoment(xs, 2)
    std = np.sqrt(var)
    gp = 3 * (mean - median) / std
    return gp
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
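Pearson's median skewness on toy samples shows how the sign tracks the tail direction; this restatement uses NumPy's median and std so it is self-contained.

```python
# Pearson's median skewness: 3 * (mean - median) / std.
import numpy as np

def PearsonMedianSkewness(xs):
    xs = np.asarray(xs, dtype=float)
    return 3 * (xs.mean() - np.median(xs)) / xs.std()

print(PearsonMedianSkewness([1, 2, 3, 100]))   # positive: long right tail
print(PearsonMedianSkewness([-100, 1, 2, 3]))  # negative: long left tail
```

Because the median ignores the magnitude of outliers, the single extreme value shifts the statistic far less than it would shift the third-moment skewness, which is the robustness the text describes.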
Pearson's skewness is positive, indicating that the distribution of female heights is slightly skewed to the right.
PearsonMedianSkewness(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW