By generating a scatter plot using the second feature `features[:, 1]` and `labels`, we can clearly observe the linear correlation between the two.
d2l.set_figsize()
# The semicolon is for displaying the plot only
d2l.plt.scatter(features[:, 1].numpy(), labels.numpy(), 1);
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Reading the Dataset
Recall that training models consists of making multiple passes over the dataset, grabbing one minibatch of examples at a time, and using them to update our model. Since this process is so fundamental to training machine learning algorithms, it is worth defining a utility function to shuffle the dataset and...
def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    # The examples are read at random, in no particular order
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        j = tf.constant(indices[i: min(i + batch_size, num_exam...
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
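The `data_iter` cell above is truncated in this dump; a complete minimal sketch in the same style (matching the d2l TensorFlow version, and assuming `random` and `tensorflow as tf` are imported) could be:

import random
import tensorflow as tf

def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    # The examples are read at random, in no particular order
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        # Indices of one minibatch (the last batch may be smaller)
        j = tf.constant(indices[i: min(i + batch_size, num_examples)])
        yield tf.gather(features, j), tf.gather(labels, j)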
In general, note that we want to use reasonably sized minibatches to take advantage of the GPU hardware, which excels at parallelizing operations. Because each example can be fed through our models in parallel and the gradient of the loss function for each example can also be taken in parallel, GPUs allow us to process hund...
batch_size = 10

for X, y in data_iter(batch_size, features, labels):
    print(X, '\n', y)
    break
tf.Tensor( [[ 0.34395403 0.250355 ] [ 0.8474066 -0.08658892] [ 1.332213 -0.05381915] [-1.0579451 0.5105379 ] [-0.48678052 0.12689345] [-0.19708689 -0.7590605 ] [-1.4754761 -0.98582214] [ 0.35217085 0.43196547] [-1.7024363 0.54085165] [-0.10568867 -1.4778754 ]], shape=(10, 2), dtype=float32) tf.Te...
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
As we run the iteration, we obtain distinct minibatches successively until the entire dataset has been exhausted (try this). While the iteration implemented above is good for didactic purposes, it is inefficient in ways that might get us in trouble on real problems. For example, it requires that we load all the data in mem...
w = tf.Variable(tf.random.normal(shape=(2, 1), mean=0, stddev=0.01), trainable=True)
b = tf.Variable(tf.zeros(1), trainable=True)
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
After initializing our parameters, our next task is to update them until they fit our data sufficiently well. Each update requires taking the gradient of our loss function with respect to the parameters. Given this gradient, we can update each parameter in the direction that may reduce the loss. Since nobody wants to compute ...
def linreg(X, w, b):  #@save
    """The linear regression model."""
    return tf.matmul(X, w) + b
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Defining the Loss Function
Since [**updating our model requires taking the gradient of our loss function,**] we ought to (**define the loss function first.**) Here we will use the squared loss function as described in :numref:`sec_linear_regression`. In the implementation, we need to transform the true value `y` into the pre...
def squared_loss(y_hat, y):  #@save
    """Squared loss."""
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Defining the Optimization Algorithm
As we discussed in :numref:`sec_linear_regression`, linear regression has a closed-form solution. However, this is not a book about linear regression: it is a book about deep learning. Since none of the other models that this book introduces can be solved analytically, we will take this o...
def sgd(params, grads, lr, batch_size):  #@save
    """Minibatch stochastic gradient descent."""
    for param, grad in zip(params, grads):
        param.assign_sub(lr * grad / batch_size)
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Training
Now that we have all of the parts in place, we are ready to [**implement the main training loop.**] It is crucial that you understand this code because you will see nearly identical training loops over and over again throughout your career in deep learning. In each iteration, we will grab a minibatch of training ex...
lr = 0.03
num_epochs = 3
net = linreg
loss = squared_loss

for epoch in range(num_epochs):
    for X, y in data_iter(batch_size, features, labels):
        with tf.GradientTape() as g:
            l = loss(net(X, w, b), y)  # Minibatch loss in `X` and `y`
        # Compute gradient on l with respect to [`w`, `b`]
        ...
epoch 1, loss 0.029337
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
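The training cell above is cut off; a sketch of the full loop, assuming the `net`, `loss`, `sgd`, and `data_iter` definitions from the preceding cells:

for epoch in range(num_epochs):
    for X, y in data_iter(batch_size, features, labels):
        with tf.GradientTape() as g:
            l = loss(net(X, w, b), y)  # Minibatch loss in `X` and `y`
        # Compute gradient on l with respect to [`w`, `b`]
        dw, db = g.gradient(l, [w, b])
        # Update parameters using their gradient
        sgd([w, b], [dw, db], lr, batch_size)
    train_l = loss(net(features, w, b), labels)
    print(f'epoch {epoch + 1}, loss {float(tf.reduce_mean(train_l)):f}')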
In this case, because we synthesized the dataset ourselves, we know precisely what the true parameters are. Thus, we can [**evaluate our success in training by comparing the true parameters with those that we learned**] through our training loop. Indeed they turn out to be very close to each other.
print(f'error in estimating w: {true_w - tf.reshape(w, true_w.shape)}')
print(f'error in estimating b: {true_b - b}')
error in estimating w: [-0.00040174 -0.00101519] error in estimating b: [0.00056839]
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
$ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\my...
u = [-2,-1,0,1]
v = [1,2,3]
uv = []
vu = []
for i in range(len(u)):  # one element of u is picked
    for j in range(len(v)):  # now we iteratively select every element of v
        uv.append(u[i]*v[j])  # this one element of u is iteratively multiplied with every element of v
print("u-tensor-v is", uv)
for i...
_____no_output_____
Apache-2.0
math/Math32_Tensor_Product_Solutions.ipynb
jahadtariq/Quantum-Computing
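For reference, a complete version of the vector tensor product computed by the truncated cell above, using only plain Python lists:

u = [-2, -1, 0, 1]
v = [1, 2, 3]

# u-tensor-v: every entry of u multiplied by every entry of v, in order
uv = [u[i] * v[j] for i in range(len(u)) for j in range(len(v))]
# v-tensor-u: the roles are swapped, so the result is a reordering of uv
vu = [v[i] * u[j] for i in range(len(v)) for j in range(len(u))]

print("u-tensor-v is", uv)
print("v-tensor-u is", vu)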
Task 2 Find $ A \otimes B $ for the given matrices$ A = \mymatrix{rrr}{-1 & 0 & 1 \\ -2 & -1 & 2} ~~\mbox{and}~~ B = \mymatrix{rr}{0 & 2 \\ 3 & -1 \\ -1 & 1 }.$ Solution
A = [
    [-1,0,1],
    [-2,-1,2]
]
B = [
    [0,2],
    [3,-1],
    [-1,1]
]

print("A =")
for i in range(len(A)):
    print(A[i])
print()  # print a line
print("B =")
for i in range(len(B)):
    print(B[i])

# let's define A-tensor-B as a (6x6)-dimensional zero matrix
AB = []
for i in range(6):
    AB.append([])
    ...
_____no_output_____
Apache-2.0
math/Math32_Tensor_Product_Solutions.ipynb
jahadtariq/Quantum-Computing
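The construction cell is truncated; a complete nested-loop sketch of the (6x6) matrix $A \otimes B$, using the block rule that entry $(3i+k,\, 2j+l)$ equals $A[i][j] \cdot B[k][l]$:

A = [[-1, 0, 1],
     [-2, -1, 2]]
B = [[0, 2],
     [3, -1],
     [-1, 1]]

# A-tensor-B is (6x6): each entry A[i][j] scales a copy of the 3x2 block B
AB = [[0 for _ in range(6)] for _ in range(6)]
for i in range(2):              # row index into A
    for j in range(3):          # column index into A
        for k in range(3):      # row index into B
            for l in range(2):  # column index into B
                AB[3 * i + k][2 * j + l] = A[i][j] * B[k][l]

print("A-tensor-B =")
for row in AB:
    print(row)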
Task 3 Find $ B \otimes A $ for the given matrices$ A = \mymatrix{rrr}{-1 & 0 & 1 \\ -2 & -1 & 2} ~~\mbox{and}~~ B = \mymatrix{rr}{0 & 2 \\ 3 & -1 \\ -1 & 1 }.$ Solution
A = [
    [-1,0,1],
    [-2,-1,2]
]
B = [
    [0,2],
    [3,-1],
    [-1,1]
]

print()  # print a line
print("B =")
for i in range(len(B)):
    print(B[i])
print("A =")
for i in range(len(A)):
    print(A[i])

# let's define B-tensor-A as a (6x6)-dimensional zero matrix
BA = []
for i in range(6):
    BA.append([])
    ...
_____no_output_____
Apache-2.0
math/Math32_Tensor_Product_Solutions.ipynb
jahadtariq/Quantum-Computing
Mask R-CNN - Train on Nuclei Dataset (updated from train_shape.ipynb)
This notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is...
import os
import sys
import random
import math
import re
import time
import tqdm
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt

from config import Config
import utils
import model as modellib
import visualize
from model import log

%matplotlib inline

# Root directory of the project
R...
/home/lf/anaconda3/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6 return f(*args, **kwds) /home/lf/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the...
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Configurations
class NucleiConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "nuclei"

    # Train on 1 GPU and 8 images per GPU. We can put...
Configurations: BACKBONE_SHAPES [[128 128] [ 64 64] [ 32 32] [ 16 16] [ 8 8]] BACKBONE_STRIDES [4, 8, 16, 32, 64] BATCH_SIZE 4 BBOX_STD_DEV [0.1 0.1 0.2 0.2] DETECTION_MAX_INSTANCES 100 DETECTION_MIN_CONFIDENCE 0.7 DETECTION_NMS_...
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
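The configuration cell is truncated; judging from the printed values (NAME "nuclei", BATCH_SIZE 4), a sketch of such a `Config` subclass might look as follows — attribute values beyond those visible in the output are assumptions:

class NucleiConfig(Config):
    """Configuration for training on the nuclei dataset (sketch)."""
    # Give the configuration a recognizable name
    NAME = "nuclei"

    # The printed config shows BATCH_SIZE 4 = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 4

    # Background + one "regular" nucleus class (assumed from the dataset cell)
    NUM_CLASSES = 1 + 1

config = NucleiConfig()
config.display()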
Notebook Preferences
def get_ax(rows=1, cols=1, size=8):
    """Return a Matplotlib Axes array to be used in
    all visualizations in the notebook. Provide a
    central point to control graph sizes.

    Change the default size attribute to control the size
    of rendered images
    """
    _, ax = plt.subplots(rows, cols, figsize=(...
_____no_output_____
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Dataset
Load the nuclei dataset
Extend the Dataset class and add a method to get the nuclei dataset, `load_image_info()`, and override the following methods:
* load_image()
* load_mask()
* image_reference()
class NucleiDataset(utils.Dataset):
    """Load the images and masks from dataset."""

    def load_image_info(self, set_path, img_set):
        """Get the picture names(ids) of the dataset."""
        # Add classes
        self.add_class("nucleis", 1, "regular")
        # TO DO : Three different image types
        ...
594
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Bounding Boxes
Although we don't have the specific box coordinates in the dataset, we can compute the bounding boxes from masks instead. This allows us to handle bounding boxes consistently regardless of the source dataset, and it also makes it easier to resize, rotate, or crop images because we simply generate the bou...
# Load random image and mask.
image_id = random.choice(dataset_train.image_ids)
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
# Compute Bounding box
bbox = utils.extract_bboxes(mask)

# Display image and additional stats
print("image_id ", image_id, dataset_train.image_r...
image_id 527 /home/lf/Nuclei/data/stage1_train_fixed/3b0709483b1e86449cc355bb797e841117ba178c6ae1ed955384f4da6486aa20 image shape: (256, 320, 3) min: 28.00000 max: 214.00000 mask shape: (256, 320, 17) min: 0.00000 max: 255.00000 class_ids sh...
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
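`utils.extract_bboxes` derives boxes from the binary masks; the underlying idea, as a small NumPy sketch with a hypothetical `bbox_from_mask` helper (not the project's implementation):

import numpy as np

def bbox_from_mask(mask):
    """Return (y1, x1, y2, x2) for a single binary mask of shape [H, W]."""
    rows = np.any(mask, axis=1)   # which rows contain any mask pixel
    cols = np.any(mask, axis=0)   # which columns contain any mask pixel
    y1, y2 = np.where(rows)[0][[0, -1]]
    x1, x2 = np.where(cols)[0][[0, -1]]
    # y2/x2 are exclusive, matching the convention used by Mask R-CNN utils
    return y1, x1, y2 + 1, x2 + 1

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:7] = True
print(bbox_from_mask(mask))  # (2, 3, 5, 7)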
Create Model
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
                          model_dir=MODEL_DIR)

# Which weights to start with?
init_with = "coco"  # imagenet, coco, or last

if init_with == "imagenet":
    model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_...
_____no_output_____
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Training
Train in two stages:
1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
2. Fine-tune all layers. For t...
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=1,
            layers='h...
_____no_output_____
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Detection Example
class InferenceConfig(NucleiConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    DETECTION_NMS_THRESHOLD = 0.3
    DETECTION_MAX_INSTANCES = 300

inference_config = InferenceConfig()

# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
                          config=inference_config,
                          ...
Processing 1 images image shape: (1024, 1024, 3) min: 0.00000 max: 232.00000 molded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 128.10000 image_metas shape: (1, 10) min: 0.00000 max: 1024.00000
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Evaluation
# Compute VOC-Style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
# image_ids = np.random.choice(dataset_val.image_ids, 10)
image_ids = dataset_val.image_ids
APs = []
for image_id in image_ids:
    # Load image and ground truth data
    image, image_meta, gt_class_id, gt_bbox, gt_mask =\
        m...
mAP: 0.808316577444
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Writing the Results
# Get the Test set.
TESTSET_DIR = os.path.join(DATA_DIR, "stage1_test")
dataset_test = NucleiDataset()
dataset_test.load_image_info(TESTSET_DIR)
dataset_test.prepare()
print("Predict {} images".format(dataset_test.num_images))

# Load random image and mask(Original Size).
image_id = np.random.choice(dataset_test.image_...
Processing 1 images image shape: (256, 256, 3) min: 10.00000 max: 255.00000 molded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 10) min: 0.00000 max: 912.00000 /data2/liangfeng/nuclei_models/nuclei20...
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Contents
1 Socioeconomic data validation
1.0.1 Goals
1.0.2 Data sources
1.0.3 Methodology
1.0.4 Results
1.0.4.1 Outputs
1.0.5 Authors
1.1 Import data
1.2 INSE data analysis
1.2.1 Filtering model (reference) and risk (attent...
# Import config
import os
import sys
sys.path.insert(0, '../')
from config import RAW_PATH, TREAT_PATH, OUTPUT_PATH

# DATA ANALYSIS & VIZ TOOLS
from copy import deepcopy
import pandas as pd
import numpy as np
pd.options.display.max_columns = 999

import geopandas as gpd
from shapely.wkt import loads

import matplotlib...
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Import data
inse = pd.read_excel(RAW_PATH / "INSE_2015.xlsx")
schools_ideb = pd.read_csv(OUTPUT_PATH / "kepler_with_filters.csv")
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
INSE data analysis
inse.rename(columns={"CO_ESCOLA": "cod_inep"}, inplace=True)
inse.head()

schools_ideb['ano'] = pd.to_datetime(schools_ideb['ano'])
schools_ideb.head()
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Filtering model (`reference`) and risk (`attention`) schools
reference = schools_ideb[(schools_ideb['ano'].dt.year == 2017)
                         & ((schools_ideb['pessimo_pra_bom_bin'] == 1)
                            | (schools_ideb['ruim_pra_bom_bin'] == 1))]
reference.info()

attention = schools_ideb[(schools_ideb['ano'].dt.year == 2017)
                         & (schools_ideb['nivel_atencao'] == 4)]
attention.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 176 entries, 4127 to 4728 Data columns (total 14 columns): ano 176 non-null datetime64[ns] cod_inep 176 non-null int64 geometry 176 non-null object ideb 176 non-null float64 nome_ab...
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Join INSE data
inse_cols = ["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]

reference = pd.merge(reference, inse[inse_cols], how="left", on="cod_inep")
attention = pd.merge(attention, inse[inse_cols], how="left", on="cod_inep")

reference['tipo_escola'] = 'Escola referência'
reference.info()

attention[...
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Comparing INSE data in categories
sns.distplot(attention["INSE_VALOR_ABSOLUTO"].dropna(), bins='fd', label='Escolas de risco')
sns.distplot(reference["INSE_VALOR_ABSOLUTO"].dropna(), bins='fd', label='Escolas modelo')
plt.legend()

pylab.rcParams['figure.figsize'] = (10, 8)
title = "Comparação do nível sócio-econômico das escolas selecionadas"
ylabel="...
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Statistical INSE analysis
Normality test
From [this article](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3693611/):
> According to the available literature, **assessing the normality assumption should be taken into account for using parametric statistical tests.** It seems that the most popular test for normality, tha...
from scipy.stats import normaltest, shapiro, probplot
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
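As a quick illustration of these tests on synthetic data (each returns a statistic and a p-value; a small p-value rejects normality):

import numpy as np
from scipy.stats import normaltest, shapiro

rng = np.random.default_rng(42)
gaussian = rng.normal(size=500)      # should pass both tests
skewed = rng.exponential(size=500)  # should fail both tests

print(normaltest(gaussian))  # large p-value: no evidence against normality
print(normaltest(skewed))    # tiny p-value: normality rejected
print(shapiro(gaussian))
print(shapiro(skewed))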
D'Agostino and Pearson's
normaltest(attention["INSE_VALOR_ABSOLUTO"].dropna())
normaltest(reference["INSE_VALOR_ABSOLUTO"].dropna())
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Shapiro-Wilk
shapiro(attention["INSE_VALOR_ABSOLUTO"].dropna())
qs = probplot(reference["INSE_VALOR_ABSOLUTO"].dropna(), plot=plt)

shapiro(reference["INSE_VALOR_ABSOLUTO"].dropna())
ws = probplot(attention["INSE_VALOR_ABSOLUTO"].dropna(), plot=plt)
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
*t* test
About parametric tests: [here](https://www.healthknowledge.org.uk/public-health-textbook/research-methods/1b-statistical-methods/parametric-nonparametric-tests)
We can test the hypothesis that INSE is related to IDEB scores for the risk ($\mu_r$) and model ($\mu_m$) schools as follows:
$H_0: \mu_r = \mu_m$
$H_...
from scipy.stats import ttest_ind as ttest, normaltest, kstest

attention["INSE_VALOR_ABSOLUTO"].dropna().describe()
reference["INSE_VALOR_ABSOLUTO"].dropna().describe()
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Model x risk schools
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"],
      nan_policy="omit", equal_var=True)
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"],
      nan_policy="omit", equal_var=False)
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Cohen's D
My favorite effect-size metric is Cohen's d, but apparently there is no canonical implementation of it. I will use the one I found [on this site](https://machinelearningmastery.com/effect-size-measures-in-python/).
from numpy.random import randn
from numpy.random import seed
from numpy import mean
from numpy import var
from math import sqrt

# == Code made by Guilherme Almeida, 2019 ==
# function to calculate Cohen's d for independent samples
def cohend(d1, d2):
    # calculate the size of samples
    n1, n2 = len(d1), len(d...
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
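The `cohend` cell is truncated; the complete function from the linked post pools the two sample standard deviations, roughly as follows:

from numpy import mean, var
from math import sqrt

def cohend(d1, d2):
    """Cohen's d for independent samples."""
    # sizes of the two samples
    n1, n2 = len(d1), len(d2)
    # sample variances (ddof=1)
    s1, s2 = var(d1, ddof=1), var(d2, ddof=1)
    # pooled standard deviation
    s = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    # difference of means, in pooled-SD units
    return (mean(d1) - mean(d2)) / s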
Model x risk schools
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit")
cohend(reference["INSE_VALOR_ABSOLUTO"], attention["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Best evolution model x risk schools
best_evolution = df_inse[df_inse['tipo_especifico'] == "pessimo_pra_bom_bin"]

ttest(attention["INSE_VALOR_ABSOLUTO"], best_evolution["INSE_VALOR_ABSOLUTO"], nan_policy="omit")
cohend(attention["INSE_VALOR_ABSOLUTO"], best_evolution["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Other model x risk schools
medium_evolution = df_inse[df_inse['tipo_especifico'] == "ruim_pra_bom_bin"]

ttest(attention["INSE_VALOR_ABSOLUTO"], medium_evolution["INSE_VALOR_ABSOLUTO"], nan_policy="omit")
cohend(attention["INSE_VALOR_ABSOLUTO"], medium_evolution["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
ruim_pra_bom["tipo_especifico"] = "Ruim para bom"
pessimo_pra_bom["tipo_especifico"] = "Muito ruim para bom"
risco["tipo_especifico"] = "Desempenho abaixo\ndo esperado"
referencias.head()

referencias = pd.merge(referencias, inse[["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]],
                       how="left", on="cod_inep")
risco = pd.merge(risco, inse[["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]],
                 how="left", on="cod_inep")

referencias.INSE_VALOR_...
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Statistical tests
Cohen's D
My favorite effect-size metric is Cohen's d, but apparently there is no canonical implementation of it. I will use the one I found [on this site](https://machinelearningmastery.com/effect-size-measures-in-python/).
from numpy.random import randn
from numpy.random import seed
from numpy import mean
from numpy import var
from math import sqrt

# function to calculate Cohen's d for independent samples
def cohend(d1, d2):
    # calculate the size of samples
    n1, n2 = len(d1), len(d2)
    # calculate the variance of the samples
    s1, s2 = va...
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
All reference schools vs. risk schools
ttest(risco["INSE_VALOR_ABSOLUTO"], referencias["INSE_VALOR_ABSOLUTO"], nan_policy="omit")
cohend(referencias["INSE_VALOR_ABSOLUTO"], risco["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Only the "very bad to good" schools vs. risk schools
ttest(risco["INSE_VALOR_ABSOLUTO"],
      referencias.query("tipo_especifico == 'Muito ruim para bom'")["INSE_VALOR_ABSOLUTO"],
      nan_policy="omit")
cohend(referencias.query("tipo_especifico == 'Muito ruim para bom'")["INSE_VALOR_ABSOLUTO"],
       risco["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Trying to infer causality
We know there is a significant difference between the socioeconomic levels of the two groups. But to what extent can this difference in INSE explain the difference in IDEB? Is there any remaining effect that can be attributed to management practices? These tests try to find an answ...
# get the IDEB score to serve as the DV
ideb = pd.read_csv("./pr-educacao/data/output/ideb_merged_kepler.csv")
ideb["ano_true"] = ideb["ano"].apply(lambda x: int(x[0:4]))
ideb = ideb.query("ano_true == 2017").copy()
nota_ideb = ideb[["cod_inep", "ideb"]]

df = pd.merge(df, nota_ideb, how="left", on="cod_inep")
df.drop...
OLS Regression Results ============================================================================== Dep. Variable: ideb R-squared: 0.843 Model: OLS Adj. R-squared: 0.841 Meth...
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
The problem with running the regression the way I did above is that tipo_bin was created partly as a function of IDEB (see the histograms below), so it is not a truly independent variable. One strategy may be to compare simple models, one with only INSE and one with only tipo_bin.
df.ideb.hist()
df.query("tipo_bin == 0").ideb.hist()
df.query("tipo_bin == 1").ideb.hist()

# simple correlation
from scipy.stats import pearsonr
pearsonr(df[["ideb"]], df[["INSE_VALOR_ABSOLUTO"]])

iv_inse = add_constant(df[["INSE_VALOR_ABSOLUTO"]])
iv_ideb = add_constant(df[["tipo_bin"]])

modelo_inse = ols_py(df[["ide...
OLS Regression Results ============================================================================== Dep. Variable: ideb R-squared: 0.187 Model: OLS Adj. R-squared: 0.182 Meth...
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Paired tests
Our unit of observation should actually be not a single school but a pair of schools. Below, I redo the analyses using the INSE delta and the IDEB delta for each pair of schools. This is important: we know INSE makes a difference in overall IDEB, but the question is whether it...
pairs = pd.read_csv("sponsors_mais_proximos.csv")
pairs.head()
pairs.shape

inse_risco = inse[["cod_inep", "INSE_VALOR_ABSOLUTO"]]
inse_risco.columns = ["cod_inep_risco", "inse_risco"]

inse_ref = inse[["cod_inep", "INSE_VALOR_ABSOLUTO"]]
inse_ref.columns = ["cod_inep_referencia", "inse_referencia"]

pairs = pd.merge(pairs...
OLS Regression Results ============================================================================== Dep. Variable: delta_ideb R-squared: 0.000 Model: OLS Adj. R-squared: -0.010 Meth...
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Testing the assumption that physical distance correlates with INSE distance
pairs.head()
sns.regplot("distancia", "delta_inse", data=clean_pairs.query("distancia < 4000"))

multi_iv = add_constant(clean_pairs[["distancia", "delta_inse"]])
modelo_ze = ols_py(clean_pairs[["delta_ideb"]], multi_iv).fit()
print(modelo_ze.summary())
OLS Regression Results ============================================================================== Dep. Variable: delta_ideb R-squared: 0.000 Model: OLS Adj. R-squared: -0.021 Meth...
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Intro & Resources
* [Sutton/Barto ebook](https://goo.gl/7utZaz); [Silver online course](https://goo.gl/AWcMFW)

Learning to Optimize Rewards
* Definitions: software *agents* make *observations* & take *actions* within an *environment*. In return they can receive *rewards* (positive or negative).

Policy Search
* **Policy...
!pip3 install --upgrade gym
import gym
env = gym.make("CartPole-v0")
obs = env.reset()
obs
env.render()
[2017-04-27 13:05:47,311] Making new env: CartPole-v0
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
* **make()** creates environment
* **reset()** returns a first observation
* **CartPole()** - each observation = 1D numpy array (hposition, velocity, angle, angularvelocity)

![cartpole](pics/cartpole.png)
img = env.render(mode="rgb_array")
img.shape

# what actions are possible?
# in this case: 0 = accelerate left, 1 = accelerate right
env.action_space

# pole is leaning right. let's go further to the right.
action = 1
obs, reward, done, info = env.step(action)
obs, reward, done, info
_____no_output_____
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
* new observation:
  * hpos = obs[0]<0
  * velocity = obs[1]>0 = moving to the right
  * angle = obs[2]>0 = leaning right
  * ang velocity = obs[3]<0 = slowing down?
* reward = 1.0
* done = False (episode not over)
* info = (empty)
# example policy:
# (1) accelerate left when leaning left, (2) accelerate right when leaning right
# average reward over 500 episodes?
def basic_policy(obs):
    angle = obs[2]
    return 0 if angle < 0 else 1

totals = []
for episode in range(500):
    episode_rewards = 0
    obs = env.reset()
    for step in range(...
_____no_output_____
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
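The policy-evaluation cell is cut off; a complete sketch in the same style, assuming the old `gym` API used throughout this notebook:

def basic_policy(obs):
    # accelerate left when leaning left, right when leaning right
    angle = obs[2]
    return 0 if angle < 0 else 1

totals = []
for episode in range(500):
    episode_rewards = 0
    obs = env.reset()
    for step in range(1000):  # at most 1000 steps per episode
        action = basic_policy(obs)
        obs, reward, done, info = env.step(action)
        episode_rewards += reward
        if done:
            break
    totals.append(episode_rewards)

import numpy as np
print(np.mean(totals), np.std(totals), np.min(totals), np.max(totals))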
NN Policies
* observations as inputs - actions to be executed as outputs - determined by p(action)
* approach lets agent find best balance between **exploring new actions** & **reusing known good actions**.

Evaluating Actions: Credit Assignment problem
* Reinforcement Learning (RL) training not like supervised learning. ...
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected

# 1. Specify the neural network architecture
n_inputs = 4   # == env.observation_space.shape[0]
n_hidden = 4   # simple task, don't need more hidden neurons
n_outputs = 1
...
_____no_output_____
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
Policy Gradient (PG) algorithms
* example: ["reinforce" algo, 1992](https://goo.gl/tUe4Sh)

Markov Decision processes (MDPs)
* Markov chains = stochastic processes, no memory, fixed states, random transitions
* Markov decision processes = similar to MCs - agent can choose action; transition probabilities depend on the ac...
# Define MDP:
nan = np.nan  # represents impossible actions
T = np.array([  # shape=[s, a, s']
    [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]],
    [[0.0, 1.0, 0.0], [nan, nan, nan], [0.0, 0.0, 1.0]],
    [[nan, nan, nan], [0.8, 0.1, 0.1], [nan, nan, nan]],
])
R = np.array([  # shape=[s, a, s']
    ...
Q: [[ 1.89189499e+01 1.70270580e+01 1.36216526e+01] [ 3.09979853e-05 -inf -4.87968388e+00] [ -inf 5.01336811e+01 -inf]] Optimal action for each state: [0 0 1]
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
Temporal Difference Learning & Q-Learning
* In general - agent has no knowledge of transition probabilities or rewards
* **Temporal Difference Learning** (TD Learning) similar to value iteration, but accounts for this lack of knowledge.
* Algorithm tracks running average of most recent awards & anticipated rewards.
* **Q-L...
import numpy.random as rnd

learning_rate0 = 0.05
learning_rate_decay = 0.1
n_iterations = 20000

s = 0  # start in state 0
Q = np.full((3, 3), -np.inf)  # -inf for impossible actions
for state, actions in enumerate(possible_actions):
    Q[state, actions] = 0.0  # Initial value = 0.0, for all po...
Q: [[ -inf 2.47032823e-323 -inf] [ 0.00000000e+000 -inf 0.00000000e+000] [ -inf 0.00000000e+000 -inf]] Optimal action for each state: [1 0 1]
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
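The Q-Learning cell is truncated; a complete sketch of the loop, reusing the `T`, `R`, and `possible_actions` structures from the MDP cell (the discount rate is an assumption, since the truncated cell hides it):

import numpy as np
import numpy.random as rnd

learning_rate0 = 0.05
learning_rate_decay = 0.1
discount_rate = 0.95  # assumed value
n_iterations = 20000

s = 0  # start in state 0
Q = np.full((3, 3), -np.inf)  # -inf for impossible actions
for state, actions in enumerate(possible_actions):
    Q[state, actions] = 0.0  # initial value = 0.0 for all possible actions

for iteration in range(n_iterations):
    a = rnd.choice(possible_actions[s])   # explore: pick a random legal action
    sp = rnd.choice(range(3), p=T[s, a])  # sample the next state from T
    reward = R[s, a, sp]
    learning_rate = learning_rate0 / (1 + iteration * learning_rate_decay)
    # TD update: move Q(s, a) toward reward + gamma * max_a' Q(s', a')
    Q[s, a] = ((1 - learning_rate) * Q[s, a]
               + learning_rate * (reward + discount_rate * np.max(Q[sp])))
    s = sp

print("Q:", Q)
print("Optimal action for each state:", np.argmax(Q, axis=1))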
Exploration Policies
* Q-Learning works only if exploration is thorough - not always possible.
* Better alternative: explore more interesting routes using a *sigma* probability

Approximate Q-Learning
* TODO

Ms Pac-Man with Deep Q-Learning
env = gym.make('MsPacman-v0')
obs = env.reset()
obs.shape, env.action_space
# action_space = 9 possible joystick actions
# observations = atari screenshots as 3D NumPy arrays

mspacman_color = np.array([210, 164, 74]).mean()

# crop image, shrink to 88x80 pixels, convert to grayscale, improve contrast
def preprocess_o...
_____no_output_____
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
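The preprocessing function is cut off; a sketch consistent with its comment (crop, shrink to 88x80 pixels, grayscale, improve contrast), along the lines of the book's version:

import numpy as np

mspacman_color = np.array([210, 164, 74]).mean()

def preprocess_observation(obs):
    img = obs[1:176:2, ::2]          # crop and downsample the 210x160 frame to 88x80
    img = img.mean(axis=2)           # convert to grayscale
    img[img == mspacman_color] = 0   # improve contrast by blanking Ms Pac-Man's color
    img = (img - 128) / 128 - 1      # shift and scale pixel values
    return img.reshape(88, 80, 1)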
Ms PacMan Observation | Deep-Q net
- | -
![observation](pics/mspacman-before-after.png) | ![alt](pics/mspacman-deepq.png)
# Create DQN
# 3 convo layers, then 2 FC layers including output layer
from tensorflow.contrib.layers import convolution2d, fully_connected

input_height = 88
input_width = 80
input_channels = 1
conv_n_maps = [32, 64, 64]
conv_kernel_sizes = [(8,8), (4,4), (3,3)]
conv_strides = [4, 2, 1]
conv_...
1.09000234097 1.35392784142 1.56906713688 2.5765440191 1.57079289043 1.75170834792 1.97005553639 1.97246688247 2.16126081383 1.550295331 1.75750140131 1.56052656734 1.7519523176 1.74495741558 1.95223849511 1.35289915931 1.56913152564 2.96387254691 1.76067311585 1.35536773229 1....
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
SQLAlchemy Homework - Surfs Up!
Before You Begin
1. Create a new repository for this project called `sqlalchemy-challenge`. **Do not add this homework to an existing repository**.
2. Clone the new repository to your computer.
3. Add your Jupyter notebook and `app.py` to this folder. These will be the main scripts to run ...
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt

import numpy as np
import pandas as pd
import datetime as dt
import seaborn as sns

from scipy.stats import linregress
from sklearn import datasets
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
Reflect Tables into SQLAlchemy ORM
Precipitation Analysis
* Design a query to retrieve the last 12 months of precipitation data.
* Select only the `date` and `prcp` values.
* Load the query results into a Pandas DataFrame and set the index to the date column.
* Sort the DataFrame values by `date`.
* Plot the results using...
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect

engine = create_engine("sqlite:///Resources/hawaii.sqlite")
#Base.metadata.create_all(engine)
inspector = inspect(eng...
dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key']) dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key']) dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key']) dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'pri...
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
Exploratory Climate Analysis
# Design a query to retrieve the last 12 months of precipitation data and plot the results

# Design a query to retrieve the last 12 months of precipitation data.
max_date = session.query(func.max(Measurement.date)).all()[0][0]

# Select only the date and prcp values.
#datetime.datetime.strptime(date_time_str, '%Y-%m-...
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
***STATION ANALYSIS***
1) Design a query to calculate the total number of stations.
2) Design a query to find the most active stations.
3) List the stations and observation counts in descending order.
4) Which station has the highest number of observations?
Hint: You will need to use a function such as func.min, f...
Station = Base.classes.station
session = Session(engine)

# Getting column values from each table, here 'station'
columns = inspector.get_columns('station')
for c in columns:
    print(c)

# Get columns of 'measurement' table
columns = inspector.get_columns('measurement')
for c in columns:
    print(c)

engine.execute('Se...
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
Bonus Challenge Assignment
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
    """TMIN, TAVG, and TMAX for a list of dates.

    Args:
        start_date (string): A date ...
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
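The `calc_temps` cell is truncated; a complete version in the style of the starter notebook, assuming the `Measurement` class and `session` from the earlier cells:

def calc_temps(start_date, end_date):
    """TMIN, TAVG, and TMAX for a list of dates.

    Args:
        start_date (string): A date string in the format %Y-%m-%d
        end_date (string): A date string in the format %Y-%m-%d

    Returns:
        TMIN, TAVG, and TMAX
    """
    return session.query(func.min(Measurement.tobs),
                         func.avg(Measurement.tobs),
                         func.max(Measurement.tobs)).\
        filter(Measurement.date >= start_date).\
        filter(Measurement.date <= end_date).all()

# example: temperature stats over one week
print(calc_temps('2012-02-28', '2012-03-05'))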
Step 2 - Climate App
Now that you have completed your initial analysis, design a Flask API based on the queries that you have just developed.
* Use Flask to create your routes.

Routes
* `/`
  * Home page.
  * List all routes that are available.
* `/api/v1.0/precipitation`
  * Convert the query results to a dictionary using `...
import numpy as np
import datetime as dt
from datetime import timedelta, datetime

import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, distinct, text, desc

from flask import Flask, jsonify

##################################...
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
License
Copyright 2021 Tristan Behrens.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software d...
import tensorflow as tf
from transformers import GPT2LMHeadModel, TFGPT2LMHeadModel
from transformers import PreTrainedTokenizerFast
from tokenizers import Tokenizer
import os
import numpy as np

from source.helpers.samplinghelpers import *

# Where the checkpoint lives.
# Note can be downloaded from: https://ai-guru.s3...
_____no_output_____
Apache-2.0
sample.ipynb
AI-Guru/MMM-JSB
Maximum Likelihood Estimation (Generic models)
This tutorial explains how to quickly implement new maximum likelihood models in `statsmodels`. We give two examples:
1. Probit model for binary dependent variables
2. Negative binomial model for count data
The `GenericLikelihoodModel` class eases the process by providing t...
import numpy as np from scipy import stats import statsmodels.api as sm from statsmodels.base.model import GenericLikelihoodModel
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
The ``Spector`` dataset is distributed with ``statsmodels``. You can access a vector of values for the dependent variable (``endog``) and a matrix of regressors (``exog``) like this:
data = sm.datasets.spector.load_pandas()
exog = data.exog
endog = data.endog
print(sm.datasets.spector.NOTE)
print(data.exog.head())
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Then, we add a constant to the matrix of regressors:
exog = sm.add_constant(exog, prepend=True)
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
To create your own Likelihood Model, you simply need to overwrite the loglike method.
class MyProbit(GenericLikelihoodModel):
    def loglike(self, params):
        exog = self.exog
        endog = self.endog
        q = 2 * endog - 1
        return stats.norm.logcdf(q*np.dot(exog, params)).sum()
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Estimate the model and print a summary:
sm_probit_manual = MyProbit(endog, exog).fit()
print(sm_probit_manual.summary())
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Compare your Probit implementation to ``statsmodels``' "canned" implementation:
sm_probit_canned = sm.Probit(endog, exog).fit()

print(sm_probit_canned.params)
print(sm_probit_manual.params)

print(sm_probit_canned.cov_params())
print(sm_probit_manual.cov_params())
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Notice that the ``GenericLikelihoodModel`` class provides automatic differentiation, so we did not have to provide Hessian or Score functions in order to calculate the covariance estimates.

Example 2: Negative Binomial Regression for Count Data
Consider a negative binomial regression model for count data with log-like...
import numpy as np
from scipy.stats import nbinom

def _ll_nb2(y, X, beta, alph):
    mu = np.exp(np.dot(X, beta))
    size = 1 / alph
    prob = size / (size + mu)
    ll = nbinom.logpmf(y, size, prob)
    return ll
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
New Model Class
We create a new model class which inherits from ``GenericLikelihoodModel``:
from statsmodels.base.model import GenericLikelihoodModel

class NBin(GenericLikelihoodModel):
    def __init__(self, endog, exog, **kwds):
        super(NBin, self).__init__(endog, exog, **kwds)

    def nloglikeobs(self, params):
        alph = params[-1]
        beta = params[:-1]
        ll = _ll_nb2(self.en...
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
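The class is truncated; following the statsmodels generic MLE example, the remainder roughly continues with a `fit` override that appends a starting value for the dispersion parameter `alpha`:

import numpy as np
from statsmodels.base.model import GenericLikelihoodModel

class NBin(GenericLikelihoodModel):
    def __init__(self, endog, exog, **kwds):
        super(NBin, self).__init__(endog, exog, **kwds)

    def nloglikeobs(self, params):
        alph = params[-1]
        beta = params[:-1]
        ll = _ll_nb2(self.endog, self.exog, beta, alph)
        return -ll

    def fit(self, start_params=None, maxiter=10000, maxfun=5000, **kwds):
        # we have one additional parameter; add it for the summary
        self.exog_names.append('alpha')
        if start_params is None:
            # reasonable starting values: zeros for the slopes, 0.5 for alpha
            start_params = np.append(np.zeros(self.exog.shape[1]), .5)
            # intercept: log of the mean response
            start_params[-2] = np.log(self.endog.mean())
        return super(NBin, self).fit(start_params=start_params,
                                     maxiter=maxiter, maxfun=maxfun, **kwds)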
Two important things to notice:

+ ``nloglikeobs``: This function should return one evaluation of the negative log-likelihood function per observation in your dataset (i.e. rows of the endog/X matrix).
+ ``start_params``: A one-dimensional array of starting values needs to be provided. The size of this array determines ...
import statsmodels.api as sm

medpar = sm.datasets.get_rdataset("medpar", "COUNT", cache=True).data
medpar.head()
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
The model we are interested in has a vector of non-negative integers as dependent variable (``los``), and 5 regressors: ``Intercept``, ``type2``, ``type3``, ``hmo``, ``white``. For estimation, we need to create two variables to hold our regressors and the outcome variable. These can be ndarrays or pandas objects.
y = medpar.los
X = medpar[["type2", "type3", "hmo", "white"]].copy()
X["constant"] = 1
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Then, we fit the model and extract some information:
mod = NBin(y, X)
res = mod.fit()
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Extract parameter estimates, standard errors, p-values, AIC, etc.:
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('P-values: ', res.pvalues)
print('AIC: ', res.aic)
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
As usual, you can obtain a full list of available information by typing ``dir(res)``. We can also look at the summary of the estimation results.
print(res.summary())
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Testing
We can check the results by using the statsmodels implementation of the Negative Binomial model, which uses the analytic score function and Hessian.
res_nbin = sm.NegativeBinomial(y, X).fit(disp=0)
print(res_nbin.summary())
print(res_nbin.params)
print(res_nbin.bse)
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Connecting MLOS to a C++ application
This notebook walks through connecting MLOS to a C++ application within a docker container. We will start a docker container, and run an MLOS Agent within it. The MLOS Agent will start the actual application, and communicate with it via a shared memory channel. In this example, the ML...
from mlos.Grpc.OptimizerMonitor import OptimizerMonitor
import grpc

# create a grpc channel and instantiate the OptimizerMonitor
channel = grpc.insecure_channel('127.0.0.1:50051')
optimizer_monitor = OptimizerMonitor(grpc_channel=channel)
optimizer_monitor

# There should be one optimizer running in the docker container...
_____no_output_____
MIT
source/Mlos.Notebooks/SmartCacheCPP.ipynb
HeatherJia/MLOS
We can now get the observations exactly the same way as for the Python example in `SmartCacheOptimization.ipynb`
optimizer = optimizers[0]
features_df, objectives_df = optimizer.get_all_observations()

import pandas as pd
features, targets = optimizer.get_all_observations()
data = pd.concat([features, targets], axis=1)
data.to_json("CacheLatencyMainCPP.json")
data

lru_data, mru_data = data.groupby('cache_implementation')

import ...
_____no_output_____
MIT
source/Mlos.Notebooks/SmartCacheCPP.ipynb
HeatherJia/MLOS
algorithm
def permute(values):
    n = len(values)
    # i: position of pivot
    for i in reversed(range(n - 1)):
        if values[i] < values[i + 1]:
            break
    else:
        # very last permutation
        values[:] = reversed(values[:])
        return values
    # j: position of the next candidate f...
_____no_output_____
MIT
100days/day 03 - next permutation.ipynb
gopala-kr/ds-notebooks
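The algorithm cell is truncated; a complete in-place next-permutation in the same style:

def permute(values):
    """Transform `values` in place into the next permutation in lexicographic order."""
    n = len(values)
    # i: position of pivot, the last index with values[i] < values[i + 1]
    for i in reversed(range(n - 1)):
        if values[i] < values[i + 1]:
            break
    else:
        # already the very last permutation: wrap around to the first
        values[:] = reversed(values[:])
        return values
    # j: position of the rightmost element greater than the pivot
    for j in reversed(range(i + 1, n)):
        if values[j] > values[i]:
            break
    # swap pivot and candidate, then reverse the suffix
    values[i], values[j] = values[j], values[i]
    values[i + 1:] = reversed(values[i + 1:])
    return values

print(permute([1, 2, 3]))  # [1, 3, 2]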
run
x = [4, 3, 2, 1]
for i in range(25):
    print(permute(x))

permute(list('FADE'))
_____no_output_____
MIT
100days/day 03 - next permutation.ipynb
gopala-kr/ds-notebooks
Fairness
In this exercise we explore the concepts and techniques of fairness in machine learning. Through this exercise one can:
* Increase awareness of different types of biases that can occur
* Explore feature data to identify potential sources of bias before training the model
* Evaluate model performanc...
### setup
%tensorflow_version 2.x
from __future__ import absolute_import, division, print_function, unicode_literals

## title Import relevant modules and install Facets
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from matplotlib import pyplot as plt
from matplotli...
_____no_output_____
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
Analysing the dataset with Facets
We analyse the dataset to identify any peculiarities before we train the model. Here are some of the questions to ask before we can go ahead with the training:
* Are there missing feature values for a large number of observations?
* Are there features that are m...
## title Visualize the Data in Facets
fsg = FeatureStatisticsGenerator()
dataframes = [{'table': train_df, 'name': 'trainData'}]
censusProto = fsg.ProtoFromDataFrames(dataframes)
protostr = base64.b64encode(censusProto.SerializeToString()).decode("utf-8")

HTML_TEMPLATE = """<script src="https://cdnjs.cloudflare.com/aj...
_____no_output_____
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
Task 1
We can perform the fairness analysis on the visualized dataset in Facets. Click on the Show Raw Data button on the histograms and categorical features to see the distribution of values, and from that try to find: are there any missing features? features missing that can affect other features? are t...
## first convert the pandas data frame of the adult dataset to tensor flow arrays.
def pandas_to_numpy(data):
    # Drop empty rows.
    data = data.dropna(how="any", axis=0)

    # Separate DataFrame into two Numpy arrays
    labels = np.array(data['income_bracket'] == ">50K")
    features = data.drop('income_bracket', axis=1)
    ...
_____no_output_____
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
We can now train a neural network based on the features which we derived earlier; we use a feed-forward neural network with two hidden layers. We first convert our high-dimensional categorical features into a real-valued vector, which we call an embedded vector. We use 'gender' for filtering the test for subgroup evaluat...
deep_columns = [
    tf.feature_column.indicator_column(workclass),
    tf.feature_column.indicator_column(education),
    tf.feature_column.indicator_column(age_buckets),
    tf.feature_column.indicator_column(relationship),
    tf.feature_column.embedding_column(native_country, dimension=8),
    tf.feature_column.emb...
_____no_output_____
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
Confusion Matrix
A confusion matrix is a grid which evaluates a model's performance by comparing predictions against ground truth, and summarizes how often the model made the correct prediction and how often it made the wrong prediction. Let's start by creating a binary confusion matrix for our income-predict...
## Function to Visualize and plot the Binary Confusion Matrix
def plot_confusion_matrix(confusion_matrix, class_names, subgroup, figsize=(8, 6)):
    df_cm = pd.DataFrame(
        confusion_matrix, index=class_names, columns=class_names,
    )
    rcParams.update({
        'font.family': 'sans-serif',
        'font.sans-serif': [...
_____no_output_____
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
# restart (or reset) your virtual machine
#!kill -9 -1
_____no_output_____
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
[Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection)
!git clone https://github.com/tensorflow/models.git
Cloning into 'models'... remote: Enumerating objects: 18, done. remote: Counting objects: 100% (18/18), done. remote: Compressing objects: 100% (17/17), done. remote: Total 30176 (delta 7), reused 11 (delta 1), pack-reused 30158 Receiving objects: 100% (30176/30176), 510.33 MiB | 15.16 MiB/s, done. Resolvin...
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
COCO API installation
!git clone https://github.com/cocodataset/cocoapi.git %cd cocoapi/PythonAPI !make !cp -r pycocotools /content/models/research/
Cloning into 'cocoapi'... remote: Enumerating objects: 959, done. remote: Total 959 (delta 0), reused 0 (delta 0), pack-reused 959 Receiving objects: 100% (959/959), 11.69 MiB | 6.35 MiB/s, done. Resolving deltas: 100% (571/571), done. /content/cocoapi/PythonAPI python setup.py build_ext --inplace running build_e...
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
Protobuf Compilation
%cd /content/models/research/ !protoc object_detection/protos/*.proto --python_out=.
/content/models/research
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
Add Libraries to PYTHONPATH
%cd /content/models/research/ %env PYTHONPATH=/env/python:/content/models/research:/content/models/research/slim:/content/models/research/object_detection %env
/content/models/research env: PYTHONPATH=/env/python:/content/models/research:/content/models/research/slim:/content/models/research/object_detection
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
Testing the Installation
!python object_detection/builders/model_builder_test.py %cd /content/models/research/object_detection
/content/models/research/object_detection
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
[Tensorflow Face Detector](https://github.com/yeephycho/tensorflow-face-detection)
%cd /content
!git clone https://github.com/yeephycho/tensorflow-face-detection.git
%cd tensorflow-face-detection

!wget https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg
filename = 'grace_hopper.jpg'

#!python inference_usbCam_face.py grace_hopper.jpg

import sys
import time
import num...
inference time cost: 2.3050696849823
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
Writing a Molecular Monte Carlo Simulation
Starting today, make sure you have the functions
1. `calculate_LJ` - written in class
1. `read_xyz` - provided in class
1. `calculate_total_energy` - modified version provided in this notebook written for homework which has cutoff
1. `calculate_distance` - should be the version wr...
# add imports here
import math
import random

def calculate_total_energy(coordinates, box_length, cutoff):
    """
    Calculate the total energy of a set of particles using the Lennard Jones potential.

    Parameters
    ----------
    coordinates : list
        A nested list containing the x, y, z coordinate for ...
_____no_output_____
BSD-3-Clause
day3.ipynb
msse-2021-bootcamp/team2-project
The Metropolis Criterion
$$ P_{acc}(m \rightarrow n) = \text{min} \left[ 1, e^{-\beta \Delta U} \right] $$
def accept_or_reject(delta_U, beta):
    """
    Accept or reject a move based on the Metropolis criterion.

    Parameters
    ----------
    delta_U : float
        The change in energy for moving system from state m to n.
    beta : float
        1/temperature

    Returns
    -------
    boolean
        Wh...
_____no_output_____
BSD-3-Clause
day3.ipynb
msse-2021-bootcamp/team2-project
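The function body is cut off; a complete sketch of the Metropolis acceptance rule it implements:

import math
import random

def accept_or_reject(delta_U, beta):
    """Accept or reject a move based on the Metropolis criterion."""
    if delta_U <= 0.0:
        # downhill moves are always accepted
        accept = True
    else:
        # uphill moves are accepted with probability exp(-beta * delta_U)
        random_number = random.random()
        p_acc = math.exp(-beta * delta_U)
        accept = random_number < p_acc
    return accept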
Monte Carlo Loop
# Read or generate initial coordinates
coordinates, box_length = read_xyz('lj_sample_configurations/lj_sample_config_periodic1.txt')

# Set simulation parameters
reduced_temperature = 0.9
num_steps = 5000
max_displacement = 0.1
cutoff = 3

# how often to print an update
freq = 1000

# Calculated quantities
beta = 1 /...
-4351.540194543858 -198.4888837441566 0 -5.6871567358709845 1000 -5.651180182170634 2000 -5.637020769853117 3000 -5.63623029990943 4000 -5.62463482708468
BSD-3-Clause
day3.ipynb
msse-2021-bootcamp/team2-project
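The loop body is cut off; the core of each Monte Carlo step in the same style, assuming the helpers above plus a `calculate_pair_energy(coordinates, i, box_length, cutoff)` function from the homework:

num_particles = len(coordinates)
total_energy = calculate_total_energy(coordinates, box_length, cutoff)

for step in range(num_steps):
    # 1. Pick a random particle and propose a small random displacement
    random_particle = random.randrange(num_particles)
    x_rand = random.uniform(-max_displacement, max_displacement)
    y_rand = random.uniform(-max_displacement, max_displacement)
    z_rand = random.uniform(-max_displacement, max_displacement)

    # 2. Energy change from moving only that particle
    current_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
    coordinates[random_particle][0] += x_rand
    coordinates[random_particle][1] += y_rand
    coordinates[random_particle][2] += z_rand
    proposed_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
    delta_energy = proposed_energy - current_energy

    # 3. Metropolis accept/reject; undo the displacement if rejected
    if accept_or_reject(delta_energy, beta):
        total_energy += delta_energy
    else:
        coordinates[random_particle][0] -= x_rand
        coordinates[random_particle][1] -= y_rand
        coordinates[random_particle][2] -= z_rand

    if step % freq == 0:
        print(step, total_energy / num_particles)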
Data Acquisition
arxiv_files = sorted(glob('../data/arxiv/*'))
scirate_files = sorted(glob('../data/scirate/*'))

arxiv_data = []
for file in arxiv_files:
    with open(file, 'r') as f:
        arxiv_data.append(json.load(f))
print(len(arxiv_data))

scirate_data = []
for file in scirate_files:
    with open(file, 'r') as f:
        ...
_____no_output_____
MIT
playground/eda.ipynb
tukai21/arxiv-ranking
EDA
Entry ID: paper name (DOI?)
We can create an arbitrary paper id that corresponds to each paper title, authors, and DOI.
Possible features:
- Arxiv order
- Scirate order
- Paper length (pages)
- Title length (words)
- Number of authors
- Total of citations of the authors (or first author? last author?)
- Bag of Words of ti...
# obtain features from both Arxiv and Scirate paper lists
index = []
title = []
authors = []
num_authors = []
title_length = []
arxiv_order = []
submit_time = []
submit_weekday = []
paper_size = []
num_versions = []

for res in arxiv_data:
    date = res['date']
    papers = res['papers']
    for paper in papers:
        ...
_____no_output_____
MIT
playground/eda.ipynb
tukai21/arxiv-ranking
Portfolio Exercise: Starbucks
Background Information
The dataset you will be provided in this portfolio exercise was originally used as a take-home assignment provided by Starbucks for their job candidates. The data for this exercise consists of about 120,000 data points split in a 2:1 ratio among training and test fi...
# load in packages
from itertools import combinations
from test_results import test_results, score

import numpy as np
import pandas as pd
import scipy as sp
import sklearn as sk

import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline

# load in the data
train_data = pd.read_csv('./training.csv')
train...
_____no_output_____
MIT
Project/Starbucks/.ipynb_checkpoints/Starbucks-checkpoint.ipynb
kundan7kumar/Machine-Learning
Linux commands
!ls
!ls -l
!pwd  # current location
!ls -l ./sample_data
!ls -l ./
!ls -l ./Wholesale_customers_data.csv

import pandas as pd
df = pd.read_csv('./Wholesale_customers_data.csv')
df.info()

X = df.iloc[:, :]
X.shape

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
_____no_output_____
Apache-2.0
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
K-means clustering
> max_iter : int, default=300
> Maximum number of iterations of the k-means algorithm for a single run.
from sklearn import cluster

kmeans = cluster.KMeans(n_clusters=5)
kmeans.fit(X)
kmeans.labels_
_____no_output_____
Apache-2.0
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
Each row is labeled with its cluster: the first row gets one label, the second row another.
> We attach these labels to the df as a column.
df['label'] = kmeans.labels_
df.head()
_____no_output_____
Apache-2.0
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
For report writing, a 2D view is best; always visualize from an XY perspective.
df.plot(kind='scatter', x='Grocery',y='Frozen',c='label', cmap='Set1', figsize=(10,10))
_____no_output_____
Apache-2.0
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning