Dataset schema: markdown (stringlengths 0 to 1.02M), code (stringlengths 0 to 832k), output (stringlengths 0 to 1.02M), license (stringlengths 3 to 36), path (stringlengths 6 to 265), repo_name (stringlengths 6 to 127)
TODO: Backpropagate the error. Now it's your turn to shine. Write the error term. Remember that this is given by the equation $$(y-\hat{y})\,\sigma'(x)$$
# TODO: Write the error term formula
def error_term_formula(x, y, output):
    return (y - output) * sigmoid_prime(x)

# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5

# Training function
def train_nn(features, targets, epochs, learnrate):
    # Use the same seed to make debugging easier
    np.random...
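The cell above calls a `sigmoid_prime` helper defined elsewhere in the notebook; a minimal sketch of the assumed definition:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # sigma'(x) = sigma(x) * (1 - sigma(x))
    return sigmoid(x) * (1 - sigmoid(x))
```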
Epoch: 0
Train loss: 0.27336783372760837
=========
Epoch: 100
Train loss: 0.2144589591438936
=========
Epoch: 200
Train loss: 0.21248210601845877
=========
Epoch: 300
Train loss: 0.21145849287875826
=========
Epoch: 400
Train loss: 0.2108945778573249
=========
Epoch: 500
Train loss: 0.21055121998038537
=========
...
MIT
Introduction to Neural Networks/StudentAdmissions.ipynb
kushkul/Facebook-Pytorch-Scholarship-Challenge
Calculating the Accuracy on the Test Data
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
Prediction accuracy: 0.800
MIT
Introduction to Neural Networks/StudentAdmissions.ipynb
kushkul/Facebook-Pytorch-Scholarship-Challenge
Lambda School Data Science *Unit 2, Sprint 2, Module 2* --- Random Forests Assignment
- [ ] Read ["Adopting a Hypothesis-Driven Workflow"](https://outline.com/5S5tsB), a blog post by a Lambda DS student about the Tanzania Waterpumps challenge.
- [ ] Continue to participate in our Kaggle challenge.
- [ ] Define a function ...
%%capture
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
    !pip install category_encoders==2.*
# If you're working locally:
else:
    DATA_PATH = '../data/'

import pandas as pd
from sklearn.m...
_____no_output_____
MIT
module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb
ahvblackwelltech/DS-Unit-2-Kaggle-Challenge
1. Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features.
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split  # needed for the split below
%matplotlib inline

# Splitting the train set into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.02,
                              stratify=train['status_group'], random_state=42)

def wrangle(X):
    X = X.copy()
    X['latitude'] = X['latitude'].replace(-...
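The body of `wrangle` is cut off above. A sketch of a minimal completion, assuming the usual Waterpumps cleaning steps (the `-2e-08` placeholder value and the column list are assumptions based on the visible fragment):

```python
import numpy as np

def wrangle(X):
    X = X.copy()
    # The near-zero latitude value is a placeholder for missing data
    X['latitude'] = X['latitude'].replace(-2e-08, 0)
    # Zeros are implausible here; convert them to NaN so an imputer can handle them
    for col in ['longitude', 'latitude']:
        X[col] = X[col].replace(0, np.nan)
    return X

train = wrangle(train)
val = wrangle(val)
```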
(47520, 38) (1188, 38)
MIT
module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb
ahvblackwelltech/DS-Unit-2-Kaggle-Challenge
2. Try Ordinal Encoding
!pip install category_encoders

%%time
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    SimpleImputer(strategy='mean'),
    RandomF...
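The heading asks for ordinal encoding, while the visible fragment builds a one-hot pipeline; a sketch of the ordinal variant (the hyperparameters and the `X_train`/`y_train` names are assumptions):

```python
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

pipeline_ordinal = make_pipeline(
    ce.OrdinalEncoder(),  # integer codes instead of one-hot columns
    SimpleImputer(strategy='mean'),
    RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1),
)
# pipeline_ordinal.fit(X_train, y_train)
# pipeline_ordinal.score(X_val, y_val)
```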
_____no_output_____
MIT
module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb
ahvblackwelltech/DS-Unit-2-Kaggle-Challenge
1. Load the pre-trained VGG-16 (only the feature extractor)
# Load the ImageNet VGG-16 model, ***excluding*** the latter part (the classifier)
# The default input_shape is 224x224x3 for VGG-16
img_w, img_h = 32, 32
vgg_extractor = tf.keras.applications.vgg16.VGG16(weights="imagenet", include_top=False, input_shape=(img_w, img_h, 3))
vgg_extractor.summary()
Metal device set to: Apple M1

systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
2. Extend VGG-16 to match our requirement
# Freeze all layers in VGG-16
for i, layer in enumerate(vgg_extractor.layers):
    print(f"Layer {i}: name = {layer.name} , trainable = {layer.trainable} => {False}")
    layer.trainable = False  # freeze this layer

x = vgg_extractor.output

# Add our custom layer(s) to the end of the existing model
x = tf.keras....
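The custom head is truncated above; a sketch of one way to finish it for the 10 CIFAR-10 classes (the layer sizes are assumptions):

```python
import tensorflow as tf

x = vgg_extractor.output  # (None, 1, 1, 512) for 32x32 inputs
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(256, activation="relu")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)

model = tf.keras.Model(inputs=vgg_extractor.input, outputs=outputs)
model.summary()
```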
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 32, 32, 3)]       0
_________________________________________...
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
3. Prepare our own dataset
# Load the CIFAR-10 color image dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Inspect the dataset
print(f"x_train: type={type(x_train)} dtype={x_train.dtype} shape={x_train.shape} max={x_train.max(axis=None)} min={x_train.min(axis=None)}")
print(f"y_train: type={type(y_train)}...
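Later cells feed `x_train_vgg`/`x_test_vgg` into the model; a plausible preparation step, assuming the standard VGG-16 preprocessing is what produces them:

```python
import tensorflow as tf

# VGG-16 expects channel-mean-subtracted BGR inputs
x_train_vgg = tf.keras.applications.vgg16.preprocess_input(x_train.astype("float32"))
x_test_vgg = tf.keras.applications.vgg16.preprocess_input(x_test.astype("float32"))
```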
_____no_output_____
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
4. Transfer learning
# Set loss function, optimizer and evaluation metric
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["acc"])

history = model.fit(x_train_vgg, y_train,
                    batch_size=128, epochs=20, verbose=1,
                    validation_data=(x_test_vgg, y_test))

# Summarize history for accuracy
plt.figure(figsize=(15,...
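The plotting code is cut off at `plt.figure`; a sketch of the accuracy-history plot (the `acc`/`val_acc` keys follow the `metrics=["acc"]` setting above):

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(15, 5))
plt.plot(history.history["acc"], label="train acc")
plt.plot(history.history["val_acc"], label="val acc")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```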
_____no_output_____
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
5. Evaluate and test the model
# Evaluate the trained model on the test set
results = model.evaluate(x_test_vgg, y_test, batch_size=128)
print("test loss, test acc:", results)

# Test using the model on x_test_vgg[0]
i = 0
y_pred = model.predict(x_test_vgg[i].reshape(1, 32, 32, 3))
plt.imshow(x_test[i])
plt.title(f"x_test[{i}]: predict=[{np.argmax...
_____no_output_____
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
This notebook serves as a refresher with some basic Python code and functions. 1) Define a variable called x with an initial value of 5. Multiply it by 2 four times and print the value each time.
x = 5
for i in range(4):
    x = x*2
    print(i, x)
0 10
1 20
2 40
3 80
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
2) Define a list
p = [9, 4, -5, 0, 10.9]
# Get length of list
len(p)
# Index of a specific element
p.index(0)
# First element in list
p[0]
print(sum(p))
18.9
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
3) Create a numpy array
import numpy as np

a = np.array([5, -19, 30, 10])
# Get first element
a[0]
# Get last element
a[-1]
# Get first 3 elements
print(a[0:3])
print(a[:3])
# Get size of the array
a.shape
_____no_output_____
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
4) Define a dictionary that stores the age of three students. Mark: 26, Camilla: 23, Jason: 30
students = {'Mark': 26, 'Camilla': 23, 'Jason': 30}
students['Mark']
students.keys()
_____no_output_____
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
5) Create a square function
def square_number(x):
    x2 = x**2
    return x2

x_squared = square_number(5)
print(x_squared)
25
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
6) List comprehension
# Add 2 to every element in the numpy array
number_array = np.arange(10, 21)
print("original array:", number_array)
number_array_plus_two = [x+2 for x in number_array]
print("array plus 2:", number_array_plus_two)

# Select only even numbers
even_numbers = [x for x in number_array if x % 2 == 0]
print(even_numbers)
[10, 12, 14, 16, 18, 20]
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
7) Random numbers
np.random.seed(42)
rand_number = np.random.random(size=5)
print(rand_number)

np.random.seed(42)
rand_number2 = np.random.random(size=5)
print(rand_number2)
[0.37454012 0.95071431 0.73199394 0.59865848 0.15601864]
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
Imports and Paths
import torch
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import pandas as pd
import os
import shutil
from skimage import io, transform
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
import matplotli...
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Load the Data
df = pd.read_csv(f'{PATH}Data_Entry_2017.csv')
df.shape
df.head()

img_list = os.listdir(f'{PATH}images')
len(img_list)
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Collate the data
df_pa = df.loc[df.view == 'PA', :]
df_pa.reset_index(drop=True, inplace=True)
trn_sz = int(df_pa.shape[0]/2)
df_pa_trn = df_pa.loc[:trn_sz, :]
df_pa_tst = df_pa.loc[trn_sz:, :]
df_pa_tst.shape

pneumo = []
for i, v in df_pa_trn.iterrows():
    if "pneumo" in v['labels'].lower():
        pneumo.append('pneumo')
    else:
        ...
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Copy images to train and test folders
# dst = os.path.join(PATH, 'trn')
# src = os.path.join(PATH, 'images')
# for i, v in df_pa_trn.iterrows():
#     src2 = os.path.join(src, v.image)
#     shutil.copy2(src2, dst)

# dst = os.path.join(PATH, 'tst')
# src = os.path.join(PATH, 'images')
# for i, v in df_pa_tst.iterrows():
#     src2 = os.path.join(src, v.image)
...
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Create the Dataset and Dataloader
class TDataset(Dataset):
    def __init__(self, df, root_dir, transform=None):
        """
        Args:
            df (dataframe): df with all the annotations.
            root_dir (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied ...
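The class is truncated inside its docstring; a sketch of how such a Dataset is usually completed (the `image` and `labels` column names come from the neighbouring cells; the rest is an assumption):

```python
import os
from skimage import io
from torch.utils.data import Dataset

class TDatasetSketch(Dataset):
    def __init__(self, df, root_dir, transform=None):
        self.df = df.reset_index(drop=True)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        img_path = os.path.join(self.root_dir, self.df.loc[idx, 'image'])
        image = io.imread(img_path)
        label = self.df.loc[idx, 'labels']
        if self.transform:
            image = self.transform(image)
        return image, label
```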
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Define and train a CNN
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        # 6 input image channels, 16 output channels, 5x5 square convolution k...
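The network definition is truncated after the convolutional layers; a sketch of a matching forward pass (the fully connected sizes assume 224x224 grayscale inputs and 2 classes, which is a guess):

```python
import torch.nn as nn
import torch.nn.functional as F

class NetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 53 * 53, 120)  # 224 -> 220 -> 110 -> 106 -> 53
        self.fc2 = nn.Linear(120, 2)             # e.g. pneumo vs. not pneumo

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(x.size(0), -1)                # flatten
        x = F.relu(self.fc1(x))
        return self.fc2(x)
```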
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Compute LCMV inverse solution on evoked data in volume source space. Compute LCMV inverse solution on an auditory evoked dataset in a volume source space. It stores the solution in a nifti file for visualisation, e.g. with Freeview.
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.datasets import sample
from mne.beamformer import lcmv

from nilearn.plotting import plot_stat_map
from nilearn.image import index_img

print(__doc_...
_____no_output_____
BSD-3-Clause
0.12/_downloads/plot_lcmv_beamformer_volume.ipynb
drammock/mne-tools.github.io
Get epochs
event_id, tmin, tmax = 1, -0.2, 0.5

# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True, proj=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # 2 bad channels
events = mne.read_events(event_fname)

# Set up pick list: EEG + MEG - bad channels (modify to your needs)
left_temporal_chann...
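The epoching code is truncated above. A hedged sketch of how the 0.12-era example typically continues, assuming `epochs` and `forward` are built in the elided code (the covariance windows and the `save_stc_as_volume` call are assumptions):

```python
# Average the epochs and estimate noise/data covariances
evoked = epochs.average()
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0)    # pre-stimulus
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15)  # post-stimulus

# Apply the LCMV beamformer
stc = lcmv(evoked, forward, noise_cov, data_cov, reg=0.01)

# Store the solution as a nifti volume for visualisation (e.g. Freeview)
img = mne.save_stc_as_volume('lcmv_inverse.nii.gz', stc,
                             forward['src'], mri_resolution=False)
```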
_____no_output_____
BSD-3-Clause
0.12/_downloads/plot_lcmv_beamformer_volume.ipynb
drammock/mne-tools.github.io
Speeding-up gradient-boosting. In this notebook, we present a modified version of gradient boosting which uses a reduced number of splits when building the different trees. This algorithm is called "histogram gradient boosting" in scikit-learn. We previously mentioned that random-forest is an efficient algorithm since each ...
from sklearn.datasets import fetch_california_housing

data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100  # rescale the target in k$
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
Note: if you want a deeper overview regarding this dataset, you can refer to the "Appendix - Datasets description" section at the end of this MOOC. We will make a quick benchmark of the original gradient boosting.
from sklearn.model_selection import cross_validate
from sklearn.ensemble import GradientBoostingRegressor

gradient_boosting = GradientBoostingRegressor(n_estimators=200)
cv_results_gbdt = cross_validate(gradient_boosting, data, target, n_jobs=-1)

print("Gradient Boosting Decision Tree")
print(f"R2 score via cross-vali...
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
We recall that one way of accelerating gradient boosting is to reduce the number of splits considered when building the trees. One way is to bin the data before feeding it to the gradient boosting. A transformer called `KBinsDiscretizer` performs such a transformation. Thus, we can pipeline this preprocessing with the ...
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

discretizer = KBinsDiscretizer(n_bins=256, encode="ordinal", strategy="quantile")
data_trans = discretizer.fit_transform(data)
data_trans
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
Note: the code cell above will generate a couple of warnings. Indeed, for some of the features, we requested too many bins in regard to the data dispersion of those features. The smallest bins will be removed. We see that the discretizer transforms the original data into an integer. This integer represents the bin index whe...
[len(np.unique(col)) for col in data_trans.T]
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
After this transformation, we see that we have at most 256 unique values per feature. Now, we will use this transformer to discretize the data before training the gradient boosting regressor.
from sklearn.pipeline import make_pipeline

gradient_boosting = make_pipeline(
    discretizer, GradientBoostingRegressor(n_estimators=200))
cv_results_gbdt = cross_validate(gradient_boosting, data, target, n_jobs=-1)

print("Gradient Boosting Decision Tree with KBinsDiscretizer")
print(f"R2 score via cross-validation: ...
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
Here, we see that the fit time has been drastically reduced while the statistical performance of the model is identical. Scikit-learn provides specific classes which are even more optimized for large datasets, called `HistGradientBoostingClassifier` and `HistGradientBoostingRegressor`. Each feature in the dataset `data...
from sklearn.experimental import enable_hist_gradient_boosting
from sklearn.ensemble import HistGradientBoostingRegressor

histogram_gradient_boosting = HistGradientBoostingRegressor(
    max_iter=200, random_state=0)
cv_results_hgbdt = cross_validate(histogram_gradient_boosting, data, target,
                                 n_jobs=-1)

print("Histogram Gradien...
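`cross_validate` also records timing, which is the point of this comparison; reading both numbers out:

```python
print(f"Mean fit time (GBDT with KBinsDiscretizer): {cv_results_gbdt['fit_time'].mean():.3f} s")
print(f"Mean fit time (HistGradientBoosting): {cv_results_hgbdt['fit_time'].mean():.3f} s")
print(f"R2 score (HistGradientBoosting): {cv_results_hgbdt['test_score'].mean():.3f}")
```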
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
Select only numeric columns
train_data_raw2 = clean_func(train_data_raw)
train_data = train_data_raw2.iloc[:, train_data_raw2.columns != target]
train_data_target = train_data_raw2[target].values

X_train, X_test, Y_train, Y_test = train_test_split(train_data,
                                                    train_data_target,
                                                    test_size=0...
_____no_output_____
MIT
notebooks/titanic_explore4_recursive_feature_elimination.ipynb
EmilMachine/kaggle_titanic
Models
- logreg
- random forest

Random forest (naive)
model_rf = RandomForestClassifier(n_estimators=100)
model_rf.fit(X_train, Y_train)

# Cross Validation RF
scores = cross_val_score(model_rf, X_train, Y_train, cv=10)
print(scores)

pred_rf = model_rf.predict(X_test)
metrics.accuracy_score(Y_test, pred_rf)
_____no_output_____
MIT
notebooks/titanic_explore4_recursive_feature_elimination.ipynb
EmilMachine/kaggle_titanic
Random Forest Grid Search
model_rf_gs = RandomForestClassifier()

# parameter dict
param_grid = dict(
    n_estimators=np.arange(60, 101, 20)
    , min_samples_leaf=np.arange(2, 4, 1)
    #, criterion = ["gini","entropy"]
    #, max_features = np.arange(0.1,0.5,0.1)
)
print(param_grid)

grid = GridSearchCV(model_rf_gs, param_grid=param_grid, scoring=...
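The cell is cut off at the `GridSearchCV` call; a sketch of a typical completion (the `scoring` and `cv` values are assumptions):

```python
from sklearn.model_selection import GridSearchCV

grid = GridSearchCV(model_rf_gs, param_grid=param_grid,
                    scoring='accuracy', cv=10, n_jobs=-1)
grid.fit(X_train, Y_train)
print(grid.best_params_)
print(grid.best_score_)
```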
Optimal number of features : 16
== Feature short list ==
['PassengerId' 'Pclass' 'Age' 'SibSp' 'Parch' 'Fare' 'female' 'male'
 'embarked_cobh' 'embark_queenstown' 'embark_southampton' 'Cabin_letter_B'
 'Cabin_letter_C' 'Cabin_letter_D' 'Cabin_letter_E' 'Cabin_letter_empty']
MIT
notebooks/titanic_explore4_recursive_feature_elimination.ipynb
EmilMachine/kaggle_titanic
- Converges at about 16 features
- Let us compare 16 vs. the full feature set on the test set
Y_pred = model.predict(X_test)
model_score = metrics.accuracy_score(Y_test, Y_pred)

Y_pred_simple = model_simple.predict(X_test[feature_short])
model_simple_score = metrics.accuracy_score(Y_test, Y_pred_simple)

print("model acc: %.3f" % model_score)
print("simple model acc: %.3f" % model_simple_score)
model acc: 0.806
simple model acc: 0.825
MIT
notebooks/titanic_explore4_recursive_feature_elimination.ipynb
EmilMachine/kaggle_titanic
Amazon Sentiment Data
%load_ext autoreload
%autoreload 2

import lxmls.readers.sentiment_reader as srs
from lxmls.deep_learning.utils import AmazonData

corpus = srs.SentimentCorpus("books")
data = AmazonData(corpus=corpus)
_____no_output_____
MIT
labs/notebooks/non_linear_classifiers/exercise_4.ipynb
mpc97/lxmls
Implement Pytorch Forward pass. As the final exercise today, implement the log `forward()` method in lxmls/deep_learning/pytorch_models/mlp.py. Use the previous exercise as reference. After you have completed this, you can run both systems for comparison.
# Model
geometry = [corpus.nr_features, 20, 2]
activation_functions = ['sigmoid', 'softmax']

# Optimization
learning_rate = 0.05
num_epochs = 10
batch_size = 30

import numpy as np
from lxmls.deep_learning.pytorch_models.mlp import PytorchMLP
model = PytorchMLP(
    geometry=geometry,
    activation_functions=activatio...
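As a generic illustration of a log-space forward pass (this is not the lxmls solution; `weights` and `biases` are hypothetical parameter lists):

```python
import torch
import torch.nn.functional as F

def log_forward_sketch(x, weights, biases):
    """Sigmoid hidden layers followed by a log-softmax output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = torch.sigmoid(h @ W + b)
    logits = h @ weights[-1] + biases[-1]
    return F.log_softmax(logits, dim=1)
```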
_____no_output_____
MIT
labs/notebooks/non_linear_classifiers/exercise_4.ipynb
mpc97/lxmls
Explore endangered languages from UNESCO Atlas of the World's Languages in Danger

Input

Endangered languages
- https://www.kaggle.com/the-guardian/extinct-languages/version/1 (updated in 2016)
- original data: http://www.unesco.org/languages-atlas/index.php?hl=en&page=atlasmap (published in 2010)

Countries of the world
- h...
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Load data
df = pd.read_csv("../../data/endangerment/extinct_languages.csv") print(df.shape) print(df.dtypes) df.head() df.columns ENDANGERMENT_MAP = { "Vulnerable": 1, "Definitely endangered": 2, "Severely endangered": 3, "Critically endangered": 4, "Extinct": 5, } df["Endangerment code"] = df["Degree of enda...
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Distribution of the degree of endangerment
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
df["Degree of endangerment"].hist(figsize=(15, 5)).get_figure().savefig('endangered_hist.png', format="png")
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Show distribution on map
countries_map = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
countries_map.head()

# Plot Europe
fig, ax = plt.subplots(figsize=(20, 10))
countries_map.plot(color='lightgrey', ax=ax)
plt.xlim([-30, 50])
plt.ylim([30, 75])
df.plot(
    x="Longitude",
    y="Latitude",
    kind="scatter",
    title="En...
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Get endangered languages only for Europe
countries = pd.read_csv("../../data/general/country_codes.tsv", sep="\t")
europe = countries[countries["Area"] == "Europe"]
europe

europe_countries = set(europe["Name"].to_list())
europe_countries

df[df["Countries"].isna()]
df = df[df["Countries"].notna()]
df[df["Countries"].isna()]
df["In Europe"] = df["Countries"].ap...
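The `apply` on the last line is truncated; a hypothetical completion, assuming `Countries` holds comma-separated country names:

```python
# A language counts as European if any of its countries is in europe_countries
df["In Europe"] = df["Countries"].apply(
    lambda c: any(country.strip() in europe_countries
                  for country in c.split(",")))
df_europe = df[df["In Europe"]]
df_europe.shape
```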
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Save output
df_europe.to_csv("../../data/endangerment/endangered_languages_europe.csv", index=False)
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Polynomials Class
from sympy import *
import numpy as np

x = Symbol('x')

class polinomio:
    def __init__(self, coefficienti: list):
        self.coefficienti = coefficienti
        self.grado = 0 if len(self.coefficienti) == 0 else len(self.coefficienti) - 1
        i = 0
        while i < len(self.coefficienti):
            ...
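The class body is truncated; as a standalone illustration, here is a compact sketch of the pretty-printing that yields the output shown below (the helper name is hypothetical, not part of the original class):

```python
def format_poly(coefficienti):
    """coefficienti[i] is the coefficient of x^i; highest degree printed first."""
    terms = []
    for i in range(len(coefficienti) - 1, -1, -1):
        c = coefficienti[i]
        if c == 0:
            continue
        if i == 0:
            terms.append(f"{c}")
        elif i == 1:
            terms.append(f"{c}x")
        else:
            terms.append(f"{c}x^{i}")
    return " + ".join(terms)

print(format_poly([2, 2, 4, 2, 2]))  # 2x^4 + 2x^3 + 4x^2 + 2x + 2
```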
2x^4 + 2x^3 + 4x^2 + 2x + 2
MIT
Polinomi.ipynb
RiccardoTancredi/Polynomials
Function to list overlapping Landsat 8 scenes. This function is based on the following tutorial: http://geologyandpython.com/get-landsat-8.html. It uses the area of interest (AOI) to retrieve overlapping Landsat 8 scenes. It will also output the scenes with the largest portion of overlap and with less than ...
def landsat_scene_list(aoi, start_date, end_date):
    '''Creates a list of Landsat 8, level 1, tier 1 scenes that
    overlap with an aoi and are captured within a specified date range.

    Parameters
    ----------
    aoi : str
        The path to a shape file of an aoi with geometry.

    start_date : str
        ...
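The function body is truncated. Following the geologyandpython.com tutorial it cites, a hedged sketch of the core lookup (the scene-list URL and column names are assumptions):

```python
import geopandas as gpd
import pandas as pd

def landsat_scene_list_sketch(aoi, start_date, end_date):
    # Bounding box of the AOI: (minx, miny, maxx, maxy)
    bounds = gpd.read_file(aoi).total_bounds
    # Public Landsat 8 Collection 1 scene list on AWS
    scenes = pd.read_csv(
        'https://landsat-pds.s3.amazonaws.com/c1/L8/scene_list.gz',
        compression='gzip')
    # Keep scenes in the date range whose footprint contains the AOI
    mask = ((scenes.acquisitionDate >= start_date) &
            (scenes.acquisitionDate <= end_date) &
            (scenes.min_lon <= bounds[0]) & (scenes.max_lon >= bounds[2]) &
            (scenes.min_lat <= bounds[1]) & (scenes.max_lat >= bounds[3]))
    return scenes[mask].sort_values('cloudCover')
```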
_____no_output_____
BSD-3-Clause
notebooks/testing/previously in ignored file/f-Find-Overlapping-Landsat-Scenes-TEST.ipynb
sarahmjaffe/sagebrush-ecosystem-modeling-with-landsat8
TEST

**Can DELETE everything below once tested and approved!**
# WILL DELETE WHEN FUNCTIONS ARE SEPARATED OUT
def NEON_site_extent(path_to_NEON_boundaries, site):
    '''Extracts a NEON site extent from an individual site as long
    as the original NEON site extent shape file contains a column
    named 'siteID'.

    Parameters
    ----------
    path_to_NEON_boundaries : str
        ...
_____no_output_____
BSD-3-Clause
notebooks/testing/previously in ignored file/f-Find-Overlapping-Landsat-Scenes-TEST.ipynb
sarahmjaffe/sagebrush-ecosystem-modeling-with-landsat8
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Saving and loading models. Model progress can be saved during training and after training. This means a model can resume training with the same state as when the process...
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass

!pip install pyyaml h5py  # Required to save models in HDF5 format

from __future__ import absolute_import, division, print_function, unicode_literals

import os
import tensorflow as tf
from tensorflow import keras
p...
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Getting the dataset. To show how to save and load model weights, you will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). To speed up these operations, use only the first 1000 examples:
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

train_labels = train_labels[:1000]
test_labels = test_labels[:1000]

train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Defining a model. Start by building a simple sequential model:
# Define a simple sequential model
def create_model():
    model = tf.keras.models.Sequential([
        keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        keras.layers.Dropout(0.2),
        keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categoric...
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Saving checkpoints during training. You can use a trained model without having to retrain it, or pick up training where you left off, in case the training process was interrupted. The `tf.keras.callbacks.ModelCheckpoint` callback allows you to continually save the model both *during* and at *the end* of training.
checkpoint_path = "training_1/cp.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) # Create a callback that saves the model's weights cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, save_weights_only=True, ...
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
!ls {checkpoint_dir}
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Create a new, untrained model. When restoring a model from weights-only, you must have a model with the same architecture as the original model. Since it's the same model architecture, you can share weights even though it's a different *instance* of the model. Now rebuild a fresh, untrained model, and evaluate it on th...
# Create a basic model instance
model = create_model()

# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Then load the weights from the checkpoint and re-evaluate:
# Load the weights
model.load_weights(checkpoint_path)

# Re-evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Checkpoint callback options. The callback offers several options to give checkpoints unique names and to adjust the checkpointing frequency. Train a new model, and save uniquely named checkpoints once every five epochs:
# Include the epoch in the file name (uses `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

# Create a callback that saves the model's weights every 5 epochs
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    verbose=1,...
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Now, look at the resulting checkpoints and choose the latest one:
!ls {checkpoint_dir}

latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Note: by default, the TensorFlow format only saves the 5 most recent checkpoints. To test, reset the model and load the latest checkpoint:
# Create a new model instance
model = create_model()

# Load the previously saved weights
model.load_weights(latest)

# Re-evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
What are these files? The code above stores the model's weights in a collection of [checkpoint](https://www.tensorflow.org/guide/saved_model#save_and_restore_variables) files that contain only the trained weights in a binary format. Checkpoints consist of: * One or more pieces (*shards*) that contain ...
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')

# Create a new model instance
model = create_model()

# Restore the weights
model.load_weights('./checkpoints/my_checkpoint')

# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2...
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Saving the entire model. Use [`model.save`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#save) to save the model's architecture, weights, and training configuration in a single file/folder. This lets you export a model so that it can be used without ...
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)

# Save the entire model to a HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Now, recreate the model from that file:
# Recreate the exact same model, including its weights and the optimizer
new_model = tf.keras.models.load_model('my_model.h5')

# Show the model architecture
new_model.summary()
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Check the model's accuracy:
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100*acc))
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
This technique saves everything:
* the weight values
* the model configuration (architecture)
* the optimizer configuration

Keras saves models by inspecting the architecture. Currently, it is not able to save TensorFlow optimizers (from `tf.train`). When using them, you will need to recompile the model after loading, and you ...
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)

# Save the entire model as a SavedModel.
!mkdir -p saved_model
model.save('saved_model/my_model')
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
The SavedModel format is a directory containing a *protobuf binary* and a TensorFlow checkpoint. Inspect the saved-model directory:
# my_model directory
!ls saved_model

# Contains an assets folder, saved_model.pb, and variables folder.
!ls saved_model/my_model
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Reload a fresh Keras model from the saved model:
new_model = tf.keras.models.load_model('saved_model/my_model')

# Check its architecture
new_model.summary()
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
The restored model is compiled with the same arguments as the original model. Try evaluating and predicting with it:
# Evaluate the restored model
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100*acc))

print(new_model.predict(test_images).shape)
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
์ฆ์‹์„ ํ†ตํ•œ ๋ฐ์ดํ„ฐ์…‹ ํฌ๊ธฐ ํ™•์žฅ 1. Google Drive์™€ ์—ฐ๋™
from google.colab import drive
drive.mount("/content/gdrive")

path = "gdrive/'My Drive'/'Colab Notebooks'/CNN"
!ls gdrive/'My Drive'/'Colab Notebooks'/CNN/datasets
cats_and_dogs_small
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
2. Build the model
from tensorflow.keras import layers, models, optimizers
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
0. Create a Sequential object
1. conv layer (32 filters, kernel size (3,3), activation 'relu', input_shape())
2. pooling layer (pool_size (2,2))
3. conv layer (64 filters, kernel size (3,3), activation 'relu')
4. pooling layer (pool_size (2,2))
5. conv layer (128 filters, kernel size (3,3), activation 'relu')
6. pooling layer (pool_size (2,2))
7. conv la...
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.M...
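The cell is truncated mid-layer; a sketch of the remaining layers following the spec list above (the dense sizes and learning rate mirror the common cats-vs-dogs setup and are assumptions):

```python
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),
              metrics=['accuracy'])
```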
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
3. Preprocess the data
import os

base_dir = '/content/gdrive/My Drive/Colab Notebooks/CNN/datasets/cats_and_dogs_small'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')

# [write code]
# Create an ImageDataGenerator object named train_datagen
# train_datagen augmentation options:
# 1. scal...
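The generator setup is truncated at the commented spec; a sketch of augmentation options that fit it (the exact values are assumptions; the three generators match the three "Found ..." lines below):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,       # 1. scaling, per the commented spec
    rotation_range=40,    # the remaining values are assumptions
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)  # no augmentation here

train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(150, 150), batch_size=20, class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir, target_size=(150, 150), batch_size=20, class_mode='binary')
test_generator = test_datagen.flow_from_directory(
    test_dir, target_size=(150, 150), batch_size=20, class_mode='binary')
```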
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
4. Train the model
history = model.fit_generator(train_generator,
                              steps_per_epoch=100,
                              epochs=30,
                              validation_data=validation_generator,
                              validation_steps=50)
WARNING:tensorflow:From <ipython-input-16-c480ae1e8dcf>:5: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
Epoch 1/30
100/100 [==============================] - 526s 5s/s...
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
5. Visualize performance
import matplotlib.pyplot as plt

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
p...
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
* Both acc and val_acc trend upward, indicating that no overfitting occurred.
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
6. Evaluate the model
test_loss, test_accuracy = model.evaluate_generator(test_generator, steps=50)
print(test_loss)
print(test_accuracy)
0.5713947415351868
0.7160000205039978
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
7. Save the model
model.save('/content/gdrive/My Drive/Colab Notebooks/CNN/datasets/cats_and_dogs_small_augmentation.h5')
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
Start Julia environment
# Install any required python packages here
# !pip install <packages>

# Here we install Julia
%%capture
%%shell
if ! command -v julia 3>&1 > /dev/null
then
    wget -q 'https://julialang-s3.julialang.org/bin/linux/x64/1.6/julia-1.6.2-linux-x86_64.tar.gz' \
        -O /tmp/julia.tar.gz
    tar -x -f /tmp/julia.tar.gz -C...
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
After you run the first cell (the cell directly above this text), go to Colab's menu bar, select **Edit**, then select **Notebook settings** from the drop-down. Select *Julia 1.6* in Runtime type. You can also select your preferred hardware acceleration (defaults to GPU). You should see something like this:> ![Col...
VERSION
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
**The next three cells are for GPU benchmarking. If you are using this notebook for the first time and have GPU enabled, you can give it a try.** Import all the Julia Packages. Here, we first import all the required packages. CUDA is used to offload some of the processing to the GPU, Flux is the package for putting to...
import Pkg
Pkg.add(["CUDA","Flux","MLDatasets","Images","Makie","CairoMakie","ImageMagick"])
using CUDA, Flux, MLDatasets, Images, Makie, Statistics, CairoMakie, ImageMagick
using Base.Iterators: partition
Resolving package versions...
No Changes to `~/.julia/environments/v1.6/Project.toml`
No Changes to `~/.julia/environments/v1.6/Manifest.toml`
MIT
MNIST.ipynb
coenarrow/MNistTests
Let's look at the functions we can call from the MNIST set itself
names(MNIST)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
Let's assume we want to get the training data from the MNIST package. Now, let's see what gets returned when we call that function.
Base.return_types(MNIST.traindata)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
This does not mean a heck of a lot to me initially, but we can basically see we get 2 tuples returned. So let's go ahead and assign some x and y to each of the tuples so we can probe further.
x, y = MNIST.traindata();
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
Let's now further investigate the x
size(x)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
We know from the MNIST dataset, that the set contains 60000 images, each of size 28x28. So clearly we are looking at the images themselves. So this is our input. Let's plot an example to make sure.
i = rand(1:60000)
heatmap(x[:,:,i], colormap = :grays)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
Similarly, let's have a quick look at the size of y. I expect this is the label associated with the images
y[i]
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
And then let's check that the image above is labelled as what we expect.
y[7]
show(names(Images))
?imshow
names(ImageShow)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
Handling variables and conditionals in a compiler. To handle variables, we add memory to the machine state and add low-level instructions for memory operations. To handle conditionals, we add low-level instructions that jump to a specific code location for execution, rather than only executing code sequentially.
data Expr = Var Name          -- x
          | Val Value         -- n
          | Add Expr Expr     -- e1 + e2
       -- | Sub Expr Expr
       -- | Mul Expr Expr
       -- | Div Expr Expr
          | If Expr Expr Expr -- if e then e1 else e0
          deriving Show

type Name = String  -- variable names are represented as strings
type ...
_____no_output_____
MIT
0917 Compilers with variables and conditionals.ipynb
hnu-pl/compiler2019fall
Below, to look more closely at why running code2 (the compilation of e2) does not produce the desired result, we call the step function one step at a time and examine the machine states vm0, ..., vm6 before and after each instruction.
vm0@(s0, _,  c0:cs0) = ([], mem0, code2)
vm0
vm1@(s1, mem1, c1:cs1) = step c0 (s0, mem0, cs0)
vm1
vm2@(s2, mem2, c2:cs2) = step c1 (s1, mem1, cs1)
vm2
vm3@(s3, mem3, c3:cs3) = step c2 (s2, mem2, cs2)
vm3
vm4@(s4, mem4, c4:cs4) = step c3 (s3, mem3, cs3)
vm4
vm5@(s5, mem5, c5:cs5) = step c4 (s4, mem4, cs4)
vm5
vm6 = step c5 (s5, mem5, cs5)
vm...
_____no_output_____
MIT
0917 Compilers with variables and conditionals.ipynb
hnu-pl/compiler2019fall
BASIC PYTHON FOR RESEARCHERS _by_ [**_Megat Harun Al Rashid bin Megat Ahmad_**](https://www.researchgate.net/profile/Megat_Harun_Megat_Ahmad) last updated: April 14, 2016 ------- _8. Database and Data Analysis_ ---$Pandas$ is an open source library for data analysis in _Python_. It gives _Python_ similar capabilities...
import pandas as pd import numpy as np
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
*** **_8.1 Data Structures_** Data structures (similar to _Sequence_ in _Python_) of $Pandas$ revolves around the **_Series_** and **_DataFrame_** structures. Both are fast as they are built on top of $Numpy$. A **_Series_** is a one-dimensional object with a lot of similar properties similar to a list or dictionary in...
# Creating a series (with different types of data)
s1 = pd.Series([34, 'Material', 4*np.pi, 'Reactor', [100,250,500,750], 'kW'])
s1
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
The index of a **_Series_** can be specified during its creation and giving it a similar function to a dictionary.
# Creating a series with a specified index
lt = [34, 'Material', 4*np.pi, 'Reactor', [100,250,500,750], 'kW']
s2 = pd.Series(lt, index = ['b1', 'r1', 'solid angle', 18, 'reactor power', 'unit'])
s2
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
Data can be extracted by specifying the element position or index (similar to list/dictionary).
s1[3], s2['solid angle']
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
**_Series_** can also be constructed from a dictionary.
pop_cities = {'Kuala Lumpur': 1588750,
              'Seberang Perai': 818197,
              'Kajang': 795522,
              'Klang': 744062,
              'Subang Jaya': 708296}
cities = pd.Series(pop_cities)
cities
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
The elements can be sorted using the $Series.order()$ function. This will not change the structure of the original variable.
cities.order(ascending=False)
cities
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
Another sorting function is the $sort()$ function but this will change the structure of the **_Series_** variable.
# Sorting with descending values
cities.sort(ascending=False)
cities
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
Conditions can be applied to the elements.
# Cities with population less than 800,000
cities[cities<800000]

# Cities with population between 750,000 and 800,000
cities[cities<800000][cities>750000]
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
--- A **_DataFrame_** is a 2-dimensional data structure with named rows and columns. It is similar to _R_'s _data.frame_ object and functions like a spreadsheet. A **_DataFrame_** can be considered as a collection of **_Series_** data organized by column name. A **_DataFrame_** can be created by passing a 2-dimens...
# Creating a DataFrame by passing a 2-D numpy array of random numbers
# First create the date-time index using the date_range function
# and check it.
dates = pd.date_range('20140801', periods = 8, freq = 'D')
dates

# Create the column names as a list
Kedai = ['Kedai A', 'Kedai B', 'Kedai C', 'Kedai D', 'Kedai E']
#...
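The cell is truncated before the DataFrame is actually constructed; the natural completion given the index and columns created above:

```python
df = pd.DataFrame(np.random.randn(8, 5), index=dates, columns=Kedai)
df
```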
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
---Some of the useful functions that can be applied to a **_DataFrame_** include:
df.head()     # Displaying the first five (default) rows
df.head(3)    # Displaying the first three (specified) rows
df.tail(2)    # Displaying the last two (specified) rows
df.index      # Showing the index of rows
df.columns    # Showing the fields of columns
df.values...
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
**_NaN_** means empty, missing, or unavailable data.
df[df['Kedai B']<0]  # With reference to a specific value in a column (e.g. Kedai B)

df2 = df.copy()  # Make a copy of a database
df2

# Adding a column
df2['Tambah'] = ['satu','satu','dua','tiga','empat','tiga','lima','enam']
df2

# Adding a row using the append() function. The previous loc() is possibly dep...
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
*** **_8.2 Data Operations_** We have seen a few operations previously on **_Series_** and **_DataFrame_**; here these will be explored further.
df.mean()   # Statistical mean (column) - same as df.mean(0), 0 means column
df.mean(1)  # Statistical mean (row) - 1 means row
df.mean()['Kedai C':'Kedai E']  # Statistical mean (range of columns)
df.max()    # Statistical max (column)
df.max()['Kedai...
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
--- Other statistical functions can be checked by typing `df.` followed by tab completion. The data in a **_DataFrame_** can be represented by a variable declared using the $lambda$ operator.
df.apply(lambda x: x.max() - x.min())  # Operating on array values with a function
df.apply(lambda z: np.log(z))          # Operating on array values with a function
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
Replacing, rearranging and operations of data between columns can be done much like spreadsheet.
df3 = df.copy()
df3[r'Kedai A^2/Kedai E'] = df3['Kedai A']**2/df3['Kedai E']
df3
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher