*Step 8: Visualization Check*
fig = plt.figure()
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title(channel_1_color + ' Filled')
ax1.imshow(filled_c1, cmap='gray')
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title(channel_2_color + ' Filled')
ax2.imshow(filled_c2, cmap='gray')
fig.set_size_inches(10.5, 10.5, forward=True)
_____no_output_____
MIT
scripts/object_identification_basic.ipynb
hhelmbre/qdbvcella
*Step 9: Labeling Objects*
label_objects1, nb_labels1 = ndi.label(filled_c1)
sizes1 = np.bincount(label_objects1.ravel())
mask_sizes1 = sizes1 > 100
mask_sizes1[0] = 0
cells_cleaned_c1 = mask_sizes1[label_objects1]
label_objects2, nb_labels2 = ndi.label(filled_c2)
sizes2 = np.bincount(label_objects2.ravel())
mask_sizes2 = sizes2 > 100
mask_sizes...
_____no_output_____
MIT
scripts/object_identification_basic.ipynb
hhelmbre/qdbvcella
*Step 10: Visualization Check*
fig = plt.figure()
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title(channel_1_color + ' Labeled')
ax1.imshow(labeled_c1)
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title(channel_2_color + ' Labeled')
ax2.imshow(labeled_c2)
fig.set_size_inches(10.5, 10.5, forward=True)
_____no_output_____
MIT
scripts/object_identification_basic.ipynb
hhelmbre/qdbvcella
*Step 11: Get Region Props*
regionprops_c1 = measure.regionprops(labeled_c1)
regionprops_c2 = measure.regionprops(labeled_c2)
df = pd.DataFrame(columns=['centroid x', 'centroid y', 'equiv_diam'])
k = 1
for props in regionprops_c1:
    # Get the properties that I need for areas
    # Add them into a pandas dataframe that has the same number of rows a...
Count Blue: 114 Count Green: 16
MIT
scripts/object_identification_basic.ipynb
hhelmbre/qdbvcella
E4 Sensor Concatenation This sensor concatenation file compiles all .csv files of subjects by sensor type. A "Subject_ID" column is added and the data are arranged in ascending ID order. The output of this function is a csv file. *** **Input:** Properly formatted .csv files from the E4FileFormatter (DB...
import pandas as pd
import glob
import os

os.chdir('../00_source')
_____no_output_____
Apache-2.0
DigitalBiomarkers-HumanActivityRecognition/00_source/.ipynb_checkpoints/20_sensor_concat-checkpoint.ipynb
Big-Ideas-Lab/DBDP
Import & Concatenate Sensor Data of Choice **Functions:** * $\underline{data\_concat()}$ - reads all files in the data directory (00_source) and concatenates those of one sensor type. Adds a subject ID column to the resulting .csv file > data = data type to be concatenated as a string > cols = column names in resulting datafra...
# Select files of specific data and concat to one dataframe
def data_concat(data, cols, file_name):
    """
    data = data type to be concatenated as a string
    cols = column names in resulting dataframe as a list
    file_name = output csv file name as a string
    """
    all_filenames = [i for i in glob.glob(f'*...
_____no_output_____
Apache-2.0
DigitalBiomarkers-HumanActivityRecognition/00_source/.ipynb_checkpoints/20_sensor_concat-checkpoint.ipynb
Big-Ideas-Lab/DBDP
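The `data_concat` cell above is truncated, but the described pattern (read per-subject files, tag each with a Subject_ID, concatenate, sort by ascending ID) can be sketched in a self-contained way. The frames and values below are hypothetical in-memory stand-ins for the E4 .csv files:

```python
import pandas as pd

def add_subject_id(df: pd.DataFrame, subject_id: int) -> pd.DataFrame:
    """Attach a Subject_ID column, mirroring what data_concat does per file."""
    out = df.copy()
    out["Subject_ID"] = subject_id
    return out

# Two tiny stand-ins for per-subject sensor files (hypothetical data).
frames = [
    add_subject_id(pd.DataFrame({"HR": [70, 71]}), subject_id=2),
    add_subject_id(pd.DataFrame({"HR": [65, 66]}), subject_id=1),
]

# Concatenate, then arrange rows in ascending Subject_ID order.
combined = (
    pd.concat(frames, ignore_index=True)
      .sort_values("Subject_ID", kind="stable")
      .reset_index(drop=True)
)
print(combined["Subject_ID"].tolist())  # [1, 1, 2, 2]
```

In the notebook the final step would be `combined.to_csv(file_name, index=False)` to write the result.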
variance
print(ind_data.info())
<class 'pandas.core.frame.DataFrame'> Index: 0 entries Empty DataFrame None
MIT
jupyterexample/StudyPandas2.ipynb
newrey/QUANTAXIS
Data We assume that data has already been downloaded via notebook [1_data.ipynb](1_data.ipynb). Training data (for input `X` with associated label masks `Y`) can be provided via lists of numpy arrays, where each image can have a different size. Alternatively, a single numpy array can also be used if all images have th...
X = sorted(glob('data/dsb2018/train/images/*.tif'))
Y = sorted(glob('data/dsb2018/train/masks/*.tif'))
assert all(Path(x).name == Path(y).name for x, y in zip(X, Y))
X = list(map(imread, X))
Y = list(map(imread, Y))
n_channel = 1 if X[0].ndim == 2 else X[0].shape[-1]
_____no_output_____
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
Normalize images and fill small label holes.
axis_norm = (0, 1)    # normalize channels independently
# axis_norm = (0, 1, 2)  # normalize channels jointly
if n_channel > 1:
    print("Normalizing image channels %s." % ('jointly' if axis_norm is None or 2 in axis_norm else 'independently'))
    sys.stdout.flush()
X = [normalize(x, 1, 99.8, axis=axis_norm) for x in tqdm(...
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 447/447 [00:01<00:00, 462.35it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 447/447 [00:04<00:00, 111.61it/s]
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
Split into train and validation datasets.
assert len(X) > 1, "not enough training data"
rng = np.random.RandomState(42)
ind = rng.permutation(len(X))
n_val = max(1, int(round(0.15 * len(ind))))
ind_train, ind_val = ind[:-n_val], ind[-n_val:]
X_val, Y_val = [X[i] for i in ind_val], [Y[i] for i in ind_val]
X_trn, Y_trn = [X[i] for i in ind_train], [Y[i] for i ...
number of images: 447 - training: 380 - validation: 67
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
Training data consists of pairs of input image and label instances.
i = min(9, len(X) - 1)
img, lbl = X[i], Y[i]
assert img.ndim in (2, 3)
img = img if img.ndim == 2 else img[..., :3]
plt.figure(figsize=(16, 10))
plt.subplot(121); plt.imshow(img, cmap='gray'); plt.axis('off'); plt.title('Raw image')
plt.subplot(122); plt.imshow(lbl, cmap=lbl_cmap); plt.axis('off'); plt.title('GT labels')
None...
_____no_output_____
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
Configuration A `StarDist2D` model is specified via a `Config2D` object.
print(Config2D.__doc__)

# 32 is a good default choice (see 1_data.ipynb)
n_rays = 32
# Use OpenCL-based computations for data generator during training (requires 'gputools')
use_gpu = False and gputools_available()
# Predict on subsampled grid for increased efficiency and larger field of view
grid = (2, 2)
conf = Con...
_____no_output_____
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
**Note:** The trained `StarDist2D` model will *not* predict completed shapes for partially visible objects at the image boundary if `train_shape_completion=False` (which is the default option).
model = StarDist2D(conf, name='stardist', basedir='models')
Using default values: prob_thresh=0.5, nms_thresh=0.4.
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
Check if the neural network has a large enough field of view to see up to the boundary of most objects.
median_size = calculate_extents(list(Y), np.median)
fov = np.array(model._axes_tile_overlap('YX'))
if any(median_size > fov):
    print("WARNING: median object size larger than field of view of the neural network.")
_____no_output_____
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
Training You can define a function/callable that applies augmentation to each batch of the data generator.
augmenter = None

# def augmenter(X_batch, Y_batch):
#     """Augmentation for data batch.
#     X_batch is a list of input images (length at most batch_size)
#     Y_batch is the corresponding list of ground-truth label images
#     """
#     # ...
#     return X_batch, Y_batch
_____no_output_____
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
We recommend monitoring the progress during training with [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard). You can start it in the shell from the current working directory like this: `$ tensorboard --logdir=.` Then connect to [http://localhost:6006/](http://localhost:6006/) with yo...
quick_demo = True
if quick_demo:
    print(
        "NOTE: This is only for a quick demonstration!\n"
        "      Please set the variable 'quick_demo = False' for proper (long) training.",
        file=sys.stderr, flush=True
    )
    model.train(X_trn, Y_trn, validation_data=(X_val, Y_val), augmenter=augmenter, ...
NOTE: This is only for a quick demonstration! Please set the variable 'quick_demo = False' for proper (long) training.
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
Threshold optimization While the default values for the probability and non-maximum suppression thresholds already yield good results in many cases, we still recommend adapting the thresholds to your data. The optimized threshold values are saved to disk and will be automatically loaded with the model.
model.optimize_thresholds(X_val, Y_val)
NMS threshold = 0.3: 80%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 16/20 [00:46<00:17, 4.42s/it, 0.485 -> 0.796] NMS threshold = 0.4: 80%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 16/20 [00:46<00:17, 4.45s/it, 0.485 -> 0.796] NMS threshold = 0.5: 80%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 16/20 [00:50<00:18, 4.63s/it, 0.485 -> 0.796]
BSD-3-Clause
examples/2D/2_training.ipynb
feberhardt/stardist
Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipy...
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import gee...
_____no_output_____
MIT
Datasets/Vectors/landsat_wrs2_grid.ipynb
YuePanEdward/earthengine-py-notebooks
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
Map = emap.Map(center=[40, -100], zoom=4)
Map.add_basemap('ROADMAP')  # Add Google Map
Map
_____no_output_____
MIT
Datasets/Vectors/landsat_wrs2_grid.ipynb
YuePanEdward/earthengine-py-notebooks
Add Earth Engine Python script
# Add Earth Engine dataset
dataset = ee.FeatureCollection('projects/google/wrs2_descending')
empty = ee.Image().byte()
Map.setCenter(-78, 36, 8)
Map.addLayer(empty.paint(dataset, 0, 2), {}, 'Landsat WRS-2 grid')
_____no_output_____
MIT
Datasets/Vectors/landsat_wrs2_grid.ipynb
YuePanEdward/earthengine-py-notebooks
Display Earth Engine data layers
Map.addLayerControl()  # This line is not needed for ipyleaflet-based Map.
Map
_____no_output_____
MIT
Datasets/Vectors/landsat_wrs2_grid.ipynb
YuePanEdward/earthengine-py-notebooks
Title: Longest Palindromic Substring Chapter: Dynamic Programming Link: [YouTube](https://youtu.be/LYHFaO1lgYM) ChapterLink: [PlayList](https://youtube.com/playlist?list=PLDV-cCQnUlIa0owhTLK-VT994Qh6XTy4v) Problem: given a string s, return the longest palindromic substring.
def longestPalindrome(s: str) -> str:
    str_length = len(s)
    dp_table = [[0] * str_length for i in range(str_length)]
    for idx in range(str_length):
        dp_table[idx][idx] = 1
    for idx in range(str_length - 1):
        start_char = s[idx]
        end_char = s[idx+1]
        if start_char == end_char:
            dp_table[idx][...
_____no_output_____
MIT
dynamicProgramming/lgstPalSubstring.ipynb
NoCodeProgram/CodingTest
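The `longestPalindrome` cell above is cut off. A complete, self-contained sketch of the same interval dynamic program (`dp[i][j]` is true iff `s[i:j+1]` is a palindrome, filled in order of increasing substring length) follows; this is a reconstruction of the stated approach, not necessarily the author's exact code:

```python
def longest_palindrome(s: str) -> str:
    """Return the longest palindromic substring of s via interval DP."""
    n = len(s)
    if n == 0:
        return ""
    dp = [[False] * n for _ in range(n)]
    best_start, best_len = 0, 1
    # Length-1 substrings are palindromes.
    for i in range(n):
        dp[i][i] = True
    # Length-2 substrings: palindrome iff both characters match.
    for i in range(n - 1):
        if s[i] == s[i + 1]:
            dp[i][i + 1] = True
            best_start, best_len = i, 2
    # Longer substrings: ends match and the inner substring is a palindrome.
    for length in range(3, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            if s[i] == s[j] and dp[i + 1][j - 1]:
                dp[i][j] = True
                if length > best_len:
                    best_start, best_len = i, length
    return s[best_start:best_start + best_len]

print(longest_palindrome("babad"))  # bab
```

This runs in O(n^2) time and space; for "babad" both "bab" and "aba" are valid answers, and this fill order finds "bab" first.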
Create Temporary Datasets for Analysis Simulate the proofcheck dataset until you get access to it.
import pandas as pd

mov_meta = pd.read_csv('movie_metadata.csv')
mov_meta.head()

# For the sake of simplicity only look at columns with numeric data
mov_meta_nrw = mov_meta._get_numeric_data()
mov_meta_nrw.head()

# variable of interest is gross and to make data similar to proofcheck will create binary variable
# 1 if movi...
_____no_output_____
MIT
jupyter/model_comparison.ipynb
mseinstein/Proofcheck
This notebook shows: * How to launch the [**StarGANv1**](https://arxiv.org/abs/1711.09020) model for inference * Example of results for both * attributes **detection** * new face **generation** with desired attributes. Here I use the [**PyTorch** implementation](https://github.com/yunjey/stargan) of the StarGANv1 model.[...
import os
import sys

os.environ["KMP_DUPLICATE_LIB_OK"] = "True"
sys.path.extend(["../code/", "../stargan/"])

import torch
import torchvision.transforms as T
from PIL import Image
import matplotlib.pyplot as plt

from config import get_config
from solver import Solver
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Load model Let's first load the config for the model. It is mostly default except for: * model checkpoint path * style classes, their order and number. Note that in the original StarGANv1 model 5 classes are used: `[Black_Hair Blond_Hair Brown_Hair Male Young]`. I retrained the model **4** times for different **face pa...
config = get_config("""
    --model_save_dir ../models/celeba_128_eyes/
    --test_iters 200000
    --c_dim 5
    --selected_attrs Arched_Eyebrows Bushy_Eyebrows Bags_Under_Eyes Eyeglasses Narrow_Eyes
""")
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Load the model architecture with the provided config.
model = Solver(None, None, config)
Generator( (main): Sequential( (0): Conv2d(8, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), bias=False) (1): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=...
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Restore model weights.
model.restore_model(model.test_iters)
Loading the trained models from step 200000...
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Prediction example Let's read a test image. Note that the **face position and size** should be comparable to what the model has seen in the training data (CelebA). Here I do not use any face detector and crop the faces manually. But in a production environment one needs to set up the face detector accordingly.
image = Image.open("../data/test.jpg")
image
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
The input to the network is a **3x128x128 image in the range [-1; 1]** (note that channels are the first dimension). Thus one needs to do the preprocessing in advance.
transform = []
transform.append(T.Resize(128))
transform.append(T.CenterCrop(128))
transform.append(T.ToTensor())
transform.append(T.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)))
transform = T.Compose(transform)
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Create a batch of 1 image
x_real = torch.stack([transform(image)])
x_real.shape
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Attributes prediction Let's first predict the attributes of the image. To do so I use the **Discriminator** part of the network. In the StarGAN architecture it predicts not only the fake/real label but also the classes/attributes/styles of the image. Here I call this vector the **eigen style vector**. Note that due to the poss...
with torch.no_grad():
    eigen_style_vector = torch.sigmoid(model.D(x_real)[1])
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Below is the probability of each label. The photo indeed depicts a person with thick, slightly arched eyebrows.
for proba, tag in zip(eigen_style_vector.numpy()[0], model.selected_attrs):
    print(f"{tag:20s}: {proba:.3f}")
Arched_Eyebrows : 0.334 Bushy_Eyebrows : 0.207 Bags_Under_Eyes : 0.054 Eyeglasses : 0.000 Narrow_Eyes : 0.081
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Now let's look at how well the **Generator** model can recreate the face without altering it using the just computed eigen style vector.
with torch.no_grad():
    res_eigen = model.G(x_real, eigen_style_vector)
res_eigen.shape
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Plot the original face and the reconstructed one:
plt.figure(figsize=(9, 8))

plt.subplot(121)
_img = model.denorm(x_real).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Original", fontsize=16)

plt.subplot(122)
_img = model.denorm(res_eigen).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Eigen style reconstruc...
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Looks good enough. Face modification using new attributes Now let's try to modify the face starting from the eigen style vector. Let's say I want to **add eyeglasses**. To do so, I set the corresponding style vector component to 1.
eigen_style_vector_modified_1 = eigen_style_vector.clone()
eigen_style_vector_modified_1[:, 3] = 1
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Now the style vector looks as follows:
for proba, tag in zip(eigen_style_vector_modified_1.numpy()[0], model.selected_attrs):
    print(f"{tag:20s}: {proba:.3f}")
Arched_Eyebrows : 0.334 Bushy_Eyebrows : 0.207 Bags_Under_Eyes : 0.054 Eyeglasses : 1.000 Narrow_Eyes : 0.081
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Let's try to generate a face with this modified style vector:
with torch.no_grad():
    res_modified_1 = model.G(x_real, eigen_style_vector_modified_1)
res_modified_1.shape
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Plot the faces:
plt.figure(figsize=(13.5, 8))

plt.subplot(131)
_img = model.denorm(x_real).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Original", fontsize=16)

plt.subplot(132)
_img = model.denorm(res_eigen).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Eigen style reconst...
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Now let's try to **change two attributes simultaneously**:* Make the eyes narrow* Add archness to the eyebrows
eigen_style_vector_modified_2 = eigen_style_vector.clone()
eigen_style_vector_modified_2[:, 0] = 1
eigen_style_vector_modified_2[:, 4] = 1
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Now the style vector looks as follows:
for proba, tag in zip(eigen_style_vector_modified_2.numpy()[0], model.selected_attrs):
    print(f"{tag:20s}: {proba:.3f}")
Arched_Eyebrows : 1.000 Bushy_Eyebrows : 0.207 Bags_Under_Eyes : 0.054 Eyeglasses : 0.000 Narrow_Eyes : 1.000
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Let's try to generate a face with this modified style vector:
with torch.no_grad():
    res_modified_2 = model.G(x_real, eigen_style_vector_modified_2)
res_modified_2.shape
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
Plot the faces:
plt.figure(figsize=(18, 8))

plt.subplot(141)
_img = model.denorm(x_real).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Original", fontsize=16)

plt.subplot(142)
_img = model.denorm(res_eigen).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Eigen style reconstru...
_____no_output_____
MIT
notebooks/11_InferenceEyes.ipynb
vladimir-chernykh/facestyle-gan
This example uses the [Universal Bank](https://www.kaggle.com/sriharipramod/bank-loan-classification) data set and some example code of running classification trees from chapter 9 of [Data Mining for Business Analytics](https://www.dataminingbook.com/book/python-edition)> The data include customer demographic informati...
data = pd.read_csv('data/UniversalBank.csv')
data.head()
_____no_output_____
MIT
43-workout-solution_decision_trees.ipynb
hanisaf/advanced-data-management-and-analytics
Courtesy - Statistics.com. Data Description: ID - Customer ID; Age - Customer's age in completed years; Experience - years of professional experience; Income - Annual income of the customer ($000); ZIPCode - Home Address ZIP code; Family - Family size of the customer; CCAvg - Avg. spending on credit cards p...
bank_df = data.drop(columns=['ID', 'ZIP Code'])
X = bank_df.drop(columns=['Personal Loan'])
y = bank_df['Personal Loan']
train_X, valid_X, train_y, valid_y = train_test_split(X, y, test_size=0.4, random_state=1)
dtree = DecisionTreeClassifier()
dtree.fit(train_X, train_y)
print(export_text(dtree, feature_names=list(...
|--- Income <= 110.50 | |--- CCAvg <= 2.95 | | |--- class: 0 | |--- CCAvg > 2.95 | | |--- CD Account <= 0.50 | | | |--- Income <= 92.50 | | | | |--- class: 0 | | | |--- Income > 92.50 | | | | |--- Education <= 1.50 | | | | | |--- class: 0 | | | | |--- Educatio...
MIT
43-workout-solution_decision_trees.ipynb
hanisaf/advanced-data-management-and-analytics
Data Set-up and Cleaning
# Standard Library Imports
import pandas as pd
import numpy as np
_____no_output_____
CC0-1.0
1_Data_Cleaning.ipynb
oaagboro/Healthcare_Insurance_Fraud
For this section, I will be concatenating all the data sets into one large dataset. Load the datasets
inpatient = pd.read_csv('./data/Train_Inpatientdata-1542865627584.csv')
outpatient = pd.read_csv('./data/Train_Outpatientdata-1542865627584.csv')
beneficiary = pd.read_csv('./data/Train_Beneficiarydata-1542865627584.csv')
fraud = pd.read_csv('./data/Train-1542865627584.csv')

# Increase the max display options of the co...
_____no_output_____
CC0-1.0
1_Data_Cleaning.ipynb
oaagboro/Healthcare_Insurance_Fraud
Inspect the first 5 rows of the datasets
# Inspect the first 5 rows of the inpatient claims
inpatient.head()
# Inspect the first 5 rows of the outpatient claims
outpatient.head()
# Inspect the first 5 rows of the beneficiary dataset
beneficiary.head()
# Inspect the first 5 rows of the fraud column
fraud.head()
_____no_output_____
CC0-1.0
1_Data_Cleaning.ipynb
oaagboro/Healthcare_Insurance_Fraud
Check the number of rows and columns for each dataset
inpatient.shape
outpatient.shape
beneficiary.shape
fraud.shape
_____no_output_____
CC0-1.0
1_Data_Cleaning.ipynb
oaagboro/Healthcare_Insurance_Fraud
Some columns in the inpatient dataset are not in the outpatient dataset or in the fraud (target) dataset and vice versa. In order to make sense of the data I would have to merge them together. Combine the Inpatient, Outpatient, beneficiary and fraud datasets
# Map the inpatient and outpatient columns, 1 for outpatient, 0 for inpatient
inpatient["IsOutpatient"] = 0
outpatient["IsOutpatient"] = 1

# Merging the datasets together
patient_df = pd.concat([inpatient, outpatient], axis=0)
patient_df = patient_df.merge(beneficiary, how='left', on='BeneID').merge(fraud, how='...
_____no_output_____
CC0-1.0
1_Data_Cleaning.ipynb
oaagboro/Healthcare_Insurance_Fraud
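The merge cell above is truncated. The concat-then-left-merge pattern it describes can be sketched end to end on tiny stand-in tables; the column names below follow the notebook, but the rows and values are invented for illustration:

```python
import pandas as pd

# Hypothetical miniatures of the inpatient/outpatient/beneficiary/fraud tables.
inpatient = pd.DataFrame({"BeneID": ["B1"], "Provider": ["P1"]})
outpatient = pd.DataFrame({"BeneID": ["B2"], "Provider": ["P1"]})
beneficiary = pd.DataFrame({"BeneID": ["B1", "B2"], "Gender": [1, 2]})
fraud = pd.DataFrame({"Provider": ["P1"], "PotentialFraud": ["Yes"]})

# Flag the claim type before stacking, then attach patient and fraud info.
inpatient["IsOutpatient"] = 0
outpatient["IsOutpatient"] = 1
patient_df = (
    pd.concat([inpatient, outpatient], axis=0, ignore_index=True)
      .merge(beneficiary, how="left", on="BeneID")
      .merge(fraud, how="left", on="Provider")
)
print(patient_df[["BeneID", "IsOutpatient", "PotentialFraud"]])
```

Left merges keep every claim row even when a beneficiary or provider has no match, which is why the notebook uses `how='left'` rather than an inner join.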
After merging the dataset, we now have a dataframe with the fraud target column.
patient_df.describe()
patient_df.dtypes

# Convert columns with Date attributes to Datetime datatype:
# "ClaimStartDt", "ClaimEndDt", "AdmissionDt", "DischargeDt", "DOB", "DOD"
patient_df[["ClaimStartDt", "ClaimEndDt", "AdmissionDt", "DischargeDt", "DOB", "DOD"]] = patient_df[["ClaimStartDt", "ClaimEndDt", "AdmissionDt",...
_____no_output_____
CC0-1.0
1_Data_Cleaning.ipynb
oaagboro/Healthcare_Insurance_Fraud
Change other binary variables to 0 and 1
# Change the Gender column and any column having 'ChronicCond' to binary variables 0 and 1
chronic = patient_df.columns[patient_df.columns.str.contains("ChronicCond")].tolist()
patient_df[chronic] = patient_df[chronic].apply(lambda x: np.where(x == 2, 0, 1))
patient_df['Gender'] = patient_df['Gender'].apply(lambda x: ...
_____no_output_____
CC0-1.0
1_Data_Cleaning.ipynb
oaagboro/Healthcare_Insurance_Fraud
Understanding the data In this first part, we load the data and perform some initial exploration on it. The main goal of this step is to acquire some basic knowledge about the data, how the various features are distributed, if there are missing values in it and so on.
### imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

# load hourly data
hourly_data = pd.read_csv('../data/hour.csv')
_____no_output_____
MIT
Chapter01/Exercise1.03/Exercise1.03.ipynb
fenago/Applied_Data_Analytics
Check data format, number of missing values in the data and general statistics:
# print some generic statistics about the data
print(f"Shape of data: {hourly_data.shape}")
print(f"Number of missing values in the data: {hourly_data.isnull().sum().sum()}")

# get statistics on the numerical columns
hourly_data.describe().T

# create a copy of the original data
preprocessed_data = hourly_data.copy()
...
_____no_output_____
MIT
Chapter01/Exercise1.03/Exercise1.03.ipynb
fenago/Applied_Data_Analytics
Registered vs casual use analysis
# assert that total number of rides is equal to the sum of registered and casual ones
assert (preprocessed_data.casual + preprocessed_data.registered == preprocessed_data.cnt).all(), \
    'Sum of casual and registered rides not equal to total number of rides'

# plot distributions of registered vs casual rides
sns.distplot(...
_____no_output_____
MIT
Chapter01/Exercise1.03/Exercise1.03.ipynb
fenago/Applied_Data_Analytics
Exercise 1.03: Estimating average registered rides
# compute population mean of registered rides
population_mean = preprocessed_data.registered.mean()

# get sample of the data (summer 2011)
sample = preprocessed_data[(preprocessed_data.season == "summer") &
                           (preprocessed_data.yr == 2011)].registered

# perform t-test and compute p-value...
_____no_output_____
MIT
Chapter01/Exercise1.03/Exercise1.03.ipynb
fenago/Applied_Data_Analytics
Finetuning of the pretrained Japanese BERT model Finetune the pretrained model to solve multi-class classification problems. This notebook requires the following objects: - trained sentencepiece model (model and vocab files) - pretrained Japanese BERT model. The dataset is the livedoor news corpus from https://www.rondhuit.com/downloa...
import configparser
import glob
import os
import pandas as pd
import subprocess
import sys
import tarfile
from urllib.request import urlretrieve

CURDIR = os.getcwd()
CONFIGPATH = os.path.join(CURDIR, os.pardir, 'config.ini')
config = configparser.ConfigParser()
config.read(CONFIGPATH)
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
Data preparing You only need to execute the following cells once.
FILEURL = config['FINETUNING-DATA']['FILEURL']
FILEPATH = config['FINETUNING-DATA']['FILEPATH']
EXTRACTDIR = config['FINETUNING-DATA']['TEXTDIR']
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
Download and unzip data.
%%time

urlretrieve(FILEURL, FILEPATH)

mode = "r:gz"
tar = tarfile.open(FILEPATH, mode)
tar.extractall(EXTRACTDIR)
tar.close()
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
Data preprocessing.
def extract_txt(filename):
    with open(filename) as text_file:
        # 0: URL, 1: timestamp
        text = text_file.readlines()[2:]
        text = [sentence.strip() for sentence in text]
        text = list(filter(lambda line: line != '', text))
        return ''.join(text)

categories = [
    name for name i...
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
Save data as tsv files. test:dev:train = 2:2:6. To check the usability of finetuning, we also prepare sampled training data (1/5 of full training data).
df[:len(df) // 5].to_csv(os.path.join(EXTRACTDIR, "test.tsv"), sep='\t', index=False)
df[len(df) // 5:len(df)*2 // 5].to_csv(os.path.join(EXTRACTDIR, "dev.tsv"), sep='\t', index=False)
df[len(df)*2 // 5:].to_csv(os.path.join(EXTRACTDIR, "train.tsv"), sep='\t', index=False)

### 1/5 of full training data.
# df[:len(d...
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
Finetune pre-trained model It will take many hours to execute the following cells in a CPU environment. You can also use Colab to leverage the power of TPU. You need to upload the created data onto your GCS bucket.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.goog...
PRETRAINED_MODEL_PATH = '../model/model.ckpt-1400000'
FINETUNE_OUTPUT_DIR = '../model/livedoor_output'

%%time
# It will take many hours on CPU environment.
!python3 ../src/run_classifier.py \
  --task_name=livedoor \
  --do_train=true \
  --do_eval=true \
  --data_dir=../data/livedoor \
  --model_file=../model/wiki-ja...
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
Predict using the finetuned modelLet's predict test data using the finetuned model.
import sys
sys.path.append("../src")

import tokenization_sentencepiece as tokenization
from run_classifier import LivedoorProcessor
from run_classifier import model_fn_builder
from run_classifier import file_based_input_fn_builder
from run_classifier import file_based_convert_examples_to_features
from utils import str...
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
Read test data set and add prediction results.
import pandas as pd

test_df = pd.read_csv("../data/livedoor/test.tsv", sep='\t')
test_df['predict'] = [label_list[elem['probabilities'].argmax()] for elem in result]
test_df.head()
sum(test_df['label'] == test_df['predict']) / len(test_df)
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
A little more detailed check using `sklearn.metrics`.
!pip install scikit-learn
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix

print(classification_report(test_df['label'], test_df['predict']))
print(confusion_matrix(test_df['label'], test_df['predict']))
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
Simple baseline model.
import pandas as pd
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix

train_df = pd.read_csv("../data/livedoor/train.tsv", sep='\t')
dev_df = pd.read_csv("../data/livedoor/dev.tsv", sep='\t')
test_df = pd.read_csv("../data/livedoor/test.tsv", sep='\t')

!apt-get install -q -y...
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
The following setup is not exactly identical to that of BERT because, inside the classifier, it uses `train_test_split` with shuffle. In addition, the parameters are not well tuned; however, we think it is enough to check the power of BERT.
%%time

model = GradientBoostingClassifier(n_estimators=200,
                                   validation_fraction=len(train_df)/len(dev_df),
                                   n_iter_no_change=5,
                                   tol=0.01,
                                   random_state=23)

### 1/5 of full training...
_____no_output_____
Apache-2.0
notebook/finetune-to-livedoor-corpus.ipynb
minhpqn/bert-japanese
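The `GradientBoostingClassifier` cell above is truncated. A self-contained sketch of the same early-stopping configuration follows; the `make_classification` features are a hypothetical stand-in for the notebook's text features, and `validation_fraction` is left at its default rather than derived from the dev/train sizes:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the notebook's text features (hypothetical data).
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, random_state=23)
X_trn, X_tst, y_trn, y_tst = train_test_split(X, y, test_size=0.2,
                                              random_state=23)

# Early stopping: training halts once 5 consecutive iterations fail to
# improve the internal validation score by more than tol.
model = GradientBoostingClassifier(n_estimators=200, n_iter_no_change=5,
                                   tol=0.01, random_state=23)
model.fit(X_trn, y_trn)
acc = accuracy_score(y_tst, model.predict(X_tst))
print(f"test accuracy: {acc:.3f}")
```

With `n_iter_no_change` set, the classifier carves off `validation_fraction` of the training data internally, which is the shuffled `train_test_split` behavior the note above refers to.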
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Mixed precision Overview Mixed precision is the use of both 16-bit and 32-bit floating-point types in a model during training to make it run faster and use less memory. By keeping certain parts of the model ...
import tensorflow as tf

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.mixed_precision import experimental as mixed_precision
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Supported hardwareWhile mixed precision will run on most hardware, it will only speed up models on recent NVIDIA GPUs and Cloud TPUs. NVIDIA GPUs support using a mix of float16 and float32, while TPUs support a mix of bfloat16 and float32.Among NVIDIA GPUs, those with compute capability 7.0 or higher will see the grea...
!nvidia-smi -L
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
All Cloud TPUs support bfloat16.Even on CPUs and older GPUs, where no speedup is expected, mixed precision APIs can still be used for unit testing, debugging, or just to try out the API. Setting the dtype policy To use mixed precision in Keras, you need to create a `tf.keras.mixed_precision.experimental.Policy`, typic...
policy = mixed_precision.Policy('mixed_float16') mixed_precision.set_policy(policy)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
The policy specifies two important aspects of a layer: the dtype the layer's computations are done in, and the dtype of a layer's variables. Above, you created a `mixed_float16` policy (i.e., a `mixed_precision.Policy` created by passing the string `'mixed_float16'` to its constructor). With this policy, layers use flo...
print('Compute dtype: %s' % policy.compute_dtype) print('Variable dtype: %s' % policy.variable_dtype)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
As mentioned before, the `mixed_float16` policy will most significantly improve performance on NVIDIA GPUs with compute capability of at least 7.0. The policy will run on other GPUs and CPUs but may not improve performance. For TPUs, the `mixed_bfloat16` policy should be used instead. Building the model Next, let's st...
inputs = keras.Input(shape=(784,), name='digits') if tf.config.list_physical_devices('GPU'): print('The model will run with 4096 units on a GPU') num_units = 4096 else: # Use fewer units on CPUs so the model finishes in a reasonable amount of time print('The model will run with 64 units on a CPU') num_units =...
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Each layer has a policy and uses the global policy by default. Each of the `Dense` layers therefore has the `mixed_float16` policy because you set the global policy to `mixed_float16` previously. This will cause the dense layers to do float16 computations and have float32 variables. They cast their inputs to float16 i...
print('x.dtype: %s' % x.dtype.name) # 'kernel' is dense1's variable print('dense1.kernel.dtype: %s' % dense1.kernel.dtype.name)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Next, create the output predictions. Normally, you can create the output predictions as follows, but this is not always numerically stable with float16.
# INCORRECT: softmax and model output will be float16, when it should be float32 outputs = layers.Dense(10, activation='softmax', name='predictions')(x) print('Outputs dtype: %s' % outputs.dtype.name)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
A softmax activation at the end of the model should be float32. Because the dtype policy is `mixed_float16`, the softmax activation would normally have a float16 compute dtype and output float16 tensors. This can be fixed by separating the Dense and softmax layers and passing `dtype='float32'` to the softmax layer.
# CORRECT: softmax and model output are float32 x = layers.Dense(10, name='dense_logits')(x) outputs = layers.Activation('softmax', dtype='float32', name='predictions')(x) print('Outputs dtype: %s' % outputs.dtype.name)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Passing `dtype='float32'` to the softmax layer constructor overrides the layer's dtype policy to be the `float32` policy, which does computations and keeps variables in float32. Equivalently, we could have instead passed `dtype=mixed_precision.Policy('float32')`; layers always convert the dtype argument to a policy. Be...
# The linear activation is an identity function. So this simply casts 'outputs' # to float32. In this particular case, 'outputs' is already float32 so this is a # no-op. outputs = layers.Activation('linear', dtype='float32')(outputs)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Next, finish and compile the model, and generate input data.
model = keras.Model(inputs=inputs, outputs=outputs) model.compile(loss='sparse_categorical_crossentropy', optimizer=keras.optimizers.RMSprop(), metrics=['accuracy']) (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() x_train = x_train.reshape(60000, 784).astype('float32...
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
This example casts the input data from int8 to float32. We don't cast to float16 since the division by 255 is on the CPU, which runs float16 operations slower than float32 operations. In this case, the performance difference is negligible, but in general you should run input processing math in float32 if it runs on the ...
initial_weights = model.get_weights()
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Training the model with Model.fit. Next, train the model.
history = model.fit(x_train, y_train, batch_size=8192, epochs=5, validation_split=0.2) test_scores = model.evaluate(x_test, y_test, verbose=2) print('Test loss:', test_scores[0]) print('Test accuracy:', test_scores[1])
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Notice the model prints the time per sample in the logs: for example, "4us/sample". The first epoch may be slower as TensorFlow spends some time optimizing the model, but afterwards the time per sample should stabilize. If you are running this guide in Colab, you can compare the performance of mixed precision with floa...
x = tf.constant(256, dtype='float16') (x ** 2).numpy() # Overflow x = tf.constant(1e-5, dtype='float16') (x ** 2).numpy() # Underflow
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
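The same overflow and underflow limits can be reproduced with plain NumPy, no TensorFlow required (a sketch; float16's largest finite value is 65504 and its smallest positive subnormal is about 6e-8):

```python
import numpy as np

x = np.float16(256)
print(x * x)   # inf: 256**2 = 65536 overflows float16's max of 65504

y = np.float16(1e-5)
print(y * y)   # 0.0: 1e-10 is below float16's smallest subnormal
```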
In practice, overflow with float16 rarely occurs. Additionally, underflow also rarely occurs during the forward pass. However, during the backward pass, gradients can underflow to zero. Loss scaling is a technique to prevent this underflow. Loss scaling background. The basic concept of loss scaling is simple: mul...
loss_scale = policy.loss_scale print('Loss scale: %s' % loss_scale)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
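The scale-then-unscale idea behind loss scaling can be demonstrated with NumPy alone (a sketch with a hypothetical gradient value, not the optimizer's actual code path): a gradient too small for float16 survives when it is first multiplied by the loss scale, then divided back out in float32.

```python
import numpy as np

loss_scale = 1024.0
true_grad = 1e-8  # hypothetical gradient magnitude

# Without scaling, the gradient underflows to zero in float16.
naive = np.float16(true_grad)
print(naive)  # 0.0

# With loss scaling, the scaled gradient is representable in float16...
scaled = np.float16(true_grad * loss_scale)
# ...and unscaling in float32 recovers (approximately) the true gradient.
recovered = np.float32(scaled) / loss_scale
print(recovered)  # ~1e-8
```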
The loss scale prints a lot of internal state, but you can ignore it. The most important part is the `current_loss_scale` part, which shows the loss scale's current value. You can instead use a static loss scale by passing a number when constructing a dtype policy.
new_policy = mixed_precision.Policy('mixed_float16', loss_scale=1024) print(new_policy.loss_scale)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
The dtype policy constructor always converts the loss scale to a `LossScale` object. In this case, it's converted to a `tf.mixed_precision.experimental.FixedLossScale`, the only `LossScale` subclass other than `DynamicLossScale`. Note: *Using anything other than a dynamic loss scale is not recommended*. Choosing ...
optimizer = keras.optimizers.RMSprop() optimizer = mixed_precision.LossScaleOptimizer(optimizer, loss_scale='dynamic')
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Passing `'dynamic'` is equivalent to passing `tf.mixed_precision.experimental.DynamicLossScale()`. Next, define the loss object and the `tf.data.Dataset`s.
loss_object = tf.keras.losses.SparseCategoricalCrossentropy() train_dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train)) .shuffle(10000).batch(8192)) test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(8192)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Next, define the training step function. Two new methods from the loss scale optimizer are used in order to scale the loss and unscale the gradients:* `get_scaled_loss(loss)`: Multiplies the loss by the loss scale* `get_unscaled_gradients(gradients)`: Takes in a list of scaled gradients as inputs, and divides each one ...
@tf.function def train_step(x, y): with tf.GradientTape() as tape: predictions = model(x) loss = loss_object(y, predictions) scaled_loss = optimizer.get_scaled_loss(loss) scaled_gradients = tape.gradient(scaled_loss, model.trainable_variables) gradients = optimizer.get_unscaled_gradients(scaled_gradie...
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
The `LossScaleOptimizer` will likely skip the first few steps at the start of training. The loss scale starts out high so that the optimal loss scale can quickly be determined. After a few steps, the loss scale will stabilize and very few steps will be skipped. This process happens automatically and does not affect tra...
@tf.function def test_step(x): return model(x, training=False)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Load the initial weights of the model, so you can retrain from scratch.
model.set_weights(initial_weights)
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Finally, run the custom training loop.
for epoch in range(5): epoch_loss_avg = tf.keras.metrics.Mean() test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy( name='test_accuracy') for x, y in train_dataset: loss = train_step(x, y) epoch_loss_avg(loss) for x, y in test_dataset: predictions = test_step(x) test_accuracy.updat...
_____no_output_____
Apache-2.0
site/en/guide/mixed_precision.ipynb
DorianKodelja/docs
Tabular Datasets As we have already discovered, Elements are simple wrappers around your data that provide a semantically meaningful representation. HoloViews can work with a wide variety of data types, but many of them can be categorized as either: * **Tabular:** Tables of flat columns, or * **Gridded:** Array-li...
import numpy as np import pandas as pd import holoviews as hv hv.extension('bokeh', 'matplotlib') diseases = pd.read_csv('../assets/diseases.csv.gz') diseases.head()
_____no_output_____
BSD-3-Clause
examples/getting_started/3-Tabular_Datasets.ipynb
adsbxchange/holoviews
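The "tabular" case above is simply a flat table of columns with one row per observation. A minimal synthetic stand-in for the diseases table (hypothetical numbers, same column layout: key columns `Year`, `Week`, `State` plus value columns) looks like:

```python
import pandas as pd

# Hypothetical rows in the same flat layout as the WSJ diseases table.
diseases = pd.DataFrame({
    'Year':      [1930, 1930, 1930, 1930],
    'Week':      [1, 2, 1, 2],
    'State':     ['Texas', 'Texas', 'Ohio', 'Ohio'],
    'measles':   [12.0, 14.0, 7.0, 9.0],
    'pertussis': [3.0, 4.0, 2.0, 2.0],
})
print(diseases.shape)  # (4, 5)
```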
This particular dataset was the subject of an excellent piece of visual journalism in the [Wall Street Journal](http://graphics.wsj.com/infectious-diseases-and-vaccines/b02g20t20w15). The WSJ data details the incidence of various diseases over time, and was downloaded from the [University of Pittsburgh's Project Tycho]...
vdims = [('measles', 'Measles Incidence'), ('pertussis', 'Pertussis Incidence')] ds = hv.Dataset(diseases, ['Year', 'State'], vdims)
_____no_output_____
BSD-3-Clause
examples/getting_started/3-Tabular_Datasets.ipynb
adsbxchange/holoviews
Here we've used an optional tuple-based syntax **``(name,label)``** to specify a more meaningful description for the ``vdims``, while using the original short descriptions for the ``kdims``. We haven't yet specified what to do with the ``Week`` dimension, but we are only interested in yearly averages, so let's just te...
ds = ds.aggregate(function=np.mean) ds
_____no_output_____
BSD-3-Clause
examples/getting_started/3-Tabular_Datasets.ipynb
adsbxchange/holoviews
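The `aggregate` step above, which averages away the `Week` dimension, is conceptually a group-by over the remaining key dimensions. A pandas sketch of the same computation on hypothetical data (not HoloViews' internal code):

```python
import pandas as pd

df = pd.DataFrame({
    'Year':    [1930, 1930, 1930, 1930],
    'Week':    [1, 2, 1, 2],
    'State':   ['Texas', 'Texas', 'Ohio', 'Ohio'],
    'measles': [12.0, 14.0, 7.0, 9.0],
})

# Averaging over Week: group by the kdims we keep (Year, State)
# and take the mean of the vdims.
yearly = (df.drop(columns='Week')
            .groupby(['Year', 'State'], as_index=False)
            .mean())
print(yearly)
```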
(We'll cover aggregations like ``np.mean`` in detail later, but here the important bit is simply that the ``Week`` dimension can now be ignored.)The ``repr`` shows us both the ``kdims`` (in square brackets) and the ``vdims`` (in parentheses) of the ``Dataset``. Because it can hold arbitrary combinations of dimensions,...
%%opts Curve [width=600 height=250] {+framewise} (ds.to(hv.Curve, 'Year', 'measles') + ds.to(hv.Curve, 'Year', 'pertussis')).cols(1)
_____no_output_____
BSD-3-Clause
examples/getting_started/3-Tabular_Datasets.ipynb
adsbxchange/holoviews
Here we specified two ``Curve`` elements showing measles and pertussis incidence respectively (the vdims), per year (the kdim), and laid them out in a vertical column. You'll notice that even though we specified only the short name for the value dimensions, the plot shows the longer names ("Measles Incidence", "Pertus...
%%opts Bars [width=800 height=400 tools=['hover'] group_index=1 legend_position='top_left'] states = ['New York', 'New Jersey', 'California', 'Texas'] ds.select(State=states, Year=(1980, 1990)).to(hv.Bars, ['Year', 'State'], 'measles').sort()
_____no_output_____
BSD-3-Clause
examples/getting_started/3-Tabular_Datasets.ipynb
adsbxchange/holoviews
Faceting. Above we already saw what happens to key dimensions that we didn't explicitly assign to the Element using the ``.to`` method: they are grouped over, popping up a set of widgets so the user can select the values to show at any one time. However, using widgets is not always the most effective way to view the dat...
%%opts Curve [width=200] (color='indianred') grouped = ds.select(State=states, Year=(1930, 2005)).to(hv.Curve, 'Year', 'measles') grouped.grid('State')
_____no_output_____
BSD-3-Clause
examples/getting_started/3-Tabular_Datasets.ipynb
adsbxchange/holoviews
Or we can take the same grouped object and ``.overlay`` the individual curves instead of laying them out in a grid:
%%opts Curve [width=600] (color=Cycle(values=['indianred', 'slateblue', 'lightseagreen', 'coral'])) grouped.overlay('State')
_____no_output_____
BSD-3-Clause
examples/getting_started/3-Tabular_Datasets.ipynb
adsbxchange/holoviews
These faceting methods even compose together, meaning that if we had more key dimensions we could ``.overlay`` one dimension, ``.grid`` another and have a widget for any other remaining key dimensions. Aggregating. Instead of selecting a subset of the data, another common operation supported by HoloViews is computing ag...
%%opts Curve [width=600] agg = ds.aggregate('Year', function=np.mean, spreadfn=np.std) (hv.Curve(agg) * hv.ErrorBars(agg,vdims=['measles', 'measles_std'])).redim.range(measles=(0, None))
_____no_output_____
BSD-3-Clause
examples/getting_started/3-Tabular_Datasets.ipynb
adsbxchange/holoviews
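The `aggregate('Year', function=np.mean, spreadfn=np.std)` call above produces, for each Year, both the mean across the grouped-over dimension and its spread (stored as `measles_std`). A pandas sketch of the same computation on hypothetical data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Year':    [1930, 1930, 1931, 1931],
    'State':   ['Texas', 'Ohio', 'Texas', 'Ohio'],
    'measles': [12.0, 8.0, 6.0, 2.0],
})

# Mean incidence per Year plus its spread across States, mirroring
# aggregate('Year', function=np.mean, spreadfn=np.std).
agg = df.groupby('Year')['measles'].agg(
    measles='mean',
    measles_std=lambda s: np.std(s),  # population std, like np.std
)
print(agg)
```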
First steps with xmovie
import warnings import matplotlib.pyplot as plt import xarray as xr from shapely.errors import ShapelyDeprecationWarning from xmovie import Movie warnings.filterwarnings( action='ignore', category=ShapelyDeprecationWarning, # in cartopy ) warnings.filterwarnings( action="ignore", category=UserWarning...
_____no_output_____
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
Basics
# Load test dataset ds = xr.tutorial.open_dataset('air_temperature').isel(time=slice(0, 150)) # Create movie object mov = Movie(ds.air)
_____no_output_____
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
Preview movie frames
# Preview 10th frame mov.preview(10) plt.savefig("movie_preview.png") ! rm -f frame*.png *.mp4 *.gif
rm: cannot remove 'frame*.png': No such file or directory rm: cannot remove '*.mp4': No such file or directory rm: cannot remove '*.gif': No such file or directory
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
Create movie files
mov.save('movie.mp4') # Use to save a high quality mp4 movie mov.save('movie_gif.gif') # Use to save a gif
Movie created at movie.mp4 Movie created at movie_mp4.mp4 GIF created at movie_gif.gif
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie