Columns: INSTRUCTION (string, lengths 1–46.3k) and RESPONSE (string, lengths 75–80.2k).
Applies a dropout mask whose size is determined by the passed argument 'sz'. Args: x (nn.Variable): A torch Variable object sz (tuple(int, int, int)): The expected size of the new tensor dropout (float): The dropout fraction to apply This method uses the Bernoulli distribution to decide whi...
def dropout_mask(x, sz, dropout): """ Applies a dropout mask whose size is determined by the passed argument 'sz'. Args: x (nn.Variable): A torch Variable object sz (tuple(int, int, int)): The expected size of the new tensor dropout (float): The dropout fraction to apply This method use...
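A minimal sketch of such a mask in plain PyTorch (`dropout_mask_sketch` is an illustrative name, not the library function): draw a Bernoulli keep-mask of shape `sz` and rescale by 1/(1 - dropout) so the expected activation magnitude is unchanged (inverted dropout).

```python
import torch

def dropout_mask_sketch(x: torch.Tensor, sz, dropout: float) -> torch.Tensor:
    # Keep each element with probability 1 - dropout, on x's device/dtype.
    mask = x.new_empty(*sz).bernoulli_(1 - dropout)
    return mask / (1 - dropout)   # rescale so the expected value of x * mask equals that of x

x = torch.randn(5, 3, 4)
m = dropout_mask_sketch(x, (1, 3, 4), 0.5)   # broadcasts over the batch dim
print((x * m).shape)                         # torch.Size([5, 3, 4])
```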
For each string defined in self.weights, the corresponding attribute in the wrapped module is referenced, then deleted, and subsequently registered as a new parameter with a slightly modified name. Args: None Returns: None
def _setup(self): """ For each string defined in self.weights, the corresponding attribute in the wrapped module is referenced, then deleted, and subsequently registered as a new parameter with a slightly modified name. Args: None Returns: None ...
Uses pytorch's built-in dropout function to apply dropout to the parameters of the wrapped module. Args: None Returns: None
def _setweights(self): """ Uses pytorch's built-in dropout function to apply dropout to the parameters of the wrapped module. Args: None Returns: None """ for name_w in self.weights: raw_w = getattr(self.module, name_w + '_raw') ...
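Putting `_setup` and `_setweights` together, a hedged sketch of the whole wrapper pattern (an AWD-LSTM-style weight drop; `WeightDropSketch` is an illustrative name, and cuDNN weight-flattening caveats are ignored):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropSketch(nn.Module):
    "Dropout on selected weights of a wrapped module, re-applied each forward."
    def __init__(self, module, dropout, weights=('weight_hh_l0',)):
        super().__init__()
        self.module, self.dropout, self.weights = module, dropout, weights
        for name_w in self.weights:
            w = getattr(self.module, name_w)
            del self.module._parameters[name_w]   # remove the original parameter...
            self.module.register_parameter(name_w + '_raw', nn.Parameter(w.data))

    def _setweights(self):
        for name_w in self.weights:
            raw_w = getattr(self.module, name_w + '_raw')
            # ...and rebuild it with dropout applied before every forward pass.
            setattr(self.module, name_w,
                    F.dropout(raw_w, p=self.dropout, training=self.training))

    def forward(self, *args):
        self._setweights()
        return self.module(*args)

rnn = WeightDropSketch(nn.LSTM(10, 20), dropout=0.5)
out, _ = rnn(torch.randn(16, 1, 10))   # (seq_len, batch, input_size)
```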
Load a saved `DataBunch` from `path/file`. `file` can be file-like (file or buffer)
def load_data(path:PathOrStr, file:PathLikeOrBinaryStream='data_save.pkl', bs:int=64, val_bs:int=None, num_workers:int=defaults.cpus, dl_tfms:Optional[Collection[Callable]]=None, device:torch.device=None, collate_fn:Callable=data_collate, no_check:bool=False, **kwargs)->DataBunch: "Load ...
Create a `DataBunch` from `train_ds`, `valid_ds` and maybe `test_ds` with a batch size of `bs`. Passes `**dl_kwargs` to `DataLoader()`
def create(cls, train_ds:Dataset, valid_ds:Dataset, test_ds:Optional[Dataset]=None, path:PathOrStr='.', bs:int=64, val_bs:int=None, num_workers:int=defaults.cpus, dl_tfms:Optional[Collection[Callable]]=None, device:torch.device=None, collate_fn:Callable=data_collate, no_check:bool=False, *...
Returns the appropriate `DeviceDataLoader` for validation, training, or test (`ds_type`).
def dl(self, ds_type:DatasetType=DatasetType.Valid)->DeviceDataLoader: "Returns the appropriate `DeviceDataLoader` for validation, training, or test (`ds_type`)." #TODO: refactor return (self.train_dl if ds_type == DatasetType.Train else self.test_dl if ds_type == DatasetType.Test else ...
Returns a list of all DeviceDataLoaders. If you need a specific DeviceDataLoader, access via the relevant property (`train_dl`, `valid_dl`, etc) as the index of DLs in this list is not guaranteed to remain constant.
def dls(self)->List[DeviceDataLoader]: "Returns a list of all DeviceDataLoaders. If you need a specific DeviceDataLoader, access via the relevant property (`train_dl`, `valid_dl`, etc) as the index of DLs in this list is not guaranteed to remain constant." res = [self.train_dl, self.fix_dl, self.single_...
Save the `DataBunch` in `self.path/file`. `file` can be file-like (file or buffer)
def save(self, file:PathLikeOrBinaryStream= 'data_save.pkl')->None: "Save the `DataBunch` in `self.path/file`. `file` can be file-like (file or buffer)" if not getattr(self, 'label_list', False): warn("Serializing the `DataBunch` only works when you created it using the data block API.") ...
Get one batch from the data loader of `ds_type`. Optionally `detach` and `denorm`.
def one_batch(self, ds_type:DatasetType=DatasetType.Train, detach:bool=True, denorm:bool=True, cpu:bool=True)->Collection[Tensor]: "Get one batch from the data loader of `ds_type`. Optionally `detach` and `denorm`." dl = self.dl(ds_type) w = self.num_workers self.num_workers = 0 ...
Get `item` into a batch. Optionally `detach` and `denorm`.
def one_item(self, item, detach:bool=False, denorm:bool=False, cpu:bool=False): "Get `item` into a batch. Optionally `detach` and `denorm`." ds = self.single_ds with ds.set_item(item): return self.one_batch(ds_type=DatasetType.Single, detach=detach, denorm=denorm, cpu=cpu)
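Assuming `data` is a `DataBunch` built with the data block API, a typical inspection flow using these two methods might look like (illustrative, not from the source):

```python
x, y = data.one_batch(ds_type=DatasetType.Train)   # detached, denormalized by default
print(x.shape, y.shape)
xb, yb = data.one_item(data.train_ds[0][0])        # wrap a single item as a batch of one
```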
Show a batch of data in `ds_type` on a few `rows`.
def show_batch(self, rows:int=5, ds_type:DatasetType=DatasetType.Train, reverse:bool=False, **kwargs)->None: "Show a batch of data in `ds_type` on a few `rows`." x,y = self.one_batch(ds_type, True, True) if reverse: x,y = x.flip(0),y.flip(0) n_items = rows **2 if self.train_ds.x._square_...
Export the minimal state of `self` for inference in `self.path/file`. `file` can be file-like (file or buffer)
def export(self, file:PathLikeOrBinaryStream='export.pkl'): "Export the minimal state of `self` for inference in `self.path/file`. `file` can be file-like (file or buffer)" xtra = dict(normalize=self.norm.keywords) if getattr(self, 'norm', False) else {} try_save(self.valid_ds.get_state(**xtra),...
Check the underlying data in the training set can be properly loaded.
def sanity_check(self): "Check the underlying data in the training set can be properly loaded." final_message = "You can deactivate this warning by passing `no_check=True`." if not hasattr(self.train_ds, 'items') or len(self.train_ds.items) == 0 or not hasattr(self.train_dl, 'batch_sampler'): re...
Instantiate a `OneCycleScheduler` with `lr_max`.
def one_cycle_scheduler(lr_max:float, **kwargs:Any)->OneCycleScheduler: "Instantiate a `OneCycleScheduler` with `lr_max`." return partial(OneCycleScheduler, lr_max=lr_max, **kwargs)
Fit a model following the 1cycle policy.
def fit_one_cycle(learn:Learner, cyc_len:int, max_lr:Union[Floats,slice]=defaults.lr, moms:Tuple[float,float]=(0.95,0.85), div_factor:float=25., pct_start:float=0.3, final_div:float=None, wd:float=None, callbacks:Optional[CallbackList]=None, tot_epochs:int=None, start_epoch:int=None)...
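Typical usage, assuming `learn` is a `Learner` (the hyperparameter values here are illustrative; passing a `slice` for `max_lr` spreads discriminative learning rates across layer groups):

```python
learn.fit_one_cycle(5, max_lr=slice(1e-5, 1e-3), moms=(0.95, 0.85))
```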
Explore lr from `start_lr` to `end_lr` over `num_it` iterations in `learn`. If `stop_div`, stops when loss diverges.
def lr_find(learn:Learner, start_lr:Floats=1e-7, end_lr:Floats=10, num_it:int=100, stop_div:bool=True, wd:float=None): "Explore lr from `start_lr` to `end_lr` over `num_it` iterations in `learn`. If `stop_div`, stops when loss diverges." start_lr = learn.lr_range(start_lr) start_lr = np.array(start_lr) if i...
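A common workflow built on this function (assuming `learn` is a `Learner`): run the sweep with the defaults above, then read a good `max_lr` off the recorded curve.

```python
learn.lr_find()         # lr swept from 1e-7 to 10 over 100 iterations
learn.recorder.plot()   # pick max_lr somewhat before the loss minimum
```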
Put `learn` in FP16 precision mode.
def to_fp16(learn:Learner, loss_scale:float=None, max_noskip:int=1000, dynamic:bool=True, clip:float=None, flat_master:bool=False, max_scale:float=2**24)->Learner: "Put `learn` in FP16 precision mode." learn.to_fp32() learn.model = model2half(learn.model) learn.data.add_tfm(batch_to_half) ...
Put `learn` back to FP32 precision mode.
def to_fp32(learn:Learner): "Put `learn` back to FP32 precision mode." learn.data.remove_tfm(batch_to_half) for cb in learn.callbacks: if isinstance(cb, MixedPrecision): learn.callbacks.remove(cb) learn.model = learn.model.float() return learn
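The two calls are designed to round-trip; a typical mixed-precision session (illustrative values):

```python
learn = learn.to_fp16(dynamic=True)   # half-precision model, dynamic loss scaling
learn.fit_one_cycle(1)
learn = learn.to_fp32()               # back to full precision, e.g. before export
```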
Add mixup https://arxiv.org/abs/1710.09412 to `learn`.
def mixup(learn:Learner, alpha:float=0.4, stack_x:bool=False, stack_y:bool=True) -> Learner: "Add mixup https://arxiv.org/abs/1710.09412 to `learn`." learn.callback_fns.append(partial(MixUpCallback, alpha=alpha, stack_x=stack_x, stack_y=stack_y)) return learn
Add gradient clipping of `clip` during training.
def clip_grad(learn:Learner, clip:float=0.1)->Learner: "Add gradient clipping of `clip` during training." learn.callback_fns.append(partial(GradientClipping, clip=clip)) return learn
Create a `ClassificationInterpretation` object from `learner` on `ds_type` with `tta`.
def _learner_interpret(learn:Learner, ds_type:DatasetType=DatasetType.Valid): "Create a `ClassificationInterpretation` object from `learner` on `ds_type` with `tta`." return ClassificationInterpretation.from_learner(learn, ds_type=ds_type)
If we have `last_metrics`, plot them in our pbar graph.
def on_epoch_end(self, n_epochs:int, last_metrics:MetricsList, **kwargs)->bool: "If we have `last_metrics`, plot them in our pbar graph." if last_metrics is not None and np.any(last_metrics): rec = self.learn.recorder iters = range_of(rec.losses) val_iter = np.array(rec...
Clip the gradient before the optimizer step.
def on_backward_end(self, **kwargs): "Clip the gradient before the optimizer step." if self.clip: nn.utils.clip_grad_norm_(self.learn.model.parameters(), self.clip)
Check whether the loss function's `reduction` is set to 'sum'.
def on_train_begin(self, **kwargs): "Check whether the loss function's `reduction` is set to 'sum'." if hasattr(self.loss_func, "reduction") and (self.loss_func.reduction != "sum"): warn("For better gradients consider 'reduction=sum'")
Accumulate samples and batches.
def on_batch_begin(self, last_input, last_target, **kwargs): "Accumulate samples and batches." self.acc_samples += last_input.shape[0] self.acc_batches += 1
Perform the accumulated step and reset samples; returning True results in no optimizer step.
def on_backward_end(self, **kwargs): "Perform the accumulated step and reset samples; returning True results in no optimizer step." if (self.acc_batches % self.n_step) == 0: for p in (self.learn.model.parameters()): if p.requires_grad: p.grad.div_(self.acc_samples) self.acc_samples = 0 ...
Step on the remaining accumulated gradients if the batch count is not perfectly divisible by `n_step`.
def on_epoch_end(self, **kwargs): "Step on the remaining accumulated gradients if the batch count is not perfectly divisible by `n_step`." for p in (self.learn.model.parameters()): if p.requires_grad: p.grad.div_(self.acc_samples) if not self.drop_last: self.learn.opt.step() self.learn.opt.zero_grad()
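In plain PyTorch, the accumulation logic these callbacks implement reduces to the following sketch (`model`, `opt`, `loss_func`, and `train_dl` are assumed stand-ins for the Learner internals; losses are sum-reduced, per the warning above):

```python
def train_accumulated(model, opt, loss_func, train_dl, n_step: int = 4):
    acc_samples, acc_batches = 0, 0
    opt.zero_grad()
    for xb, yb in train_dl:
        loss_func(model(xb), yb).backward()   # grads accumulate across batches
        acc_samples += xb.shape[0]
        acc_batches += 1
        if acc_batches % n_step == 0:         # step every n_step batches...
            for p in model.parameters():
                if p.requires_grad and p.grad is not None:
                    p.grad.div_(acc_samples)  # ...averaging over accumulated samples
            opt.step(); opt.zero_grad()
            acc_samples = 0
    if acc_samples > 0:                       # leftover batches, as in on_epoch_end
        for p in model.parameters():
            if p.requires_grad and p.grad is not None: p.grad.div_(acc_samples)
        opt.step(); opt.zero_grad()
```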
Create an instance of `ClassificationInterpretation`
def from_learner(cls, learn: Learner, ds_type:DatasetType=DatasetType.Valid): "Create an instance of `ClassificationInterpretation`" preds = learn.get_preds(ds_type=ds_type, with_loss=True) return cls(learn, *preds)
Confusion matrix as an `np.ndarray`.
def confusion_matrix(self, slice_size:int=1): "Confusion matrix as an `np.ndarray`." x=torch.arange(0,self.data.c) if slice_size is None: cm = ((self.pred_class==x[:,None]) & (self.y_true==x[:,None,None])).sum(2) else: cm = torch.zeros(self.data.c, self.data.c, dtype=x.dtype)...
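The `slice_size is None` branch is a pure broadcasting trick; a self-contained reproduction with made-up labels (3 classes, 6 samples):

```python
import torch

pred_class = torch.tensor([0, 1, 1, 2, 2, 2])
y_true     = torch.tensor([0, 1, 2, 2, 2, 0])
x = torch.arange(0, 3)
# (c,1)==(n,) -> (c,n) "predicted j"; (c,1,1)==(n,) -> (c,1,n) "true i"; AND -> (c,c,n).
cm = ((pred_class == x[:, None]) & (y_true == x[:, None, None])).sum(2)
print(cm)   # cm[i, j] = number of samples with true class i predicted as j
```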
Plot the confusion matrix, with `title` and using `cmap`.
def plot_confusion_matrix(self, normalize:bool=False, title:str='Confusion matrix', cmap:Any="Blues", slice_size:int=1, norm_dec:int=2, plot_txt:bool=True, return_fig:bool=None, **kwargs)->Optional[plt.Figure]: "Plot the confusion matrix, with `title` and using `cmap`." # T...
Sorted descending list of largest non-diagonal entries of confusion matrix, presented as actual, predicted, number of occurrences.
def most_confused(self, min_val:int=1, slice_size:int=1)->Collection[Tuple[str,str,int]]: "Sorted descending list of largest non-diagonal entries of confusion matrix, presented as actual, predicted, number of occurrences." cm = self.confusion_matrix(slice_size=slice_size) np.fill_diagonal(cm, 0)...
`k` largest(/smallest) losses and indexes, defaulting to all losses (sorted by `largest`).
def top_losses(self, k:int=None, largest=True): "`k` largest(/smallest) losses and indexes, defaulting to all losses (sorted by `largest`)." return self.losses.topk(ifnone(k, len(self.losses)), largest=largest)
Calculates the F-beta score (the weighted harmonic mean of precision and recall). This is the micro averaged version where the true positives, false negatives, and false positives are calculated globally (as opposed to on a per label basis). beta == 1 places equal weight on precision and recall, beta < 1 empha...
def fbeta(log_preds, targs, beta, thresh=0.5, epsilon=1e-8): """Calculates the F-beta score (the weighted harmonic mean of precision and recall). This is the micro averaged version where the true positives, false negatives and false positives are calculated globally (as opposed to on a per label basis). ...
See `fbeta`.
def fbeta_np(preds, targs, beta, thresh=0.5, epsilon=1e-8): """ See `fbeta`. """ assert beta > 0, 'beta needs to be greater than 0' beta2 = beta ** 2 rec = recall_np(preds, targs, thresh) prec = precision_np(preds, targs, thresh) return (1 + beta2) * prec * rec / (beta2 * prec + rec + epsilon)
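Plugging numbers into the closing formula (ignoring `epsilon`): with precision 0.5, recall 1.0, and a recall-weighted beta of 2,

```python
beta2 = 2 ** 2
prec, rec = 0.5, 1.0
print((1 + beta2) * prec * rec / (beta2 * prec + rec))   # 2.5 / 3.0 ≈ 0.833
```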
Distributed training of Imagenet. Fastest speed is if you run with: python -m fastai.launch
def main( gpu:Param("GPU to run on", str)=None ): """Distributed training of Imagenet. Fastest speed is if you run with: python -m fastai.launch""" path = Path('/mnt/fe2_disk/') tot_epochs,size,bs,lr = 60,224,256,3e-1 dirname = 'imagenet' gpu = setup_distrib(gpu) if gpu is None: bs *= torch.cud...
Get the metadata associated with `arch`.
def cnn_config(arch): "Get the metadata associated with `arch`." torch.backends.cudnn.benchmark = True return model_meta.get(arch, _default_meta)
Cut off the body of a typically pretrained `model` at `cut` (int) or cut the model as specified by `cut(model)` (function).
def create_body(arch:Callable, pretrained:bool=True, cut:Optional[Union[int, Callable]]=None): "Cut off the body of a typically pretrained `model` at `cut` (int) or cut the model as specified by `cut(model)` (function)." model = arch(pretrained) cut = ifnone(cut, cnn_config(arch)['cut']) if cut is None:...
Model head that takes `nf` features, runs through `lin_ftrs`, and ends with `nc` classes.
def create_head(nf:int, nc:int, lin_ftrs:Optional[Collection[int]]=None, ps:Floats=0.5, concat_pool:bool=True, bn_final:bool=False): "Model head that takes `nf` features, runs through `lin_ftrs`, and ends with `nc` classes." lin_ftrs = [nf, 512, nc] if lin_ftrs is None else [nf] + lin_ftrs + [nc] ...
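A rough sketch of the head this builds for the default `lin_ftrs = [nf, 512, nc]`, with `AdaptiveAvgPool2d` standing in for fastai's `AdaptiveConcatPool2d` (the real layer doubles the feature count by concatenating average and max pooling):

```python
import torch.nn as nn

def create_head_sketch(nf: int, nc: int, p: float = 0.5) -> nn.Sequential:
    return nn.Sequential(
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # body output -> (bs, nf)
        nn.BatchNorm1d(nf), nn.Dropout(p / 2),
        nn.Linear(nf, 512), nn.ReLU(inplace=True),
        nn.BatchNorm1d(512), nn.Dropout(p),
        nn.Linear(512, nc),                      # final classifier layer
    )
```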
Create custom convnet architecture
def create_cnn_model(base_arch:Callable, nc:int, cut:Union[int,Callable]=None, pretrained:bool=True, lin_ftrs:Optional[Collection[int]]=None, ps:Floats=0.5, custom_head:Optional[nn.Module]=None, split_on:Optional[SplitFuncOrIdxList]=None, bn_final:bool=False, concat_pool:bool=True): "Create custom c...
Build convnet style learner.
def cnn_learner(data:DataBunch, base_arch:Callable, cut:Union[int,Callable]=None, pretrained:bool=True, lin_ftrs:Optional[Collection[int]]=None, ps:Floats=0.5, custom_head:Optional[nn.Module]=None, split_on:Optional[SplitFuncOrIdxList]=None, bn_final:bool=False, init=nn.init.kaiming_norm...
Build Unet learner from `data` and `arch`.
def unet_learner(data:DataBunch, arch:Callable, pretrained:bool=True, blur_final:bool=True, norm_type:Optional[NormType]=NormType, split_on:Optional[SplitFuncOrIdxList]=None, blur:bool=False, self_attention:bool=False, y_range:Optional[Tuple[float,float]]=None, last_cross:bool=True, ...
Create an instance of `ClassificationInterpretation`. `tta` indicates if we want to use Test Time Augmentation.
def _cl_int_from_learner(cls, learn:Learner, ds_type:DatasetType=DatasetType.Valid, tta=False): "Create an instance of `ClassificationInterpretation`. `tta` indicates if we want to use Test Time Augmentation." preds = learn.TTA(ds_type=ds_type, with_loss=True) if tta else learn.get_preds(ds_type=ds_type, with_l...
Show images in `top_losses` along with their prediction, actual, loss, and probability of actual class.
def _cl_int_plot_top_losses(self, k, largest=True, figsize=(12,12), heatmap:bool=True, heatmap_thresh:int=16, return_fig:bool=None)->Optional[plt.Figure]: "Show images in `top_losses` along with their prediction, actual, loss, and probability of actual class." tl_val,tl_idx = self.to...
Show images in `top_losses` along with their prediction, actual, loss, and probability of predicted class in a multilabeled dataset.
def _cl_int_plot_multi_top_losses(self, samples:int=3, figsize:Tuple[int,int]=(8,8), save_misclassified:bool=False): "Show images in `top_losses` along with their prediction, actual, loss, and probability of predicted class in a multilabeled dataset." if samples >20: print("Max 20 samples") retu...
Gets indices with top losses.
def from_toplosses(cls, learn, n_imgs=None, **kwargs): "Gets indices with top losses." train_ds, train_idxs = cls.get_toplosses_idxs(learn, n_imgs, **kwargs) return train_ds, train_idxs
Sorts `ds_type` dataset by top losses and returns dataset and sorted indices.
def get_toplosses_idxs(cls, learn, n_imgs, **kwargs): "Sorts `ds_type` dataset by top losses and returns dataset and sorted indices." dl = learn.data.fix_dl if not n_imgs: n_imgs = len(dl.dataset) _,_,top_losses = learn.get_preds(ds_type=DatasetType.Fix, with_loss=True) idxs = to...
For a LabelList `ll_input`, resize each image to `size` using `resize_method` and `padding_mode`.
def padded_ds(ll_input, size=(250, 300), resize_method=ResizeMethod.CROP, padding_mode='zeros', **kwargs): "For a LabelList `ll_input`, resize each image to `size` using `resize_method` and `padding_mode`." return ll_input.transform(tfms=crop_pad(), size=size, resize_method=resize_method, padding_mode=p...
Gets the indices for the most similar images.
def from_similars(cls, learn, layer_ls:list=[0, 7, 2], **kwargs): "Gets the indices for the most similar images." train_ds, train_idxs = cls.get_similars_idxs(learn, layer_ls, **kwargs) return train_ds, train_idxs
Gets the indices for the most similar images in the `ds_type` dataset.
def get_similars_idxs(cls, learn, layer_ls, **kwargs): "Gets the indices for the most similar images in the `ds_type` dataset." hook = hook_output(learn.model[layer_ls[0]][layer_ls[1]][layer_ls[2]]) dl = learn.data.fix_dl ds_actns = cls.get_actns(learn, hook=hook, dl=dl, **kwargs) si...
Gets activations at the layer specified by `hook`, applies `pool` of dim `pool_dim` and concatenates
def get_actns(learn, hook:Hook, dl:DataLoader, pool=AdaptiveConcatPool2d, pool_dim:int=4, **kwargs): "Gets activations at the layer specified by `hook`, applies `pool` of dim `pool_dim` and concatenates" print('Getting activations...') actns = [] learn.model.eval() with torch.no...
Computes the similarity function between each embedding of `t1` and `t2` matrices.
def comb_similarity(t1: torch.Tensor, t2: torch.Tensor, **kwargs): # https://github.com/pytorch/pytorch/issues/11202 "Computes the similarity function between each embedding of `t1` and `t2` matrices." print('Computing similarities...') w1 = t1.norm(p=2, dim=1, keepdim=True) w2 ...
Returns the `n` largest indices from a numpy array `arr`.
def largest_indices(arr, n): "Returns the `n` largest indices from a numpy array `arr`." #https://stackoverflow.com/questions/6910641/how-do-i-get-indices-of-n-maximum-values-in-a-numpy-array flat = arr.flatten() indices = np.argpartition(flat, -n)[-n:] indices = indices[np.argso...
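A completed, self-contained version of the `argpartition` trick (following the Stack Overflow reference in the source; `largest_indices_sketch` is an illustrative name):

```python
import numpy as np

def largest_indices_sketch(arr: np.ndarray, n: int):
    flat = arr.flatten()
    indices = np.argpartition(flat, -n)[-n:]       # n largest, in arbitrary order
    indices = indices[np.argsort(-flat[indices])]  # reorder descending by value
    return np.unravel_index(indices, arr.shape)    # back to multi-dim indices

a = np.array([[0.1, 0.9], [0.7, 0.3]])
print(largest_indices_sketch(a, 2))   # (array([0, 1]), array([1, 0]))
```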
Sorts `similarities` and returns the indexes in pairs ordered by highest similarity.
def sort_idxs(cls, similarities): "Sorts `similarities` and returns the indexes in pairs ordered by highest similarity." idxs = cls.largest_indices(similarities, len(similarities)) idxs = [(idxs[0][i], idxs[1][i]) for i in range(len(idxs[0]))] return [e for l in idxs for e in l]
Returns an image widget for specified file name `img`.
def make_img_widget(cls, img, layout=Layout(), format='jpg'): "Returns an image widget for specified file name `img`." return widgets.Image(value=img, format=format, layout=layout)
Return a Button widget with specified `handler`.
def make_button_widget(cls, label, file_path=None, handler=None, style=None, layout=Layout(width='auto')): "Return a Button widget with specified `handler`." btn = widgets.Button(description=label, layout=layout) if handler is not None: btn.on_click(handler) if style is not None: btn.but...
Return a Dropdown widget with specified `handler`.
def make_dropdown_widget(cls, description='Description', options=['Label 1', 'Label 2'], value='Label 1', file_path=None, layout=Layout(), handler=None): "Return a Dropdown widget with specified `handler`." dd = widgets.Dropdown(description=description, options=options, value...
Make a horizontal box with `children` and `layout`.
def make_horizontal_box(cls, children, layout=Layout()): "Make a horizontal box with `children` and `layout`." return widgets.HBox(children, layout=layout)
Make a vertical box with `children` and `layout`.
def make_vertical_box(cls, children, layout=Layout(), duplicates=False): "Make a vertical box with `children` and `layout`." if not duplicates: return widgets.VBox(children, layout=layout) else: return widgets.VBox([children[0], children[2]], layout=layout)
Create a list of images, filenames, and labels, first removing files that are not supposed to be displayed.
def create_image_list(self, dataset, fns_idxs): "Create a list of images, filenames, and labels, first removing files that are not supposed to be displayed." items = dataset.x.items if self._duplicates: chunked_idxs = chunks(fns_idxs, 2) chunked_idxs = [chunk for chunk ...
Relabel images by moving from parent dir with old label `class_old` to parent dir with new label `class_new`.
def relabel(self, change): "Relabel images by moving from parent dir with old label `class_old` to parent dir with new label `class_new`." class_new,class_old,file_path = change.new,change.old,change.owner.file_path fp = Path(file_path) parent = fp.parents[1] self._csv_dict[fp] =...
Handler for 'Next Batch' button click. Deletes all flagged images and renders the next batch.
def next_batch(self, _): "Handler for 'Next Batch' button click. Deletes all flagged images and renders the next batch." for img_widget, delete_btn, fp, in self._batch: fp = delete_btn.file_path if (delete_btn.flagged_for_delete == True): self.delete_image(fp) ...
Flag this image as delete or keep.
def on_delete(self, btn): "Flag this image as delete or keep." btn.button_style = "" if btn.flagged_for_delete else "danger" btn.flagged_for_delete = not btn.flagged_for_delete
Create and format widget set.
def get_widgets(self, duplicates): "Create and format widget set." widgets = [] for (img,fp,human_readable_label) in self._all_images[:self._batch_size]: img_widget = self.make_img_widget(img, layout=Layout(height='250px', width='300px')) dropdown = self.make_dropdown_wid...
Check if current batch contains already deleted images.
def batch_contains_deleted(self): "Check if current batch contains already deleted images." if not self._duplicates: return False imgs = [self._all_images[:self._batch_size][0][1], self._all_images[:self._batch_size][1][1]] return any(img in self._deleted_fns for img in imgs)
Re-render Jupyter cell for batch of images.
def render(self): "Re-render Jupyter cell for batch of images." clear_output() self.write_csv() if self.empty() and self._skipped>0: return display(f'No images to show :). {self._skipped} pairs were ' f'skipped since at least one of the images was deleted ...
Shift the line i of `x` by p-i elements to the left; if `mask`, puts 0s on the diagonal.
def _line_shift(x:Tensor, mask:bool=False): "Shift the line i of `x` by p-i elements to the left; if `mask`, puts 0s on the diagonal." bs,nh,n,p = x.size() x_pad = torch.cat([x.new_zeros(bs,nh,n,1), x], dim=3) x_shift = x_pad.view(bs,nh,p + 1,n)[:,:,1:].view_as(x) if mask: x_shift.mul_(torch.tril(x.n...
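A worked example of the pad-and-reshape trick on a tiny tensor (bs = nh = 1 so the shapes stay readable): each row ends up shifted one position further left than the row below it, which is how relative-position scores get realigned.

```python
import torch

x = torch.arange(12.).view(1, 1, 3, 4)                     # bs=1, nh=1, n=3, p=4
bs, nh, n, p = x.size()
x_pad = torch.cat([x.new_zeros(bs, nh, n, 1), x], dim=3)   # prepend a zero column
x_shift = x_pad.view(bs, nh, p + 1, n)[:, :, 1:].view_as(x)
print(x_shift[0, 0])
# tensor([[ 2.,  3.,  0.,  4.],
#         [ 5.,  6.,  7.,  0.],
#         [ 8.,  9., 10., 11.]])
```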
Split a RNN `model` in groups for differential learning rates.
def tfmer_lm_split(model:nn.Module) -> List[nn.Module]: "Split a RNN `model` in groups for differential learning rates." encoder = model[0] n = len(encoder.layers)//3 groups = [list(encoder.layers[:n]), list(encoder.layers[n:2*n]), list(encoder.layers[2*n:])] return groups + [[encoder.encoder, model...
Split a RNN `model` in groups for differential learning rates.
def tfmer_clas_split(model:nn.Module) -> List[nn.Module]: "Split a RNN `model` in groups for differential learning rates." encoder = model[0].module n = len(encoder.layers)//3 groups = [[encoder.encoder], list(encoder.layers[:n]), list(encoder.layers[n:2*n]), list(encoder.layers[2*n:])] return group...
Split a RNN `model` in groups for differential learning rates.
def tfmerXL_lm_split(model:nn.Module) -> List[nn.Module]: "Split a RNN `model` in groups for differential learning rates." encoder = model[0] n = len(encoder.layers)//3 groups = [list(encoder.layers[:n]) + [ParameterModule(encoder.u), ParameterModule(encoder.v)]] return groups + [list(encoder.layers...
Reset the internal memory.
def reset(self): "Reset the internal memory." self.hidden = [next(self.parameters()).data.new(0) for i in range(self.n_layers+1)]
Make report in form of two notebooks. Use nbdime diff-web to present the difference between reference cells and test cells.
def make_report(self, outcome): """Make report in form of two notebooks. Use nbdime diff-web to present the difference between reference cells and test cells. """ failures = self.getreports('failed') if not failures: return for rep in failures: ...
Convert BatchNorm layers to have parameters in single precision. Find all such layers and convert them back to float. This can't be done with the built-in .apply, as that function would apply fn to all modules, parameters, and buffers; thus we wouldn't be able to guard the float conversion based on the module type.
def batchnorm_to_fp32(module): ''' Convert BatchNorm layers to have parameters in single precision. Find all such layers and convert them back to float. This can't be done with the built-in .apply, as that function would apply fn to all modules, parameters, and buffers. Thus we wouldn't be able to guard the float ...
Creates an FP32 copy of model parameters and sets optimizer parameters.
def copy_model_to_fp32(m, optim): """ Creates an FP32 copy of model parameters and sets optimizer parameters """ fp32_params = [m_param.clone().type(torch.cuda.FloatTensor).detach() for m_param in trainable_params_(m)] optim_groups = [group['params'] for group in optim.param_groups] iter_fp32_params...
Start coverage reporting in kernel. Currently supported kernel languages are: - Python
def setup_coverage(config, kernel, floc, output_loc=None): """Start coverage reporting in kernel. Currently supported kernel languages are: - Python """ language = kernel.language if language.startswith('python'): # Get the pytest-cov coverage object cov = get_cov(config) ...
Finish coverage reporting in kernel. The coverage should previously have been started with setup_coverage.
def teardown_coverage(config, kernel, output_loc=None): """Finish coverage reporting in kernel. The coverage should previously have been started with setup_coverage. """ language = kernel.language if language.startswith('python'): # Teardown code does not require any input, simply execu...
Returns the coverage object of pytest-cov.
def get_cov(config): """Returns the coverage object of pytest-cov.""" # Check with hasplugin to avoid getplugin exception in older pytest. if config.pluginmanager.hasplugin('_cov'): plugin = config.pluginmanager.getplugin('_cov') if plugin.cov_controller: return plugin.cov_contr...
Create a suffix for nbval data file depending on pytest-cov config.
def _make_suffix(cov): """Create a suffix for nbval data file depending on pytest-cov config.""" # Check if coverage object has data_suffix: if cov and cov.data_suffix is not None: # If True, the suffix will be autogenerated by coverage.py. # The suffixed data files will be automatically com...
Merge nbval coverage data into pytest-cov data.
def _merge_nbval_coverage_data(cov): """Merge nbval coverage data into pytest-cov data.""" if not cov: return suffix = _make_suffix(cov) if suffix is True: # Note: If suffix is true, we are running in parallel, so several # files will be generated. This will cause some warnings ...
Yield successive `n`-sized chunks from `l`.
def chunks(l:Collection, n:int)->Iterable: "Yield successive `n`-sized chunks from `l`." for i in range(0, len(l), n): yield l[i:i+n]
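For example:

```python
list(chunks([1, 2, 3, 4, 5], 2))   # [[1, 2], [3, 4], [5]]
```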
Convert `b` to an int or list of ints (if `is_listy`); raises an exception if not convertible.
def to_int(b:Any)->Union[int,List[int]]: "Convert `b` to an int or list of ints (if `is_listy`); raises an exception if not convertible." if is_listy(b): return [to_int(x) for x in b] else: return int(b)
Return `True` if `a` is one-dimensional
def is1d(a:Collection)->bool: "Return `True` if `a` is one-dimensional" return len(a.shape) == 1 if hasattr(a, 'shape') else True
Return sorted unique values of `x`.
def uniqueify(x:Series, sort:bool=False)->List: "Return sorted unique values of `x`." res = list(OrderedDict.fromkeys(x).keys()) if sort: res.sort() return res
List of label subdirectories in imagenet-style `folder`.
def find_classes(folder:Path)->FilePathList: "List of label subdirectories in imagenet-style `folder`." classes = [d for d in folder.iterdir() if d.is_dir() and not d.name.startswith('.')] assert(len(classes)>0) return sorted(classes, key=lambda d: d.name)
Given `arrs` is [a,b,...] and a `mask` index, return [(a[mask],a[~mask]),(b[mask],b[~mask]),...].
def arrays_split(mask:NPArrayMask, *arrs:NPArrayableList)->SplitArrayList: "Given `arrs` is [a,b,...] and a `mask` index, return [(a[mask],a[~mask]),(b[mask],b[~mask]),...]." assert all([len(arr)==len(arrs[0]) for arr in arrs]), 'All arrays should have same length' mask = array(mask) return list(zip(*[(a[m...
Randomly split `arrs` with `valid_pct` ratio; good for creating a validation set.
def random_split(valid_pct:float, *arrs:NPArrayableList)->SplitArrayList: "Randomly split `arrs` with `valid_pct` ratio; good for creating a validation set." assert (valid_pct>=0 and valid_pct<=1), 'Validation set percentage should be between 0 and 1' is_train = np.random.uniform(size=(len(arrs[0]),)) > valid...
Make `p` listy and the same length as `q`.
def listify(p:OptListOrItem=None, q:OptListOrItem=None): "Make `p` listy and the same length as `q`." if p is None: p=[] elif isinstance(p, str): p = [p] elif not isinstance(p, Iterable): p = [p] #Rank 0 tensors in PyTorch are Iterable but don't have a length. else: try: a = len...
Change `name` from camel to snake style.
def camel2snake(name:str)->str: "Change `name` from camel to snake style." s1 = re.sub(_camel_re1, r'\1_\2', name) return re.sub(_camel_re2, r'\1_\2', s1).lower()
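For example:

```python
camel2snake('MixedPrecision')   # 'mixed_precision'
```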
Build log-stepped array from `start` to `stop` in `n` steps.
def even_mults(start:float, stop:float, n:int)->np.ndarray: "Build log-stepped array from `start` to `stop` in `n` steps." mult = stop/start step = mult**(1/(n-1)) return np.array([start*(step**i) for i in range(n)])
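With `n` points between two values, each step is a constant multiple; for example:

```python
even_mults(1e-6, 1e-2, 5)   # array([1.e-06, 1.e-05, 1.e-04, 1.e-03, 1.e-02])
```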
Extract the keys in `names` from the `kwargs`.
def extract_kwargs(names:Collection[str], kwargs:KWArgs): "Extract the keys in `names` from the `kwargs`." new_kwargs = {} for arg_name in names: if arg_name in kwargs: arg_val = kwargs.pop(arg_name) new_kwargs[arg_name] = arg_val return new_kwargs, kwargs
Split iterable `a` into equal parts of size `sz`.
def partition(a:Collection, sz:int)->List[Collection]: "Split iterable `a` into equal parts of size `sz`." return [a[i:i+sz] for i in range(0, len(a), sz)]
Split data in `a` equally among `n_cpus` cores
def partition_by_cores(a:Collection, n_cpus:int)->List[Collection]: "Split data in `a` equally among `n_cpus` cores" return partition(a, len(a)//n_cpus + 1)
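For example, ten items over three cores gives parts of size `len(a)//n_cpus + 1 = 4`:

```python
partition_by_cores(list(range(10)), 3)   # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```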
Categorifies the columns `col_names` in `df`.
def series2cat(df:DataFrame, *col_names): "Categorifies the columns `col_names` in `df`." for c in listify(col_names): df[c] = df[c].astype('category').cat.as_ordered()
Download `url` to `dest` unless it exists and not `overwrite`.
def download_url(url:str, dest:str, overwrite:bool=False, pbar:ProgressBar=None, show_progress=True, chunk_size=1024*1024, timeout=4, retries=5)->None: "Download `url` to `dest` unless it exists and not `overwrite`." if os.path.exists(dest) and not overwrite: return s = requests.Session() ...
Return `Path(path)/Path(fname)`, `path` defaults to current dir.
def join_path(fname:PathOrStr, path:PathOrStr='.')->Path: "Return `Path(path)/Path(fname)`, `path` defaults to current dir." return Path(path)/Path(fname)
Join `path` to every file name in `fnames`.
def join_paths(fnames:FilePathList, path:PathOrStr='.')->Collection[Path]: "Join `path` to every file name in `fnames`." path = Path(path) return [join_path(o,path) for o in fnames]
Return `ndarray` of `str` of lines of text from `path`.
def loadtxt_str(path:PathOrStr)->np.ndarray: "Return `ndarray` of `str` of lines of text from `path`." with open(path, 'r') as f: lines = f.readlines() return np.array([l.strip() for l in lines])
Save in `fname` the content of `texts`.
def save_texts(fname:PathOrStr, texts:Collection[str]): "Save in `fname` the content of `texts`." with open(fname, 'w') as f: for t in texts: f.write(f'{t}\n')
Return the column indexes of `names` in `df`.
def df_names_to_idx(names:IntsOrStrs, df:DataFrame): "Return the column indexes of `names` in `df`." if not is_listy(names): names = [names] if isinstance(names[0], int): return names return [df.columns.get_loc(c) for c in names]
One-hot encode `x` with `c` classes.
def one_hot(x:Collection[int], c:int): "One-hot encode `x` with `c` classes." res = np.zeros((c,), np.float32) res[listify(x)] = 1. return res
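For example, labels 0 and 2 out of 4 classes:

```python
one_hot([0, 2], 4)   # array([1., 0., 1., 0.], dtype=float32)
```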
Return the slice of `a` corresponding to `idxs`.
def index_row(a:Union[Collection,pd.DataFrame,pd.Series], idxs:Collection[int])->Any: "Return the slice of `a` corresponding to `idxs`." if a is None: return a if isinstance(a,(pd.DataFrame,pd.Series)): res = a.iloc[idxs] if isinstance(res,(pd.DataFrame,pd.Series)): return res.copy() ...