insightface-000 of frvt | 97.760 | 93.358 | 98.850 | 99.372 | 99.058 | 87.694 | 97.481 | - | - | - |
+
+
+(MS1M-V2 means MS1M-ArcFace, MS1M-V3 means MS1M-RetinaFace).
+
+Inference time in the table above was measured on a Tesla V100 GPU using onnxruntime-gpu==1.6.
+
+## Rules
+
+1. We have two tracks, academic and unconstrained.
+2. Please **DO NOT** register an account with messy or random characters (for both username and organization).
+3. **For academic submissions, we recommend setting the username to the name of your proposed paper or method. Hiding the organization is not allowed for this track (otherwise the score will be banned), but you can set the submission as private. You can also create multiple accounts, one account per method.**
+4. Right now we only support 112x112 input, so make sure that the submitted model accepts the correct input shape (['*',3,112,112]) in RGB order. Add an interpolate operator as the first layer of the submitted model if you need a different input resolution.
+5. Participants submit an ONNX model and then receive scores from our online evaluation.
+6. Matching score is measured by cosine similarity.
+7. **Online evaluation server uses onnxruntime-gpu==1.8, cuda==11.1, cudnn==8.0.5, GPU is RTX3090.**
+8. Float-16 model weights are prohibited, as they lead to incorrect model size estimation.
+9. Please use ``onnx_helper.py`` to check whether the model is valid.
+10. The leaderboard is ordered by the weighted score of two metrics, **TAR@Mask** and **TAR@MR-All**, computed as ``0.25 * TAR@Mask + 0.75 * TAR@MR-All``.
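Rules 6 and 10 together determine the ranking: matching uses cosine similarity between embeddings, and the leaderboard mixes the two TAR metrics. A minimal pure-Python sketch of both formulas (the example values below are hypothetical, taken only for illustration):

```python
import math

def cosine_similarity(a, b):
    """Matching score between two face embeddings (rule 6)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def leaderboard_score(tar_mask, tar_mr_all):
    """Ranking formula from rule 10."""
    return 0.25 * tar_mask + 0.75 * tar_mr_all
```

Identical embeddings score 1.0, orthogonal ones 0.0; a submission with TAR@Mask of 87.694 and TAR@MR-All of 97.481 would rank by 0.25 × 87.694 + 0.75 × 97.481.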
+
+
+
+## Submission Guide
+
+1. Participants must package the ONNX model for submission using ``zip xxx.zip model.onnx``.
+2. Each participant can submit three times a day at most.
+3. Please sign up with your real organization name. You can hide the organization name in our system if you like (not allowed for the academic track).
+4. You can decide which submission is displayed on the leaderboard by clicking the 'Set Public' button.
+5. Please click 'sign-in' on the submission server if you find you are not logged in.
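The packaging step in item 1 above can also be scripted. A small sketch using Python's `zipfile` module (the file names here are placeholders, not fixed by the rules):

```python
import os
import zipfile

def package_submission(model_path, zip_path):
    """Zip a single ONNX model for upload, equivalent to `zip xxx.zip model.onnx`."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # store only the bare file name inside the archive, not the full path
        zf.write(model_path, arcname=os.path.basename(model_path))
    return zip_path
```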
diff --git a/insightface/detection/README.md b/insightface/detection/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..3139f812660603e612747c1af4d28b7e402ac807
--- /dev/null
+++ b/insightface/detection/README.md
@@ -0,0 +1,42 @@
+## Face Detection
+
+
+
+

+
+
+
+## Introduction
+
+These are the face detection methods of [InsightFace](https://insightface.ai).
+
+
+
+

+
+
+
+### Datasets
+
+ Please refer to the [datasets](_datasets_) page for details of the face detection datasets used for training and evaluation.
+
+### Evaluation
+
+ Please refer to the [evaluation](_evaluation_) page for details of face detection evaluation.
+
+
+## Methods
+
+
+Supported methods:
+
+- [x] [RetinaFace (CVPR'2020)](retinaface)
+- [x] [SCRFD (arXiv'2021)](scrfd)
+- [x] [blazeface_paddle](blazeface_paddle)
+
+
+## Contributing
+
+We appreciate all contributions to improve the face detection model zoo of InsightFace.
+
+
diff --git a/insightface/detection/_datasets_/README.md b/insightface/detection/_datasets_/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..618227009d54f25c981afaeec007cf30fcdf8e04
--- /dev/null
+++ b/insightface/detection/_datasets_/README.md
@@ -0,0 +1,31 @@
+# Face Detection Datasets
+
+(Updating)
+
+## Training Datasets
+
+### WiderFace
+
+http://shuoyang1213.me/WIDERFACE/
+
+
+
+## Test Datasets
+
+### WiderFace
+
+http://shuoyang1213.me/WIDERFACE/
+
+### FDDB
+
+http://vis-www.cs.umass.edu/fddb/
+
+### AFW
+
+
+### PASCAL FACE
+
+
+### MALF
+
+http://www.cbsr.ia.ac.cn/faceevaluation/
diff --git a/insightface/detection/blazeface_paddle/README.md b/insightface/detection/blazeface_paddle/README.md
new file mode 120000
index 0000000000000000000000000000000000000000..13c4f964bb9063f28d6e08dfb8c6b828a81d2536
--- /dev/null
+++ b/insightface/detection/blazeface_paddle/README.md
@@ -0,0 +1 @@
+README_en.md
\ No newline at end of file
diff --git a/insightface/detection/blazeface_paddle/README_cn.md b/insightface/detection/blazeface_paddle/README_cn.md
new file mode 100644
index 0000000000000000000000000000000000000000..0762058e58ca17356418930ef7fe3643dce0442f
--- /dev/null
+++ b/insightface/detection/blazeface_paddle/README_cn.md
@@ -0,0 +1,355 @@
+简体中文 | [English](README_en.md)
+
+# 人脸检测模型
+
+* [1. 简介](#简介)
+* [2. 模型库](#模型库)
+* [3. 安装](#安装)
+* [4. 数据准备](#数据准备)
+* [5. 参数配置](#参数配置)
+* [6. 训练与评估](#训练与评估)
+ * [6.1 训练](#训练)
+ * [6.2 在WIDER-FACE数据集上评估](#评估)
+ * [6.3 推理部署](#推理部署)
+ * [6.4 推理速度提升](#推理速度提升)
+ * [6.5 人脸检测demo](#人脸检测demo)
+* [7. 参考文献](#参考文献)
+
+
+
+## 1. 简介
+
+`Arcface-Paddle`是基于PaddlePaddle实现的,开源深度人脸检测、识别工具。`Arcface-Paddle`目前提供了三个预训练模型,包括用于人脸检测的 `BlazeFace`、用于人脸识别的 `ArcFace` 和 `MobileFace`。
+
+- 本部分内容为人脸检测部分,基于PaddleDetection进行开发。
+- 人脸识别相关内容可以参考:[人脸识别](../../recognition/arcface_paddle/README_cn.md)。
+- 基于PaddleInference的Whl包预测部署内容可以参考:[Whl包预测部署](https://github.com/littletomatodonkey/insight-face-paddle)。
+
+
+
+
+## 2. 模型库
+
+### WIDER-FACE数据集上的mAP
+
+| 网络结构 | 输入尺寸 | 图片个数/GPU | epoch数量 | Easy/Medium/Hard Set | CPU预测时延 | GPU 预测时延 | 模型大小(MB) | 预训练模型地址 | inference模型地址 | 配置文件 |
+|:------------:|:--------:|:----:|:-------:|:-------:|:-------:|:---------:|:----------:|:---------:|:---------:|:--------:|
+| BlazeFace-FPN-SSH | 640 | 8 | 1000 | 0.9187 / 0.8979 / 0.8168 | 31.7ms | 5.6ms | 0.646 |[下载链接](https://paddledet.bj.bcebos.com/models/blazeface_fpn_ssh_1000e.pdparams) | [下载链接](https://paddle-model-ecology.bj.bcebos.com/model/insight-face/blazeface_fpn_ssh_1000e_v1.0_infer.tar) | [配置文件](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1/configs/face_detection/blazeface_fpn_ssh_1000e.yml) |
+| RetinaFace | 480x640 | - | - | - / - / 0.8250 | 182.0ms | 17.4ms | 1.680 | - | - | - |
+
+
+**注意:**
+- 我们使用多尺度评估策略得到`Easy/Medium/Hard Set`里的mAP。具体细节请参考[在WIDER-FACE数据集上评估](#评估)。
+- 测量速度时我们使用640*640的分辨率,在 Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz 上,CPU线程数设置为5,更多细节请参考[推理速度提升](#推理速度提升)。
+- `RetinaFace`的速度测试代码参考自:[../retinaface/README.md](../retinaface/README.md).
+- 测试环境为
+ - CPU: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
+ - GPU: a single NVIDIA Tesla V100
+
+
+
+
+## 3. 安装
+
+请参考[安装教程](../../recognition/arcface_paddle/install_ch.md)安装PaddlePaddle以及PaddleDetection。
+
+
+
+## 4. 数据准备
+我们使用[WIDER-FACE数据集](http://shuoyang1213.me/WIDERFACE/)进行训练和模型测试,官方网站提供了详细的数据介绍。
+- WIDER-Face数据源:
+使用如下目录结构加载`wider_face`类型的数据集:
+
+ ```
+ dataset/wider_face/
+ ├── wider_face_split
+ │ ├── wider_face_train_bbx_gt.txt
+ │ ├── wider_face_val_bbx_gt.txt
+ ├── WIDER_train
+ │ ├── images
+ │ │ ├── 0--Parade
+ │ │ │ ├── 0_Parade_marchingband_1_100.jpg
+ │ │ │ ├── 0_Parade_marchingband_1_381.jpg
+ │ │ │ │ ...
+ │ │ ├── 10--People_Marching
+ │ │ │ ...
+ ├── WIDER_val
+ │ ├── images
+ │ │ ├── 0--Parade
+ │ │ │ ├── 0_Parade_marchingband_1_1004.jpg
+ │ │ │ ├── 0_Parade_marchingband_1_1045.jpg
+ │ │ │ │ ...
+ │ │ ├── 10--People_Marching
+ │ │ │ ...
+ ```
+
+- 手动下载数据集:
+要下载WIDER-FACE数据集,请运行以下命令:
+```
+cd dataset/wider_face && ./download_wider_face.sh
+```
+
+
+
+## 5. 参数配置
+
+我们使用 `configs/face_detection/blazeface_fpn_ssh_1000e.yml`配置进行训练,配置文件摘要如下:
+
+```yaml
+
+_BASE_: [
+ '../datasets/wider_face.yml',
+ '../runtime.yml',
+ '_base_/optimizer_1000e.yml',
+ '_base_/blazeface_fpn.yml',
+ '_base_/face_reader.yml',
+]
+weights: output/blazeface_fpn_ssh_1000e/model_final
+multi_scale_eval: True
+
+```
+
+`blazeface_fpn_ssh_1000e.yml` 配置需要依赖其他的配置文件,在该例子中需要依赖:
+
+```
+wider_face.yml:主要说明了训练数据和验证数据的路径
+
+runtime.yml:主要说明了公共的运行参数,比如是否使用GPU、每多少个epoch存储checkpoint等
+
+optimizer_1000e.yml:主要说明了学习率和优化器的配置
+
+blazeface_fpn.yml:主要说明模型和主干网络的情况
+
+face_reader.yml:主要说明数据读取器配置,如batch size,并发加载子进程数等,同时包含读取后预处理操作,如resize、数据增强等等
+```
+
+根据实际情况,修改上述文件,比如数据集路径、batch size等。
+
+基础模型的配置可以参考`configs/face_detection/_base_/blazeface.yml`;
+改进模型增加FPN和SSH的neck结构,配置文件可以参考`configs/face_detection/_base_/blazeface_fpn.yml`,可以根据需求配置FPN和SSH,具体如下:
+```yaml
+BlazeNet:
+ blaze_filters: [[24, 24], [24, 24], [24, 48, 2], [48, 48], [48, 48]]
+ double_blaze_filters: [[48, 24, 96, 2], [96, 24, 96], [96, 24, 96],
+ [96, 24, 96, 2], [96, 24, 96], [96, 24, 96]]
+ act: hard_swish # 配置backbone中BlazeBlock的激活函数,基础模型为relu,增加FPN和SSH时需使用hard_swish
+
+BlazeNeck:
+ neck_type : fpn_ssh # 可选only_fpn、only_ssh和fpn_ssh
+ in_channel: [96,96]
+```
+
+
+
+## 6. 训练与评估
+
+
+
+### 6.1 训练
+首先,下载预训练模型文件:
+```bash
+wget https://paddledet.bj.bcebos.com/models/pretrained/blazenet_pretrain.pdparams
+```
+PaddleDetection提供了单卡/多卡训练模式,满足用户多种训练需求。
+* GPU单卡训练
+```bash
+export CUDA_VISIBLE_DEVICES=0 #windows和Mac下不需要执行该命令
+python tools/train.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml -o pretrain_weight=blazenet_pretrain
+```
+
+* GPU多卡训练
+```bash
+export CUDA_VISIBLE_DEVICES=0,1,2,3 #windows和Mac下不需要执行该命令
+python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml -o pretrain_weight=blazenet_pretrain
+```
+* 模型恢复训练
+
+ 在日常训练过程中,如果训练因故中断,可以使用`-r`参数恢复训练:
+
+```bash
+export CUDA_VISIBLE_DEVICES=0 #windows和Mac下不需要执行该命令
+python tools/train.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml -r output/blazeface_fpn_ssh_1000e/100
+```
+* 训练策略
+
+`BlazeFace`以每卡`batch_size=32`在4卡GPU上训练(总`batch_size`为128),学习率为0.002,共训练1000个epoch。
+
+
+**注意:** 人脸检测模型目前不支持边训练边评估。
+
+
+
+### 6.2 在WIDER-FACE数据集上评估
+- 步骤一:评估并生成结果文件:
+```shell
+python -u tools/eval.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml \
+ -o weights=output/blazeface_fpn_ssh_1000e/model_final \
+ multi_scale_eval=True BBoxPostProcess.nms.score_threshold=0.1
+```
+设置`multi_scale_eval=True`进行多尺度评估,评估完成后,将在`output/pred`中生成txt格式的测试结果。
+
+- 步骤二:下载官方评估脚本和Ground Truth文件:
+```
+wget http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/eval_script/eval_tools.zip
+unzip eval_tools.zip && rm -f eval_tools.zip
+```
+
+- 步骤三:开始评估
+
+方法一:python评估。
+
+```bash
+git clone https://github.com/wondervictor/WiderFace-Evaluation.git
+cd WiderFace-Evaluation
+# 编译
+python3 setup.py build_ext --inplace
+# 开始评估
+python3 evaluation.py -p /path/to/PaddleDetection/output/pred -g /path/to/eval_tools/ground_truth
+```
+
+方法二:MatLab评估。
+
+在`eval_tools/wider_eval.m`中修改保存结果路径和绘制曲线的名称:
+
+```matlab
+pred_dir = './pred';
+legend_name = 'Paddle-BlazeFace';
+```
+
+`wider_eval.m` 是评估模块的主要执行程序。运行命令如下:
+
+```bash
+matlab -nodesktop -nosplash -nojvm -r "run wider_eval.m;quit;"
+```
+
+
+### 6.3 推理部署
+
+在模型训练过程中保存的模型文件包含前向预测和反向传播的过程,而实际的工业部署不需要反向传播,因此需要将模型导出成部署所需的格式。
+PaddleDetection提供了 `tools/export_model.py` 脚本来导出模型:
+
+```bash
+python tools/export_model.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml --output_dir=./inference_model \
+ -o weights=output/blazeface_fpn_ssh_1000e/best_model BBoxPostProcess.nms.score_threshold=0.1
+```
+
+预测模型会导出到`inference_model/blazeface_fpn_ssh_1000e`目录下,包括`infer_cfg.yml`、`model.pdiparams`、`model.pdiparams.info`和`model.pdmodel`。如果不指定输出文件夹,模型则会导出到`output_inference`目录。
+
+* 这里将nms后处理`score_threshold`修改为0.1,因为在mAP基本不受影响的情况下,GPU预测速度能够大幅提升。更多关于模型导出的文档,请参考[模型导出文档](https://github.com/PaddlePaddle/PaddleDetection/deploy/EXPORT_MODEL.md)
+
+ PaddleDetection提供了PaddleInference、PaddleServing、PaddleLite多种部署形式,支持服务端、移动端、嵌入式等多种平台,提供了完善的Python和C++部署方案。
+* 在这里,我们以Python为例,说明如何使用PaddleInference进行模型部署
+
+```bash
+python deploy/python/infer.py --model_dir=./inference_model/blazeface_fpn_ssh_1000e --image_file=demo/road554.png --use_gpu=True
+```
+* 同时`infer.py`提供了丰富的接口,用户可以接入视频文件、摄像头进行预测,更多内容请参考[Python端预测部署](https://github.com/PaddlePaddle/PaddleDetection/deploy/python.md)
+
+* 更多关于预测部署的文档,请参考[预测部署文档](https://github.com/PaddlePaddle/PaddleDetection/deploy/README.md) 。
+
+
+
+### 6.4 推理速度提升
+如果想要复现我们提供的速度指标,请修改预测模型配置文件`./inference_model/blazeface_fpn_ssh_1000e/infer_cfg.yml`中的输入尺寸,如下所示:
+```yaml
+mode: fluid
+draw_threshold: 0.5
+metric: WiderFace
+arch: Face
+min_subgraph_size: 3
+Preprocess:
+- is_scale: false
+ mean:
+ - 123
+ - 117
+ - 104
+ std:
+ - 127.502231
+ - 127.502231
+ - 127.502231
+ type: NormalizeImage
+- interp: 1
+ keep_ratio: false
+ target_size:
+ - 640
+ - 640
+ type: Resize
+- type: Permute
+label_list:
+- face
+```
+如果希望模型在CPU环境下更快推理,可安装带MKL-DNN的CPU版本[paddlepaddle-0.0.0](https://paddle-wheel.bj.bcebos.com/develop-cpu-mkl/paddlepaddle-0.0.0-cp37-cp37m-linux_x86_64.whl),并在预测时开启mkldnn以加速推理。
+
+```bash
+# 使用GPU测速:
+python deploy/python/infer.py --model_dir=./inference_model/blazeface_fpn_ssh_1000e --image_dir=./path/images --run_benchmark=True --use_gpu=True
+
+# 使用cpu测速:
+# 下载paddle whl包
+wget https://paddle-wheel.bj.bcebos.com/develop-cpu-mkl/paddlepaddle-0.0.0-cp37-cp37m-linux_x86_64.whl
+# 安装paddlepaddle-0.0.0(带MKL-DNN的CPU版本)
+pip install paddlepaddle-0.0.0-cp37-cp37m-linux_x86_64.whl
+# 推理
+python deploy/python/infer.py --model_dir=./inference_model/blazeface_fpn_ssh_1000e --image_dir=./path/images --enable_mkldnn=True --run_benchmark=True --cpu_threads=5
+```
+
+
+
+### 6.5 人脸检测demo
+
+本节介绍基于提供的BlazeFace模型进行人脸检测。
+
+先下载待检测图像与字体文件。
+
+```bash
+# 下载用于人脸检测的示例图像
+wget https://raw.githubusercontent.com/littletomatodonkey/insight-face-paddle/main/demo/friends/query/friends1.jpg
+# 下载字体,用于可视化
+wget https://raw.githubusercontent.com/littletomatodonkey/insight-face-paddle/main/SourceHanSansCN-Medium.otf
+```
+
+示例图像如下所示。
+
+
+

+
+
+
+检测的示例命令如下。
+
+```shell
+python3.7 test_blazeface.py --input=friends1.jpg --output="./output"
+```
+
+最终可视化结果保存在`output`目录下,可视化结果如下所示。
+
+
+

+
+
+
+更多关于参数解释,索引库构建、人脸识别、whl包预测部署的内容可以参考:[Whl包预测部署](https://github.com/littletomatodonkey/insight-face-paddle)。
+
+
+
+## 7. 参考文献
+
+```
+@misc{long2020ppyolo,
+title={PP-YOLO: An Effective and Efficient Implementation of Object Detector},
+author={Xiang Long and Kaipeng Deng and Guanzhong Wang and Yang Zhang and Qingqing Dang and Yuan Gao and Hui Shen and Jianguo Ren and Shumin Han and Errui Ding and Shilei Wen},
+year={2020},
+eprint={2007.12099},
+archivePrefix={arXiv},
+primaryClass={cs.CV}
+}
+@misc{ppdet2019,
+title={PaddleDetection, Object detection and instance segmentation toolkit based on PaddlePaddle.},
+author={PaddlePaddle Authors},
+howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}},
+year={2019}
+}
+@article{bazarevsky2019blazeface,
+title={BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs},
+author={Valentin Bazarevsky and Yury Kartynnik and Andrey Vakunov and Karthik Raveendran and Matthias Grundmann},
+year={2019},
+eprint={1907.05047},
+ archivePrefix={arXiv}
+}
+```
diff --git a/insightface/detection/blazeface_paddle/README_en.md b/insightface/detection/blazeface_paddle/README_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..24fd17863fab2907465960087c6c905f3dfc78ec
--- /dev/null
+++ b/insightface/detection/blazeface_paddle/README_en.md
@@ -0,0 +1,354 @@
+[简体中文](README_cn.md) | English
+
+# FaceDetection
+
+* [1. Introduction](#Introduction)
+* [2. Model Zoo](#Model_Zoo)
+* [3. Installation](#Installation)
+* [4. Data Pipeline](#Data_Pipline)
+* [5. Configuration File](#Configuration_File)
+* [6. Training and Inference](#Training_and_Inference)
+ * [6.1 Training](#Training)
+ * [6.2 Evaluate on the WIDER FACE](#Evaluation)
+ * [6.3 Inference deployment](#Inference_deployment)
+ * [6.4 Improvement of inference speed](#Increase_in_inference_speed)
+ * [6.5 Face detection demo](#Face_detection_demo)
+* [7. Citations](#Citations)
+
+
+
+## 1. Introduction
+
+`Arcface-Paddle` is an open source deep face detection and recognition toolkit, powered by PaddlePaddle. `Arcface-Paddle` currently provides three pretrained models, including `BlazeFace` for face detection, and `ArcFace` and `MobileFace` for face recognition.
+
+- This tutorial is mainly about face detection based on `PaddleDetection`.
+- For the face recognition task, please refer to: [Face recognition tutorial](../../recognition/arcface_paddle/README_en.md).
+- For Whl package inference using PaddleInference, please refer to [whl package inference](https://github.com/littletomatodonkey/insight-face-paddle).
+
+
+
+## 2. Model Zoo
+
+### mAP in WIDER FACE
+
+| Model | input size | images/GPU | epochs | Easy/Medium/Hard Set | CPU time cost | GPU time cost| Model Size(MB) | Pretrained model | Inference model | Config |
+|:------------:|:--------:|:----:|:-------:|:-------:|:---------:|:---------:|:----------:|:---------:|:--------:|:--------:|
+| BlazeFace-FPN-SSH | 640×640 | 8 | 1000 | 0.9187 / 0.8979 / 0.8168 | 31.7ms | 5.6ms | 0.646 |[download link](https://paddledet.bj.bcebos.com/models/blazeface_fpn_ssh_1000e.pdparams) | [download link](https://paddle-model-ecology.bj.bcebos.com/model/insight-face/blazeface_fpn_ssh_1000e_v1.0_infer.tar) | [config](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.1/configs/face_detection/blazeface_fpn_ssh_1000e.yml) |
+| RetinaFace | 480x640 | - | - | - / - / 0.8250 | 182.0ms | 17.4ms | 1.680 | - | - | - |
+
+
+**NOTE:**
+- mAP in the `Easy/Medium/Hard Set` is obtained by multi-scale evaluation. For details, refer to [Evaluation](#Evaluate-on-the-WIDER-FACE).
+- Speed is measured at a resolution of `640×640` on an Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz with 5 CPU threads. For more details, refer to [Improvement of inference speed](#Increase_in_inference_speed).
+- Benchmark code for `RetinaFace` is from: [../retinaface/README.md](../retinaface/README.md).
+- The benchmark environment is
+ - CPU: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
+ - GPU: a single NVIDIA Tesla V100
+
+
+
+## 3. Installation
+
+Please refer to [installation tutorial](../../recognition/arcface_paddle/install_en.md) to install PaddlePaddle and PaddleDetection.
+
+
+
+
+## 4. Data Pipeline
+We use the [WIDER FACE dataset](http://shuoyang1213.me/WIDERFACE/) for model training
+and testing; the official website provides a detailed introduction to the data.
+- WIDER Face data source:
+Load the `wider_face` type dataset with a directory structure like this:
+
+ ```
+ dataset/wider_face/
+ ├── wider_face_split
+ │ ├── wider_face_train_bbx_gt.txt
+ │ ├── wider_face_val_bbx_gt.txt
+ ├── WIDER_train
+ │ ├── images
+ │ │ ├── 0--Parade
+ │ │ │ ├── 0_Parade_marchingband_1_100.jpg
+ │ │ │ ├── 0_Parade_marchingband_1_381.jpg
+ │ │ │ │ ...
+ │ │ ├── 10--People_Marching
+ │ │ │ ...
+ ├── WIDER_val
+ │ ├── images
+ │ │ ├── 0--Parade
+ │ │ │ ├── 0_Parade_marchingband_1_1004.jpg
+ │ │ │ ├── 0_Parade_marchingband_1_1045.jpg
+ │ │ │ │ ...
+ │ │ ├── 10--People_Marching
+ │ │ │ ...
+ ```
+
+- Download dataset manually:
+To download the WIDER FACE dataset, run the following commands:
+```
+cd dataset/wider_face && ./download_wider_face.sh
+```
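The `wider_face_*_bbx_gt.txt` files in the tree above store, for each image, its relative path, a face count, and one `x y w h …` line per face (extra attribute columns follow the box). A hedged pure-Python parser sketch of that layout (entries with zero faces are assumed to carry a single placeholder line, as in the released files):

```python
def parse_wider_annotations(lines):
    """Parse WIDER FACE ground-truth lines into {image_path: [(x, y, w, h), ...]}."""
    it = iter(lines)
    annotations = {}
    for raw in it:
        path = raw.strip()
        if not path:
            continue
        num_faces = int(next(it))
        boxes = []
        # an entry with 0 faces still has one all-zero placeholder line
        for _ in range(max(num_faces, 1)):
            values = next(it).split()
            if num_faces > 0:
                x, y, w, h = map(int, values[:4])
                boxes.append((x, y, w, h))
        annotations[path] = boxes
    return annotations
```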
+
+
+
+## 5. Configuration file
+
+We use the `configs/face_detection/blazeface_fpn_ssh_1000e.yml` configuration for training. The summary of the configuration file is as follows:
+
+```yaml
+_BASE_: [
+ '../datasets/wider_face.yml',
+ '../runtime.yml',
+ '_base_/optimizer_1000e.yml',
+ '_base_/blazeface_fpn.yml',
+ '_base_/face_reader.yml',
+]
+weights: output/blazeface_fpn_ssh_1000e/model_final
+multi_scale_eval: True
+```
+
+The `blazeface_fpn_ssh_1000e.yml` configuration depends on other configuration files, in this example:
+
+```
+wider_face.yml: mainly describes the paths of the training and validation data
+
+runtime.yml: mainly describes common runtime parameters, such as whether to use the GPU and how often (in epochs) to save checkpoints
+
+optimizer_1000e.yml: mainly describes the learning rate and optimizer configuration
+
+blazeface_fpn.yml: mainly describes the model and the backbone network
+
+face_reader.yml: mainly describes the data reader configuration, such as batch size and the number of concurrent loading subprocesses, as well as the preprocessing applied after reading, such as resizing and data augmentation
+```
+
+Modify the above files according to your actual situation, such as the dataset path and batch size.
+
+For the configuration of the base model, please refer to `configs/face_detection/_base_/blazeface.yml`.
+The improved model adds the neck structure of FPN and SSH. For the configuration file, please refer to `configs/face_detection/_base_/blazeface_fpn.yml`. You can configure FPN and SSH if needed, which is as follows:
+
+```yaml
+BlazeNet:
+ blaze_filters: [[24, 24], [24, 24], [24, 48, 2], [48, 48], [48, 48]]
+ double_blaze_filters: [[48, 24, 96, 2], [96, 24, 96], [96, 24, 96],
+ [96, 24, 96, 2], [96, 24, 96], [96, 24, 96]]
+ act: hard_swish # Configure the activation function of BlazeBlock in backbone, the basic model is relu, hard_swish is required when adding FPN and SSH
+
+BlazeNeck:
+ neck_type : fpn_ssh # Optional only_fpn, only_ssh and fpn_ssh
+ in_channel: [96,96]
+```
+
+
+
+## 6. Training and Inference
+
+
+
+### 6.1 Training
+Firstly, download the pretrained model.
+```bash
+wget https://paddledet.bj.bcebos.com/models/pretrained/blazenet_pretrain.pdparams
+```
+PaddleDetection provides a single-GPU/multi-GPU training mode to meet the various training needs of users.
+* single-GPU training
+```bash
+export CUDA_VISIBLE_DEVICES=0 # Not needed on Windows and macOS
+python tools/train.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml -o pretrain_weight=blazenet_pretrain
+```
+
+* multi-GPU training
+```bash
+export CUDA_VISIBLE_DEVICES=0,1,2,3 # Not needed on Windows and macOS
+python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml -o pretrain_weight=blazenet_pretrain
+```
+* Resume training from Checkpoint
+
+ In the daily training process, if training is interrupted, use the `-r` option to resume it:
+
+```bash
+export CUDA_VISIBLE_DEVICES=0 # Not needed on Windows and macOS
+python tools/train.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml -r output/blazeface_fpn_ssh_1000e/100
+```
+* Training hyperparameters
+
+`BlazeFace` is trained with `batch_size=32` per GPU on 4 GPUs (total `batch_size` of 128), with a learning rate of 0.002, for 1000 epochs.
+
+
+**NOTE:** Evaluation during training is not supported for face detection models.
+
+
+
+### 6.2 Evaluate on the WIDER FACE
+- Evaluate and generate results files:
+```shell
+python -u tools/eval.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml \
+ -o weights=output/blazeface_fpn_ssh_1000e/model_final \
+ multi_scale_eval=True BBoxPostProcess.nms.score_threshold=0.1
+```
+Set `multi_scale_eval=True` for multi-scale evaluation. After the evaluation is completed, test results in txt format will be generated in `output/pred`.
+
+- Download the official evaluation script to evaluate the AP metrics:
+
+```bash
+wget http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/eval_script/eval_tools.zip
+unzip eval_tools.zip && rm -f eval_tools.zip
+```
+
+- Start evaluation:
+
+Method One: Python evaluation:
+
+```bash
+git clone https://github.com/wondervictor/WiderFace-Evaluation.git
+cd WiderFace-Evaluation
+# Compile
+python3 setup.py build_ext --inplace
+# Start evaluation
+python3 evaluation.py -p /path/to/PaddleDetection/output/pred -g /path/to/eval_tools/ground_truth
+```
+
+Method Two: MatLab evaluation:
+
+Modify the result path and the name of the curve to be drawn in `eval_tools/wider_eval.m`:
+
+```matlab
+pred_dir = './pred';
+legend_name = 'Paddle-BlazeFace';
+```
+
+`wider_eval.m` is the main execution program of the evaluation module. Run it as follows:
+
+```bash
+matlab -nodesktop -nosplash -nojvm -r "run wider_eval.m;quit;"
+```
+
+
+### 6.3 Inference deployment
+
+The model files saved during training contain both the forward prediction and back-propagation graphs. Back propagation is not needed in actual industrial deployment, so the model must be exported into the format required for deployment.
+PaddleDetection provides the `tools/export_model.py` script to export the model:
+
+```bash
+python tools/export_model.py -c configs/face_detection/blazeface_fpn_ssh_1000e.yml --output_dir=./inference_model \
+ -o weights=output/blazeface_fpn_ssh_1000e/best_model BBoxPostProcess.nms.score_threshold=0.1
+```
+The inference model will be exported to the `inference_model/blazeface_fpn_ssh_1000e` directory, containing `infer_cfg.yml`, `model.pdiparams`, `model.pdiparams.info`, and `model.pdmodel`. If no output directory is specified, the model is exported to `output_inference`.
+
+* The nms `score_threshold` is set to 0.1 for inference because it greatly improves GPU inference speed while having little effect on mAP. For more documentation about model export, please refer to: [export doc](https://github.com/PaddlePaddle/PaddleDetection/deploy/EXPORT_MODEL.md)
+
+ PaddleDetection provides multiple deployment forms with PaddleInference, PaddleServing, and PaddleLite, supports server, mobile, and embedded platforms, and offers complete Python and C++ deployment solutions.
+* Here, we take Python as an example to illustrate how to use PaddleInference for model deployment:
+```bash
+python deploy/python/infer.py --model_dir=./inference_model/blazeface_fpn_ssh_1000e --image_file=demo/road554.png --use_gpu=True
+```
+* `infer.py` provides a rich interface for users to access video files and cameras for prediction. For more information, please refer to: [Python deployment](https://github.com/PaddlePaddle/PaddleDetection/deploy/python.md).
+
+* For more documentation on deployment, please refer to: [deploy doc](https://github.com/PaddlePaddle/PaddleDetection/deploy/README.md).
+
+
+
+### 6.4 Improvement of inference speed
+
+If you want to reproduce our speed benchmarks, modify the input size of the inference model in the `./inference_model/blazeface_fpn_ssh_1000e/infer_cfg.yml` configuration file, as follows:
+```yaml
+mode: fluid
+draw_threshold: 0.5
+metric: WiderFace
+arch: Face
+min_subgraph_size: 3
+Preprocess:
+- is_scale: false
+ mean:
+ - 123
+ - 117
+ - 104
+ std:
+ - 127.502231
+ - 127.502231
+ - 127.502231
+ type: NormalizeImage
+- interp: 1
+ keep_ratio: false
+ target_size:
+ - 640
+ - 640
+ type: Resize
+- type: Permute
+label_list:
+- face
+```
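The three `Preprocess` steps in this config can be sketched in numpy. Nearest-neighbor indexing stands in for the bilinear resize (`interp: 1`), so this is an illustration of the data flow, not the exact operator implementations:

```python
import numpy as np

def preprocess(img):
    """Apply NormalizeImage -> Resize(640x640) -> Permute to an HWC uint8 image."""
    # NormalizeImage: is_scale is false, so mean/std act on raw 0-255 values
    mean = np.array([123, 117, 104], dtype=np.float32)
    std = np.array([127.502231, 127.502231, 127.502231], dtype=np.float32)
    out = (img.astype(np.float32) - mean) / std
    # Resize: keep_ratio is false, so force 640x640 regardless of aspect ratio
    # (nearest-neighbor stand-in for the config's bilinear interp=1)
    h, w = out.shape[:2]
    rows = np.arange(640) * h // 640
    cols = np.arange(640) * w // 640
    out = out[rows][:, cols]
    # Permute: HWC -> CHW, the layout the detector expects
    return out.transpose(2, 0, 1)
```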
+
+If you want the model to run faster in a CPU environment, install [paddlepaddle-0.0.0](https://paddle-wheel.bj.bcebos.com/develop-cpu-mkl/paddlepaddle-0.0.0-cp37-cp37m-linux_x86_64.whl) (the MKL-DNN-enabled CPU build) and set `--enable_mkldnn=True` when predicting.
+
+```bash
+# use GPU:
+python deploy/python/infer.py --model_dir=./inference_model/blazeface_fpn_ssh_1000e --image_dir=./path/images --run_benchmark=True --use_gpu=True
+
+# inference with mkldnn on CPU
+# download the paddle whl package
+wget https://paddle-wheel.bj.bcebos.com/develop-cpu-mkl/paddlepaddle-0.0.0-cp37-cp37m-linux_x86_64.whl
+# install paddlepaddle-0.0.0 (MKL-DNN-enabled CPU build)
+pip install paddlepaddle-0.0.0-cp37-cp37m-linux_x86_64.whl
+python deploy/python/infer.py --model_dir=./inference_model/blazeface_fpn_ssh_1000e --image_dir=./path/images --enable_mkldnn=True --run_benchmark=True --cpu_threads=5
+```
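The `--run_benchmark=True` runs above report average latency over repeated inferences. The idea reduces to a warm-up phase plus averaged wall-clock timing, as in this generic helper (an illustration, not PaddleDetection's benchmark code):

```python
import time

def avg_latency_ms(fn, warmup=5, iters=50):
    """Average wall-clock latency of fn() in milliseconds, after warm-up runs."""
    for _ in range(warmup):
        fn()  # warm-up: fill caches, trigger lazy initialization
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1000.0
```

Note that for GPU inference a real benchmark must also synchronize the device before reading the clock; plain wall-clock timing like this is only reliable for CPU runs.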
+
+
+
+### 6.5 Face detection demo
+
+This section shows how to detect faces with the provided BlazeFace model.
+
+Firstly, use the following commands to download the demo image and font file for visualization.
+
+
+```bash
+# Demo image
+wget https://raw.githubusercontent.com/littletomatodonkey/insight-face-paddle/main/demo/friends/query/friends1.jpg
+# Font file for visualization
+wget https://raw.githubusercontent.com/littletomatodonkey/insight-face-paddle/main/SourceHanSansCN-Medium.otf
+```
+
+The demo image is shown as follows.
+
+
+

+
+
+
+Use the following command to run the face detection process.
+
+```shell
+python3.7 test_blazeface.py --input=friends1.jpg --output="./output"
+```
+
+The final result is saved in the `output/` folder and is shown below.
+
+
+

+
+
+
+For more details about parameter explanations, face recognition, index gallery construction and whl package inference, please refer to [Whl package inference tutorial](https://github.com/littletomatodonkey/insight-face-paddle).
+
+
+## 7. Citations
+
+```
+@misc{long2020ppyolo,
+title={PP-YOLO: An Effective and Efficient Implementation of Object Detector},
+author={Xiang Long and Kaipeng Deng and Guanzhong Wang and Yang Zhang and Qingqing Dang and Yuan Gao and Hui Shen and Jianguo Ren and Shumin Han and Errui Ding and Shilei Wen},
+year={2020},
+eprint={2007.12099},
+archivePrefix={arXiv},
+primaryClass={cs.CV}
+}
+@misc{ppdet2019,
+title={PaddleDetection, Object detection and instance segmentation toolkit based on PaddlePaddle.},
+author={PaddlePaddle Authors},
+howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}},
+year={2019}
+}
+@article{bazarevsky2019blazeface,
+title={BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs},
+author={Valentin Bazarevsky and Yury Kartynnik and Andrey Vakunov and Karthik Raveendran and Matthias Grundmann},
+year={2019},
+eprint={1907.05047},
+ archivePrefix={arXiv}
+}
+```
diff --git a/insightface/detection/blazeface_paddle/test_blazeface.py b/insightface/detection/blazeface_paddle/test_blazeface.py
new file mode 100644
index 0000000000000000000000000000000000000000..fa8a8f103be3e18c5de6e9d699b4bfdbb891d29a
--- /dev/null
+++ b/insightface/detection/blazeface_paddle/test_blazeface.py
@@ -0,0 +1,593 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import argparse
+import requests
+import logging
+import imghdr
+import pickle
+import tarfile
+from functools import partial
+
+import cv2
+import numpy as np
+from sklearn.metrics.pairwise import cosine_similarity
+from tqdm import tqdm
+from prettytable import PrettyTable
+from PIL import Image, ImageDraw, ImageFont
+import paddle
+from paddle.inference import Config
+from paddle.inference import create_predictor
+
+__all__ = ["parser"]
+BASE_INFERENCE_MODEL_DIR = os.path.expanduser("~/.insightface/ppmodels/")
+BASE_DOWNLOAD_URL = "https://paddle-model-ecology.bj.bcebos.com/model/insight-face/{}.tar"
+
+
+def parser(add_help=True):
+ def str2bool(v):
+ return v.lower() in ("true", "t", "1")
+
+ parser = argparse.ArgumentParser(add_help=add_help)
+
+ parser.add_argument(
+ "--det_model",
+ type=str,
+ default="BlazeFace",
+ help="The detection model.")
+ parser.add_argument(
+ "--use_gpu",
+ type=str2bool,
+ default=True,
+        help="Whether to use GPU to predict. Default by True.")
+ parser.add_argument(
+ "--enable_mkldnn",
+ type=str2bool,
+ default=True,
+        help="Whether to use MKLDNN to predict, valid only when --use_gpu is False. Default by True."
+ )
+ parser.add_argument(
+ "--cpu_threads",
+ type=int,
+ default=1,
+        help="The number of CPU threads, valid only when --use_gpu is False. Default by 1."
+ )
+ parser.add_argument(
+ "--input",
+ type=str,
+ help="The path or directory of image(s) or video to be predicted.")
+ parser.add_argument(
+ "--output", type=str, default="./output/", help="The directory of prediction result.")
+ parser.add_argument(
+ "--det_thresh",
+ type=float,
+ default=0.8,
+ help="The threshold of detection postprocess. Default by 0.8.")
+ return parser
+
+
+def print_config(args):
+ args = vars(args)
+ table = PrettyTable(['Param', 'Value'])
+ for param in args:
+ table.add_row([param, args[param]])
+ width = len(str(table).split("\n")[0])
+ print("{}".format("-" * width))
+ print("PaddleFace".center(width))
+ print(table)
+ print("Powered by PaddlePaddle!".rjust(width))
+ print("{}".format("-" * width))
+
+
+def download_with_progressbar(url, save_path):
+ """Download from url with progressbar.
+ """
+ if os.path.isfile(save_path):
+ os.remove(save_path)
+ response = requests.get(url, stream=True)
+ total_size_in_bytes = int(response.headers.get("content-length", 0))
+ block_size = 1024 # 1 Kibibyte
+ progress_bar = tqdm(total=total_size_in_bytes, unit="iB", unit_scale=True)
+ with open(save_path, "wb") as file:
+ for data in response.iter_content(block_size):
+ progress_bar.update(len(data))
+ file.write(data)
+ progress_bar.close()
+ if total_size_in_bytes == 0 or progress_bar.n != total_size_in_bytes or not os.path.isfile(
+ save_path):
+ raise Exception(
+ f"Something went wrong while downloading model/image from {url}")
+
+
+def check_model_file(model):
+    """Check that the model files exist; download and untar them if they do not.
+    """
+ model_map = {
+ "ArcFace": "arcface_iresnet50_v1.0_infer",
+ "BlazeFace": "blazeface_fpn_ssh_1000e_v1.0_infer",
+ "MobileFace": "mobileface_v1.0_infer"
+ }
+
+ if os.path.isdir(model):
+ model_file_path = os.path.join(model, "inference.pdmodel")
+ params_file_path = os.path.join(model, "inference.pdiparams")
+ if not os.path.exists(model_file_path) or not os.path.exists(
+ params_file_path):
+            raise Exception(
+                "The specified model directory is invalid. The directory must include 'inference.pdmodel' and 'inference.pdiparams'."
+            )
+
+ elif model in model_map:
+ storage_directory = partial(os.path.join, BASE_INFERENCE_MODEL_DIR,
+ model)
+ url = BASE_DOWNLOAD_URL.format(model_map[model])
+
+ tar_file_name_list = [
+ "inference.pdiparams", "inference.pdiparams.info",
+ "inference.pdmodel"
+ ]
+ model_file_path = storage_directory("inference.pdmodel")
+ params_file_path = storage_directory("inference.pdiparams")
+ if not os.path.exists(model_file_path) or not os.path.exists(
+ params_file_path):
+ tmp_path = storage_directory(url.split("/")[-1])
+ logging.info(f"Download {url} to {tmp_path}")
+ os.makedirs(storage_directory(), exist_ok=True)
+ download_with_progressbar(url, tmp_path)
+ with tarfile.open(tmp_path, "r") as tarObj:
+ for member in tarObj.getmembers():
+ filename = None
+ for tar_file_name in tar_file_name_list:
+ if tar_file_name in member.name:
+ filename = tar_file_name
+ if filename is None:
+ continue
+ file = tarObj.extractfile(member)
+ with open(storage_directory(filename), "wb") as f:
+ f.write(file.read())
+ os.remove(tmp_path)
+ if not os.path.exists(model_file_path) or not os.path.exists(
+ params_file_path):
+ raise Exception(
+ f"Something went wrong while downloading and unzip the model[{model}] files!"
+ )
+ else:
+        raise Exception(
+            "The specified model name is invalid. 'BlazeFace' is supported for detection, as is a local directory that includes the model files ('inference.pdmodel' and 'inference.pdiparams')."
+        )
+
+ return model_file_path, params_file_path
+
+
+def normalize_image(img, scale=None, mean=None, std=None, order='chw'):
+ if isinstance(scale, str):
+ scale = eval(scale)
+ scale = np.float32(scale if scale is not None else 1.0 / 255.0)
+ mean = mean if mean is not None else [0.485, 0.456, 0.406]
+ std = std if std is not None else [0.229, 0.224, 0.225]
+
+ shape = (3, 1, 1) if order == 'chw' else (1, 1, 3)
+ mean = np.array(mean).reshape(shape).astype('float32')
+ std = np.array(std).reshape(shape).astype('float32')
+
+ if isinstance(img, Image.Image):
+ img = np.array(img)
+
+ assert isinstance(img, np.ndarray), "invalid input 'img' in NormalizeImage"
+ return (img.astype('float32') * scale - mean) / std
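The function above applies the standard ImageNet-style normalisation (scale first, then per-channel mean/std). A minimal standalone sketch of the same arithmetic, assuming HWC order and the default constants:

```python
import numpy as np

# A fully white uint8 image: 255 * (1/255) = 1.0 per channel, then
# each channel is standardised with the ImageNet mean/std used above.
img = np.full((2, 2, 3), 255, dtype=np.uint8)
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape((1, 1, 3))
std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape((1, 1, 3))
out = (img.astype("float32") / 255.0 - mean) / std
# out[..., 0] is (1.0 - 0.485) / 0.229
```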
+
+
+def to_CHW_image(img):
+ if isinstance(img, Image.Image):
+ img = np.array(img)
+ return img.transpose((2, 0, 1))
+
+
+class ColorMap(object):
+ def __init__(self, num):
+ super().__init__()
+ self.get_color_map_list(num)
+ self.color_map = {}
+ self.ptr = 0
+
+ def __getitem__(self, key):
+ return self.color_map[key]
+
+ def update(self, keys):
+ for key in keys:
+ if key not in self.color_map:
+ i = self.ptr % len(self.color_list)
+ self.color_map[key] = self.color_list[i]
+ self.ptr += 1
+
+ def get_color_map_list(self, num_classes):
+ color_map = num_classes * [0, 0, 0]
+ for i in range(0, num_classes):
+ j = 0
+ lab = i
+ while lab:
+ color_map[i * 3] |= (((lab >> 0) & 1) << (7 - j))
+ color_map[i * 3 + 1] |= (((lab >> 1) & 1) << (7 - j))
+ color_map[i * 3 + 2] |= (((lab >> 2) & 1) << (7 - j))
+ j += 1
+ lab >>= 3
+ self.color_list = [
+ color_map[i:i + 3] for i in range(0, len(color_map), 3)
+ ]
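``get_color_map_list`` is the familiar PASCAL-VOC palette trick: the bit triplets of the class index are spread across the high bits of R, G and B. The per-class computation can be sketched in isolation as:

```python
def voc_color(i):
    # Spread bit triplets of the class index across the high bits of R/G/B,
    # mirroring the loop in ColorMap.get_color_map_list above.
    r = g = b = 0
    j, lab = 0, i
    while lab:
        r |= ((lab >> 0) & 1) << (7 - j)
        g |= ((lab >> 1) & 1) << (7 - j)
        b |= ((lab >> 2) & 1) << (7 - j)
        j += 1
        lab >>= 3
    return [r, g, b]

# voc_color(1) -> [128, 0, 0]; voc_color(2) -> [0, 128, 0]
```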
+
+
+class ImageReader(object):
+ def __init__(self, inputs):
+ super().__init__()
+ self.idx = 0
+ if isinstance(inputs, np.ndarray):
+ self.image_list = [inputs]
+ else:
+ imgtype_list = {'jpg', 'bmp', 'png', 'jpeg', 'rgb', 'tif', 'tiff'}
+ self.image_list = []
+ if os.path.isfile(inputs):
+ if imghdr.what(inputs) not in imgtype_list:
+                    raise Exception(
+                        f"Unsupported input file type; only support: {imgtype_list}"
+                    )
+ self.image_list.append(inputs)
+ elif os.path.isdir(inputs):
+ tmp_file_list = os.listdir(inputs)
+ warn_tag = False
+ for file_name in tmp_file_list:
+ file_path = os.path.join(inputs, file_name)
+ if not os.path.isfile(file_path):
+ warn_tag = True
+ continue
+ if imghdr.what(file_path) in imgtype_list:
+ self.image_list.append(file_path)
+ else:
+ warn_tag = True
+ if warn_tag:
+                    logging.warning(
+                        f"The input directory contains subdirectories or unsupported file types; only support: {imgtype_list}"
+                    )
+ else:
+ raise Exception(
+ f"The file of input path not exist! Please check input: {inputs}"
+ )
+
+ def __iter__(self):
+ return self
+
+ def __next__(self):
+ if self.idx >= len(self.image_list):
+ raise StopIteration
+
+ data = self.image_list[self.idx]
+ if isinstance(data, np.ndarray):
+ self.idx += 1
+ return data, "tmp.png"
+ path = data
+ _, file_name = os.path.split(path)
+ img = cv2.imread(path)
+ if img is None:
+ logging.warning(f"Error in reading image: {path}! Ignored.")
+ self.idx += 1
+ return self.__next__()
+ img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+ self.idx += 1
+ return img, file_name
+
+ def __len__(self):
+ return len(self.image_list)
+
+
+class VideoReader(object):
+ def __init__(self, inputs):
+ super().__init__()
+ videotype_list = {"mp4"}
+ if os.path.splitext(inputs)[-1][1:] not in videotype_list:
+ raise Exception(
+ f"The input file is not supported, only support: {videotype_list}"
+ )
+ if not os.path.isfile(inputs):
+ raise Exception(
+ f"The file of input path not exist! Please check input: {inputs}"
+ )
+ self.capture = cv2.VideoCapture(inputs)
+ self.file_name = os.path.split(inputs)[-1]
+
+ def get_info(self):
+ info = {}
+ width = int(self.capture.get(cv2.CAP_PROP_FRAME_WIDTH))
+ height = int(self.capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
+ info["file_name"] = self.file_name
+ info["fps"] = 30
+ info["shape"] = (width, height)
+ info["fourcc"] = cv2.VideoWriter_fourcc(* 'mp4v')
+ return info
+
+ def __iter__(self):
+ return self
+
+ def __next__(self):
+ ret, frame = self.capture.read()
+ if not ret:
+ raise StopIteration
+ return frame, self.file_name
+
+
+class ImageWriter(object):
+ def __init__(self, output_dir):
+ super().__init__()
+ if output_dir is None:
+ raise Exception(
+ "Please specify the directory of saving prediction results by --output."
+ )
+ if not os.path.exists(output_dir):
+ os.makedirs(output_dir)
+ self.output_dir = output_dir
+
+ def write(self, image, file_name):
+ path = os.path.join(self.output_dir, file_name)
+ cv2.imwrite(path, cv2.cvtColor(image, cv2.COLOR_RGB2BGR))
+
+
+class VideoWriter(object):
+ def __init__(self, output_dir, video_info):
+ super().__init__()
+ if output_dir is None:
+ raise Exception(
+ "Please specify the directory of saving prediction results by --output."
+ )
+ if not os.path.exists(output_dir):
+ os.makedirs(output_dir)
+ output_path = os.path.join(output_dir, video_info["file_name"])
+ self.writer = cv2.VideoWriter(output_path, video_info["fourcc"],
+ video_info["fps"], video_info["shape"])
+
+ def write(self, frame, file_name):
+ self.writer.write(frame)
+
+ def __del__(self):
+ if hasattr(self, "writer"):
+ self.writer.release()
+
+
+class BasePredictor(object):
+ def __init__(self, predictor_config):
+ super().__init__()
+ self.predictor_config = predictor_config
+ self.predictor, self.input_names, self.output_names = self.load_predictor(
+ predictor_config["model_file"], predictor_config["params_file"])
+
+ def load_predictor(self, model_file, params_file):
+ config = Config(model_file, params_file)
+ if self.predictor_config["use_gpu"]:
+ config.enable_use_gpu(200, 0)
+ config.switch_ir_optim(True)
+ else:
+ config.disable_gpu()
+ config.set_cpu_math_library_num_threads(self.predictor_config[
+ "cpu_threads"])
+
+ if self.predictor_config["enable_mkldnn"]:
+ try:
+ # cache 10 different shapes for mkldnn to avoid memory leak
+ config.set_mkldnn_cache_capacity(10)
+ config.enable_mkldnn()
+            except Exception:
+                logging.error(
+                    "The current environment does not support `mkldnn`, so MKLDNN is disabled."
+                )
+ config.disable_glog_info()
+ config.enable_memory_optim()
+ # use zero copy
+ config.switch_use_feed_fetch_ops(False)
+ predictor = create_predictor(config)
+ input_names = predictor.get_input_names()
+ output_names = predictor.get_output_names()
+ return predictor, input_names, output_names
+
+ def preprocess(self):
+ raise NotImplementedError
+
+ def postprocess(self):
+ raise NotImplementedError
+
+ def predict(self, img):
+ raise NotImplementedError
+
+
+class Detector(BasePredictor):
+ def __init__(self, det_config, predictor_config):
+ super().__init__(predictor_config)
+ self.det_config = det_config
+ self.target_size = self.det_config["target_size"]
+ self.thresh = self.det_config["thresh"]
+
+ def preprocess(self, img):
+ resize_h, resize_w = self.target_size
+ img_shape = img.shape
+ img_scale_x = resize_w / img_shape[1]
+ img_scale_y = resize_h / img_shape[0]
+ img = cv2.resize(
+ img, None, None, fx=img_scale_x, fy=img_scale_y, interpolation=1)
+ img = normalize_image(
+ img,
+ scale=1. / 255.,
+ mean=[0.485, 0.456, 0.406],
+ std=[0.229, 0.224, 0.225],
+ order='hwc')
+ img_info = {}
+ img_info["im_shape"] = np.array(
+ img.shape[:2], dtype=np.float32)[np.newaxis, :]
+ img_info["scale_factor"] = np.array(
+ [img_scale_y, img_scale_x], dtype=np.float32)[np.newaxis, :]
+
+ img = img.transpose((2, 0, 1)).copy()
+ img_info["image"] = img[np.newaxis, :, :, :]
+ return img_info
+
+ def postprocess(self, np_boxes):
+ expect_boxes = (np_boxes[:, 1] > self.thresh) & (np_boxes[:, 0] > -1)
+ return np_boxes[expect_boxes, :]
+
+ def predict(self, img):
+ inputs = self.preprocess(img)
+ for input_name in self.input_names:
+ input_tensor = self.predictor.get_input_handle(input_name)
+ input_tensor.copy_from_cpu(inputs[input_name])
+ self.predictor.run()
+ output_tensor = self.predictor.get_output_handle(self.output_names[0])
+ np_boxes = output_tensor.copy_to_cpu()
+ # boxes_num = self.detector.get_output_handle(self.detector_output_names[1])
+ # np_boxes_num = boxes_num.copy_to_cpu()
+ box_list = self.postprocess(np_boxes)
+ return box_list
+
+class FaceDetector(object):
+ def __init__(self, args, print_info=True):
+ super().__init__()
+ if print_info:
+ print_config(args)
+
+ self.font_path = os.path.join(
+ os.path.abspath(os.path.dirname(__file__)),
+ "SourceHanSansCN-Medium.otf")
+ self.args = args
+
+ predictor_config = {
+ "use_gpu": args.use_gpu,
+ "enable_mkldnn": args.enable_mkldnn,
+ "cpu_threads": args.cpu_threads
+ }
+
+ model_file_path, params_file_path = check_model_file(
+ args.det_model)
+ det_config = {"thresh": args.det_thresh, "target_size": [640, 640]}
+ predictor_config["model_file"] = model_file_path
+ predictor_config["params_file"] = params_file_path
+ self.det_predictor = Detector(det_config, predictor_config)
+ self.color_map = ColorMap(100)
+
+ def preprocess(self, img):
+ img = img.astype(np.float32, copy=False)
+ return img
+
+ def draw(self, img, box_list, labels):
+ self.color_map.update(labels)
+ im = Image.fromarray(img)
+ draw = ImageDraw.Draw(im)
+
+ for i, dt in enumerate(box_list):
+ bbox, score = dt[2:], dt[1]
+ label = labels[i]
+ color = tuple(self.color_map[label])
+
+ xmin, ymin, xmax, ymax = bbox
+
+ font_size = max(int((xmax - xmin) // 6), 10)
+ font = ImageFont.truetype(self.font_path, font_size)
+
+ text = "{} {:.4f}".format(label, score)
+ th = sum(font.getmetrics())
+ tw = font.getsize(text)[0]
+ start_y = max(0, ymin - th)
+
+ draw.rectangle(
+ [(xmin, start_y), (xmin + tw + 1, start_y + th)], fill=color)
+ draw.text(
+ (xmin + 1, start_y),
+ text,
+ fill=(255, 255, 255),
+ font=font,
+ anchor="la")
+ draw.rectangle(
+ [(xmin, ymin), (xmax, ymax)], width=2, outline=color)
+ return np.array(im)
+
+ def predict_np_img(self, img):
+ input_img = self.preprocess(img)
+ box_list = None
+ np_feature = None
+ if hasattr(self, "det_predictor"):
+ box_list = self.det_predictor.predict(input_img)
+ return box_list, np_feature
+
+ def init_reader_writer(self, input_data):
+ if isinstance(input_data, np.ndarray):
+ self.input_reader = ImageReader(input_data)
+ if hasattr(self, "det_predictor"):
+ self.output_writer = ImageWriter(self.args.output)
+ elif isinstance(input_data, str):
+ if input_data.endswith('mp4'):
+ self.input_reader = VideoReader(input_data)
+ info = self.input_reader.get_info()
+ self.output_writer = VideoWriter(self.args.output, info)
+ else:
+ self.input_reader = ImageReader(input_data)
+ if hasattr(self, "det_predictor"):
+ self.output_writer = ImageWriter(self.args.output)
+ else:
+            raise Exception(
+                "Invalid input data. Only a path to an image or a video (.mp4), or a directory that includes images, is supported."
+            )
+
+ def predict(self, input_data, print_info=False):
+ """Predict input_data.
+
+ Args:
+            input_data (str | numpy.ndarray): The path of an image, a directory including images, or image data as a numpy.ndarray.
+            print_info (bool, optional): Whether to print the prediction results. Defaults to False.
+
+ Yields:
+ dict: {
+ "box_list": The prediction results of detection.
+ "features": The output of recognition.
+ "labels": The results of retrieval.
+ }
+ """
+ self.init_reader_writer(input_data)
+ for img, file_name in self.input_reader:
+ if img is None:
+ logging.warning(f"Error in reading img {file_name}! Ignored.")
+ continue
+            box_list, np_feature = self.predict_np_img(img)
+            labels = ["face"] * len(box_list) if box_list is not None else []
+            if box_list is not None:
+                result = self.draw(img, box_list, labels=labels)
+                self.output_writer.write(result, file_name)
+ if print_info:
+ logging.info(f"File: {file_name}, predict label(s): {labels}")
+ yield {
+ "box_list": box_list,
+ "features": np_feature,
+ "labels": labels
+ }
+ logging.info(f"Predict complete!")
+
+
+# for CLI
+def main(args=None):
+ logging.basicConfig(level=logging.INFO)
+
+    args = parser().parse_args(args)
+ predictor = FaceDetector(args)
+ res = predictor.predict(args.input, print_info=True)
+ for _ in res:
+ pass
+
+
+if __name__ == "__main__":
+ main()
diff --git a/insightface/detection/retinaface/Makefile b/insightface/detection/retinaface/Makefile
new file mode 100644
index 0000000000000000000000000000000000000000..66a3ed047a49124b921548dbc337946202fedbf7
--- /dev/null
+++ b/insightface/detection/retinaface/Makefile
@@ -0,0 +1,6 @@
+all:
+ cd rcnn/cython/; python setup.py build_ext --inplace; rm -rf build; cd ../../
+ cd rcnn/pycocotools/; python setup.py build_ext --inplace; rm -rf build; cd ../../
+clean:
+ cd rcnn/cython/; rm *.so *.c *.cpp; cd ../../
+ cd rcnn/pycocotools/; rm *.so; cd ../../
diff --git a/insightface/detection/retinaface/README.md b/insightface/detection/retinaface/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..6665d3b12fe6c5d500de9272c4ac564347c46da7
--- /dev/null
+++ b/insightface/detection/retinaface/README.md
@@ -0,0 +1,86 @@
+# RetinaFace Face Detector
+
+## Introduction
+
+RetinaFace is a practical single-stage [SOTA](http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html) face detector, initially introduced in an [arXiv technical report](https://arxiv.org/abs/1905.00641) and later accepted to [CVPR 2020](https://openaccess.thecvf.com/content_CVPR_2020/html/Deng_RetinaFace_Single-Shot_Multi-Level_Face_Localisation_in_the_Wild_CVPR_2020_paper.html).
+
+
+
+
+
+## Data
+
+1. Download our annotations (face bounding boxes & five facial landmarks) from [baidu cloud](https://pan.baidu.com/s/1Laby0EctfuJGgGMgRRgykA) or [gdrive](https://drive.google.com/file/d/1BbXxIiY-F74SumCNG6iwmJJ5K3heoemT/view?usp=sharing)
+
+2. Download the [WIDERFACE](http://shuoyang1213.me/WIDERFACE/WiderFace_Results.html) dataset.
+
+3. Organise the dataset directory under ``insightface/RetinaFace/`` as follows:
+
+```Shell
+ data/retinaface/
+ train/
+ images/
+ label.txt
+ val/
+ images/
+ label.txt
+ test/
+ images/
+ label.txt
+```
+
+## Install
+
+1. Install MXNet with GPU support.
+2. Install Deformable Convolution V2 operator from [Deformable-ConvNets](https://github.com/msracver/Deformable-ConvNets) if you use the DCN based backbone.
+3. Type ``make`` to build cxx tools.
+
+## Training
+
+Please check ``train.py`` for training.
+
+1. Copy ``rcnn/sample_config.py`` to ``rcnn/config.py``
+2. Download the ImageNet pretrained models and put them into ``model/`` (these models are used for training and parameter initialisation, not for detection testing/inference).
+
+ ImageNet ResNet50 ([baidu cloud](https://pan.baidu.com/s/1WAkU9ZA_j-OmzO-sdk9whA) and [googledrive](https://drive.google.com/file/d/1ibQOCG4eJyTrlKAJdnioQ3tyGlnbSHjy/view?usp=sharing)).
+
+ ImageNet ResNet152 ([baidu cloud](https://pan.baidu.com/s/1nzQ6CzmdKFzg8bM8ChZFQg) and [googledrive](https://drive.google.com/file/d/1FEjeiIB4u-XBYdASgkyx78pFybrlKUA4/view?usp=sharing)).
+
+3. Start training with ``CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --prefix ./model/retina --network resnet``.
+Before training, you can check the ``resnet`` network configuration (e.g. pretrained model path, anchor settings, learning rate policy, etc.) in ``rcnn/config.py``.
+4. We have two predefined network settings named ``resnet`` (for medium and large models) and ``mnet`` (for lightweight models).
+
+## Testing
+
+Please check ``test.py`` for testing.
+
+## RetinaFace Pretrained Models
+
+Pretrained Model: RetinaFace-R50 ([baidu cloud](https://pan.baidu.com/s/1C6nKq122gJxRhb37vK0_LQ) or [googledrive](https://drive.google.com/file/d/1_DKgGxQWqlTqe78pw0KavId9BIMNUWfu/view?usp=sharing)) is a medium size model with ResNet50 backbone.
+It can output face bounding boxes and five facial landmarks in a single forward pass.
+
+WiderFace validation mAP: Easy 96.5, Medium 95.6, Hard 90.4.
+
+To avoid conflict with the WiderFace Challenge (ICCV 2019), we have postponed releasing our best model.
+
+## Third-party
+
+[yangfly](https://github.com/yangfly): RetinaFace-MobileNet0.25 ([baidu cloud](https://pan.baidu.com/s/1P1ypO7VYUbNAezdvLm2m9w), extraction code: nzof).
+WiderFace validation mAP: Hard 82.5. (model size: 1.68 MB)
+
+[clancylian](https://github.com/clancylian/retinaface): C++ version
+
+RetinaFace in [modelscope](https://modelscope.cn/models/damo/cv_resnet50_face-detection_retinaface/summary)
+
+## References
+
+```
+@inproceedings{Deng2020CVPR,
+  title     = {RetinaFace: Single-Shot Multi-Level Face Localisation in the Wild},
+  author    = {Deng, Jiankang and Guo, Jia and Ververas, Evangelos and Kotsia, Irene and Zafeiriou, Stefanos},
+  booktitle = {CVPR},
+  year      = {2020}
+}
+```
+
+
diff --git a/insightface/detection/retinaface/rcnn/PY_OP/__init__.py b/insightface/detection/retinaface/rcnn/PY_OP/__init__.py
new file mode 100755
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/insightface/detection/retinaface/rcnn/PY_OP/cascade_refine.py b/insightface/detection/retinaface/rcnn/PY_OP/cascade_refine.py
new file mode 100644
index 0000000000000000000000000000000000000000..e3c6556fab6f8ea13e4a214d88a04503d7c51540
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/PY_OP/cascade_refine.py
@@ -0,0 +1,518 @@
+from __future__ import print_function
+import sys
+import mxnet as mx
+import numpy as np
+import numpy.random as npr
+import datetime
+from distutils.util import strtobool
+from ..config import config, generate_config
+from ..processing.generate_anchor import generate_anchors, anchors_plane
+from ..processing.bbox_transform import bbox_overlaps, bbox_transform, landmark_transform
+
+STAT = {0: 0}
+STEP = 28800
+DEBUG = False  # deterministic anchor subsampling when True
+
+
+class CascadeRefineOperator(mx.operator.CustomOp):
+ def __init__(self, stride=0, network='', dataset='', prefix=''):
+ super(CascadeRefineOperator, self).__init__()
+ self.stride = int(stride)
+ self.prefix = prefix
+ generate_config(network, dataset)
+ self.mode = config.TRAIN.OHEM_MODE #0 for random 10:245, 1 for 10:246, 2 for 10:30, mode 1 for default
+ stride = self.stride
+ sstride = str(stride)
+ base_size = config.RPN_ANCHOR_CFG[sstride]['BASE_SIZE']
+ allowed_border = config.RPN_ANCHOR_CFG[sstride]['ALLOWED_BORDER']
+ ratios = config.RPN_ANCHOR_CFG[sstride]['RATIOS']
+ scales = config.RPN_ANCHOR_CFG[sstride]['SCALES']
+ base_anchors = generate_anchors(base_size=base_size,
+ ratios=list(ratios),
+ scales=np.array(scales,
+ dtype=np.float32),
+ stride=stride,
+ dense_anchor=config.DENSE_ANCHOR)
+ num_anchors = base_anchors.shape[0]
+ feat_height, feat_width = config.SCALES[0][
+ 0] // self.stride, config.SCALES[0][0] // self.stride
+ feat_stride = self.stride
+
+ A = num_anchors
+ K = feat_height * feat_width
+ self.A = A
+
+ all_anchors = anchors_plane(feat_height, feat_width, feat_stride,
+ base_anchors)
+ all_anchors = all_anchors.reshape((K * A, 4))
+ self.ori_anchors = all_anchors
+ self.nbatch = 0
+ global STAT
+ for k in config.RPN_FEAT_STRIDE:
+ STAT[k] = [0, 0, 0]
+
+ def apply_bbox_pred(self, bbox_pred, ind=None):
+ box_deltas = bbox_pred
+ box_deltas[:, 0::4] = box_deltas[:, 0::4] * config.TRAIN.BBOX_STDS[0]
+ box_deltas[:, 1::4] = box_deltas[:, 1::4] * config.TRAIN.BBOX_STDS[1]
+ box_deltas[:, 2::4] = box_deltas[:, 2::4] * config.TRAIN.BBOX_STDS[2]
+ box_deltas[:, 3::4] = box_deltas[:, 3::4] * config.TRAIN.BBOX_STDS[3]
+ if ind is None:
+ boxes = self.ori_anchors
+ else:
+ boxes = self.ori_anchors[ind]
+ #print('in apply',self.stride, box_deltas.shape, boxes.shape)
+
+ widths = boxes[:, 2] - boxes[:, 0] + 1.0
+ heights = boxes[:, 3] - boxes[:, 1] + 1.0
+ ctr_x = boxes[:, 0] + 0.5 * (widths - 1.0)
+ ctr_y = boxes[:, 1] + 0.5 * (heights - 1.0)
+
+ dx = box_deltas[:, 0:1]
+ dy = box_deltas[:, 1:2]
+ dw = box_deltas[:, 2:3]
+ dh = box_deltas[:, 3:4]
+
+ pred_ctr_x = dx * widths[:, np.newaxis] + ctr_x[:, np.newaxis]
+ pred_ctr_y = dy * heights[:, np.newaxis] + ctr_y[:, np.newaxis]
+ pred_w = np.exp(dw) * widths[:, np.newaxis]
+ pred_h = np.exp(dh) * heights[:, np.newaxis]
+
+ pred_boxes = np.zeros(box_deltas.shape)
+ # x1
+ pred_boxes[:, 0:1] = pred_ctr_x - 0.5 * (pred_w - 1.0)
+ # y1
+ pred_boxes[:, 1:2] = pred_ctr_y - 0.5 * (pred_h - 1.0)
+ # x2
+ pred_boxes[:, 2:3] = pred_ctr_x + 0.5 * (pred_w - 1.0)
+ # y2
+ pred_boxes[:, 3:4] = pred_ctr_y + 0.5 * (pred_h - 1.0)
+ return pred_boxes
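``apply_bbox_pred`` is the standard R-CNN box decoding (the inverse of ``bbox_transform``). A minimal single-box sketch of the same formulas, without the ``BBOX_STDS`` rescaling:

```python
import numpy as np

def decode_bbox(anchor, delta):
    """Decode one (dx, dy, dw, dh) delta against one (x1, y1, x2, y2) anchor,
    using the legacy +1 width/height convention of this codebase."""
    w = anchor[2] - anchor[0] + 1.0
    h = anchor[3] - anchor[1] + 1.0
    cx = anchor[0] + 0.5 * (w - 1.0)
    cy = anchor[1] + 0.5 * (h - 1.0)
    dx, dy, dw, dh = delta
    pcx, pcy = dx * w + cx, dy * h + cy
    pw, ph = np.exp(dw) * w, np.exp(dh) * h
    return [pcx - 0.5 * (pw - 1.0), pcy - 0.5 * (ph - 1.0),
            pcx + 0.5 * (pw - 1.0), pcy + 0.5 * (ph - 1.0)]

# A zero delta decodes back to the anchor itself.
```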
+
+ def assign_anchor_fpn(self,
+ gt_label,
+ anchors,
+ landmark=False,
+ prefix='face'):
+ IOU = config.TRAIN.CASCADE_OVERLAP
+
+ gt_boxes = gt_label['gt_boxes']
+ #_label = gt_label['gt_label']
+ # clean up boxes
+ #nonneg = np.where(_label[:] != -1)[0]
+ #gt_boxes = gt_boxes[nonneg]
+ if landmark:
+ gt_landmarks = gt_label['gt_landmarks']
+ #gt_landmarks = gt_landmarks[nonneg]
+ assert gt_boxes.shape[0] == gt_landmarks.shape[0]
+ #scales = np.array(scales, dtype=np.float32)
+ feat_strides = config.RPN_FEAT_STRIDE
+ bbox_pred_len = 4
+ landmark_pred_len = 10
+ num_anchors = anchors.shape[0]
+ A = self.A
+ total_anchors = num_anchors
+ feat_height, feat_width = config.SCALES[0][
+ 0] // self.stride, config.SCALES[0][0] // self.stride
+
+ #print('total_anchors', anchors.shape[0], len(inds_inside), file=sys.stderr)
+
+ # label: 1 is positive, 0 is negative, -1 is dont care
+ labels = np.empty((num_anchors, ), dtype=np.float32)
+ labels.fill(-1)
+ #print('BB', anchors.shape, len(inds_inside))
+ #print('gt_boxes', gt_boxes.shape, file=sys.stderr)
+ #tb = datetime.datetime.now()
+ #self._times[0] += (tb-ta).total_seconds()
+ #ta = datetime.datetime.now()
+
+ if gt_boxes.size > 0:
+ # overlap between the anchors and the gt boxes
+ # overlaps (ex, gt)
+ overlaps = bbox_overlaps(anchors.astype(np.float),
+ gt_boxes.astype(np.float))
+ argmax_overlaps = overlaps.argmax(axis=1)
+ #print('AAA', argmax_overlaps.shape)
+ max_overlaps = overlaps[np.arange(num_anchors), argmax_overlaps]
+ gt_argmax_overlaps = overlaps.argmax(axis=0)
+ gt_max_overlaps = overlaps[gt_argmax_overlaps,
+ np.arange(overlaps.shape[1])]
+ gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0]
+
+ if not config.TRAIN.RPN_CLOBBER_POSITIVES:
+ # assign bg labels first so that positive labels can clobber them
+ labels[max_overlaps < IOU[0]] = 0
+
+ # fg label: for each gt, anchor with highest overlap
+ if config.TRAIN.RPN_FORCE_POSITIVE:
+ labels[gt_argmax_overlaps] = 1
+
+ # fg label: above threshold IoU
+ labels[max_overlaps >= IOU[1]] = 1
+
+ if config.TRAIN.RPN_CLOBBER_POSITIVES:
+ # assign bg labels last so that negative labels can clobber positives
+ labels[max_overlaps < IOU[0]] = 0
+ else:
+ labels[:] = 0
+ fg_inds = np.where(labels == 1)[0]
+ #print('fg count', len(fg_inds))
+
+ # subsample positive labels if we have too many
+ if config.TRAIN.RPN_ENABLE_OHEM == 0:
+ fg_inds = np.where(labels == 1)[0]
+ num_fg = int(config.TRAIN.RPN_FG_FRACTION *
+ config.TRAIN.RPN_BATCH_SIZE)
+ if len(fg_inds) > num_fg:
+ disable_inds = npr.choice(fg_inds,
+ size=(len(fg_inds) - num_fg),
+ replace=False)
+ if DEBUG:
+ disable_inds = fg_inds[:(len(fg_inds) - num_fg)]
+ labels[disable_inds] = -1
+
+ # subsample negative labels if we have too many
+ num_bg = config.TRAIN.RPN_BATCH_SIZE - np.sum(labels == 1)
+ bg_inds = np.where(labels == 0)[0]
+ if len(bg_inds) > num_bg:
+ disable_inds = npr.choice(bg_inds,
+ size=(len(bg_inds) - num_bg),
+ replace=False)
+ if DEBUG:
+ disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
+ labels[disable_inds] = -1
+
+ #fg_inds = np.where(labels == 1)[0]
+ #num_fg = len(fg_inds)
+ #num_bg = num_fg*int(1.0/config.TRAIN.RPN_FG_FRACTION-1)
+
+ #bg_inds = np.where(labels == 0)[0]
+ #if len(bg_inds) > num_bg:
+ # disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False)
+ # if DEBUG:
+ # disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
+ # labels[disable_inds] = -1
+ else:
+ fg_inds = np.where(labels == 1)[0]
+ num_fg = len(fg_inds)
+ bg_inds = np.where(labels == 0)[0]
+ num_bg = len(bg_inds)
+
+ #print('anchor stat', num_fg, num_bg)
+
+ bbox_targets = np.zeros((num_anchors, bbox_pred_len), dtype=np.float32)
+ if gt_boxes.size > 0:
+ #print('GT', gt_boxes.shape, gt_boxes[argmax_overlaps, :4].shape)
+ bbox_targets[:, :] = bbox_transform(anchors,
+ gt_boxes[argmax_overlaps, :])
+ #bbox_targets[:,4] = gt_blur
+ #tb = datetime.datetime.now()
+ #self._times[1] += (tb-ta).total_seconds()
+ #ta = datetime.datetime.now()
+
+ bbox_weights = np.zeros((num_anchors, bbox_pred_len), dtype=np.float32)
+ #bbox_weights[labels == 1, :] = np.array(config.TRAIN.RPN_BBOX_WEIGHTS)
+ bbox_weights[labels == 1, 0:4] = 1.0
+ if bbox_pred_len > 4:
+ bbox_weights[labels == 1, 4:bbox_pred_len] = 0.1
+
+ if landmark:
+ landmark_targets = np.zeros((num_anchors, landmark_pred_len),
+ dtype=np.float32)
+ landmark_weights = np.zeros((num_anchors, landmark_pred_len),
+ dtype=np.float32)
+ #landmark_weights[labels == 1, :] = np.array(config.TRAIN.RPN_LANDMARK_WEIGHTS)
+ if landmark_pred_len == 10:
+ landmark_weights[labels == 1, :] = 1.0
+ elif landmark_pred_len == 15:
+ v = [1.0, 1.0, 0.1] * 5
+ assert len(v) == 15
+ landmark_weights[labels == 1, :] = np.array(v)
+ else:
+ assert False
+ #TODO here
+ if gt_landmarks.size > 0:
+ #print('AAA',argmax_overlaps)
+ a_landmarks = gt_landmarks[argmax_overlaps, :, :]
+ landmark_targets[:] = landmark_transform(anchors, a_landmarks)
+ invalid = np.where(a_landmarks[:, 0, 2] < 0.0)[0]
+ #assert len(invalid)==0
+ #landmark_weights[invalid, :] = np.array(config.TRAIN.RPN_INVALID_LANDMARK_WEIGHTS)
+ landmark_weights[invalid, :] = 0.0
+ #tb = datetime.datetime.now()
+ #self._times[2] += (tb-ta).total_seconds()
+ #ta = datetime.datetime.now()
+ bbox_targets[:,
+ 0::4] = bbox_targets[:, 0::4] / config.TRAIN.BBOX_STDS[0]
+ bbox_targets[:,
+ 1::4] = bbox_targets[:, 1::4] / config.TRAIN.BBOX_STDS[1]
+ bbox_targets[:,
+ 2::4] = bbox_targets[:, 2::4] / config.TRAIN.BBOX_STDS[2]
+ bbox_targets[:,
+ 3::4] = bbox_targets[:, 3::4] / config.TRAIN.BBOX_STDS[3]
+
+ #print('CC', anchors.shape, len(inds_inside))
+ label = {}
+ _label = labels.reshape(
+ (1, feat_height, feat_width, A)).transpose(0, 3, 1, 2)
+ _label = _label.reshape((1, A * feat_height * feat_width))
+ bbox_target = bbox_targets.reshape(
+ (1, feat_height * feat_width,
+ A * bbox_pred_len)).transpose(0, 2, 1)
+ bbox_weight = bbox_weights.reshape(
+ (1, feat_height * feat_width, A * bbox_pred_len)).transpose(
+ (0, 2, 1))
+ label['%s_label' % prefix] = _label[0]
+ label['%s_bbox_target' % prefix] = bbox_target[0]
+ label['%s_bbox_weight' % prefix] = bbox_weight[0]
+ if landmark:
+            landmark_target = landmark_targets.reshape(
+                (1, feat_height * feat_width,
+                 A * landmark_pred_len)).transpose(0, 2, 1)
+            landmark_target /= config.TRAIN.LANDMARK_STD
+            landmark_weight = landmark_weights.reshape(
+                (1, feat_height * feat_width,
+                 A * landmark_pred_len)).transpose((0, 2, 1))
+ label['%s_landmark_target' % prefix] = landmark_target[0]
+ label['%s_landmark_weight' % prefix] = landmark_weight[0]
+
+ return label
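``bbox_overlaps`` above comes from the compiled rcnn utilities; the per-pair IoU it computes (with the same legacy +1 pixel convention) is equivalent to this standalone sketch:

```python
def iou(a, b):
    # a, b: (x1, y1, x2, y2) boxes with the legacy +1 width/height convention.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw = max(ix2 - ix1 + 1.0, 0.0)
    ih = max(iy2 - iy1 + 1.0, 0.0)
    inter = iw * ih
    area = lambda r: (r[2] - r[0] + 1.0) * (r[3] - r[1] + 1.0)
    return inter / (area(a) + area(b) - inter)
```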
+
+ def forward(self, is_train, req, in_data, out_data, aux):
+ self.nbatch += 1
+ ta = datetime.datetime.now()
+ global STAT
+ A = config.NUM_ANCHORS
+
+ cls_label_t0 = in_data[0].asnumpy() #BS, AHW
+ cls_score_t0 = in_data[1].asnumpy() #BS, C, AHW
+ cls_score = in_data[2].asnumpy() #BS, C, AHW
+ #labels_raw = in_data[1].asnumpy() #BS, ANCHORS
+ bbox_pred_t0 = in_data[3].asnumpy() #BS, AC, HW
+ bbox_target_t0 = in_data[4].asnumpy() #BS, AC, HW
+ cls_label_raw = in_data[5].asnumpy() #BS, AHW
+ gt_boxes = in_data[6].asnumpy() #BS, N, C=4+1
+ #imgs = in_data[7].asnumpy().astype(np.uint8)
+
+ batch_size = cls_score.shape[0]
+ num_anchors = cls_score.shape[2]
+ #print('in cas', cls_score.shape, bbox_target.shape)
+
+ labels_out = np.zeros(shape=(batch_size, num_anchors),
+ dtype=np.float32)
+ bbox_target_out = np.zeros(shape=bbox_target_t0.shape,
+ dtype=np.float32)
+ anchor_weight = np.zeros((batch_size, num_anchors, 1),
+ dtype=np.float32)
+ valid_count = np.zeros((batch_size, 1), dtype=np.float32)
+
+ bbox_pred_t0 = bbox_pred_t0.transpose((0, 2, 1))
+ bbox_pred_t0 = bbox_pred_t0.reshape(
+ (bbox_pred_t0.shape[0], -1, 4)) #BS, H*W*A, C
+ bbox_target_t0 = bbox_target_t0.transpose((0, 2, 1))
+ bbox_target_t0 = bbox_target_t0.reshape(
+ (bbox_target_t0.shape[0], -1, 4))
+
+ #print('anchor_weight', anchor_weight.shape)
+
+ #assert labels.shape[0]==1
+ #assert cls_score.shape[0]==1
+ #assert bbox_weight.shape[0]==1
+ #print('shape', cls_score.shape, labels.shape, file=sys.stderr)
+ #print('bbox_weight 0', bbox_weight.shape, file=sys.stderr)
+ #bbox_weight = np.zeros( (labels_raw.shape[0], labels_raw.shape[1], 4), dtype=np.float32)
+ _stat = [0, 0, 0]
+ SEL_TOPK = config.TRAIN.RPN_BATCH_SIZE
+ FAST = False
+ for ibatch in range(batch_size):
+ #bgr = imgs[ibatch].transpose( (1,2,0) )[:,:,::-1]
+
+ if not FAST:
+ _gt_boxes = gt_boxes[ibatch] #N, 4+1
+ _gtind = len(np.where(_gt_boxes[:, 4] >= 0)[0])
+ #print('gt num', _gtind)
+ _gt_boxes = _gt_boxes[0:_gtind, :]
+
+ #anchors_t1 = self.ori_anchors.copy()
+ #_cls_label_raw = cls_label_raw[ibatch] #AHW
+ #_cls_label_raw = _cls_label_raw.reshape( (A, -1) ).transpose( (1,0) ).reshape( (-1,) ) #HWA
+ #fg_ind_raw = np.where(_cls_label_raw>0)[0]
+ #_bbox_target_t0 = bbox_target_t0[ibatch][fg_ind_raw]
+ #_bbox_pred_t0 = bbox_pred_t0[ibatch][fg_ind_raw]
+ #anchors_t1_pos = self.apply_bbox_pred(_bbox_pred_t0, ind=fg_ind_raw)
+ #anchors_t1[fg_ind_raw,:] = anchors_t1_pos
+
+ anchors_t1 = self.apply_bbox_pred(bbox_pred_t0[ibatch])
+ assert anchors_t1.shape[0] == self.ori_anchors.shape[0]
+
+ #for i in range(_gt_boxes.shape[0]):
+ # box = _gt_boxes[i].astype(np.int)
+ # print('%d: gt%d'%(self.nbatch, i), box)
+ # #color = (0,0,255)
+ # #cv2.rectangle(img, (box[0], box[1]), (box[2], box[3]), color, 2)
+ #for i in range(anchors_t1.shape[0]):
+ # box1 = self.ori_anchors[i].astype(np.int)
+ # box2 = anchors_t1[i].astype(np.int)
+ # print('%d %d: anchorscompare %d'%(self.nbatch, self.stride, i), box1, box2)
+ #color = (255,255,0)
+ #cv2.rectangle(img, (box[0], box[1]), (box[2], box[3]), color, 2)
+ #filename = "./debug/%d_%d_%d.jpg"%(self.nbatch, ibatch, stride)
+ #cv2.imwrite(filename, img)
+ #print(filename)
+ #gt_label = {'gt_boxes': gt_anchors, 'gt_label' : labels_raw[ibatch]}
+ gt_label = {'gt_boxes': _gt_boxes}
+ new_label_dict = self.assign_anchor_fpn(gt_label,
+ anchors_t1,
+ False,
+ prefix=self.prefix)
+ labels = new_label_dict['%s_label' % self.prefix] #AHW
+ new_bbox_target = new_label_dict['%s_bbox_target' %
+ self.prefix] #AC,HW
+ #print('assign ret', labels.shape, new_bbox_target.shape)
+ _anchor_weight = np.zeros((num_anchors, 1), dtype=np.float32)
+ fg_score = cls_score[ibatch, 1, :] - cls_score[ibatch, 0, :]
+ fg_inds = np.where(labels > 0)[0]
+ num_fg = int(config.TRAIN.RPN_FG_FRACTION *
+ config.TRAIN.RPN_BATCH_SIZE)
+ origin_num_fg = len(fg_inds)
+ #continue
+ #print('cas fg', len(fg_inds), num_fg, file=sys.stderr)
+ if len(fg_inds) > num_fg:
+ if self.mode == 0:
+ disable_inds = np.random.choice(fg_inds,
+ size=(len(fg_inds) -
+ num_fg),
+ replace=False)
+ labels[disable_inds] = -1
+ else:
+ pos_ohem_scores = fg_score[fg_inds]
+ order_pos_ohem_scores = pos_ohem_scores.ravel(
+ ).argsort()
+ sampled_inds = fg_inds[order_pos_ohem_scores[:num_fg]]
+ labels[fg_inds] = -1
+ labels[sampled_inds] = 1
+
+ n_fg = np.sum(labels > 0)
+ fg_inds = np.where(labels > 0)[0]
+ num_bg = config.TRAIN.RPN_BATCH_SIZE - n_fg
+ if self.mode == 2:
+ num_bg = max(
+ 48, n_fg * int(1.0 / config.TRAIN.RPN_FG_FRACTION - 1))
+
+ bg_inds = np.where(labels == 0)[0]
+ origin_num_bg = len(bg_inds)
+ if num_bg == 0:
+ labels[bg_inds] = -1
+ elif len(bg_inds) > num_bg:
+ # sort ohem scores
+
+ if self.mode == 0:
+ disable_inds = np.random.choice(bg_inds,
+ size=(len(bg_inds) -
+ num_bg),
+ replace=False)
+ labels[disable_inds] = -1
+ else:
+ neg_ohem_scores = fg_score[bg_inds]
+ order_neg_ohem_scores = neg_ohem_scores.ravel(
+ ).argsort()[::-1]
+ sampled_inds = bg_inds[order_neg_ohem_scores[:num_bg]]
+ #print('sampled_inds_bg', sampled_inds, file=sys.stderr)
+ labels[bg_inds] = -1
+ labels[sampled_inds] = 0
+
+ if n_fg > 0:
+ order0_labels = labels.reshape((1, A, -1)).transpose(
+ (0, 2, 1)).reshape((-1, ))
+ bbox_fg_inds = np.where(order0_labels > 0)[0]
+ #print('bbox_fg_inds, order0 ', bbox_fg_inds, file=sys.stderr)
+ _anchor_weight[bbox_fg_inds, :] = 1.0
+ anchor_weight[ibatch] = _anchor_weight
+ valid_count[ibatch][0] = n_fg
+ labels_out[ibatch] = labels
+ #print('labels_out', self.stride, ibatch, labels)
+ bbox_target_out[ibatch] = new_bbox_target
+ #print('cascade stat', self.stride, ibatch, len(labels), len(np.where(labels==1)[0]), len(np.where(labels==0)[0]))
+ else: #FAST MODE
+ fg_score_t0 = cls_score_t0[ibatch, 1, :] - cls_score_t0[ibatch,
+ 0, :]
+ sort_idx_t0 = np.argsort(
+ fg_score_t0.flatten())[::-1][0:SEL_TOPK]
+ _bbox_pred_t0 = bbox_pred_t0[ibatch][sort_idx_t0]
+ _bbox_target_t0 = bbox_target_t0[ibatch][sort_idx_t0]
+ #print('SEL fg score:', fg_score_t0[sort_idx[-1]], fg_score_t0[sort_idx[0]])
+ anchors_t0 = self.apply_bbox_pred(_bbox_pred_t0)
+ gt_anchors = self.apply_bbox_pred(_bbox_target_t0)
+ #gt_label = {'gt_boxes': gt_anchors, 'gt_label' : labels_raw[ibatch]}
+ gt_label = {'gt_boxes': gt_anchors}
+ new_label_dict = self.assign_anchor_fpn(gt_label,
+ anchors_t0,
+ False,
+ prefix=self.prefix)
+ labels = new_label_dict['%s_label' % self.prefix]
+ new_bbox_target = new_label_dict['%s_bbox_target' %
+ self.prefix]
+ #print('assign ret', labels.shape, new_bbox_target.shape)
+ _anchor_weight = np.zeros((num_anchors, 1), dtype=np.float32)
+ fg_score = cls_score[ibatch, 1, :] - cls_score[ibatch, 0, :]
+ fg_inds = np.where(labels > 0)[0]
+ _labels = np.empty(shape=labels.shape, dtype=np.float32)
+ _labels.fill(-1)
+ _labels[sort_idx_t0] = labels  # map selected labels back to the original anchor indices
+
+ anchor_weight[ibatch] = _anchor_weight
+ valid_count[ibatch][0] = len(fg_inds)
+ labels_out[ibatch] = _labels
+ #print('labels_out', self.stride, ibatch, labels)
+ bbox_target_out[ibatch] = new_bbox_target
+
+ #print('cascade pos stat', self.stride, batch_size, np.sum(valid_count))
+ for ind, val in enumerate(
+ [labels_out, bbox_target_out, anchor_weight, valid_count]):
+ val = mx.nd.array(val)
+ self.assign(out_data[ind], req[ind], val)
+ tb = datetime.datetime.now()
+ #print('cascade forward cost', self.stride, (tb-ta).total_seconds())
+
+ def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
+ for i in range(len(in_grad)):
+ self.assign(in_grad[i], req[i], 0)
+
+
+@mx.operator.register('cascade_refine')
+class CascadeRefineProp(mx.operator.CustomOpProp):
+ def __init__(self, stride=0, network='', dataset='', prefix=''):
+ super(CascadeRefineProp, self).__init__(need_top_grad=False)
+ self.stride = stride
+ self.network = network
+ self.dataset = dataset
+ self.prefix = prefix
+
+ def list_arguments(self):
+ #return ['cls_label_t0', 'cls_pred_t0', 'cls_pred', 'bbox_pred_t0', 'bbox_label_t0', 'cls_label_raw', 'cas_gt_boxes', 'cas_img']
+ return [
+ 'cls_label_t0', 'cls_pred_t0', 'cls_pred', 'bbox_pred_t0',
+ 'bbox_label_t0', 'cls_label_raw', 'cas_gt_boxes'
+ ]
+
+ def list_outputs(self):
+ return [
+ 'cls_label_out', 'bbox_label_out', 'anchor_weight_out',
+ 'pos_count_out'
+ ]
+
+ def infer_shape(self, in_shape):
+ cls_pred_shape = in_shape[1]
+ bs = cls_pred_shape[0]
+ num_anchors = cls_pred_shape[2]
+ #print('in_rpn_ohem', in_shape[0], in_shape[1], in_shape[2], file=sys.stderr)
+ #print('in_rpn_ohem', labels_shape, anchor_weight_shape)
+ cls_label_shape = [bs, num_anchors]
+
+ return in_shape, \
+ [cls_label_shape, in_shape[4], [bs,num_anchors,1], [bs,1]]
+
+ def create_operator(self, ctx, shapes, dtypes):
+ return CascadeRefineOperator(self.stride, self.network, self.dataset,
+ self.prefix)
+
+ def declare_backward_dependency(self, out_grad, in_data, out_data):
+ return []
diff --git a/insightface/detection/retinaface/rcnn/PY_OP/rpn_fpn_ohem3.py b/insightface/detection/retinaface/rcnn/PY_OP/rpn_fpn_ohem3.py
new file mode 100644
index 0000000000000000000000000000000000000000..b8f7d462ec9aec245852338b392fb4d8afd3311c
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/PY_OP/rpn_fpn_ohem3.py
@@ -0,0 +1,175 @@
+from __future__ import print_function
+import sys
+import mxnet as mx
+import numpy as np
+from distutils.util import strtobool
+from ..config import config, generate_config
+
+STAT = {0: 0}
+STEP = 28800
+
+
+class RPNFPNOHEM3Operator(mx.operator.CustomOp):
+ def __init__(self, stride=0, network='', dataset='', prefix=''):
+ super(RPNFPNOHEM3Operator, self).__init__()
+ self.stride = int(stride)
+ self.prefix = prefix
+ generate_config(network, dataset)
+ self.mode = config.TRAIN.OHEM_MODE  # 0: random sampling (fg:bg ~10:245), 1: score-based OHEM (~10:246, default), 2: OHEM with a fixed fg:bg ratio (~10:30)
+ global STAT
+ for k in config.RPN_FEAT_STRIDE:
+ STAT[k] = [0, 0, 0]
+
+ def forward(self, is_train, req, in_data, out_data, aux):
+ global STAT
+
+ cls_score = in_data[0].asnumpy() #BS, 2, ANCHORS
+ labels_raw = in_data[1].asnumpy() # BS, ANCHORS
+
+ A = config.NUM_ANCHORS
+ anchor_weight = np.zeros((labels_raw.shape[0], labels_raw.shape[1], 1),
+ dtype=np.float32)
+ valid_count = np.zeros((labels_raw.shape[0], 1), dtype=np.float32)
+ #print('anchor_weight', anchor_weight.shape)
+
+ #assert labels.shape[0]==1
+ #assert cls_score.shape[0]==1
+ #assert bbox_weight.shape[0]==1
+ #print('shape', cls_score.shape, labels.shape, file=sys.stderr)
+ #print('bbox_weight 0', bbox_weight.shape, file=sys.stderr)
+ #bbox_weight = np.zeros( (labels_raw.shape[0], labels_raw.shape[1], 4), dtype=np.float32)
+ _stat = [0, 0, 0]
+ for ibatch in range(labels_raw.shape[0]):
+ _anchor_weight = np.zeros((labels_raw.shape[1], 1),
+ dtype=np.float32)
+ labels = labels_raw[ibatch]
+ fg_score = cls_score[ibatch, 1, :] - cls_score[ibatch, 0, :]
+
+ fg_inds = np.where(labels > 0)[0]
+ num_fg = int(config.TRAIN.RPN_FG_FRACTION *
+ config.TRAIN.RPN_BATCH_SIZE)
+ origin_num_fg = len(fg_inds)
+ #print(len(fg_inds), num_fg, file=sys.stderr)
+ if len(fg_inds) > num_fg:
+ if self.mode == 0:
+ disable_inds = np.random.choice(fg_inds,
+ size=(len(fg_inds) -
+ num_fg),
+ replace=False)
+ labels[disable_inds] = -1
+ else:
+ pos_ohem_scores = fg_score[fg_inds]
+ order_pos_ohem_scores = pos_ohem_scores.ravel().argsort()
+ sampled_inds = fg_inds[order_pos_ohem_scores[:num_fg]]
+ labels[fg_inds] = -1
+ labels[sampled_inds] = 1
+
+ n_fg = np.sum(labels > 0)
+ fg_inds = np.where(labels > 0)[0]
+ num_bg = config.TRAIN.RPN_BATCH_SIZE - n_fg
+ if self.mode == 2:
+ num_bg = max(
+ 48, n_fg * int(1.0 / config.TRAIN.RPN_FG_FRACTION - 1))
+
+ bg_inds = np.where(labels == 0)[0]
+ origin_num_bg = len(bg_inds)
+ if num_bg == 0:
+ labels[bg_inds] = -1
+ elif len(bg_inds) > num_bg:
+ # sort ohem scores
+
+ if self.mode == 0:
+ disable_inds = np.random.choice(bg_inds,
+ size=(len(bg_inds) -
+ num_bg),
+ replace=False)
+ labels[disable_inds] = -1
+ else:
+ neg_ohem_scores = fg_score[bg_inds]
+ order_neg_ohem_scores = neg_ohem_scores.ravel().argsort(
+ )[::-1]
+ sampled_inds = bg_inds[order_neg_ohem_scores[:num_bg]]
+ #print('sampled_inds_bg', sampled_inds, file=sys.stderr)
+ labels[bg_inds] = -1
+ labels[sampled_inds] = 0
+
+ if n_fg > 0:
+ order0_labels = labels.reshape((1, A, -1)).transpose(
+ (0, 2, 1)).reshape((-1, ))
+ bbox_fg_inds = np.where(order0_labels > 0)[0]
+ #print('bbox_fg_inds, order0 ', bbox_fg_inds, file=sys.stderr)
+ _anchor_weight[bbox_fg_inds, :] = 1.0
+ anchor_weight[ibatch] = _anchor_weight
+ valid_count[ibatch][0] = n_fg
+
+ #if self.prefix=='face':
+ # #print('fg-bg', self.stride, n_fg, num_bg)
+ # STAT[0]+=1
+ # STAT[self.stride][0] += config.TRAIN.RPN_BATCH_SIZE
+ # STAT[self.stride][1] += n_fg
+ # STAT[self.stride][2] += np.sum(fg_score[fg_inds]>=0)
+ # #_stat[0] += config.TRAIN.RPN_BATCH_SIZE
+ # #_stat[1] += n_fg
+ # #_stat[2] += np.sum(fg_score[fg_inds]>=0)
+ # #print('stride num_fg', self.stride, n_fg, file=sys.stderr)
+ # #ACC[self.stride] += np.sum(fg_score[fg_inds]>=0)
+ # #x = float(labels_raw.shape[0]*len(config.RPN_FEAT_STRIDE))
+ # x = 1.0
+ # if STAT[0]%STEP==0:
+ # _str = ['STAT']
+ # STAT[0] = 0
+ # for k in config.RPN_FEAT_STRIDE:
+ # acc = float(STAT[k][2])/STAT[k][1]
+ # acc0 = float(STAT[k][1])/STAT[k][0]
+ # #_str.append("%d: all-fg(%d, %d, %.4f), fg-fgcorrect(%d, %d, %.4f)"%(k,STAT[k][0], STAT[k][1], acc0, STAT[k][1], STAT[k][2], acc))
+ # _str.append("%d: (%d, %d, %.4f)"%(k, STAT[k][1], STAT[k][2], acc))
+ # STAT[k] = [0,0,0]
+ # _str = ' | '.join(_str)
+ # print(_str, file=sys.stderr)
+ #if self.stride==4 and num_fg>0:
+ # print('_stat_', self.stride, num_fg, num_bg, file=sys.stderr)
+
+ #labels_ohem = mx.nd.array(labels_raw)
+ #anchor_weight = mx.nd.array(anchor_weight)
+ #print('valid_count', self.stride, np.sum(valid_count))
+ #print('_stat', _stat, valid_count)
+
+ for ind, val in enumerate([labels_raw, anchor_weight, valid_count]):
+ val = mx.nd.array(val)
+ self.assign(out_data[ind], req[ind], val)
+
+ def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
+ for i in range(len(in_grad)):
+ self.assign(in_grad[i], req[i], 0)
+
+
+@mx.operator.register('rpn_fpn_ohem3')
+class RPNFPNOHEM3Prop(mx.operator.CustomOpProp):
+ def __init__(self, stride=0, network='', dataset='', prefix=''):
+ super(RPNFPNOHEM3Prop, self).__init__(need_top_grad=False)
+ self.stride = stride
+ self.network = network
+ self.dataset = dataset
+ self.prefix = prefix
+
+ def list_arguments(self):
+ return ['cls_score', 'labels']
+
+ def list_outputs(self):
+ return ['labels_ohem', 'anchor_weight', 'valid_count']
+
+ def infer_shape(self, in_shape):
+ labels_shape = in_shape[1]
+ #print('in_rpn_ohem', in_shape[0], in_shape[1], in_shape[2], file=sys.stderr)
+ anchor_weight_shape = [labels_shape[0], labels_shape[1], 1]
+ #print('in_rpn_ohem', labels_shape, anchor_weight_shape)
+
+ return in_shape, \
+ [labels_shape, anchor_weight_shape, [labels_shape[0], 1]]
+
+ def create_operator(self, ctx, shapes, dtypes):
+ return RPNFPNOHEM3Operator(self.stride, self.network, self.dataset,
+ self.prefix)
+
+ def declare_backward_dependency(self, out_grad, in_data, out_data):
+ return []
diff --git a/insightface/detection/retinaface/rcnn/__init__.py b/insightface/detection/retinaface/rcnn/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/insightface/detection/retinaface/rcnn/core/__init__.py b/insightface/detection/retinaface/rcnn/core/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/insightface/detection/retinaface/rcnn/core/callback.py b/insightface/detection/retinaface/rcnn/core/callback.py
new file mode 100644
index 0000000000000000000000000000000000000000..729a3920ed9f08a79db315b7823a9c8213b4a72d
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/core/callback.py
@@ -0,0 +1,16 @@
+import mxnet as mx
+
+
+def do_checkpoint(prefix, means, stds):
+ def _callback(iter_no, sym, arg, aux):
+ if 'bbox_pred_weight' in arg:
+ arg['bbox_pred_weight_test'] = (arg['bbox_pred_weight'].T *
+ mx.nd.array(stds)).T
+ arg['bbox_pred_bias_test'] = arg['bbox_pred_bias'] * mx.nd.array(
+ stds) + mx.nd.array(means)
+ mx.model.save_checkpoint(prefix, iter_no + 1, sym, arg, aux)
+ if 'bbox_pred_weight' in arg:
+ arg.pop('bbox_pred_weight_test')
+ arg.pop('bbox_pred_bias_test')
+
+ return _callback
diff --git a/insightface/detection/retinaface/rcnn/core/loader.py b/insightface/detection/retinaface/rcnn/core/loader.py
new file mode 100644
index 0000000000000000000000000000000000000000..1d34d2eb3149333dcbbbd7058b5a52e8878c6450
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/core/loader.py
@@ -0,0 +1,549 @@
+from __future__ import print_function
+import sys
+import mxnet as mx
+import numpy as np
+import random
+import datetime
+import multiprocessing
+import cv2
+from mxnet.executor_manager import _split_input_slice
+
+from rcnn.config import config
+from rcnn.io.image import tensor_vstack
+from rcnn.io.rpn import get_rpn_testbatch, get_rpn_batch, assign_anchor_fpn, get_crop_batch, AA
+
+
+class CropLoader(mx.io.DataIter):
+ def __init__(self,
+ feat_sym,
+ roidb,
+ batch_size=1,
+ shuffle=False,
+ ctx=None,
+ work_load_list=None,
+ aspect_grouping=False):
+ """
+ This iterator provides ROI data to the Fast R-CNN network.
+ :param feat_sym: symbol used to infer the shape of assign_output
+ :param roidb: must be preprocessed
+ :param batch_size: must evenly divide BATCH_SIZE (128)
+ :param shuffle: bool
+ :param ctx: list of contexts
+ :param work_load_list: list of work loads
+ :param aspect_grouping: group images with similar aspect ratios
+ :return: CropLoader
+ """
+ super(CropLoader, self).__init__()
+
+ # save parameters as properties
+ self.feat_sym = feat_sym
+ self.roidb = roidb
+ self.batch_size = batch_size
+ self.shuffle = shuffle
+ self.ctx = ctx
+ if self.ctx is None:
+ self.ctx = [mx.cpu()]
+ self.work_load_list = work_load_list
+ #self.feat_stride = feat_stride
+ #self.anchor_scales = anchor_scales
+ #self.anchor_ratios = anchor_ratios
+ #self.allowed_border = allowed_border
+ self.aspect_grouping = aspect_grouping
+ self.feat_stride = config.RPN_FEAT_STRIDE
+
+ # infer properties from roidb
+ self.size = len(roidb)
+ self.index = np.arange(self.size)
+
+ # decide data and label names
+ #self.data_name = ['data']
+ #self.label_name = []
+ #self.label_name.append('label')
+ #self.label_name.append('bbox_target')
+ #self.label_name.append('bbox_weight')
+
+ self.data_name = ['data']
+ #self.label_name = ['label', 'bbox_target', 'bbox_weight']
+ self.label_name = []
+ prefixes = ['face']
+ if config.HEAD_BOX:
+ prefixes.append('head')
+ names = []
+ for prefix in prefixes:
+ names += [
+ prefix + '_label', prefix + '_bbox_target',
+ prefix + '_bbox_weight'
+ ]
+ if prefix == 'face' and config.FACE_LANDMARK:
+ names += [
+ prefix + '_landmark_target', prefix + '_landmark_weight'
+ ]
+ #names = ['label', 'bbox_weight']
+ for stride in self.feat_stride:
+ for n in names:
+ k = "%s_stride%d" % (n, stride)
+ self.label_name.append(k)
+ if config.CASCADE > 0:
+ self.label_name.append('gt_boxes')
+
+ # status variable for synchronization between get_data and get_label
+ self.cur = 0
+ self.batch = None
+ self.data = None
+ self.label = None
+ # infer shape
+ feat_shape_list = []
+ _data_shape = [('data', (1, 3, max([v[1] for v in config.SCALES]),
+ max([v[1] for v in config.SCALES])))]
+ _data_shape = dict(_data_shape)
+ for i in range(len(self.feat_stride)):
+ _, feat_shape, _ = self.feat_sym[i].infer_shape(**_data_shape)
+ feat_shape = [int(i) for i in feat_shape[0]]
+ feat_shape_list.append(feat_shape)
+ self.aa = AA(feat_shape_list)
+
+ self._debug = False
+ self._debug_id = 0
+ self._times = [0.0, 0.0, 0.0, 0.0]
+
+ # get first batch to fill in provide_data and provide_label
+ self.reset()
+ self.get_batch()
+
+ @property
+ def provide_data(self):
+ return [(k, v.shape) for k, v in zip(self.data_name, self.data)]
+
+ @property
+ def provide_label(self):
+ return [(k, v.shape) for k, v in zip(self.label_name, self.label)]
+
+ def reset(self):
+ self.cur = 0
+ if self.shuffle:
+ np.random.shuffle(self.index)
+
+ def iter_next(self):
+ return self.cur + self.batch_size <= self.size
+
+ def next(self):
+ if self.iter_next():
+ self.get_batch()
+ self.cur += self.batch_size
+ return mx.io.DataBatch(data=self.data,
+ label=self.label,
+ pad=self.getpad(),
+ index=self.getindex(),
+ provide_data=self.provide_data,
+ provide_label=self.provide_label)
+ else:
+ raise StopIteration
+
+ def getindex(self):
+ return self.cur // self.batch_size
+
+ def getpad(self):
+ if self.cur + self.batch_size > self.size:
+ return self.cur + self.batch_size - self.size
+ else:
+ return 0
+
+ def infer_shape(self, max_data_shape=None, max_label_shape=None):
+ """ Return maximum data and label shape for single gpu """
+ if max_data_shape is None:
+ max_data_shape = []
+ if max_label_shape is None:
+ max_label_shape = []
+ max_shapes = dict(max_data_shape + max_label_shape)
+ input_batch_size = max_shapes['data'][0]
+ dummy_boxes = np.zeros((0, 5))
+ dummy_info = [[max_shapes['data'][2], max_shapes['data'][3], 1.0]]
+ dummy_label = {'gt_boxes': dummy_boxes}
+ dummy_blur = np.zeros((0, ))
+ dummy_label['gt_blur'] = dummy_blur
+
+ label_dict = {}
+ if config.HEAD_BOX:
+ head_label_dict = self.aa.assign_anchor_fpn(dummy_label,
+ dummy_info,
+ False,
+ prefix='head')
+ label_dict.update(head_label_dict)
+
+ if config.FACE_LANDMARK:
+ dummy_landmarks = np.zeros((0, 5, 3))
+ dummy_label['gt_landmarks'] = dummy_landmarks
+ face_label_dict = self.aa.assign_anchor_fpn(dummy_label,
+ dummy_info,
+ config.FACE_LANDMARK,
+ prefix='face')
+ label_dict.update(face_label_dict)
+ if config.CASCADE > 0:
+ label_dict['gt_boxes'] = np.zeros(
+ (0, config.TRAIN.MAX_BBOX_PER_IMAGE, 5), dtype=np.float32)
+
+ label_list = []
+ for k in self.label_name:
+ label_list.append(label_dict[k])
+ label_shape = [(k, tuple([input_batch_size] + list(v.shape[1:])))
+ for k, v in zip(self.label_name, label_list)]
+ return max_data_shape, label_shape
+
+ def get_batch(self):
+ # slice roidb
+ cur_from = self.cur
+ cur_to = min(cur_from + self.batch_size, self.size)
+ assert cur_to == cur_from + self.batch_size
+ roidb = [self.roidb[self.index[i]] for i in range(cur_from, cur_to)]
+
+ # decide multi device slice
+ work_load_list = self.work_load_list
+ ctx = self.ctx
+ if work_load_list is None:
+ work_load_list = [1] * len(ctx)
+ assert isinstance(work_load_list, list) and len(work_load_list) == len(ctx), \
+ "Invalid settings for work load. "
+ slices = _split_input_slice(self.batch_size, work_load_list)
+
+ # get testing data for multigpu
+ data_list = []
+ label_list = []
+ for islice in slices:
+ iroidb = [roidb[i] for i in range(islice.start, islice.stop)]
+ data, label = get_crop_batch(iroidb)
+ data_list += data
+ label_list += label
+ #data_list.append(data)
+ #label_list.append(label)
+
+ # pad data first and then assign anchor (read label)
+ #data_tensor = tensor_vstack([batch['data'] for batch in data_list])
+ #for i_card in range(len(data_list)):
+ # data_list[i_card]['data'] = data_tensor[
+ # i_card * config.TRAIN.BATCH_IMAGES:(1 + i_card) * config.TRAIN.BATCH_IMAGES]
+
+ #iiddxx = 0
+ select_stride = 0
+ if config.RANDOM_FEAT_STRIDE:
+ select_stride = random.choice(config.RPN_FEAT_STRIDE)
+
+ for data, label in zip(data_list, label_list):
+ data_shape = {k: v.shape for k, v in data.items()}
+ del data_shape['im_info']
+ feat_shape_list = []
+ for s in range(len(self.feat_stride)):
+ _, feat_shape, _ = self.feat_sym[s].infer_shape(**data_shape)
+ feat_shape = [int(i) for i in feat_shape[0]]
+ feat_shape_list.append(feat_shape)
+ im_info = data['im_info']
+ gt_boxes = label['gt_boxes']
+ gt_label = {'gt_boxes': gt_boxes}
+ if config.USE_BLUR:
+ gt_blur = label['gt_blur']
+ gt_label['gt_blur'] = gt_blur
+ if self._debug:
+ img = data['data'].copy()[0].transpose(
+ (1, 2, 0))[:, :, ::-1].copy()
+ print('DEBUG SHAPE', data['data'].shape,
+ label['gt_boxes'].shape)
+
+ box = label['gt_boxes'].copy()[0][0:4].astype(np.int32)
+ cv2.rectangle(img, (box[0], box[1]), (box[2], box[3]),
+ (0, 255, 0), 2)
+ filename = './debugout/%d.png' % (self._debug_id)
+ print('debug write', filename)
+ cv2.imwrite(filename, img)
+ self._debug_id += 1
+ #print('DEBUG', img.shape, bbox.shape)
+ label_dict = {}
+ if config.HEAD_BOX:
+ head_label_dict = self.aa.assign_anchor_fpn(
+ gt_label,
+ im_info,
+ False,
+ prefix='head',
+ select_stride=select_stride)
+ label_dict.update(head_label_dict)
+ if config.FACE_LANDMARK:
+ gt_landmarks = label['gt_landmarks']
+ gt_label['gt_landmarks'] = gt_landmarks
+ #ta = datetime.datetime.now()
+ #face_label_dict = assign_anchor_fpn(feat_shape_list, gt_label, im_info, config.FACE_LANDMARK, prefix='face', select_stride = select_stride)
+ face_label_dict = self.aa.assign_anchor_fpn(
+ gt_label,
+ im_info,
+ config.FACE_LANDMARK,
+ prefix='face',
+ select_stride=select_stride)
+ #tb = datetime.datetime.now()
+ #self._times[0] += (tb-ta).total_seconds()
+ label_dict.update(face_label_dict)
+ #for k in label_dict:
+ # print(k, label_dict[k].shape)
+
+ if config.CASCADE > 0:
+ pad_gt_boxes = np.empty(
+ (1, config.TRAIN.MAX_BBOX_PER_IMAGE, 5), dtype=np.float32)
+ pad_gt_boxes.fill(-1)
+ pad_gt_boxes[0, 0:gt_boxes.shape[0], :] = gt_boxes
+ label_dict['gt_boxes'] = pad_gt_boxes
+ #print('im_info', im_info.shape)
+ #print(gt_boxes.shape)
+ for k in self.label_name:
+ label[k] = label_dict[k]
+
+ all_data = dict()
+ for key in self.data_name:
+ all_data[key] = tensor_vstack([batch[key] for batch in data_list])
+
+ all_label = dict()
+ for key in self.label_name:
+ pad = 0 if key.startswith('bbox_') else -1
+ #print('label vstack', key, pad, len(label_list), file=sys.stderr)
+ all_label[key] = tensor_vstack(
+ [batch[key] for batch in label_list], pad=pad)
+
+ self.data = [mx.nd.array(all_data[key]) for key in self.data_name]
+ self.label = [mx.nd.array(all_label[key]) for key in self.label_name]
+ #for _label in self.label:
+ # print('LABEL SHAPE', _label.shape)
+ #print(self._times)
+
+
+class CropLoader2(mx.io.DataIter):
+ def __init__(self,
+ feat_sym,
+ roidb,
+ batch_size=1,
+ shuffle=False,
+ ctx=None,
+ work_load_list=None,
+ aspect_grouping=False):
+ """
+ This iterator provides ROI data to the Fast R-CNN network.
+ :param feat_sym: symbol used to infer the shape of assign_output
+ :param roidb: must be preprocessed
+ :param batch_size: must evenly divide BATCH_SIZE (128)
+ :param shuffle: bool
+ :param ctx: list of contexts
+ :param work_load_list: list of work loads
+ :param aspect_grouping: group images with similar aspect ratios
+ :return: CropLoader2
+ """
+ super(CropLoader2, self).__init__()
+
+ # save parameters as properties
+ self.feat_sym = feat_sym
+ self.roidb = roidb
+ self.batch_size = batch_size
+ self.shuffle = shuffle
+ self.ctx = ctx
+ if self.ctx is None:
+ self.ctx = [mx.cpu()]
+ self.work_load_list = work_load_list
+ #self.feat_stride = feat_stride
+ #self.anchor_scales = anchor_scales
+ #self.anchor_ratios = anchor_ratios
+ #self.allowed_border = allowed_border
+ self.aspect_grouping = aspect_grouping
+ self.feat_stride = config.RPN_FEAT_STRIDE
+
+ # infer properties from roidb
+ self.size = len(roidb)
+
+ # decide data and label names
+ #self.data_name = ['data']
+ #self.label_name = []
+ #self.label_name.append('label')
+ #self.label_name.append('bbox_target')
+ #self.label_name.append('bbox_weight')
+
+ self.data_name = ['data']
+ #self.label_name = ['label', 'bbox_target', 'bbox_weight']
+ self.label_name = []
+ prefixes = ['face']
+ if config.HEAD_BOX:
+ prefixes.append('head')
+ names = []
+ for prefix in prefixes:
+ names += [
+ prefix + '_label', prefix + '_bbox_target',
+ prefix + '_bbox_weight'
+ ]
+ if prefix == 'face' and config.FACE_LANDMARK:
+ names += [
+ prefix + '_landmark_target', prefix + '_landmark_weight'
+ ]
+ #names = ['label', 'bbox_weight']
+ for stride in self.feat_stride:
+ for n in names:
+ k = "%s_stride%d" % (n, stride)
+ self.label_name.append(k)
+ # status variable for synchronization between get_data and get_label
+ self.cur = 0
+ self.batch = None
+ self.data = None
+ self.label = None
+
+ # get first batch to fill in provide_data and provide_label
+ self.reset()
+ self.q_in = [
+ multiprocessing.Queue(1024) for i in range(config.NUM_CPU)
+ ]
+ #self.q_in = multiprocessing.Queue(1024)
+ self.q_out = multiprocessing.Queue(1024)
+ self.start()
+ self.get_batch()
+
+ @property
+ def provide_data(self):
+ return [(k, v.shape) for k, v in zip(self.data_name, self.data)]
+
+ @property
+ def provide_label(self):
+ return [(k, v.shape) for k, v in zip(self.label_name, self.label)]
+
+ def reset(self):
+ pass
+
+ @staticmethod
+ def input_worker(q_in, roidb, batch_size):
+ index = np.arange(len(roidb))
+ np.random.shuffle(index)
+ cur_from = 0
+ while True:
+ cur_to = cur_from + batch_size
+ if cur_to > len(roidb):
+ np.random.shuffle(index)
+ cur_from = 0
+ continue
+ _roidb = [roidb[index[i]] for i in range(cur_from, cur_to)]
+ istart = index[cur_from]
+ q_in[istart % len(q_in)].put(_roidb)
+ cur_from = cur_to
+
+ @staticmethod
+ def gen_worker(q_in, q_out):
+ while True:
+ deq = q_in.get()
+ if deq is None:
+ break
+ _roidb = deq
+ data, label = get_crop_batch(_roidb)
+ print('generated')
+ q_out.put((data, label))
+
+ def start(self):
+ input_process = multiprocessing.Process(
+ target=CropLoader2.input_worker,
+ args=(self.q_in, self.roidb, self.batch_size))
+ #gen_process = multiprocessing.Process(target=gen_worker, args=(q_in, q_out))
+ gen_process = [multiprocessing.Process(target=CropLoader2.gen_worker, args=(self.q_in[i], self.q_out)) \
+ for i in range(config.NUM_CPU)]
+ input_process.start()
+ for p in gen_process:
+ p.start()
+
+ def next(self):
+ self.get_batch()
+ return mx.io.DataBatch(data=self.data,
+ label=self.label,
+ provide_data=self.provide_data,
+ provide_label=self.provide_label)
+
+ def infer_shape(self, max_data_shape=None, max_label_shape=None):
+ """ Return maximum data and label shape for single gpu """
+ if max_data_shape is None:
+ max_data_shape = []
+ if max_label_shape is None:
+ max_label_shape = []
+ max_shapes = dict(max_data_shape + max_label_shape)
+ input_batch_size = max_shapes['data'][0]
+ dummy_boxes = np.zeros((0, 5))
+ dummy_info = [[max_shapes['data'][2], max_shapes['data'][3], 1.0]]
+ dummy_label = {'gt_boxes': dummy_boxes}
+
+ # infer shape
+ feat_shape_list = []
+ for i in range(len(self.feat_stride)):
+ _, feat_shape, _ = self.feat_sym[i].infer_shape(**max_shapes)
+ feat_shape = [int(i) for i in feat_shape[0]]
+ feat_shape_list.append(feat_shape)
+
+ label_dict = {}
+ if config.HEAD_BOX:
+ head_label_dict = assign_anchor_fpn(feat_shape_list,
+ dummy_label,
+ dummy_info,
+ False,
+ prefix='head')
+ label_dict.update(head_label_dict)
+
+ if config.FACE_LANDMARK:
+ dummy_landmarks = np.zeros((0, 11))
+ dummy_label['gt_landmarks'] = dummy_landmarks
+ face_label_dict = assign_anchor_fpn(feat_shape_list,
+ dummy_label,
+ dummy_info,
+ config.FACE_LANDMARK,
+ prefix='face')
+ label_dict.update(face_label_dict)
+
+ label_list = []
+ for k in self.label_name:
+ label_list.append(label_dict[k])
+ label_shape = [(k, tuple([input_batch_size] + list(v.shape[1:])))
+ for k, v in zip(self.label_name, label_list)]
+ return max_data_shape, label_shape
+
+ def get_batch(self):
+ deq = self.q_out.get()
+ print('q_out got')
+ data_list, label_list = deq
+
+ for data, label in zip(data_list, label_list):
+ data_shape = {k: v.shape for k, v in data.items()}
+ del data_shape['im_info']
+ feat_shape_list = []
+ for s in range(len(self.feat_stride)):
+ _, feat_shape, _ = self.feat_sym[s].infer_shape(**data_shape)
+ feat_shape = [int(i) for i in feat_shape[0]]
+ feat_shape_list.append(feat_shape)
+ #for k in self.label_name:
+ # label[k] = [0 for i in range(config.TRAIN.BATCH_IMAGES)]
+ im_info = data['im_info']
+ gt_boxes = label['gt_boxes']
+ gt_label = {'gt_boxes': gt_boxes}
+ label_dict = {}
+ head_label_dict = assign_anchor_fpn(feat_shape_list,
+ gt_label,
+ im_info,
+ False,
+ prefix='head')
+ label_dict.update(head_label_dict)
+ if config.FACE_LANDMARK:
+ gt_landmarks = label['gt_landmarks']
+ gt_label['gt_landmarks'] = gt_landmarks
+ face_label_dict = assign_anchor_fpn(feat_shape_list,
+ gt_label,
+ im_info,
+ config.FACE_LANDMARK,
+ prefix='face')
+ label_dict.update(face_label_dict)
+ #print('im_info', im_info.shape)
+ #print(gt_boxes.shape)
+ for k in self.label_name:
+ label[k] = label_dict[k]
+
+ all_data = dict()
+ for key in self.data_name:
+ all_data[key] = tensor_vstack([batch[key] for batch in data_list])
+
+ all_label = dict()
+ for key in self.label_name:
+ pad = 0 if key.startswith('bbox_') else -1
+ #print('label vstack', key, pad, len(label_list), file=sys.stderr)
+ all_label[key] = tensor_vstack(
+ [batch[key] for batch in label_list], pad=pad)
+ self.data = [mx.nd.array(all_data[key]) for key in self.data_name]
+ self.label = [mx.nd.array(all_label[key]) for key in self.label_name]
diff --git a/insightface/detection/retinaface/rcnn/core/metric.py b/insightface/detection/retinaface/rcnn/core/metric.py
new file mode 100644
index 0000000000000000000000000000000000000000..afdc92522b045d4bd8d44cddb19a279a45f644b4
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/core/metric.py
@@ -0,0 +1,166 @@
+from __future__ import print_function
+import sys
+import mxnet as mx
+import numpy as np
+
+from rcnn.config import config
+
+
+def get_rpn_names():
+ pred = ['rpn_cls_prob', 'rpn_bbox_loss', 'rpn_label', 'rpn_bbox_weight']
+ label = ['rpn_label', 'rpn_bbox_target', 'rpn_bbox_weight']
+ return pred, label
+
+
+class RPNAccMetric(mx.metric.EvalMetric):
+ def __init__(self, pred_idx=-1, label_idx=-1, name='RPNAcc'):
+ super(RPNAccMetric, self).__init__(name)
+ self.pred, self.label = get_rpn_names()
+ #self.name = 'RPNAcc'
+ self.name = [name, name + '_BG', name + '_FG']
+ self.pred_idx = pred_idx
+ self.label_idx = label_idx
+ self.STAT = [0, 0, 0]
+
+ def reset(self):
+ """Clear the internal statistics to initial state."""
+ if isinstance(self.name, str):
+ self.num_inst = 0
+ self.sum_metric = 0.0
+ else:
+ #print('reset to ',len(self.name), self.name, file=sys.stderr)
+ self.num_inst = [0] * len(self.name)
+ self.sum_metric = [0.0] * len(self.name)
+
+ def get(self):
+ if isinstance(self.name, str):
+ if self.num_inst == 0:
+ return (self.name, float('nan'))
+ else:
+ return (self.name, self.sum_metric / self.num_inst)
+ else:
+ names = ['%s' % (self.name[i]) for i in range(len(self.name))]
+ values = [x / y if y != 0 else float('nan') \
+ for x, y in zip(self.sum_metric, self.num_inst)]
+ return (names, values)
+
+ def update(self, labels, preds):
+ if self.pred_idx >= 0 and self.label_idx >= 0:
+ pred = preds[self.pred_idx]
+ label = preds[self.label_idx]
+ else:
+ pred = preds[self.pred.index('rpn_cls_prob')]
+ label = labels[self.label.index('rpn_label')]
+ #label = preds[self.pred.index('rpn_label')]
+
+ num_images = pred.shape[0]
+ #print(pred.shape, label.shape, file=sys.stderr)
+ # pred (b, c, p) or (b, c, h, w)
+ pred_label = mx.ndarray.argmax_channel(pred).asnumpy().astype('int32')
+ #pred_label = pred_label.reshape((pred_label.shape[0], -1))
+ pred_label = pred_label.reshape(-1, )
+ # label (b, p)
+ label = label.asnumpy().astype('int32').reshape(-1, )
+ #print(pred_label.shape, label.shape)
+
+ # filter with keep_inds
+ keep_inds = np.where(label != -1)[0]
+ #print('in_metric acc', pred_label.shape, label.shape, len(keep_inds), file=sys.stderr)
+ #print(keep_inds, file=sys.stderr)
+ _pred_label = pred_label[keep_inds]
+ _label = label[keep_inds]
+ #print('in_metric2', pred_label.shape, label.shape, len(keep_inds), file=sys.stderr)
+        if isinstance(self.name, str):
+            self.sum_metric += np.sum(_pred_label.flat == _label.flat)
+            self.num_inst += len(_pred_label.flat)
+        else:
+            self.sum_metric[0] += np.sum(_pred_label.flat == _label.flat)
+            self.num_inst[0] += len(_pred_label.flat)
+
+            # background accuracy (label == 0), only tracked per-name
+            keep_inds = np.where(label == 0)[0]
+            _pred_label = pred_label[keep_inds]
+            _label = label[keep_inds]
+            self.sum_metric[1] += np.sum(_pred_label.flat == _label.flat)
+            self.num_inst[1] += len(_pred_label.flat)
+
+            # foreground accuracy (label == 1)
+            keep_inds = np.where(label == 1)[0]
+            _pred_label = pred_label[keep_inds]
+            _label = label[keep_inds]
+            self.sum_metric[2] += np.sum(_pred_label.flat == _label.flat)
+            self.num_inst[2] += len(_pred_label.flat)
+
+ #self.STAT[0]+=a
+ #self.STAT[1]+=b
+ #self.STAT[2]+=num_images
+ #if self.STAT[2]%400==0:
+ # print('FG_ACC', self.pred_idx, self.STAT[2], self.STAT[0], self.STAT[1], float(self.STAT[0])/self.STAT[1], file=sys.stderr)
+ # self.STAT = [0,0,0]
+
+
+class RPNLogLossMetric(mx.metric.EvalMetric):
+ def __init__(self, pred_idx=-1, label_idx=-1):
+ super(RPNLogLossMetric, self).__init__('RPNLogLoss')
+ self.pred, self.label = get_rpn_names()
+ self.pred_idx = pred_idx
+ self.label_idx = label_idx
+
+ def update(self, labels, preds):
+ if self.pred_idx >= 0 and self.label_idx >= 0:
+ pred = preds[self.pred_idx]
+ label = preds[self.label_idx]
+ else:
+ pred = preds[self.pred.index('rpn_cls_prob')]
+ label = labels[self.label.index('rpn_label')]
+ #label = preds[self.pred.index('rpn_label')]
+
+ # label (b, p)
+ label = label.asnumpy().astype('int32').reshape((-1))
+ # pred (b, c, p) or (b, c, h, w) --> (b, p, c) --> (b*p, c)
+ pred = pred.asnumpy().reshape(
+ (pred.shape[0], pred.shape[1], -1)).transpose((0, 2, 1))
+ pred = pred.reshape((label.shape[0], -1))
+
+ # filter with keep_inds
+ keep_inds = np.where(label != -1)[0]
+ label = label[keep_inds]
+ cls = pred[keep_inds, label]
+ #print('in_metric log', label.shape, cls.shape, file=sys.stderr)
+
+ cls += 1e-14
+ cls_loss = -1 * np.log(cls)
+ cls_loss = np.sum(cls_loss)
+ self.sum_metric += cls_loss
+ self.num_inst += label.shape[0]
+
+
+class RPNL1LossMetric(mx.metric.EvalMetric):
+ def __init__(self, loss_idx=-1, weight_idx=-1, name='RPNL1Loss'):
+ super(RPNL1LossMetric, self).__init__(name)
+ self.pred, self.label = get_rpn_names()
+ self.loss_idx = loss_idx
+ self.weight_idx = weight_idx
+ self.name = name
+
+ def update(self, labels, preds):
+ if self.loss_idx >= 0 and self.weight_idx >= 0:
+ bbox_loss = preds[self.loss_idx].asnumpy()
+ bbox_weight = preds[self.weight_idx].asnumpy()
+ else:
+ bbox_loss = preds[self.pred.index('rpn_bbox_loss')].asnumpy()
+ bbox_weight = labels[self.label.index('rpn_bbox_weight')].asnumpy()
+ #bbox_weight = preds[self.pred.index('rpn_bbox_weight')].asnumpy()
+
+ #print('in_metric', self.name, bbox_weight.shape, bbox_loss.shape)
+
+ # calculate num_inst (average on those fg anchors)
+ if config.LR_MODE == 0:
+ num_inst = np.sum(bbox_weight > 0) / (bbox_weight.shape[1] /
+ config.NUM_ANCHORS)
+ else:
+ num_inst = 1
+ #print('in_metric log', bbox_loss.shape, num_inst, file=sys.stderr)
+
+ self.sum_metric += np.sum(bbox_loss)
+ self.num_inst += num_inst
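As a standalone illustration (not part of the patch): the core bookkeeping in `RPNAccMetric.update` above is an argmax over class channels followed by masking out anchors labelled -1, which are excluded from the loss. A plain-NumPy sketch of that logic (`rpn_accuracy` is a hypothetical helper name):

```python
import numpy as np

def rpn_accuracy(cls_prob, label):
    """Accuracy of RPN classification, ignoring label == -1 entries.

    cls_prob: (N, C) class probabilities; label: (N,) int labels.
    Mirrors the keep_inds masking in RPNAccMetric.update.
    """
    pred_label = cls_prob.argmax(axis=1)
    keep = label != -1          # -1 marks anchors excluded from the loss
    if keep.sum() == 0:
        return float('nan')
    return float((pred_label[keep] == label[keep]).mean())

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.5, 0.5]])
labels = np.array([0, 1, 1, -1])   # last anchor ignored
acc = rpn_accuracy(probs, labels)  # 2 of the 3 kept anchors are correct
```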
diff --git a/insightface/detection/retinaface/rcnn/core/module.py b/insightface/detection/retinaface/rcnn/core/module.py
new file mode 100644
index 0000000000000000000000000000000000000000..731a4ff1b40c8a1d132b053a3aaede86ec8aa19f
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/core/module.py
@@ -0,0 +1,259 @@
+"""A `MutableModule` implement the `BaseModule` API, and allows input shape
+varying with training iterations. If shapes vary, executors will rebind,
+using shared arrays from the initial module binded with maximum shape.
+"""
+
+import logging
+
+from mxnet import context as ctx
+from mxnet.initializer import Uniform
+from mxnet.module.base_module import BaseModule
+from mxnet.module.module import Module
+
+
+class MutableModule(BaseModule):
+ """A mutable module is a module that supports variable input data.
+
+ Parameters
+ ----------
+ symbol : Symbol
+ data_names : list of str
+ label_names : list of str
+ logger : Logger
+ context : Context or list of Context
+ work_load_list : list of number
+    max_data_shapes : list of (name, shape) tuple, designating inputs whose shapes vary
+    max_label_shapes : list of (name, shape) tuple, designating inputs whose shapes vary
+ fixed_param_prefix : list of str, indicating fixed parameters
+ """
+ def __init__(self,
+ symbol,
+ data_names,
+ label_names,
+ logger=logging,
+ context=ctx.cpu(),
+ work_load_list=None,
+ max_data_shapes=None,
+ max_label_shapes=None,
+ fixed_param_prefix=None):
+ super(MutableModule, self).__init__(logger=logger)
+ self._symbol = symbol
+ self._data_names = data_names
+ self._label_names = label_names
+ self._context = context
+ self._work_load_list = work_load_list
+
+ self._curr_module = None
+ self._max_data_shapes = max_data_shapes
+ self._max_label_shapes = max_label_shapes
+ self._fixed_param_prefix = fixed_param_prefix
+
+ fixed_param_names = list()
+ if fixed_param_prefix is not None:
+ for name in self._symbol.list_arguments():
+ for prefix in self._fixed_param_prefix:
+ if prefix in name:
+ fixed_param_names.append(name)
+ self._fixed_param_names = fixed_param_names
+
+ def _reset_bind(self):
+ self.binded = False
+ self._curr_module = None
+
+ @property
+ def data_names(self):
+ return self._data_names
+
+ @property
+ def output_names(self):
+ return self._symbol.list_outputs()
+
+ @property
+ def data_shapes(self):
+ assert self.binded
+ return self._curr_module.data_shapes
+
+ @property
+ def label_shapes(self):
+ assert self.binded
+ return self._curr_module.label_shapes
+
+ @property
+ def output_shapes(self):
+ assert self.binded
+ return self._curr_module.output_shapes
+
+ def get_params(self):
+ assert self.binded and self.params_initialized
+ return self._curr_module.get_params()
+
+ def init_params(self,
+ initializer=Uniform(0.01),
+ arg_params=None,
+ aux_params=None,
+ allow_missing=False,
+ force_init=False,
+ allow_extra=False):
+ if self.params_initialized and not force_init:
+ return
+ assert self.binded, 'call bind before initializing the parameters'
+ self._curr_module.init_params(initializer=initializer,
+ arg_params=arg_params,
+ aux_params=aux_params,
+ allow_missing=allow_missing,
+ force_init=force_init,
+ allow_extra=allow_extra)
+ self.params_initialized = True
+
+ def bind(self,
+ data_shapes,
+ label_shapes=None,
+ for_training=True,
+ inputs_need_grad=False,
+ force_rebind=False,
+ shared_module=None):
+ # in case we already initialized params, keep it
+ if self.params_initialized:
+ arg_params, aux_params = self.get_params()
+
+        # force rebinding is typically used when one wants to switch from
+        # the training phase to the prediction phase.
+ if force_rebind:
+ self._reset_bind()
+
+ if self.binded:
+ self.logger.warning('Already binded, ignoring bind()')
+ return
+
+ assert shared_module is None, 'shared_module for MutableModule is not supported'
+
+ self.for_training = for_training
+ self.inputs_need_grad = inputs_need_grad
+ self.binded = True
+
+ max_shapes_dict = dict()
+ if self._max_data_shapes is not None:
+ max_shapes_dict.update(dict(self._max_data_shapes))
+ if self._max_label_shapes is not None:
+ max_shapes_dict.update(dict(self._max_label_shapes))
+
+ max_data_shapes = list()
+ for name, shape in data_shapes:
+ if name in max_shapes_dict:
+ max_data_shapes.append((name, max_shapes_dict[name]))
+ else:
+ max_data_shapes.append((name, shape))
+
+ max_label_shapes = list()
+ if label_shapes is not None:
+ for name, shape in label_shapes:
+ if name in max_shapes_dict:
+ max_label_shapes.append((name, max_shapes_dict[name]))
+ else:
+ max_label_shapes.append((name, shape))
+
+ if len(max_label_shapes) == 0:
+ max_label_shapes = None
+
+ module = Module(self._symbol,
+ self._data_names,
+ self._label_names,
+ logger=self.logger,
+ context=self._context,
+ work_load_list=self._work_load_list,
+ fixed_param_names=self._fixed_param_names)
+ module.bind(max_data_shapes,
+ max_label_shapes,
+ for_training,
+ inputs_need_grad,
+ force_rebind=False,
+ shared_module=None)
+ self._curr_module = module
+
+ # copy back saved params, if already initialized
+ if self.params_initialized:
+ self.set_params(arg_params, aux_params)
+
+ def init_optimizer(self,
+ kvstore='local',
+ optimizer='sgd',
+ optimizer_params=(('learning_rate', 0.01), ),
+ force_init=False):
+ assert self.binded and self.params_initialized
+ if self.optimizer_initialized and not force_init:
+ self.logger.warning('optimizer already initialized, ignoring.')
+ return
+
+ self._curr_module.init_optimizer(kvstore,
+ optimizer,
+ optimizer_params,
+ force_init=force_init)
+ self.optimizer_initialized = True
+
+ def forward(self, data_batch, is_train=None):
+ assert self.binded and self.params_initialized
+
+ # get current_shapes
+ if self._curr_module.label_shapes is not None:
+ current_shapes = dict(self._curr_module.data_shapes +
+ self._curr_module.label_shapes)
+ else:
+ current_shapes = dict(self._curr_module.data_shapes)
+
+ # get input_shapes
+ if data_batch.provide_label is not None:
+ input_shapes = dict(data_batch.provide_data +
+ data_batch.provide_label)
+ else:
+ input_shapes = dict(data_batch.provide_data)
+
+ # decide if shape changed
+ shape_changed = False
+ for k, v in current_shapes.items():
+ if v != input_shapes[k]:
+ shape_changed = True
+
+ if shape_changed:
+ module = Module(self._symbol,
+ self._data_names,
+ self._label_names,
+ logger=self.logger,
+ context=self._context,
+ work_load_list=self._work_load_list,
+ fixed_param_names=self._fixed_param_names)
+ module.bind(data_batch.provide_data,
+ data_batch.provide_label,
+ self._curr_module.for_training,
+ self._curr_module.inputs_need_grad,
+ force_rebind=False,
+ shared_module=self._curr_module)
+ self._curr_module = module
+
+ self._curr_module.forward(data_batch, is_train=is_train)
+
+ def backward(self, out_grads=None):
+ assert self.binded and self.params_initialized
+ self._curr_module.backward(out_grads=out_grads)
+
+ def update(self):
+ assert self.binded and self.params_initialized and self.optimizer_initialized
+ self._curr_module.update()
+
+ def get_outputs(self, merge_multi_context=True):
+ assert self.binded and self.params_initialized
+ return self._curr_module.get_outputs(
+ merge_multi_context=merge_multi_context)
+
+ def get_input_grads(self, merge_multi_context=True):
+ assert self.binded and self.params_initialized and self.inputs_need_grad
+ return self._curr_module.get_input_grads(
+ merge_multi_context=merge_multi_context)
+
+ def update_metric(self, eval_metric, labels):
+ assert self.binded and self.params_initialized
+ self._curr_module.update_metric(eval_metric, labels)
+
+ def install_monitor(self, mon):
+ """ Install monitor on all executors """
+ assert self.binded
+ self._curr_module.install_monitor(mon)
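Outside the patch, the rebind decision in `MutableModule.forward` above boils down to comparing two name-to-shape dicts. A minimal sketch of that rule (`shapes_changed` is an assumed name, not part of the module):

```python
def shapes_changed(current_shapes, input_shapes):
    """Return True if any input's shape differs from the bound shape.

    Both arguments map input name -> shape tuple, mirroring the
    current_shapes / input_shapes dicts built in MutableModule.forward.
    """
    return any(current_shapes[k] != input_shapes[k] for k in current_shapes)

bound = {'data': (1, 3, 600, 600), 'im_info': (1, 3)}
batch = {'data': (1, 3, 800, 600), 'im_info': (1, 3)}
need_rebind = shapes_changed(bound, batch)  # True: the 'data' height differs
```

When this returns True, the module rebinds a fresh `Module` against the new shapes, sharing memory with the current one via `shared_module`.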
diff --git a/insightface/detection/retinaface/rcnn/core/module_bak.py b/insightface/detection/retinaface/rcnn/core/module_bak.py
new file mode 100644
index 0000000000000000000000000000000000000000..2c819025100e4f0c25acf0ca7d9bd68de143483c
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/core/module_bak.py
@@ -0,0 +1,260 @@
+"""A `MutableModule` implement the `BaseModule` API, and allows input shape
+varying with training iterations. If shapes vary, executors will rebind,
+using shared arrays from the initial module binded with maximum shape.
+"""
+
+import logging
+
+from mxnet import context as ctx
+from mxnet.initializer import Uniform
+from mxnet.module.base_module import BaseModule
+from mxnet.module.module import Module
+
+
+class MutableModule(BaseModule):
+ """A mutable module is a module that supports variable input data.
+
+ Parameters
+ ----------
+ symbol : Symbol
+ data_names : list of str
+ label_names : list of str
+ logger : Logger
+ context : Context or list of Context
+ work_load_list : list of number
+    max_data_shapes : list of (name, shape) tuple, designating inputs whose shapes vary
+    max_label_shapes : list of (name, shape) tuple, designating inputs whose shapes vary
+ fixed_param_prefix : list of str, indicating fixed parameters
+ """
+ def __init__(self,
+ symbol,
+ data_names,
+ label_names,
+ logger=logging,
+ context=ctx.cpu(),
+ work_load_list=None,
+ max_data_shapes=None,
+ max_label_shapes=None,
+ fixed_param_prefix=None):
+ super(MutableModule, self).__init__(logger=logger)
+ self._symbol = symbol
+ self._data_names = data_names
+ self._label_names = label_names
+ self._context = context
+ self._work_load_list = work_load_list
+
+ self._curr_module = None
+ self._max_data_shapes = max_data_shapes
+ self._max_label_shapes = max_label_shapes
+ self._fixed_param_prefix = fixed_param_prefix
+
+ fixed_param_names = list()
+ if fixed_param_prefix is not None:
+ for name in self._symbol.list_arguments():
+ for prefix in self._fixed_param_prefix:
+ if prefix in name:
+ fixed_param_names.append(name)
+ self._fixed_param_names = fixed_param_names
+
+ def _reset_bind(self):
+ self.binded = False
+ self._curr_module = None
+
+ @property
+ def data_names(self):
+ return self._data_names
+
+ @property
+ def output_names(self):
+ return self._symbol.list_outputs()
+
+ @property
+ def data_shapes(self):
+ assert self.binded
+ return self._curr_module.data_shapes
+
+ @property
+ def label_shapes(self):
+ assert self.binded
+ return self._curr_module.label_shapes
+
+ @property
+ def output_shapes(self):
+ assert self.binded
+ return self._curr_module.output_shapes
+
+ def get_params(self):
+ assert self.binded and self.params_initialized
+ return self._curr_module.get_params()
+
+ def init_params(self,
+ initializer=Uniform(0.01),
+ arg_params=None,
+ aux_params=None,
+ allow_missing=False,
+ force_init=False,
+ allow_extra=False):
+ if self.params_initialized and not force_init:
+ return
+ assert self.binded, 'call bind before initializing the parameters'
+ self._curr_module.init_params(initializer=initializer,
+ arg_params=arg_params,
+ aux_params=aux_params,
+ allow_missing=allow_missing,
+ force_init=force_init,
+ allow_extra=allow_extra)
+ self.params_initialized = True
+
+ def bind(self,
+ data_shapes,
+ label_shapes=None,
+ for_training=True,
+ inputs_need_grad=False,
+ force_rebind=False,
+ shared_module=None,
+ grad_req='write'):
+ # in case we already initialized params, keep it
+ if self.params_initialized:
+ arg_params, aux_params = self.get_params()
+
+        # force rebinding is typically used when one wants to switch from
+        # the training phase to the prediction phase.
+ if force_rebind:
+ self._reset_bind()
+
+ if self.binded:
+ self.logger.warning('Already binded, ignoring bind()')
+ return
+
+ assert shared_module is None, 'shared_module for MutableModule is not supported'
+
+ self.for_training = for_training
+ self.inputs_need_grad = inputs_need_grad
+ self.binded = True
+
+ max_shapes_dict = dict()
+ if self._max_data_shapes is not None:
+ max_shapes_dict.update(dict(self._max_data_shapes))
+ if self._max_label_shapes is not None:
+ max_shapes_dict.update(dict(self._max_label_shapes))
+
+ max_data_shapes = list()
+ for name, shape in data_shapes:
+ if name in max_shapes_dict:
+ max_data_shapes.append((name, max_shapes_dict[name]))
+ else:
+ max_data_shapes.append((name, shape))
+
+ max_label_shapes = list()
+ if label_shapes is not None:
+ for name, shape in label_shapes:
+ if name in max_shapes_dict:
+ max_label_shapes.append((name, max_shapes_dict[name]))
+ else:
+ max_label_shapes.append((name, shape))
+
+ if len(max_label_shapes) == 0:
+ max_label_shapes = None
+
+ module = Module(self._symbol,
+ self._data_names,
+ self._label_names,
+ logger=self.logger,
+ context=self._context,
+ work_load_list=self._work_load_list,
+ fixed_param_names=self._fixed_param_names)
+ module.bind(max_data_shapes,
+ max_label_shapes,
+ for_training,
+ inputs_need_grad,
+ force_rebind=False,
+ shared_module=None)
+ self._curr_module = module
+
+ # copy back saved params, if already initialized
+ if self.params_initialized:
+ self.set_params(arg_params, aux_params)
+
+ def init_optimizer(self,
+ kvstore='local',
+ optimizer='sgd',
+ optimizer_params=(('learning_rate', 0.01), ),
+ force_init=False):
+ assert self.binded and self.params_initialized
+ if self.optimizer_initialized and not force_init:
+ self.logger.warning('optimizer already initialized, ignoring.')
+ return
+
+ self._curr_module.init_optimizer(kvstore,
+ optimizer,
+ optimizer_params,
+ force_init=force_init)
+ self.optimizer_initialized = True
+
+ def forward(self, data_batch, is_train=None):
+ assert self.binded and self.params_initialized
+
+ # get current_shapes
+ if self._curr_module.label_shapes is not None:
+ current_shapes = dict(self._curr_module.data_shapes +
+ self._curr_module.label_shapes)
+ else:
+ current_shapes = dict(self._curr_module.data_shapes)
+
+ # get input_shapes
+ if data_batch.provide_label is not None:
+ input_shapes = dict(data_batch.provide_data +
+ data_batch.provide_label)
+ else:
+ input_shapes = dict(data_batch.provide_data)
+
+ # decide if shape changed
+ shape_changed = False
+ for k, v in current_shapes.items():
+ if v != input_shapes[k]:
+ shape_changed = True
+
+ if shape_changed:
+ module = Module(self._symbol,
+ self._data_names,
+ self._label_names,
+ logger=self.logger,
+ context=self._context,
+ work_load_list=self._work_load_list,
+ fixed_param_names=self._fixed_param_names)
+ module.bind(data_batch.provide_data,
+ data_batch.provide_label,
+ self._curr_module.for_training,
+ self._curr_module.inputs_need_grad,
+ force_rebind=False,
+ shared_module=self._curr_module)
+ self._curr_module = module
+
+ self._curr_module.forward(data_batch, is_train=is_train)
+
+ def backward(self, out_grads=None):
+ assert self.binded and self.params_initialized
+ self._curr_module.backward(out_grads=out_grads)
+
+ def update(self):
+ assert self.binded and self.params_initialized and self.optimizer_initialized
+ self._curr_module.update()
+
+ def get_outputs(self, merge_multi_context=True):
+ assert self.binded and self.params_initialized
+ return self._curr_module.get_outputs(
+ merge_multi_context=merge_multi_context)
+
+ def get_input_grads(self, merge_multi_context=True):
+ assert self.binded and self.params_initialized and self.inputs_need_grad
+ return self._curr_module.get_input_grads(
+ merge_multi_context=merge_multi_context)
+
+ def update_metric(self, eval_metric, labels):
+ assert self.binded and self.params_initialized
+ self._curr_module.update_metric(eval_metric, labels)
+
+ def install_monitor(self, mon):
+ """ Install monitor on all executors """
+ assert self.binded
+ self._curr_module.install_monitor(mon)
diff --git a/insightface/detection/retinaface/rcnn/core/tester.py b/insightface/detection/retinaface/rcnn/core/tester.py
new file mode 100644
index 0000000000000000000000000000000000000000..d9d5d32b1c7fdec0f1ad4da3d84f53f627b8bf8a
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/core/tester.py
@@ -0,0 +1,527 @@
+from __future__ import print_function
+try:
+ import cPickle as pickle
+except ImportError:
+ import pickle
+import os
+import sys
+import time
+import mxnet as mx
+import numpy as np
+from builtins import range
+
+from mxnet.module import Module
+from .module import MutableModule
+from rcnn.logger import logger
+from rcnn.config import config
+from rcnn.io import image
+from rcnn.processing.bbox_transform import bbox_pred, clip_boxes
+from rcnn.processing.nms import py_nms_wrapper, cpu_nms_wrapper, gpu_nms_wrapper
+from rcnn.processing.bbox_transform import bbox_overlaps
+
+
+def IOU(Reframe, GTframe):
+ x1 = Reframe[0]
+ y1 = Reframe[1]
+ width1 = Reframe[2] - Reframe[0]
+ height1 = Reframe[3] - Reframe[1]
+
+ x2 = GTframe[0]
+ y2 = GTframe[1]
+ width2 = GTframe[2] - GTframe[0]
+ height2 = GTframe[3] - GTframe[1]
+
+ endx = max(x1 + width1, x2 + width2)
+ startx = min(x1, x2)
+ width = width1 + width2 - (endx - startx)
+
+ endy = max(y1 + height1, y2 + height2)
+ starty = min(y1, y2)
+ height = height1 + height2 - (endy - starty)
+
+ if width <= 0 or height <= 0:
+ ratio = 0
+ else:
+ Area = width * height
+ Area1 = width1 * height1
+ Area2 = width2 * height2
+ ratio = Area * 1. / (Area1 + Area2 - Area)
+ return ratio
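For reference (not part of the patch), the `IOU` helper above can be sanity-checked with an equivalent formulation written via explicit intersection coordinates; both arguments are (x1, y1, x2, y2) corner boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes.

    Equivalent to the IOU helper above, but computing the
    intersection rectangle directly rather than via summed widths.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

full = iou((0, 0, 10, 10), (0, 0, 10, 10))  # 1.0 for identical boxes
half = iou((0, 0, 10, 10), (0, 0, 10, 5))   # 0.5: half the larger box
none = iou((0, 0, 1, 1), (5, 5, 6, 6))      # 0.0 for disjoint boxes
```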
+
+
+class Predictor(object):
+ def __init__(self,
+ symbol,
+ data_names,
+ label_names,
+ context=mx.cpu(),
+ max_data_shapes=None,
+ provide_data=None,
+ provide_label=None,
+ arg_params=None,
+ aux_params=None):
+ #self._mod = MutableModule(symbol, data_names, label_names,
+ # context=context, max_data_shapes=max_data_shapes)
+ self._mod = Module(symbol, data_names, label_names, context=context)
+ self._mod.bind(provide_data, provide_label, for_training=False)
+ self._mod.init_params(arg_params=arg_params, aux_params=aux_params)
+
+ def predict(self, data_batch):
+ self._mod.forward(data_batch)
+ return dict(zip(self._mod.output_names,
+ self._mod.get_outputs())) #TODO
+ #return self._mod.get_outputs()
+
+
+def im_proposal(predictor, data_batch, data_names, scale):
+ data_dict = dict(zip(data_names, data_batch.data))
+ output = predictor.predict(data_batch)
+
+ # drop the batch index
+ boxes = output['rois_output'].asnumpy()[:, 1:]
+ scores = output['rois_score'].asnumpy()
+
+ # transform to original scale
+ boxes = boxes / scale
+
+ return scores, boxes, data_dict
+
+
+def _im_proposal(predictor, data_batch, data_names, scale):
+ data_dict = dict(zip(data_names, data_batch.data))
+ output = predictor.predict(data_batch)
+ print('output', output)
+
+ # drop the batch index
+ boxes = output['rois_output'].asnumpy()[:, 1:]
+ scores = output['rois_score'].asnumpy()
+
+ # transform to original scale
+ boxes = boxes / scale
+
+ return scores, boxes, data_dict
+
+
+def generate_proposals(predictor, test_data, imdb, vis=False, thresh=0.):
+ """
+    Generate detection results using RPN.
+ :param predictor: Predictor
+ :param test_data: data iterator, must be non-shuffled
+ :param imdb: image database
+ :param vis: controls visualization
+ :param thresh: thresh for valid detections
+ :return: list of detected boxes
+ """
+ assert vis or not test_data.shuffle
+ data_names = [k[0] for k in test_data.provide_data]
+
+ i = 0
+ t = time.time()
+ imdb_boxes = list()
+ original_boxes = list()
+ for im_info, data_batch in test_data:
+ t1 = time.time() - t
+ t = time.time()
+
+ scale = im_info[0, 2]
+ scores, boxes, data_dict = im_proposal(predictor, data_batch,
+ data_names, scale)
+ print(scores.shape, boxes.shape, file=sys.stderr)
+ t2 = time.time() - t
+ t = time.time()
+
+ # assemble proposals
+ dets = np.hstack((boxes, scores))
+ original_boxes.append(dets)
+
+ # filter proposals
+ keep = np.where(dets[:, 4:] > thresh)[0]
+ dets = dets[keep, :]
+ imdb_boxes.append(dets)
+
+ if vis:
+ vis_all_detection(data_dict['data'].asnumpy(), [dets], ['obj'],
+ scale)
+
+ logger.info('generating %d/%d ' % (i + 1, imdb.num_images) +
+ 'proposal %d ' % (dets.shape[0]) + 'data %.4fs net %.4fs' %
+ (t1, t2))
+ i += 1
+
+ assert len(imdb_boxes) == imdb.num_images, 'calculations not complete'
+
+ # save results
+ rpn_folder = os.path.join(imdb.root_path, 'rpn_data')
+ if not os.path.exists(rpn_folder):
+ os.mkdir(rpn_folder)
+
+ rpn_file = os.path.join(rpn_folder, imdb.name + '_rpn.pkl')
+ with open(rpn_file, 'wb') as f:
+ pickle.dump(imdb_boxes, f, pickle.HIGHEST_PROTOCOL)
+
+ if thresh > 0:
+ full_rpn_file = os.path.join(rpn_folder, imdb.name + '_full_rpn.pkl')
+ with open(full_rpn_file, 'wb') as f:
+ pickle.dump(original_boxes, f, pickle.HIGHEST_PROTOCOL)
+
+ logger.info('wrote rpn proposals to %s' % rpn_file)
+ return imdb_boxes
+
+
+def test_proposals(predictor, test_data, imdb, roidb, vis=False):
+ """
+    Test detection results using RPN.
+ :param predictor: Predictor
+ :param test_data: data iterator, must be non-shuffled
+ :param imdb: image database
+ :param roidb: roidb
+ :param vis: controls visualization
+ :return: recall, mAP
+ """
+ assert vis or not test_data.shuffle
+ data_names = [k[0] for k in test_data.provide_data]
+
+ #bbox_file = os.path.join(rpn_folder, imdb.name + '_bbox.txt')
+ #bbox_f = open(bbox_file, 'w')
+
+ i = 0
+ t = time.time()
+ output_folder = os.path.join(imdb.root_path, 'output')
+ if not os.path.exists(output_folder):
+ os.mkdir(output_folder)
+ imdb_boxes = list()
+ original_boxes = list()
+ gt_overlaps = np.zeros(0)
+ overall = [0.0, 0.0]
+ gt_max = np.array((0.0, 0.0))
+ num_pos = 0
+ #apply scale, for SSH
+ #_, roidb = image.get_image(roidb)
+ for im_info, data_batch in test_data:
+ t1 = time.time() - t
+ t = time.time()
+
+ oscale = im_info[0, 2]
+ #print('scale', scale, file=sys.stderr)
+ scale = 1.0 #fix scale=1.0 for SSH face detector
+ scores, boxes, data_dict = im_proposal(predictor, data_batch,
+ data_names, scale)
+ #print(scores.shape, boxes.shape, file=sys.stderr)
+ t2 = time.time() - t
+ t = time.time()
+
+ # assemble proposals
+ dets = np.hstack((boxes, scores))
+ original_boxes.append(dets)
+
+ # filter proposals
+ keep = np.where(dets[:, 4:] > config.TEST.SCORE_THRESH)[0]
+ dets = dets[keep, :]
+ imdb_boxes.append(dets)
+
+ logger.info('generating %d/%d ' % (i + 1, imdb.num_images) +
+ 'proposal %d ' % (dets.shape[0]) + 'data %.4fs net %.4fs' %
+ (t1, t2))
+
+ #if dets.shape[0]==0:
+ # continue
+ if vis:
+ vis_all_detection(data_dict['data'].asnumpy(), [dets], ['obj'],
+ scale)
+ boxes = dets
+ #max_gt_overlaps = roidb[i]['gt_overlaps'].max(axis=1)
+ #gt_inds = np.where((roidb[i]['gt_classes'] > 0) & (max_gt_overlaps == 1))[0]
+ #gt_boxes = roidb[i]['boxes'][gt_inds, :]
+ gt_boxes = roidb[i]['boxes'].copy(
+ ) * oscale # as roidb is the original one, need to scale GT for SSH
+ gt_areas = (gt_boxes[:, 2] - gt_boxes[:, 0] + 1) * (gt_boxes[:, 3] -
+ gt_boxes[:, 1] + 1)
+ num_pos += gt_boxes.shape[0]
+
+        overlaps = bbox_overlaps(boxes.astype(float),
+                                 gt_boxes.astype(float))
+ #print(im_info, gt_boxes.shape, boxes.shape, overlaps.shape, file=sys.stderr)
+
+ _gt_overlaps = np.zeros((gt_boxes.shape[0]))
+ # choose whatever is smaller to iterate
+
+ #for j in range(gt_boxes.shape[0]):
+ # print('gt %d,%d,%d,%d'% (gt_boxes[j][0], gt_boxes[j][1], gt_boxes[j][2]-gt_boxes[j][0], gt_boxes[j][3]-gt_boxes[j][1]), file=sys.stderr)
+ # gt_max = np.maximum( gt_max, np.array( (gt_boxes[j][2], gt_boxes[j][3]) ) )
+ #print('gt max', gt_max, file=sys.stderr)
+ #for j in range(boxes.shape[0]):
+ # print('anchor_box %.2f,%.2f,%.2f,%.2f'% (boxes[j][0], boxes[j][1], boxes[j][2]-boxes[j][0], boxes[j][3]-boxes[j][1]), file=sys.stderr)
+
+ #rounds = min(boxes.shape[0], gt_boxes.shape[0])
+ #for j in range(rounds):
+ # # find which proposal maximally covers each gt box
+ # argmax_overlaps = overlaps.argmax(axis=0)
+ # print(j, 'argmax_overlaps', argmax_overlaps, file=sys.stderr)
+ # # get the IoU amount of coverage for each gt box
+ # max_overlaps = overlaps.max(axis=0)
+ # print(j, 'max_overlaps', max_overlaps, file=sys.stderr)
+ # # find which gt box is covered by most IoU
+ # gt_ind = max_overlaps.argmax()
+ # gt_ovr = max_overlaps.max()
+ # assert (gt_ovr >= 0), '%s\n%s\n%s' % (boxes, gt_boxes, overlaps)
+ # # find the proposal box that covers the best covered gt box
+ # box_ind = argmax_overlaps[gt_ind]
+ # print('max box', gt_ind, box_ind, (boxes[box_ind][0], boxes[box_ind][1], boxes[box_ind][2]-boxes[box_ind][0], boxes[box_ind][3]-boxes[box_ind][1], boxes[box_ind][4]), file=sys.stderr)
+ # # record the IoU coverage of this gt box
+ # _gt_overlaps[j] = overlaps[box_ind, gt_ind]
+ # assert (_gt_overlaps[j] == gt_ovr)
+ # # mark the proposal box and the gt box as used
+ # overlaps[box_ind, :] = -1
+ # overlaps[:, gt_ind] = -1
+
+ if boxes.shape[0] > 0:
+ _gt_overlaps = overlaps.max(axis=0)
+ #print('max_overlaps', _gt_overlaps, file=sys.stderr)
+ for j in range(len(_gt_overlaps)):
+ if _gt_overlaps[j] > config.TEST.IOU_THRESH:
+ continue
+ print(j,
+ 'failed',
+ gt_boxes[j],
+ 'max_overlap:',
+ _gt_overlaps[j],
+ file=sys.stderr)
+ #_idx = np.where(overlaps[:,j]>0.4)[0]
+ #print(j, _idx, file=sys.stderr)
+ #print(overlaps[_idx,j], file=sys.stderr)
+ #for __idx in _idx:
+ # print(gt_boxes[j], boxes[__idx], overlaps[__idx,j], IOU(gt_boxes[j], boxes[__idx,0:4]), file=sys.stderr)
+
+ # append recorded IoU coverage level
+ found = (_gt_overlaps > config.TEST.IOU_THRESH).sum()
+ _recall = found / float(gt_boxes.shape[0])
+ print('recall',
+ _recall,
+ gt_boxes.shape[0],
+ boxes.shape[0],
+ gt_areas,
+ file=sys.stderr)
+ overall[0] += found
+ overall[1] += gt_boxes.shape[0]
+ #gt_overlaps = np.hstack((gt_overlaps, _gt_overlaps))
+ #_recall = (gt_overlaps >= threshold).sum() / float(num_pos)
+ _recall = float(overall[0]) / overall[1]
+ print('recall_all', _recall, file=sys.stderr)
+
+ boxes[:, 0:4] /= oscale
+ _vec = roidb[i]['image'].split('/')
+ out_dir = os.path.join(output_folder, _vec[-2])
+ if not os.path.exists(out_dir):
+ os.mkdir(out_dir)
+ out_file = os.path.join(out_dir, _vec[-1].replace('jpg', 'txt'))
+ with open(out_file, 'w') as f:
+ name = '/'.join(roidb[i]['image'].split('/')[-2:])
+ f.write("%s\n" % (name))
+ f.write("%d\n" % (boxes.shape[0]))
+ for b in range(boxes.shape[0]):
+ box = boxes[b]
+ f.write(
+ "%d %d %d %d %g \n" %
+ (box[0], box[1], box[2] - box[0], box[3] - box[1], box[4]))
+ i += 1
+
+ #bbox_f.close()
+    return  # NOTE: the recall/AR computation below is unreachable dead code
+ gt_overlaps = np.sort(gt_overlaps)
+ recalls = np.zeros_like(thresholds)
+
+ # compute recall for each IoU threshold
+ for i, t in enumerate(thresholds):
+ recalls[i] = (gt_overlaps >= t).sum() / float(num_pos)
+ ar = recalls.mean()
+
+ # print results
+ print('average recall for {}: {:.3f}'.format(area_name, ar))
+ for threshold, recall in zip(thresholds, recalls):
+ print('recall @{:.2f}: {:.3f}'.format(threshold, recall))
+
+ assert len(imdb_boxes) == imdb.num_images, 'calculations not complete'
+
+ # save results
+
+ rpn_file = os.path.join(rpn_folder, imdb.name + '_rpn.pkl')
+ with open(rpn_file, 'wb') as f:
+ pickle.dump(imdb_boxes, f, pickle.HIGHEST_PROTOCOL)
+
+ logger.info('wrote rpn proposals to %s' % rpn_file)
+ return imdb_boxes
+
+
+def im_detect(predictor, data_batch, data_names, scale):
+ output = predictor.predict(data_batch)
+
+ data_dict = dict(zip(data_names, data_batch.data))
+ if config.TEST.HAS_RPN:
+ rois = output['rois_output'].asnumpy()[:, 1:]
+ else:
+ rois = data_dict['rois'].asnumpy().reshape((-1, 5))[:, 1:]
+ im_shape = data_dict['data'].shape
+
+ # save output
+ scores = output['cls_prob_reshape_output'].asnumpy()[0]
+ bbox_deltas = output['bbox_pred_reshape_output'].asnumpy()[0]
+
+ # post processing
+ pred_boxes = bbox_pred(rois, bbox_deltas)
+ pred_boxes = clip_boxes(pred_boxes, im_shape[-2:])
+
+ # we used scaled image & roi to train, so it is necessary to transform them back
+ pred_boxes = pred_boxes / scale
+
+ return scores, pred_boxes, data_dict
+
+
+def pred_eval(predictor, test_data, imdb, vis=False, thresh=1e-3):
+ """
+ wrapper for calculating offline validation for faster data analysis
+ in this example, all thresholds are set by hand
+ :param predictor: Predictor
+ :param test_data: data iterator, must be non-shuffle
+ :param imdb: image database
+ :param vis: controls visualization
+ :param thresh: valid detection threshold
+ :return:
+ """
+ assert vis or not test_data.shuffle
+ data_names = [k[0] for k in test_data.provide_data]
+
+ nms = py_nms_wrapper(config.TEST.NMS)
+
+ # limit detections to max_per_image over all classes
+ max_per_image = -1
+
+ num_images = imdb.num_images
+ # all detections are collected into:
+ # all_boxes[cls][image] = N x 5 array of detections in
+ # (x1, y1, x2, y2, score)
+ all_boxes = [[[] for _ in range(num_images)]
+ for _ in range(imdb.num_classes)]
+
+ i = 0
+ t = time.time()
+ for im_info, data_batch in test_data:
+ t1 = time.time() - t
+ t = time.time()
+
+ scale = im_info[0, 2]
+ scores, boxes, data_dict = im_detect(predictor, data_batch, data_names,
+ scale)
+
+ t2 = time.time() - t
+ t = time.time()
+
+ for j in range(1, imdb.num_classes):
+ indexes = np.where(scores[:, j] > thresh)[0]
+ cls_scores = scores[indexes, j, np.newaxis]
+ cls_boxes = boxes[indexes, j * 4:(j + 1) * 4]
+ cls_dets = np.hstack((cls_boxes, cls_scores))
+ keep = nms(cls_dets)
+ all_boxes[j][i] = cls_dets[keep, :]
+
+ if max_per_image > 0:
+ image_scores = np.hstack(
+ [all_boxes[j][i][:, -1] for j in range(1, imdb.num_classes)])
+ if len(image_scores) > max_per_image:
+ image_thresh = np.sort(image_scores)[-max_per_image]
+ for j in range(1, imdb.num_classes):
+ keep = np.where(all_boxes[j][i][:, -1] >= image_thresh)[0]
+ all_boxes[j][i] = all_boxes[j][i][keep, :]
+
+ if vis:
+ boxes_this_image = [[]] + [
+ all_boxes[j][i] for j in range(1, imdb.num_classes)
+ ]
+ vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image,
+ imdb.classes, scale)
+
+ t3 = time.time() - t
+ t = time.time()
+ logger.info('testing %d/%d data %.4fs net %.4fs post %.4fs' %
+ (i, imdb.num_images, t1, t2, t3))
+ i += 1
+
+ det_file = os.path.join(imdb.cache_path, imdb.name + '_detections.pkl')
+ with open(det_file, 'wb') as f:
+ pickle.dump(all_boxes, f, protocol=pickle.HIGHEST_PROTOCOL)
+
+ imdb.evaluate_detections(all_boxes)
+
+
+def vis_all_detection(im_array, detections, class_names, scale):
+ """
+ visualize all detections in one image
+ :param im_array: [b=1 c h w] in rgb
+ :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ]
+ :param class_names: list of names in imdb
+ :param scale: visualize the scaled image
+ :return:
+ """
+ import matplotlib.pyplot as plt
+ import random
+ im = image.transform_inverse(im_array, config.PIXEL_MEANS)
+ plt.imshow(im)
+ for j, name in enumerate(class_names):
+ if name == '__background__':
+ continue
+ color = (random.random(), random.random(), random.random()
+ ) # generate a random color
+ dets = detections[j]
+ for det in dets:
+ bbox = det[:4] * scale
+ score = det[-1]
+ rect = plt.Rectangle((bbox[0], bbox[1]),
+ bbox[2] - bbox[0],
+ bbox[3] - bbox[1],
+ fill=False,
+ edgecolor=color,
+ linewidth=3.5)
+ plt.gca().add_patch(rect)
+ plt.gca().text(bbox[0],
+ bbox[1] - 2,
+ '{:s} {:.3f}'.format(name, score),
+ bbox=dict(facecolor=color, alpha=0.5),
+ fontsize=12,
+ color='white')
+ plt.show()
+
+
+def draw_all_detection(im_array, detections, class_names, scale):
+ """
+ visualize all detections in one image
+ :param im_array: [b=1 c h w] in rgb
+ :param detections: [ numpy.ndarray([[x1 y1 x2 y2 score]]) for j in classes ]
+ :param class_names: list of names in imdb
+ :param scale: visualize the scaled image
+ :return:
+ """
+ import cv2
+ import random
+ color_white = (255, 255, 255)
+ im = image.transform_inverse(im_array, config.PIXEL_MEANS)
+ # change to bgr
+ im = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
+ for j, name in enumerate(class_names):
+ if name == '__background__':
+ continue
+ color = (random.randint(0, 255), random.randint(0, 255),
+ random.randint(0, 255)) # generate a random color
+ dets = detections[j]
+ for det in dets:
+ bbox = det[:4] * scale
+ score = det[-1]
+ bbox = list(map(int, bbox)) # map is lazy in Python 3; materialize for indexing
+ cv2.rectangle(im, (bbox[0], bbox[1]), (bbox[2], bbox[3]),
+ color=color,
+ thickness=2)
+ cv2.putText(im,
+ '%s %.3f' % (class_names[j], score),
+ (bbox[0], bbox[1] + 10),
+ color=color_white,
+ fontFace=cv2.FONT_HERSHEY_COMPLEX,
+ fontScale=0.5)
+ return im
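The per-class post-processing loop in `pred_eval` (score threshold, per-class boxes, NMS) can be summarized in plain NumPy. This is an illustrative sketch, not part of the diff; `postprocess` and the identity `nms` callable are hypothetical names.

```python
import numpy as np

def postprocess(scores, boxes, nms, thresh=1e-3, num_classes=2):
    """scores: (R, C), boxes: (R, 4*C). Returns one (M, 5) array per foreground class."""
    all_dets = []
    for j in range(1, num_classes):          # class 0 is background
        inds = np.where(scores[:, j] > thresh)[0]
        cls_scores = scores[inds, j, np.newaxis]
        cls_boxes = boxes[inds, j * 4:(j + 1) * 4]
        cls_dets = np.hstack((cls_boxes, cls_scores))   # (M, 5): x1 y1 x2 y2 score
        keep = nms(cls_dets)
        all_dets.append(cls_dets[keep, :])
    return all_dets

# toy run with an identity "nms" that keeps every row
scores = np.array([[0.1, 0.9], [0.8, 0.2]])
boxes = np.tile(np.array([[0., 0., 4., 4.], [1., 1., 5., 5.]]), (1, 2))
dets = postprocess(scores, boxes, nms=lambda d: list(range(len(d))))
print(dets[0].shape)  # (2, 5)
```

The real loop additionally caps detections per image when `max_per_image > 0` by thresholding on the sorted score list.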
diff --git a/insightface/detection/retinaface/rcnn/cython/.gitignore b/insightface/detection/retinaface/rcnn/cython/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..15a165d427164752e6ca66d787cf8dbf21a87cd5
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/cython/.gitignore
@@ -0,0 +1,3 @@
+*.c
+*.cpp
+*.so
diff --git a/insightface/detection/retinaface/rcnn/cython/__init__.py b/insightface/detection/retinaface/rcnn/cython/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/insightface/detection/retinaface/rcnn/cython/anchors.pyx b/insightface/detection/retinaface/rcnn/cython/anchors.pyx
new file mode 100755
index 0000000000000000000000000000000000000000..7005199125c8c82a59d662cdebcfe8c0117e3ffd
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/cython/anchors.pyx
@@ -0,0 +1,35 @@
+cimport cython
+import numpy as np
+cimport numpy as np
+
+DTYPE = np.float32
+ctypedef np.float32_t DTYPE_t
+
+def anchors_cython(int height, int width, int stride, np.ndarray[DTYPE_t, ndim=2] base_anchors):
+ """
+ Parameters
+ ----------
+ height: height of plane
+ width: width of plane
+ stride: stride of the original image
+ base_anchors: (A, 4) a base set of anchors
+ Returns
+ -------
+ all_anchors: (height, width, A, 4) ndarray of anchors spreading over the plane
+ """
+ cdef unsigned int A = base_anchors.shape[0]
+ cdef np.ndarray[DTYPE_t, ndim=4] all_anchors = np.zeros((height, width, A, 4), dtype=DTYPE)
+ cdef unsigned int iw, ih
+ cdef unsigned int k
+ cdef unsigned int sh
+ cdef unsigned int sw
+ for iw in range(width):
+ sw = iw * stride
+ for ih in range(height):
+ sh = ih * stride
+ for k in range(A):
+ all_anchors[ih, iw, k, 0] = base_anchors[k, 0] + sw
+ all_anchors[ih, iw, k, 1] = base_anchors[k, 1] + sh
+ all_anchors[ih, iw, k, 2] = base_anchors[k, 2] + sw
+ all_anchors[ih, iw, k, 3] = base_anchors[k, 3] + sh
+ return all_anchors
\ No newline at end of file
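The triple loop in `anchors_cython` is equivalent to a single NumPy broadcast: each feature-map cell shifts the base anchors by `(x*stride, y*stride)` in both corners. A sketch for reference (the function name `anchors_numpy` is illustrative, not part of the diff):

```python
import numpy as np

def anchors_numpy(height, width, stride, base_anchors):
    # Per-cell pixel offsets on the original image.
    shift_x = np.arange(width) * stride          # (W,)
    shift_y = np.arange(height) * stride         # (H,)
    # Offsets apply to (x1, y1, x2, y2) as (sx, sy, sx, sy).
    shifts = np.stack(np.broadcast_arrays(
        shift_x[None, :], shift_y[:, None],
        shift_x[None, :], shift_y[:, None]), axis=-1)   # (H, W, 4)
    # Broadcast the A base anchors over every cell: (H, W, A, 4).
    return shifts[:, :, None, :] + base_anchors[None, None, :, :]

base = np.array([[-8., -8., 8., 8.]], dtype=np.float32)
grid = anchors_numpy(2, 3, 16, base)
print(grid.shape)  # (2, 3, 1, 4)
```

The Cython version avoids the intermediate arrays, but both produce the same `(height, width, A, 4)` grid.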
diff --git a/insightface/detection/retinaface/rcnn/cython/bbox.pyx b/insightface/detection/retinaface/rcnn/cython/bbox.pyx
new file mode 100644
index 0000000000000000000000000000000000000000..0c49e120e5ab177e53c318709f23f06372367911
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/cython/bbox.pyx
@@ -0,0 +1,55 @@
+# --------------------------------------------------------
+# Fast R-CNN
+# Copyright (c) 2015 Microsoft
+# Licensed under The MIT License [see LICENSE for details]
+# Written by Sergey Karayev
+# --------------------------------------------------------
+
+cimport cython
+import numpy as np
+cimport numpy as np
+
+DTYPE = np.float64
+ctypedef np.float64_t DTYPE_t
+
+def bbox_overlaps_cython(
+ np.ndarray[DTYPE_t, ndim=2] boxes,
+ np.ndarray[DTYPE_t, ndim=2] query_boxes):
+ """
+ Parameters
+ ----------
+ boxes: (N, 4) ndarray of float
+ query_boxes: (K, 4) ndarray of float
+ Returns
+ -------
+ overlaps: (N, K) ndarray of overlap between boxes and query_boxes
+ """
+ cdef unsigned int N = boxes.shape[0]
+ cdef unsigned int K = query_boxes.shape[0]
+ cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)
+ cdef DTYPE_t iw, ih, box_area
+ cdef DTYPE_t ua
+ cdef unsigned int k, n
+ for k in range(K):
+ box_area = (
+ (query_boxes[k, 2] - query_boxes[k, 0] + 1) *
+ (query_boxes[k, 3] - query_boxes[k, 1] + 1)
+ )
+ for n in range(N):
+ iw = (
+ min(boxes[n, 2], query_boxes[k, 2]) -
+ max(boxes[n, 0], query_boxes[k, 0]) + 1
+ )
+ if iw > 0:
+ ih = (
+ min(boxes[n, 3], query_boxes[k, 3]) -
+ max(boxes[n, 1], query_boxes[k, 1]) + 1
+ )
+ if ih > 0:
+ ua = float(
+ (boxes[n, 2] - boxes[n, 0] + 1) *
+ (boxes[n, 3] - boxes[n, 1] + 1) +
+ box_area - iw * ih
+ )
+ overlaps[n, k] = iw * ih / ua
+ return overlaps
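The nested loop above computes pairwise IoU with the legacy `+1` pixel convention; the same result is obtained with NumPy broadcasting. A sketch, not part of the diff (`bbox_overlaps_np` is an illustrative name):

```python
import numpy as np

def bbox_overlaps_np(boxes, query_boxes):
    """Vectorized (N, K) IoU matrix, with the same +1 pixel convention."""
    b_area = ((boxes[:, 2] - boxes[:, 0] + 1) *
              (boxes[:, 3] - boxes[:, 1] + 1))[:, None]              # (N, 1)
    q_area = ((query_boxes[:, 2] - query_boxes[:, 0] + 1) *
              (query_boxes[:, 3] - query_boxes[:, 1] + 1))[None, :]  # (1, K)
    # Intersection width/height, clamped at zero for disjoint boxes.
    iw = (np.minimum(boxes[:, None, 2], query_boxes[None, :, 2]) -
          np.maximum(boxes[:, None, 0], query_boxes[None, :, 0]) + 1).clip(min=0)
    ih = (np.minimum(boxes[:, None, 3], query_boxes[None, :, 3]) -
          np.maximum(boxes[:, None, 1], query_boxes[None, :, 1]) + 1).clip(min=0)
    inter = iw * ih
    return inter / (b_area + q_area - inter)

a = np.array([[0., 0., 9., 9.]])
b = np.array([[0., 0., 9., 9.], [5., 5., 14., 14.]])
print(bbox_overlaps_np(a, b))  # identical box -> 1.0; half-overlapping -> 25/175
```

The Cython loop wins on memory for large N*K, since it never materializes the (N, K) intermediate width/height arrays.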
diff --git a/insightface/detection/retinaface/rcnn/cython/cpu_nms.pyx b/insightface/detection/retinaface/rcnn/cython/cpu_nms.pyx
new file mode 100644
index 0000000000000000000000000000000000000000..1d0bef3321d78fc73556906649ab61eaaea60d86
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/cython/cpu_nms.pyx
@@ -0,0 +1,68 @@
+# --------------------------------------------------------
+# Fast R-CNN
+# Copyright (c) 2015 Microsoft
+# Licensed under The MIT License [see LICENSE for details]
+# Written by Ross Girshick
+# --------------------------------------------------------
+
+import numpy as np
+cimport numpy as np
+
+cdef inline np.float32_t max(np.float32_t a, np.float32_t b):
+ return a if a >= b else b
+
+cdef inline np.float32_t min(np.float32_t a, np.float32_t b):
+ return a if a <= b else b
+
+def cpu_nms(np.ndarray[np.float32_t, ndim=2] dets, float thresh):
+ cdef np.ndarray[np.float32_t, ndim=1] x1 = dets[:, 0]
+ cdef np.ndarray[np.float32_t, ndim=1] y1 = dets[:, 1]
+ cdef np.ndarray[np.float32_t, ndim=1] x2 = dets[:, 2]
+ cdef np.ndarray[np.float32_t, ndim=1] y2 = dets[:, 3]
+ cdef np.ndarray[np.float32_t, ndim=1] scores = dets[:, 4]
+
+ cdef np.ndarray[np.float32_t, ndim=1] areas = (x2 - x1 + 1) * (y2 - y1 + 1)
+ cdef np.ndarray[np.int_t, ndim=1] order = scores.argsort()[::-1]
+
+ cdef int ndets = dets.shape[0]
+ cdef np.ndarray[np.int_t, ndim=1] suppressed = \
+ np.zeros((ndets), dtype=np.int_)
+
+ # nominal indices
+ cdef int _i, _j
+ # sorted indices
+ cdef int i, j
+ # temp variables for box i's (the box currently under consideration)
+ cdef np.float32_t ix1, iy1, ix2, iy2, iarea
+ # variables for computing overlap with box j (lower scoring box)
+ cdef np.float32_t xx1, yy1, xx2, yy2
+ cdef np.float32_t w, h
+ cdef np.float32_t inter, ovr
+
+ keep = []
+ for _i in range(ndets):
+ i = order[_i]
+ if suppressed[i] == 1:
+ continue
+ keep.append(i)
+ ix1 = x1[i]
+ iy1 = y1[i]
+ ix2 = x2[i]
+ iy2 = y2[i]
+ iarea = areas[i]
+ for _j in range(_i + 1, ndets):
+ j = order[_j]
+ if suppressed[j] == 1:
+ continue
+ xx1 = max(ix1, x1[j])
+ yy1 = max(iy1, y1[j])
+ xx2 = min(ix2, x2[j])
+ yy2 = min(iy2, y2[j])
+ w = max(0.0, xx2 - xx1 + 1)
+ h = max(0.0, yy2 - yy1 + 1)
+ inter = w * h
+ ovr = inter / (iarea + areas[j] - inter)
+ if ovr >= thresh:
+ suppressed[j] = 1
+
+ return keep
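For clarity, here is the same greedy NMS expressed with NumPy vector operations instead of the explicit inner loop. A sketch only; `nms_numpy` is an illustrative name, and the suppression rule matches `cpu_nms` (drop when IoU >= thresh):

```python
import numpy as np

def nms_numpy(dets, thresh):
    """Greedy NMS over a (N, 5) array of [x1, y1, x2, y2, score] rows."""
    x1, y1, x2, y2, scores = dets.T
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with all remaining boxes at once.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][ovr < thresh]     # keep only boxes below the threshold
    return keep

dets = np.array([[0, 0, 9, 9, 0.9],
                 [1, 1, 10, 10, 0.8],
                 [50, 50, 60, 60, 0.7]], dtype=np.float32)
print(nms_numpy(dets, 0.5))  # [0, 2]
```

The Cython version trades this vectorization for a `suppressed` bitmap, which avoids repeatedly reslicing `order`.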
diff --git a/insightface/detection/retinaface/rcnn/cython/gpu_nms.hpp b/insightface/detection/retinaface/rcnn/cython/gpu_nms.hpp
new file mode 100644
index 0000000000000000000000000000000000000000..68b6d42cd88b59496b22a9e77919abe529b09014
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/cython/gpu_nms.hpp
@@ -0,0 +1,2 @@
+void _nms(int* keep_out, int* num_out, const float* boxes_host, int boxes_num,
+ int boxes_dim, float nms_overlap_thresh, int device_id);
diff --git a/insightface/detection/retinaface/rcnn/cython/gpu_nms.pyx b/insightface/detection/retinaface/rcnn/cython/gpu_nms.pyx
new file mode 100644
index 0000000000000000000000000000000000000000..59d84afe94e42de3c456b73580ed83358a2b30d8
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/cython/gpu_nms.pyx
@@ -0,0 +1,31 @@
+# --------------------------------------------------------
+# Faster R-CNN
+# Copyright (c) 2015 Microsoft
+# Licensed under The MIT License [see LICENSE for details]
+# Written by Ross Girshick
+# --------------------------------------------------------
+
+import numpy as np
+cimport numpy as np
+
+assert sizeof(int) == sizeof(np.int32_t)
+
+cdef extern from "gpu_nms.hpp":
+ void _nms(np.int32_t*, int*, np.float32_t*, int, int, float, int)
+
+def gpu_nms(np.ndarray[np.float32_t, ndim=2] dets, float thresh,
+ np.int32_t device_id=0):
+ cdef int boxes_num = dets.shape[0]
+ cdef int boxes_dim = dets.shape[1]
+ cdef int num_out
+ cdef np.ndarray[np.int32_t, ndim=1] \
+ keep = np.zeros(boxes_num, dtype=np.int32)
+ cdef np.ndarray[np.float32_t, ndim=1] \
+ scores = dets[:, 4]
+ cdef np.ndarray[np.int_t, ndim=1] \
+ order = scores.argsort()[::-1]
+ cdef np.ndarray[np.float32_t, ndim=2] \
+ sorted_dets = dets[order, :]
+ _nms(&keep[0], &num_out, &sorted_dets[0, 0], boxes_num, boxes_dim, thresh, device_id)
+ keep = keep[:num_out]
+ return list(order[keep])
diff --git a/insightface/detection/retinaface/rcnn/cython/nms_kernel.cu b/insightface/detection/retinaface/rcnn/cython/nms_kernel.cu
new file mode 100644
index 0000000000000000000000000000000000000000..038a59012f60ebdf1182ecb778eb3b01a69bc5ed
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/cython/nms_kernel.cu
@@ -0,0 +1,144 @@
+// ------------------------------------------------------------------
+// Faster R-CNN
+// Copyright (c) 2015 Microsoft
+// Licensed under The MIT License [see fast-rcnn/LICENSE for details]
+// Written by Shaoqing Ren
+// ------------------------------------------------------------------
+
+#include "gpu_nms.hpp"
+#include <vector>
+#include <iostream>
+
+#define CUDA_CHECK(condition) \
+ /* Code block avoids redefinition of cudaError_t error */ \
+ do { \
+ cudaError_t error = condition; \
+ if (error != cudaSuccess) { \
+ std::cout << cudaGetErrorString(error) << std::endl; \
+ } \
+ } while (0)
+
+#define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0))
+int const threadsPerBlock = sizeof(unsigned long long) * 8;
+
+__device__ inline float devIoU(float const * const a, float const * const b) {
+ float left = max(a[0], b[0]), right = min(a[2], b[2]);
+ float top = max(a[1], b[1]), bottom = min(a[3], b[3]);
+ float width = max(right - left + 1, 0.f), height = max(bottom - top + 1, 0.f);
+ float interS = width * height;
+ float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1);
+ float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1);
+ return interS / (Sa + Sb - interS);
+}
+
+__global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh,
+ const float *dev_boxes, unsigned long long *dev_mask) {
+ const int row_start = blockIdx.y;
+ const int col_start = blockIdx.x;
+
+ // if (row_start > col_start) return;
+
+ const int row_size =
+ min(n_boxes - row_start * threadsPerBlock, threadsPerBlock);
+ const int col_size =
+ min(n_boxes - col_start * threadsPerBlock, threadsPerBlock);
+
+ __shared__ float block_boxes[threadsPerBlock * 5];
+ if (threadIdx.x < col_size) {
+ block_boxes[threadIdx.x * 5 + 0] =
+ dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0];
+ block_boxes[threadIdx.x * 5 + 1] =
+ dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1];
+ block_boxes[threadIdx.x * 5 + 2] =
+ dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2];
+ block_boxes[threadIdx.x * 5 + 3] =
+ dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3];
+ block_boxes[threadIdx.x * 5 + 4] =
+ dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4];
+ }
+ __syncthreads();
+
+ if (threadIdx.x < row_size) {
+ const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x;
+ const float *cur_box = dev_boxes + cur_box_idx * 5;
+ int i = 0;
+ unsigned long long t = 0;
+ int start = 0;
+ if (row_start == col_start) {
+ start = threadIdx.x + 1;
+ }
+ for (i = start; i < col_size; i++) {
+ if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) {
+ t |= 1ULL << i;
+ }
+ }
+ const int col_blocks = DIVUP(n_boxes, threadsPerBlock);
+ dev_mask[cur_box_idx * col_blocks + col_start] = t;
+ }
+}
+
+void _set_device(int device_id) {
+ int current_device;
+ CUDA_CHECK(cudaGetDevice(&current_device));
+ if (current_device == device_id) {
+ return;
+ }
+ // The call to cudaSetDevice must come before any calls to Get, which
+ // may perform initialization using the GPU.
+ CUDA_CHECK(cudaSetDevice(device_id));
+}
+
+void _nms(int* keep_out, int* num_out, const float* boxes_host, int boxes_num,
+ int boxes_dim, float nms_overlap_thresh, int device_id) {
+ _set_device(device_id);
+
+ float* boxes_dev = NULL;
+ unsigned long long* mask_dev = NULL;
+
+ const int col_blocks = DIVUP(boxes_num, threadsPerBlock);
+
+ CUDA_CHECK(cudaMalloc(&boxes_dev,
+ boxes_num * boxes_dim * sizeof(float)));
+ CUDA_CHECK(cudaMemcpy(boxes_dev,
+ boxes_host,
+ boxes_num * boxes_dim * sizeof(float),
+ cudaMemcpyHostToDevice));
+
+ CUDA_CHECK(cudaMalloc(&mask_dev,
+ boxes_num * col_blocks * sizeof(unsigned long long)));
+
+ dim3 blocks(DIVUP(boxes_num, threadsPerBlock),
+ DIVUP(boxes_num, threadsPerBlock));
+ dim3 threads(threadsPerBlock);
+ nms_kernel<<<blocks, threads>>>(boxes_num,
+ nms_overlap_thresh,
+ boxes_dev,
+ mask_dev);
+
+ std::vector<unsigned long long> mask_host(boxes_num * col_blocks);
+ CUDA_CHECK(cudaMemcpy(&mask_host[0],
+ mask_dev,
+ sizeof(unsigned long long) * boxes_num * col_blocks,
+ cudaMemcpyDeviceToHost));
+
+ std::vector<unsigned long long> remv(col_blocks);
+ memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks);
+
+ int num_to_keep = 0;
+ for (int i = 0; i < boxes_num; i++) {
+ int nblock = i / threadsPerBlock;
+ int inblock = i % threadsPerBlock;
+
+ if (!(remv[nblock] & (1ULL << inblock))) {
+ keep_out[num_to_keep++] = i;
+ unsigned long long *p = &mask_host[0] + i * col_blocks;
+ for (int j = nblock; j < col_blocks; j++) {
+ remv[j] |= p[j];
+ }
+ }
+ }
+ *num_out = num_to_keep;
+
+ CUDA_CHECK(cudaFree(boxes_dev));
+ CUDA_CHECK(cudaFree(mask_dev));
+}
diff --git a/insightface/detection/retinaface/rcnn/cython/setup.py b/insightface/detection/retinaface/rcnn/cython/setup.py
new file mode 100644
index 0000000000000000000000000000000000000000..1af1a1aed9a9ff0ac9450f21fd1136c310214d43
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/cython/setup.py
@@ -0,0 +1,165 @@
+# --------------------------------------------------------
+# Fast R-CNN
+# Copyright (c) 2015 Microsoft
+# Licensed under The MIT License [see LICENSE for details]
+# Written by Ross Girshick
+# --------------------------------------------------------
+
+import os
+from os.path import join as pjoin
+from setuptools import setup
+from distutils.extension import Extension
+from Cython.Distutils import build_ext
+import numpy as np
+
+
+def find_in_path(name, path):
+ "Find a file in a search path"
+ # Adapted from
+ # http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/
+ for dir in path.split(os.pathsep):
+ binpath = pjoin(dir, name)
+ if os.path.exists(binpath):
+ return os.path.abspath(binpath)
+ return None
+
+
+def locate_cuda():
+ """Locate the CUDA environment on the system
+
+ Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64'
+ and values giving the absolute path to each directory.
+
+ Starts by looking for the CUDAHOME env variable. If not found, everything
+ is based on finding 'nvcc' in the PATH.
+ """
+
+ # first check if the CUDAHOME env variable is in use
+ if 'CUDAHOME' in os.environ:
+ home = os.environ['CUDAHOME']
+ nvcc = pjoin(home, 'bin', 'nvcc')
+ else:
+ # otherwise, search the PATH for NVCC
+ default_path = pjoin(os.sep, 'usr', 'local', 'cuda', 'bin')
+ nvcc = find_in_path('nvcc',
+ os.environ['PATH'] + os.pathsep + default_path)
+ if nvcc is None:
+ raise EnvironmentError(
+ 'The nvcc binary could not be '
+ 'located in your $PATH. Either add it to your path, or set $CUDAHOME'
+ )
+ home = os.path.dirname(os.path.dirname(nvcc))
+
+ cudaconfig = {
+ 'home': home,
+ 'nvcc': nvcc,
+ 'include': pjoin(home, 'include'),
+ 'lib64': pjoin(home, 'lib64')
+ }
+ for k, v in cudaconfig.items():
+ if not os.path.exists(v):
+ raise EnvironmentError(
+ 'The CUDA %s path could not be located in %s' % (k, v))
+
+ return cudaconfig
+
+
+# Test if CUDA could be found
+try:
+ CUDA = locate_cuda()
+except EnvironmentError:
+ CUDA = None
+
+# Obtain the numpy include directory. This logic works across numpy versions.
+try:
+ numpy_include = np.get_include()
+except AttributeError:
+ numpy_include = np.get_numpy_include()
+
+
+def customize_compiler_for_nvcc(self):
+ """inject deep into distutils to customize how the dispatch
+ to gcc/nvcc works.
+
+ If you subclass UnixCCompiler, it's not trivial to get your subclass
+ injected in, and still have the right customizations (i.e.
+ distutils.sysconfig.customize_compiler) run on it. So instead of going
+ the OO route, I have this. Note, it's kind of like a weird functional
+ subclassing going on."""
+
+ # tell the compiler it can process .cu files
+ self.src_extensions.append('.cu')
+
+ # save references to the default compiler_so and _compile methods
+ default_compiler_so = self.compiler_so
+ super = self._compile
+
+ # now redefine the _compile method. This gets executed for each
+ # object but distutils doesn't have the ability to change compilers
+ # based on source extension: we add it.
+ def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts):
+ if os.path.splitext(src)[1] == '.cu':
+ # use the cuda for .cu files
+ self.set_executable('compiler_so', CUDA['nvcc'])
+ # use only a subset of the extra_postargs, which are 1-1 translated
+ # from the extra_compile_args in the Extension class
+ postargs = extra_postargs['nvcc']
+ else:
+ postargs = extra_postargs['gcc']
+
+ super(obj, src, ext, cc_args, postargs, pp_opts)
+ # reset the default compiler_so, which we might have changed for cuda
+ self.compiler_so = default_compiler_so
+
+ # inject our redefined _compile method into the class
+ self._compile = _compile
+
+
+# run the customize_compiler
+class custom_build_ext(build_ext):
+ def build_extensions(self):
+ customize_compiler_for_nvcc(self.compiler)
+ build_ext.build_extensions(self)
+
+
+ext_modules = [
+ Extension("bbox", ["bbox.pyx"],
+ extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
+ include_dirs=[numpy_include]),
+ Extension("anchors", ["anchors.pyx"],
+ extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
+ include_dirs=[numpy_include]),
+ Extension("cpu_nms", ["cpu_nms.pyx"],
+ extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
+ include_dirs=[numpy_include]),
+]
+
+if CUDA is not None:
+ ext_modules.append(
+ Extension(
+ 'gpu_nms',
+ ['nms_kernel.cu', 'gpu_nms.pyx'],
+ library_dirs=[CUDA['lib64']],
+ libraries=['cudart'],
+ language='c++',
+ runtime_library_dirs=[CUDA['lib64']],
+ # this syntax is specific to this build system
+ # we're only going to use certain compiler args with nvcc and not with
+ # gcc the implementation of this trick is in customize_compiler() below
+ extra_compile_args={
+ 'gcc': ["-Wno-unused-function"],
+ 'nvcc': [
+ '-arch=sm_35', '--ptxas-options=-v', '-c',
+ '--compiler-options', "'-fPIC'"
+ ]
+ },
+ include_dirs=[numpy_include, CUDA['include']]))
+else:
+ print('Skipping GPU_NMS')
+
+setup(
+ name='frcnn_cython',
+ ext_modules=ext_modules,
+ # inject our custom trigger
+ cmdclass={'build_ext': custom_build_ext},
+)
diff --git a/insightface/detection/retinaface/rcnn/dataset/__init__.py b/insightface/detection/retinaface/rcnn/dataset/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..fcee572aeb234733990eb49e4a3d54b458426b0f
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/dataset/__init__.py
@@ -0,0 +1,2 @@
+from .imdb import IMDB
+from .retinaface import retinaface
diff --git a/insightface/detection/retinaface/rcnn/dataset/ds_utils.py b/insightface/detection/retinaface/rcnn/dataset/ds_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..9432515eeb45040e0ccc87809773315f4aaf836b
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/dataset/ds_utils.py
@@ -0,0 +1,16 @@
+import numpy as np
+
+
+def unique_boxes(boxes, scale=1.0):
+ """ return indices of unique boxes """
+ v = np.array([1, 1e3, 1e6, 1e9])
+ hashes = np.round(boxes * scale).dot(v).astype(np.int64)
+ _, index = np.unique(hashes, return_index=True)
+ return np.sort(index)
+
+
+def filter_small_boxes(boxes, min_size):
+ w = boxes[:, 2] - boxes[:, 0]
+ h = boxes[:, 3] - boxes[:, 1]
+ keep = np.where((w >= min_size) & (h >= min_size))[0]
+ return keep
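The hashing trick in `unique_boxes` packs each box's four rounded coordinates into one integer (weights 1, 1e3, 1e6, 1e9), so `np.unique` on the hashes deduplicates whole boxes. A pure-NumPy illustration of the idea; `unique_boxes_np` is a hypothetical name used here only for the example:

```python
import numpy as np

def unique_boxes_np(boxes, scale=1.0):
    # Same packing trick as ds_utils.unique_boxes, with a fixed-width
    # integer type in place of the deprecated np.int alias.
    v = np.array([1, 1e3, 1e6, 1e9])
    hashes = np.round(boxes * scale).dot(v).astype(np.int64)
    _, index = np.unique(hashes, return_index=True)
    return np.sort(index)

boxes = np.array([[0, 0, 10, 10],
                  [0, 0, 10, 10],    # exact duplicate of row 0
                  [5, 5, 20, 20]], dtype=np.float64)
print(unique_boxes_np(boxes))  # [0 2]
```

Note the packing assumes coordinates below 1000 pixels per slot; larger images would need wider weights.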
diff --git a/insightface/detection/retinaface/rcnn/dataset/imdb.py b/insightface/detection/retinaface/rcnn/dataset/imdb.py
new file mode 100644
index 0000000000000000000000000000000000000000..3c19817a857bfb9a2a030f728851f47e125e04e9
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/dataset/imdb.py
@@ -0,0 +1,351 @@
+"""
+General image database
+An image database maintains a list of relative image paths, called image_set_index,
+and transforms an index into an absolute image path. For training, ground
+truth and proposals must be mixed together.
+roidb
+basic format [image_index]
+['image', 'height', 'width', 'flipped',
+'boxes', 'gt_classes', 'gt_overlaps', 'max_classes', 'max_overlaps', 'bbox_targets']
+"""
+
+from ..logger import logger
+import os
+try:
+ import cPickle as pickle
+except ImportError:
+ import pickle
+import numpy as np
+from ..processing.bbox_transform import bbox_overlaps
+
+
+class IMDB(object):
+ def __init__(self, name, image_set, root_path, dataset_path):
+ """
+ basic information about an image database
+ :param name: name of image database will be used for any output
+ :param root_path: root path store cache and proposal data
+ :param dataset_path: dataset path store images and image lists
+ """
+ self.name = name + '_' + image_set
+ self.image_set = image_set
+ self.root_path = root_path
+ self.data_path = dataset_path
+
+ # abstract attributes
+ self.classes = []
+ self.num_classes = 0
+ self.image_set_index = []
+ self.num_images = 0
+
+ self.config = {}
+
+ def image_path_from_index(self, index):
+ raise NotImplementedError
+
+ def gt_roidb(self):
+ raise NotImplementedError
+
+ def evaluate_detections(self, detections):
+ raise NotImplementedError
+
+ @property
+ def cache_path(self):
+ """
+ make a directory to store all caches
+ :return: cache path
+ """
+ cache_path = os.path.join(self.root_path, 'cache')
+ if not os.path.exists(cache_path):
+ os.mkdir(cache_path)
+ return cache_path
+
+ def image_path_at(self, index):
+ """
+ access image at index in image database
+ :param index: image index in image database
+ :return: image path
+ """
+ return self.image_path_from_index(self.image_set_index[index])
+
+ def load_rpn_data(self, full=False):
+ if full:
+ rpn_file = os.path.join(self.root_path, 'rpn_data',
+ self.name + '_full_rpn.pkl')
+ else:
+ rpn_file = os.path.join(self.root_path, 'rpn_data',
+ self.name + '_rpn.pkl')
+ assert os.path.exists(
+ rpn_file), '%s rpn data not found at %s' % (self.name, rpn_file)
+ logger.info('%s loading rpn data from %s' % (self.name, rpn_file))
+ with open(rpn_file, 'rb') as f:
+ box_list = pickle.load(f)
+ return box_list
+
+ def load_rpn_roidb(self, gt_roidb):
+ """
+ turn rpn detection boxes into roidb
+ :param gt_roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped']
+ :return: roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped']
+ """
+ box_list = self.load_rpn_data()
+ return self.create_roidb_from_box_list(box_list, gt_roidb)
+
+ def rpn_roidb(self, gt_roidb, append_gt=False):
+ """
+ get rpn roidb and ground truth roidb
+ :param gt_roidb: ground truth roidb
+ :param append_gt: append ground truth
+ :return: roidb of rpn
+ """
+ if append_gt:
+ logger.info('%s appending ground truth annotations' % self.name)
+ rpn_roidb = self.load_rpn_roidb(gt_roidb)
+ roidb = IMDB.merge_roidbs(gt_roidb, rpn_roidb)
+ else:
+ roidb = self.load_rpn_roidb(gt_roidb)
+ return roidb
+
+ def create_roidb_from_box_list(self, box_list, gt_roidb):
+ """
+ given ground truth, prepare roidb
+ :param box_list: [image_index] ndarray of [box_index][x1, x2, y1, y2]
+ :param gt_roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped']
+ :return: roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped']
+ """
+ assert len(
+ box_list
+ ) == self.num_images, 'number of boxes matrix must match number of images'
+ roidb = []
+ for i in range(self.num_images):
+ roi_rec = dict()
+ roi_rec['image'] = gt_roidb[i]['image']
+ roi_rec['height'] = gt_roidb[i]['height']
+ roi_rec['width'] = gt_roidb[i]['width']
+
+ boxes = box_list[i]
+ if boxes.shape[1] == 5:
+ boxes = boxes[:, :4]
+ num_boxes = boxes.shape[0]
+ overlaps = np.zeros((num_boxes, self.num_classes),
+ dtype=np.float32)
+ if gt_roidb is not None and gt_roidb[i]['boxes'].size > 0:
+ gt_boxes = gt_roidb[i]['boxes']
+ gt_classes = gt_roidb[i]['gt_classes']
+ # n boxes and k gt_boxes => n * k overlap
+ gt_overlaps = bbox_overlaps(boxes.astype(np.float64),
+ gt_boxes.astype(np.float64))
+ # for each box in n boxes, select only maximum overlap (must be greater than zero)
+ argmaxes = gt_overlaps.argmax(axis=1)
+ maxes = gt_overlaps.max(axis=1)
+ I = np.where(maxes > 0)[0]
+ overlaps[I, gt_classes[argmaxes[I]]] = maxes[I]
+
+ roi_rec.update({
+ 'boxes':
+ boxes,
+ 'gt_classes':
+ np.zeros((num_boxes, ), dtype=np.int32),
+ 'gt_overlaps':
+ overlaps,
+ 'max_classes':
+ overlaps.argmax(axis=1),
+ 'max_overlaps':
+ overlaps.max(axis=1),
+ 'flipped':
+ False
+ })
+
+ # background roi => background class
+ zero_indexes = np.where(roi_rec['max_overlaps'] == 0)[0]
+ assert all(roi_rec['max_classes'][zero_indexes] == 0)
+ # foreground roi => foreground class
+ nonzero_indexes = np.where(roi_rec['max_overlaps'] > 0)[0]
+ assert all(roi_rec['max_classes'][nonzero_indexes] != 0)
+
+ roidb.append(roi_rec)
+
+ return roidb
+
+ def append_flipped_images(self, roidb):
+ """
+ append flipped images to an roidb
+ flip boxes coordinates, images will be actually flipped when loading into network
+ :param roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped']
+ :return: roidb: [image_index]['boxes', 'gt_classes', 'gt_overlaps', 'flipped']
+ """
+ logger.info('%s append flipped images to roidb' % self.name)
+ assert self.num_images == len(roidb)
+ for i in range(self.num_images):
+ roi_rec = roidb[i]
+ entry = {
+ 'image': roi_rec['image'],
+ 'stream': roi_rec['stream'],
+ 'height': roi_rec['height'],
+ 'width': roi_rec['width'],
+ #'boxes': boxes,
+ 'gt_classes': roidb[i]['gt_classes'],
+ 'gt_overlaps': roidb[i]['gt_overlaps'],
+ 'max_classes': roidb[i]['max_classes'],
+ 'max_overlaps': roidb[i]['max_overlaps'],
+ 'flipped': True
+ }
+ for k in roi_rec:
+ if not k.startswith('boxes'):
+ continue
+ boxes = roi_rec[k].copy()
+ oldx1 = boxes[:, 0].copy()
+ oldx2 = boxes[:, 2].copy()
+ boxes[:, 0] = roi_rec['width'] - oldx2 - 1
+ boxes[:, 2] = roi_rec['width'] - oldx1 - 1
+ assert (boxes[:, 2] >= boxes[:, 0]).all()
+ entry[k] = boxes
+ if 'landmarks' in roi_rec:
+ k = 'landmarks'
+ landmarks = roi_rec[k].copy()
+ landmarks[:, :, 0] *= -1
+ landmarks[:, :, 0] += (roi_rec['width'] - 1)
+ #for a in range(0,10,2):
+ # oldx1 = landmarks[:, a].copy()
+ # landmarks[:,a] = roi_rec['width'] - oldx1 - 1
+ order = [1, 0, 2, 4, 3]
+ flandmarks = landmarks.copy()
+ for idx, a in enumerate(order):
+ flandmarks[:, idx, :] = landmarks[:, a, :]
+
+ entry[k] = flandmarks
+ if 'blur' in roi_rec:
+ entry['blur'] = roi_rec['blur']
+ roidb.append(entry)
+
+ self.image_set_index *= 2
+ return roidb
+
+ def evaluate_recall(self, roidb, candidate_boxes=None, thresholds=None):
+ """
+ evaluate detection proposal recall metrics
+ record max overlap value for each gt box; return vector of overlap values
+ :param roidb: used to evaluate
+ :param candidate_boxes: if not given, use roidb's non-gt boxes
+ :param thresholds: array-like recall threshold
+ :return: None
+ ar: average recall, recalls: vector recalls at each IoU overlap threshold
+ thresholds: vector of IoU overlap threshold, gt_overlaps: vector of all ground-truth overlaps
+ """
+ area_names = [
+ 'all', '0-25', '25-50', '50-100', '100-200', '200-300', '300-inf'
+ ]
+ area_ranges = [[0**2, 1e5**2], [0**2, 25**2], [25**2, 50**2],
+ [50**2, 100**2], [100**2, 200**2], [200**2, 300**2],
+ [300**2, 1e5**2]]
+ area_counts = []
+ for area_name, area_range in zip(area_names[1:], area_ranges[1:]):
+ area_count = 0
+ for i in range(self.num_images):
+ if candidate_boxes is None:
+ # default is use the non-gt boxes from roidb
+ non_gt_inds = np.where(roidb[i]['gt_classes'] == 0)[0]
+ boxes = roidb[i]['boxes'][non_gt_inds, :]
+ else:
+ boxes = candidate_boxes[i]
+ boxes_areas = (boxes[:, 2] - boxes[:, 0] +
+ 1) * (boxes[:, 3] - boxes[:, 1] + 1)
+ valid_range_inds = np.where((boxes_areas >= area_range[0])
+ & (boxes_areas < area_range[1]))[0]
+ area_count += len(valid_range_inds)
+ area_counts.append(area_count)
+ total_counts = float(sum(area_counts))
+ for area_name, area_count in zip(area_names[1:], area_counts):
+ logger.info('percentage of %s is %f' %
+ (area_name, area_count / total_counts))
+ logger.info('average number of proposal is %f' %
+ (total_counts / self.num_images))
+ for area_name, area_range in zip(area_names, area_ranges):
+ gt_overlaps = np.zeros(0)
+ num_pos = 0
+ for i in range(self.num_images):
+ # check for max_overlaps == 1 avoids including crowd annotations
+ max_gt_overlaps = roidb[i]['gt_overlaps'].max(axis=1)
+ gt_inds = np.where((roidb[i]['gt_classes'] > 0)
+ & (max_gt_overlaps == 1))[0]
+ gt_boxes = roidb[i]['boxes'][gt_inds, :]
+ gt_areas = (gt_boxes[:, 2] - gt_boxes[:, 0] +
+ 1) * (gt_boxes[:, 3] - gt_boxes[:, 1] + 1)
+ valid_gt_inds = np.where((gt_areas >= area_range[0])
+ & (gt_areas < area_range[1]))[0]
+ gt_boxes = gt_boxes[valid_gt_inds, :]
+ num_pos += len(valid_gt_inds)
+
+ if candidate_boxes is None:
+ # default is use the non-gt boxes from roidb
+ non_gt_inds = np.where(roidb[i]['gt_classes'] == 0)[0]
+ boxes = roidb[i]['boxes'][non_gt_inds, :]
+ else:
+ boxes = candidate_boxes[i]
+ if boxes.shape[0] == 0:
+ continue
+
+ overlaps = bbox_overlaps(boxes.astype(np.float),
+ gt_boxes.astype(np.float))
+
+ _gt_overlaps = np.zeros((gt_boxes.shape[0]))
+ # choose whatever is smaller to iterate
+ rounds = min(boxes.shape[0], gt_boxes.shape[0])
+ for j in range(rounds):
+ # find which proposal maximally covers each gt box
+ argmax_overlaps = overlaps.argmax(axis=0)
+ # get the IoU amount of coverage for each gt box
+ max_overlaps = overlaps.max(axis=0)
+ # find which gt box is covered by most IoU
+ gt_ind = max_overlaps.argmax()
+ gt_ovr = max_overlaps.max()
+ assert (gt_ovr >=
+ 0), '%s\n%s\n%s' % (boxes, gt_boxes, overlaps)
+ # find the proposal box that covers the best covered gt box
+ box_ind = argmax_overlaps[gt_ind]
+ # record the IoU coverage of this gt box
+ _gt_overlaps[j] = overlaps[box_ind, gt_ind]
+ assert (_gt_overlaps[j] == gt_ovr)
+ # mark the proposal box and the gt box as used
+ overlaps[box_ind, :] = -1
+ overlaps[:, gt_ind] = -1
+ # append recorded IoU coverage level
+ gt_overlaps = np.hstack((gt_overlaps, _gt_overlaps))
+
+ gt_overlaps = np.sort(gt_overlaps)
+ if thresholds is None:
+ step = 0.05
+ thresholds = np.arange(0.5, 0.95 + 1e-5, step)
+ recalls = np.zeros_like(thresholds)
+
+ # compute recall for each IoU threshold
+ for i, t in enumerate(thresholds):
+ recalls[i] = (gt_overlaps >= t).sum() / float(num_pos)
+ ar = recalls.mean()
+
+ # print results
+ print('average recall for {}: {:.3f}, number:{}'.format(
+ area_name, ar, num_pos))
+ for threshold, recall in zip(thresholds, recalls):
+ print('recall @{:.2f}: {:.3f}'.format(threshold, recall))
+
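The greedy matching loop inside `evaluate_recall` can be replayed standalone on a made-up proposals-by-gt IoU matrix: each round picks the best-covered remaining gt box, records its IoU, and retires both that gt box and the proposal that covered it.

```python
import numpy as np

# Illustrative IoU matrix: rows are proposals, columns are gt boxes.
overlaps = np.array([[0.9, 0.2],
                     [0.6, 0.7],
                     [0.1, 0.8]])
num_gt = overlaps.shape[1]
gt_overlaps = np.zeros(num_gt)
rounds = min(overlaps.shape[0], num_gt)
for j in range(rounds):
    argmax_overlaps = overlaps.argmax(axis=0)  # best proposal per gt box
    max_overlaps = overlaps.max(axis=0)        # its IoU per gt box
    gt_ind = max_overlaps.argmax()             # best-covered gt box
    box_ind = argmax_overlaps[gt_ind]
    gt_overlaps[j] = overlaps[box_ind, gt_ind]
    overlaps[box_ind, :] = -1                  # retire the proposal
    overlaps[:, gt_ind] = -1                   # retire the gt box
print(sorted(gt_overlaps.tolist()))
```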
+ @staticmethod
+ def merge_roidbs(a, b):
+ """
+ merge roidbs into one
+ :param a: roidb to be merged into
+ :param b: roidb to be merged
+ :return: merged imdb
+ """
+ assert len(a) == len(b)
+ for i in range(len(a)):
+ a[i]['boxes'] = np.vstack((a[i]['boxes'], b[i]['boxes']))
+ a[i]['gt_classes'] = np.hstack(
+ (a[i]['gt_classes'], b[i]['gt_classes']))
+ a[i]['gt_overlaps'] = np.vstack(
+ (a[i]['gt_overlaps'], b[i]['gt_overlaps']))
+ a[i]['max_classes'] = np.hstack(
+ (a[i]['max_classes'], b[i]['max_classes']))
+ a[i]['max_overlaps'] = np.hstack(
+ (a[i]['max_overlaps'], b[i]['max_overlaps']))
+ return a
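A minimal sketch of what `merge_roidbs` does for a single image record (boxes and labels invented here): per-image arrays from the two roidbs are stacked row-wise for boxes and element-wise for labels.

```python
import numpy as np

# Two one-box records for the same image, merged the way merge_roidbs does.
a = {'boxes': np.array([[0., 0., 10., 10.]]),
     'gt_classes': np.array([1])}
b = {'boxes': np.array([[5., 5., 20., 20.]]),
     'gt_classes': np.array([1])}
a['boxes'] = np.vstack((a['boxes'], b['boxes']))
a['gt_classes'] = np.hstack((a['gt_classes'], b['gt_classes']))
print(a['boxes'].shape, a['gt_classes'].shape)
```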
diff --git a/insightface/detection/retinaface/rcnn/dataset/retinaface.py b/insightface/detection/retinaface/rcnn/dataset/retinaface.py
new file mode 100644
index 0000000000000000000000000000000000000000..6e3b85689fb7048376571d07c3fe18657a05a1f4
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/dataset/retinaface.py
@@ -0,0 +1,197 @@
+from __future__ import print_function
+try:
+ import cPickle as pickle
+except ImportError:
+ import pickle
+import cv2
+import os
+import numpy as np
+import json
+#from PIL import Image
+
+from ..logger import logger
+from .imdb import IMDB
+from .ds_utils import unique_boxes, filter_small_boxes
+from ..config import config
+
+
+class retinaface(IMDB):
+ def __init__(self, image_set, root_path, data_path):
+ super(retinaface, self).__init__('retinaface', image_set, root_path,
+ data_path)
+ #assert image_set=='train'
+
+ split = image_set
+ self._split = image_set
+ self._image_set = image_set
+
+ self.root_path = root_path
+ self.data_path = data_path
+
+ self._dataset_path = self.data_path
+ self._imgs_path = os.path.join(self._dataset_path, image_set, 'images')
+ self._fp_bbox_map = {}
+ label_file = os.path.join(self._dataset_path, image_set, 'label.txt')
+ name = None
+ for line in open(label_file, 'r'):
+ line = line.strip()
+ if line.startswith('#'):
+ name = line[1:].strip()
+ self._fp_bbox_map[name] = []
+ continue
+ assert name is not None
+ assert name in self._fp_bbox_map
+ self._fp_bbox_map[name].append(line)
+ print('origin image size', len(self._fp_bbox_map))
+
+ #self.num_images = len(self._image_paths)
+ #self._image_index = range(len(self._image_paths))
+ self.classes = ['bg', 'face']
+ self.num_classes = len(self.classes)
+
+ def gt_roidb(self):
+ cache_file = os.path.join(
+ self.cache_path,
+ '{}_{}_gt_roidb.pkl'.format(self.name, self._split))
+ if os.path.exists(cache_file):
+ with open(cache_file, 'rb') as fid:
+ roidb = pickle.load(fid)
+ print('{} gt roidb loaded from {}'.format(self.name, cache_file))
+ self.num_images = len(roidb)
+ return roidb
+
+ roidb = []
+ max_num_boxes = 0
+ nonattr_box_num = 0
+ landmark_num = 0
+
+ pp = 0
+ for fp in self._fp_bbox_map:
+ pp += 1
+ if pp % 1000 == 0:
+ print('loading', pp)
+ if self._split == 'test':
+ image_path = os.path.join(self._imgs_path, fp)
+ roi = {'image': image_path}
+ roidb.append(roi)
+ continue
+ boxes = np.zeros([len(self._fp_bbox_map[fp]), 4], np.float)
+ landmarks = np.zeros([len(self._fp_bbox_map[fp]), 5, 3], np.float)
+ blur = np.zeros((len(self._fp_bbox_map[fp]), ), np.float)
+ boxes_mask = []
+
+ gt_classes = np.ones([len(self._fp_bbox_map[fp])], np.int32)
+ overlaps = np.zeros([len(self._fp_bbox_map[fp]), 2], np.float)
+
+ imsize = cv2.imread(os.path.join(self._imgs_path,
+ fp)).shape[0:2][::-1]
+ ix = 0
+
+ for aline in self._fp_bbox_map[fp]:
+ #imsize = Image.open(os.path.join(self._imgs_path, fp)).size
+ values = [float(x) for x in aline.strip().split()]
+ bbox = [
+ values[0], values[1], values[0] + values[2],
+ values[1] + values[3]
+ ]
+
+ x1 = bbox[0]
+ y1 = bbox[1]
+ x2 = min(imsize[0], bbox[2])
+ y2 = min(imsize[1], bbox[3])
+ if x1 >= x2 or y1 >= y2:
+ continue
+
+ if config.BBOX_MASK_THRESH > 0:
+ if (
+ x2 - x1
+ ) < config.BBOX_MASK_THRESH or y2 - y1 < config.BBOX_MASK_THRESH:
+ boxes_mask.append(np.array([x1, y1, x2, y2], np.float))
+ continue
+ if (
+ x2 - x1
+ ) < config.TRAIN.MIN_BOX_SIZE or y2 - y1 < config.TRAIN.MIN_BOX_SIZE:
+ continue
+
+ boxes[ix, :] = np.array([x1, y1, x2, y2], np.float)
+ if self._split == 'train':
+ landmark = np.array(values[4:19],
+ dtype=np.float32).reshape((5, 3))
+ for li in range(5):
+ #print(landmark)
+ if landmark[li][0] == -1. and landmark[li][
+ 1] == -1.: #missing landmark
+ assert landmark[li][2] == -1
+ else:
+ assert landmark[li][2] >= 0
+ if li == 0:
+ landmark_num += 1
+ if landmark[li][2] == 0.0: #visible
+ landmark[li][2] = 1.0
+ else:
+ landmark[li][2] = 0.0
+
+ landmarks[ix] = landmark
+
+ blur[ix] = values[19]
+ #print(aline, blur[ix])
+ if blur[ix] < 0.0:
+ blur[ix] = 0.3
+ nonattr_box_num += 1
+
+ cls = int(1)
+ gt_classes[ix] = cls
+ overlaps[ix, cls] = 1.0
+ ix += 1
+ max_num_boxes = max(max_num_boxes, ix)
+ #overlaps = scipy.sparse.csr_matrix(overlaps)
+ if self._split == 'train' and ix == 0:
+ continue
+ boxes = boxes[:ix, :]
+ landmarks = landmarks[:ix, :, :]
+ blur = blur[:ix]
+ gt_classes = gt_classes[:ix]
+ overlaps = overlaps[:ix, :]
+ image_path = os.path.join(self._imgs_path, fp)
+ with open(image_path, 'rb') as fin:
+ stream = fin.read()
+ stream = np.frombuffer(stream, dtype=np.uint8)
+
+ roi = {
+ 'image': image_path,
+ 'stream': stream,
+ 'height': imsize[1],
+ 'width': imsize[0],
+ 'boxes': boxes,
+ 'landmarks': landmarks,
+ 'blur': blur,
+ 'gt_classes': gt_classes,
+ 'gt_overlaps': overlaps,
+ 'max_classes': overlaps.argmax(axis=1),
+ 'max_overlaps': overlaps.max(axis=1),
+ 'flipped': False,
+ }
+ if len(boxes_mask) > 0:
+ boxes_mask = np.array(boxes_mask)
+ roi['boxes_mask'] = boxes_mask
+ roidb.append(roi)
+ for roi in roidb:
+ roi['max_num_boxes'] = max_num_boxes
+ self.num_images = len(roidb)
+ print('roidb size', len(roidb))
+ print('non attr box num', nonattr_box_num)
+ print('landmark num', landmark_num)
+ with open(cache_file, 'wb') as fid:
+ pickle.dump(roidb, fid, pickle.HIGHEST_PROTOCOL)
+ print('wrote gt roidb to {}'.format(cache_file))
+
+ return roidb
+
+ def write_detections(self, all_boxes, output_dir='./output/'):
+ pass
+
+ def evaluate_detections(self,
+ all_boxes,
+ output_dir='./output/',
+ method_name='insightdetection'):
+ pass
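The `label.txt` parsing loop in `__init__` groups annotation lines under the preceding `# <image name>` header. A standalone sketch with an invented annotation line in the layout the loader expects (x, y, w, h, then five landmark triplets of x, y, visibility, then a blur score):

```python
# Made-up label.txt content in the RetinaFace annotation layout.
lines = [
    '# 0--Parade/0_Parade_marchingband_1_849.jpg',
    '449 330 122 149 488.9 373.6 0.0 542.1 376.5 0.0 515.0 412.8 0.0 '
    '492.7 436.8 0.0 541.4 440.0 0.0 0.82',
]
fp_bbox_map = {}
name = None
for line in lines:
    line = line.strip()
    if line.startswith('#'):
        name = line[1:].strip()   # a header starts a new image entry
        fp_bbox_map[name] = []
        continue
    assert name is not None
    fp_bbox_map[name].append(line)

# 4 box values + 5 * 3 landmark values + 1 blur value = 20 floats
values = [float(x) for x in fp_bbox_map[name][0].split()]
print(len(values))
```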
diff --git a/insightface/detection/retinaface/rcnn/io/__init__.py b/insightface/detection/retinaface/rcnn/io/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/insightface/detection/retinaface/rcnn/io/image.py b/insightface/detection/retinaface/rcnn/io/image.py
new file mode 100644
index 0000000000000000000000000000000000000000..0296fb4de0eebdc22f1261a70eefa9fbe815ddfe
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/io/image.py
@@ -0,0 +1,886 @@
+from __future__ import print_function
+import numpy as np
+import cv2
+import os
+import math
+import sys
+import random
+from ..config import config
+
+
+def brightness_aug(src, x):
+ alpha = 1.0 + random.uniform(-x, x)
+ src *= alpha
+ return src
+
+
+def contrast_aug(src, x):
+ alpha = 1.0 + random.uniform(-x, x)
+ coef = np.array([[[0.299, 0.587, 0.114]]])
+ gray = src * coef
+ gray = (3.0 * (1.0 - alpha) / gray.size) * np.sum(gray)
+ src *= alpha
+ src += gray
+ return src
+
+
+def saturation_aug(src, x):
+ alpha = 1.0 + random.uniform(-x, x)
+ coef = np.array([[[0.299, 0.587, 0.114]]])
+ gray = src * coef
+ gray = np.sum(gray, axis=2, keepdims=True)
+ gray *= (1.0 - alpha)
+ src *= alpha
+ src += gray
+ return src
+
+
+def color_aug(img, x):
+ if config.COLOR_MODE > 1:
+ augs = [brightness_aug, contrast_aug, saturation_aug]
+ random.shuffle(augs)
+ else:
+ augs = [brightness_aug]
+ for aug in augs:
+ #print(img.shape)
+ img = aug(img, x)
+ #print(img.shape)
+ return img
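`brightness_aug` scales pixels in place by a random `alpha` drawn from `1 ± x`; with `x = 0` the draw is exactly 1.0, so the image must come back unchanged. A quick sanity check on a dummy image, not part of the training path:

```python
import random
import numpy as np

def brightness_aug(src, x):
    # Same in-place scaling as the augmentation above.
    alpha = 1.0 + random.uniform(-x, x)
    src *= alpha
    return src

img = np.full((2, 2, 3), 100.0)
out = brightness_aug(img, 0.0)   # alpha is exactly 1.0 here
print(float(out[0, 0, 0]))
```

Note the function mutates and returns the same array, which is why `color_aug` can chain the augmentations without reassigning intermediates.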
+
+
+def get_image(roidb, scale=False):
+ """
+ preprocess image and return processed roidb
+ :param roidb: a list of roidb
+ :return: list of img as in mxnet format
+ roidb add new item['im_info']
+ 0 --- x (width, second dim of im)
+ |
+ y (height, first dim of im)
+ """
+ num_images = len(roidb)
+ processed_ims = []
+ processed_roidb = []
+ for i in range(num_images):
+ roi_rec = roidb[i]
+ if 'stream' in roi_rec:
+ im = cv2.imdecode(roi_rec['stream'], cv2.IMREAD_COLOR)
+ else:
+ assert os.path.exists(
+ roi_rec['image']), '{} does not exist'.format(roi_rec['image'])
+ im = cv2.imread(roi_rec['image'])
+ if roidb[i]['flipped']:
+ im = im[:, ::-1, :]
+ new_rec = roi_rec.copy()
+ if scale:
+ scale_range = config.TRAIN.SCALE_RANGE
+ im_scale = np.random.uniform(scale_range[0], scale_range[1])
+ im = cv2.resize(im,
+ None,
+ None,
+ fx=im_scale,
+ fy=im_scale,
+ interpolation=cv2.INTER_LINEAR)
+ elif not config.ORIGIN_SCALE:
+ scale_ind = random.randrange(len(config.SCALES))
+ target_size = config.SCALES[scale_ind][0]
+ max_size = config.SCALES[scale_ind][1]
+ im, im_scale = resize(im,
+ target_size,
+ max_size,
+ stride=config.IMAGE_STRIDE)
+ else:
+ im_scale = 1.0
+ im_tensor = transform(im, config.PIXEL_MEANS, config.PIXEL_STDS,
+ config.PIXEL_SCALE)
+ if 'boxes_mask' in roi_rec:
+ im = im.astype(np.float32)
+ boxes_mask = roi_rec['boxes_mask'].copy() * im_scale
+ boxes_mask = boxes_mask.astype(np.int)
+ for j in range(boxes_mask.shape[0]):
+ m = boxes_mask[j]
+ im_tensor[:, :, m[1]:m[3], m[0]:m[2]] = 0.0
+ #print('find mask', m, file=sys.stderr)
+ processed_ims.append(im_tensor)
+ new_rec['boxes'] = roi_rec['boxes'].copy() * im_scale
+ if config.TRAIN.IMAGE_ALIGN > 0:
+ if im_tensor.shape[
+ 2] % config.TRAIN.IMAGE_ALIGN != 0 or im_tensor.shape[
+ 3] % config.TRAIN.IMAGE_ALIGN != 0:
+ new_height = math.ceil(
+ float(im_tensor.shape[2]) /
+ config.TRAIN.IMAGE_ALIGN) * config.TRAIN.IMAGE_ALIGN
+ new_width = math.ceil(
+ float(im_tensor.shape[3]) /
+ config.TRAIN.IMAGE_ALIGN) * config.TRAIN.IMAGE_ALIGN
+ new_im_tensor = np.zeros(
+ (1, 3, int(new_height), int(new_width)))
+ new_im_tensor[:, :, 0:im_tensor.shape[2],
+ 0:im_tensor.shape[3]] = im_tensor
+ print(im_tensor.shape, new_im_tensor.shape, file=sys.stderr)
+ im_tensor = new_im_tensor
+ #print('boxes', new_rec['boxes'], file=sys.stderr)
+ im_info = [im_tensor.shape[2], im_tensor.shape[3], im_scale]
+ new_rec['im_info'] = im_info
+ processed_roidb.append(new_rec)
+ return processed_ims, processed_roidb
+
+
+TMP_ID = -1
+
+
+ # backup method
+def __get_crop_image(roidb):
+ """
+ preprocess image and return processed roidb
+ :param roidb: a list of roidb
+ :return: list of img as in mxnet format
+ roidb add new item['im_info']
+ 0 --- x (width, second dim of im)
+ |
+ y (height, first dim of im)
+ """
+ #roidb and each roi_rec can not be changed as it will be reused in next epoch
+ num_images = len(roidb)
+ processed_ims = []
+ processed_roidb = []
+ for i in range(num_images):
+ roi_rec = roidb[i]
+ if 'stream' in roi_rec:
+ im = cv2.imdecode(roi_rec['stream'], cv2.IMREAD_COLOR)
+ else:
+ assert os.path.exists(
+ roi_rec['image']), '{} does not exist'.format(roi_rec['image'])
+ im = cv2.imread(roi_rec['image'])
+ if roidb[i]['flipped']:
+ im = im[:, ::-1, :]
+ if 'boxes_mask' in roi_rec:
+ #im = im.astype(np.float32)
+ boxes_mask = roi_rec['boxes_mask'].copy()
+ boxes_mask = boxes_mask.astype(np.int)
+ for j in range(boxes_mask.shape[0]):
+ m = boxes_mask[j]
+ im[m[1]:m[3], m[0]:m[2], :] = 0
+ #print('find mask', m, file=sys.stderr)
+ new_rec = roi_rec.copy()
+
+ #choose one gt randomly
+ SIZE = config.SCALES[0][0]
+ TARGET_BOX_SCALES = np.array([16, 32, 64, 128, 256, 512])
+ assert roi_rec['boxes'].shape[0] > 0
+ candidates = []
+ for i in range(roi_rec['boxes'].shape[0]):
+ box = roi_rec['boxes'][i]
+ box_size = max(box[2] - box[0], box[3] - box[1])
+ if box_size < config.TRAIN.MIN_BOX_SIZE:
+ continue
+ #if box[0]<0 or box[1]<0:
+ # continue
+ #if box[2]>im.shape[1] or box[3]>im.shape[0]:
+ # continue;
+ candidates.append(i)
+ assert len(candidates) > 0
+ box_ind = random.choice(candidates)
+ box = roi_rec['boxes'][box_ind]
+ box_size = max(box[2] - box[0], box[3] - box[1])
+ dist = np.abs(TARGET_BOX_SCALES - box_size)
+ nearest = np.argmin(dist)
+ target_ind = random.randrange(min(len(TARGET_BOX_SCALES), nearest + 2))
+ target_box_size = TARGET_BOX_SCALES[target_ind]
+ im_scale = float(target_box_size) / box_size
+ #min_scale = float(SIZE)/np.min(im.shape[0:2])
+ #if im_scale < min_scale:
+ # im_scale = min_scale
+ im = cv2.resize(im,
+ None,
+ None,
+ fx=im_scale,
+ fy=im_scale,
+ interpolation=cv2.INTER_LINEAR)
+ new_rec['boxes'] = roi_rec['boxes'].copy() * im_scale
+ boxes_new = []
+ classes_new = []
+ for i in range(new_rec['boxes'].shape[0]):
+ box = new_rec['boxes'][i]
+ box_size = max(box[2] - box[0], box[3] - box[1])
+ center = np.array([(box[0] + box[2]) / 2, (box[1] + box[3]) / 2])
+ if center[0] < 0 or center[1] < 0 or center[0] >= im.shape[
+ 1] or center[1] >= im.shape[0]:
+ continue
+ if box_size < config.TRAIN.MIN_BOX_SIZE:
+ continue
+ boxes_new.append(box)
+ classes_new.append(new_rec['gt_classes'][i])
+ new_rec['boxes'] = np.array(boxes_new)
+ new_rec['gt_classes'] = np.array(classes_new)
+ #print('after', new_rec['boxes'].shape[0])
+ #assert new_rec['boxes'].shape[0]>0
+ DEBUG = True
+ if DEBUG:
+ global TMP_ID
+ if TMP_ID < 10:
+ tim = im.copy()
+ for i in range(new_rec['boxes'].shape[0]):
+ box = new_rec['boxes'][i].copy().astype(np.int)
+ cv2.rectangle(tim, (box[0], box[1]), (box[2], box[3]),
+ (255, 0, 0), 1)
+ filename = './trainimages/train%d.png' % TMP_ID
+ TMP_ID += 1
+ cv2.imwrite(filename, tim)
+
+ im_tensor = transform(im, config.PIXEL_MEANS, config.PIXEL_STDS,
+ config.PIXEL_SCALE)
+
+ processed_ims.append(im_tensor)
+ #print('boxes', new_rec['boxes'], file=sys.stderr)
+ im_info = [im_tensor.shape[2], im_tensor.shape[3], im_scale]
+ new_rec['im_info'] = im_info
+ processed_roidb.append(new_rec)
+ return processed_ims, processed_roidb
+
+
+def expand_bboxes(bboxes,
+ image_width,
+ image_height,
+ expand_left=2.,
+ expand_up=2.,
+ expand_right=2.,
+ expand_down=2.):
+ """
+ Expand bboxes, expanding 2 times by default.
+ """
+ expand_boxes = []
+ for bbox in bboxes:
+ xmin = bbox[0]
+ ymin = bbox[1]
+ xmax = bbox[2]
+ ymax = bbox[3]
+ w = xmax - xmin
+ h = ymax - ymin
+ ex_xmin = max(xmin - w / expand_left, 0.)
+ ex_ymin = max(ymin - h / expand_up, 0.)
+ ex_xmax = min(xmax + w / expand_right, image_width)
+ ex_ymax = min(ymax + h / expand_down, image_height)
+ expand_boxes.append([ex_xmin, ex_ymin, ex_xmax, ex_ymax])
+ return expand_boxes
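`expand_bboxes` grows each box by `size / expand_*` on every side, clipped to the image; with the default factor of 2 a 10x10 box gains half its size in each direction. A standalone copy with toy numbers:

```python
def expand_bboxes(bboxes, image_width, image_height,
                  expand_left=2., expand_up=2.,
                  expand_right=2., expand_down=2.):
    # Same expansion as above: each side moves out by (extent / factor),
    # clipped to the image bounds.
    out = []
    for xmin, ymin, xmax, ymax in bboxes:
        w, h = xmax - xmin, ymax - ymin
        out.append([max(xmin - w / expand_left, 0.),
                    max(ymin - h / expand_up, 0.),
                    min(xmax + w / expand_right, image_width),
                    min(ymax + h / expand_down, image_height)])
    return out

print(expand_bboxes([[10., 10., 20., 20.]], 100, 100))
```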
+
+
+def get_crop_image1(roidb):
+ """
+ preprocess image and return processed roidb
+ :param roidb: a list of roidb
+ :return: list of img as in mxnet format
+ roidb add new item['im_info']
+ 0 --- x (width, second dim of im)
+ |
+ y (height, first dim of im)
+ """
+ #roidb and each roi_rec can not be changed as it will be reused in next epoch
+ num_images = len(roidb)
+ processed_ims = []
+ processed_roidb = []
+ for i in range(num_images):
+ roi_rec = roidb[i]
+ if 'stream' in roi_rec:
+ im = cv2.imdecode(roi_rec['stream'], cv2.IMREAD_COLOR)
+ else:
+ assert os.path.exists(
+ roi_rec['image']), '{} does not exist'.format(roi_rec['image'])
+ im = cv2.imread(roi_rec['image'])
+ if roidb[i]['flipped']:
+ im = im[:, ::-1, :]
+ if 'boxes_mask' in roi_rec:
+ #im = im.astype(np.float32)
+ boxes_mask = roi_rec['boxes_mask'].copy()
+ boxes_mask = boxes_mask.astype(np.int)
+ for j in range(boxes_mask.shape[0]):
+ m = boxes_mask[j]
+ im[m[1]:m[3], m[0]:m[2], :] = 127
+ #print('find mask', m, file=sys.stderr)
+ SIZE = config.SCALES[0][0]
+ PRE_SCALES = [0.3, 0.45, 0.6, 0.8, 1.0]
+ #PRE_SCALES = [0.3, 0.45, 0.6, 0.8, 1.0, 0.8, 1.0, 0.8, 1.0]
+ _scale = random.choice(PRE_SCALES)
+ #_scale = np.random.uniform(PRE_SCALES[0], PRE_SCALES[-1])
+ size = int(np.min(im.shape[0:2]) * _scale)
+ #size = int(np.round(_scale*np.min(im.shape[0:2])))
+ im_scale = float(SIZE) / size
+ #origin_im_scale = im_scale
+ #size = np.round(np.min(im.shape[0:2])*im_scale)
+ #im_scale *= (float(SIZE)/size)
+ origin_shape = im.shape
+ if _scale > 10.0: #avoid im.size < SIZE, should never happen
+ sizex = int(np.round(im.shape[1] * im_scale))
+ sizey = int(np.round(im.shape[0] * im_scale))
+ im = cv2.resize(im, (sizex, sizey), interpolation=cv2.INTER_LINEAR)
+ else:
+ im = cv2.resize(im,
+ None,
+ None,
+ fx=im_scale,
+ fy=im_scale,
+ interpolation=cv2.INTER_LINEAR)
+ assert im.shape[0] >= SIZE and im.shape[1] >= SIZE
+ #print('image size', origin_shape, _scale, SIZE, size, im_scale)
+
+ new_rec = roi_rec.copy()
+ new_rec['boxes'] = roi_rec['boxes'].copy() * im_scale
+ if config.FACE_LANDMARK:
+ new_rec['landmarks'] = roi_rec['landmarks'].copy()
+ new_rec['landmarks'][:, :, 0:2] *= im_scale
+ retry = 0
+ LIMIT = 25
+ size = SIZE
+ while retry < LIMIT:
+ up, left = (np.random.randint(0, im.shape[0] - size + 1),
+ np.random.randint(0, im.shape[1] - size + 1))
+ boxes_new = new_rec['boxes'].copy()
+ im_new = im[up:(up + size), left:(left + size), :]
+ #print('crop', up, left, size, im_scale)
+ boxes_new[:, 0] -= left
+ boxes_new[:, 2] -= left
+ boxes_new[:, 1] -= up
+ boxes_new[:, 3] -= up
+ if config.FACE_LANDMARK:
+ landmarks_new = new_rec['landmarks'].copy()
+ landmarks_new[:, :, 0] -= left
+ landmarks_new[:, :, 1] -= up
+ #for i in range(0,10,2):
+ # landmarks_new[:,i] -= left
+ #for i in range(1,10,2):
+ # landmarks_new[:,i] -= up
+ valid_landmarks = []
+ #im_new = cv2.resize(im_new, (SIZE, SIZE), interpolation=cv2.INTER_LINEAR)
+ #boxes_new *= im_scale
+ #print(origin_shape, im_new.shape, im_scale)
+ valid = []
+ valid_boxes = []
+ for i in range(boxes_new.shape[0]):
+ box = boxes_new[i]
+ #center = np.array(([box[0], box[1]]+[box[2], box[3]]))/2
+ centerx = (box[0] + box[2]) / 2
+ centery = (box[1] + box[3]) / 2
+
+ #box[0] = max(0, box[0])
+ #box[1] = max(0, box[1])
+ #box[2] = min(im_new.shape[1], box[2])
+ #box[3] = min(im_new.shape[0], box[3])
+ box_size = max(box[2] - box[0], box[3] - box[1])
+
+ if centerx < 0 or centery < 0 or centerx >= im_new.shape[
+ 1] or centery >= im_new.shape[0]:
+ continue
+ if box_size < config.TRAIN.MIN_BOX_SIZE:
+ continue
+ #filter by landmarks? TODO
+ valid.append(i)
+ valid_boxes.append(box)
+ if config.FACE_LANDMARK:
+ valid_landmarks.append(landmarks_new[i])
+ if len(valid) > 0 or retry == LIMIT - 1:
+ im = im_new
+ new_rec['boxes'] = np.array(valid_boxes)
+ new_rec['gt_classes'] = new_rec['gt_classes'][valid]
+ if config.FACE_LANDMARK:
+ new_rec['landmarks'] = np.array(valid_landmarks)
+ if config.HEAD_BOX:
+ face_box = new_rec['boxes']
+ head_box = expand_bboxes(face_box,
+ image_width=im.shape[1],
+ image_height=im.shape[0])
+ new_rec['boxes_head'] = np.array(head_box)
+ break
+
+ retry += 1
+
+ if config.COLOR_MODE > 0 and config.COLOR_JITTERING > 0.0:
+ im = im.astype(np.float32)
+ im = color_aug(im, config.COLOR_JITTERING)
+
+ #assert np.all(new_rec['landmarks'][:,10]>0.0)
+ global TMP_ID
+ if TMP_ID >= 0 and TMP_ID < 10:
+ tim = im.copy().astype(np.uint8)
+ for i in range(new_rec['boxes'].shape[0]):
+ box = new_rec['boxes'][i].copy().astype(np.int)
+ cv2.rectangle(tim, (box[0], box[1]), (box[2], box[3]),
+ (255, 0, 0), 1)
+ print('draw box:', box)
+ if config.FACE_LANDMARK:
+ for i in range(new_rec['landmarks'].shape[0]):
+ landmark = new_rec['landmarks'][i].copy()
+ if landmark[0][2] < 0:
+ print('zero', landmark)
+ continue
+ landmark = landmark.astype(np.int)
+ print('draw landmark', landmark)
+ for k in range(5):
+ color = (0, 0, 255)
+ if k == 0 or k == 3:
+ color = (0, 255, 0)
+ pp = (landmark[k][0], landmark[k][1])
+ cv2.circle(tim, (pp[0], pp[1]), 1, color, 2)
+ filename = './trainimages/train%d.png' % TMP_ID
+ print('write', filename)
+ cv2.imwrite(filename, tim)
+ TMP_ID += 1
+
+ im_tensor = transform(im, config.PIXEL_MEANS, config.PIXEL_STDS,
+ config.PIXEL_SCALE)
+
+ processed_ims.append(im_tensor)
+ #print('boxes', new_rec['boxes'], file=sys.stderr)
+ im_info = [im_tensor.shape[2], im_tensor.shape[3], im_scale]
+ new_rec['im_info'] = np.array(im_info, dtype=np.float32)
+ processed_roidb.append(new_rec)
+ return processed_ims, processed_roidb
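Inside the crop-retry loop of `get_crop_image1`, boxes are shifted into the crop frame and kept only if their center still falls inside the crop. A toy replay of that filter (crop offsets and boxes invented here):

```python
import numpy as np

size, up, left = 50, 10, 20
boxes = np.array([[25., 15., 45., 35.],    # center stays inside the crop
                  [80., 80., 95., 95.]])   # center falls outside
boxes[:, [0, 2]] -= left                   # shift into crop coordinates
boxes[:, [1, 3]] -= up
valid = []
for i, box in enumerate(boxes):
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    if 0 <= cx < size and 0 <= cy < size:  # keep only centered-in boxes
        valid.append(i)
print(valid)
```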
+
+
+def get_crop_image2(roidb):
+ """
+ preprocess image and return processed roidb
+ :param roidb: a list of roidb
+ :return: list of img as in mxnet format
+ roidb add new item['im_info']
+ 0 --- x (width, second dim of im)
+ |
+ y (height, first dim of im)
+ """
+ #roidb and each roi_rec can not be changed as it will be reused in next epoch
+ num_images = len(roidb)
+ processed_ims = []
+ processed_roidb = []
+ for i in range(num_images):
+ roi_rec = roidb[i]
+ if 'stream' in roi_rec:
+ im = cv2.imdecode(roi_rec['stream'], cv2.IMREAD_COLOR)
+ else:
+ assert os.path.exists(
+ roi_rec['image']), '{} does not exist'.format(roi_rec['image'])
+ im = cv2.imread(roi_rec['image'])
+ if roidb[i]['flipped']:
+ im = im[:, ::-1, :]
+ if 'boxes_mask' in roi_rec:
+ #im = im.astype(np.float32)
+ boxes_mask = roi_rec['boxes_mask'].copy()
+ boxes_mask = boxes_mask.astype(np.int)
+ for j in range(boxes_mask.shape[0]):
+ m = boxes_mask[j]
+ im[m[1]:m[3], m[0]:m[2], :] = 0
+ #print('find mask', m, file=sys.stderr)
+ SIZE = config.SCALES[0][0]
+ scale_array = np.array([16, 32, 64, 128, 256, 512], dtype=np.float32)
+ candidates = []
+ for i in range(roi_rec['boxes'].shape[0]):
+ box = roi_rec['boxes'][i]
+ box_size = max(box[2] - box[0], box[3] - box[1])
+ if box_size < config.TRAIN.MIN_BOX_SIZE:
+ continue
+ #if box[0]<0 or box[1]<0:
+ # continue
+ #if box[2]>im.shape[1] or box[3]>im.shape[0]:
+ # continue;
+ candidates.append(i)
+ assert len(candidates) > 0
+ box_ind = random.choice(candidates)
+ box = roi_rec['boxes'][box_ind]
+ width = box[2] - box[0]
+ height = box[3] - box[1]
+ wid = width
+ hei = height
+ resize_width, resize_height = config.SCALES[0]
+ image_width = im.shape[1]
+ image_height = im.shape[0]
+ area = width * height
+ range_size = 0
+ for scale_ind in range(0, len(scale_array) - 1):
+ if area > scale_array[scale_ind] ** 2 and area < \
+ scale_array[scale_ind + 1] ** 2:
+ range_size = scale_ind + 1
+ break
+
+ if area > scale_array[len(scale_array) - 2]**2:
+ range_size = len(scale_array) - 2
+ scale_choose = 0.0
+ if range_size == 0:
+ rand_idx_size = 0
+ else:
+ # np.random.randint range: [low, high)
+ rng_rand_size = np.random.randint(0, range_size + 1)
+ rand_idx_size = rng_rand_size % (range_size + 1)
+
+ if rand_idx_size == range_size:
+ min_resize_val = scale_array[rand_idx_size] / 2.0
+ max_resize_val = min(2.0 * scale_array[rand_idx_size],
+ 2 * math.sqrt(wid * hei))
+ scale_choose = random.uniform(min_resize_val, max_resize_val)
+ else:
+ min_resize_val = scale_array[rand_idx_size] / 2.0
+ max_resize_val = 2.0 * scale_array[rand_idx_size]
+ scale_choose = random.uniform(min_resize_val, max_resize_val)
+
+ sample_bbox_size = wid * resize_width / scale_choose
+
+ w_off_orig = 0.0
+ h_off_orig = 0.0
+ if sample_bbox_size < max(image_height, image_width):
+ if wid <= sample_bbox_size:
+ w_off_orig = np.random.uniform(xmin + wid - sample_bbox_size,
+ xmin)
+ else:
+ w_off_orig = np.random.uniform(xmin,
+ xmin + wid - sample_bbox_size)
+
+ if hei <= sample_bbox_size:
+ h_off_orig = np.random.uniform(ymin + hei - sample_bbox_size,
+ ymin)
+ else:
+ h_off_orig = np.random.uniform(ymin,
+ ymin + hei - sample_bbox_size)
+
+ else:
+ w_off_orig = np.random.uniform(image_width - sample_bbox_size, 0.0)
+ h_off_orig = np.random.uniform(image_height - sample_bbox_size,
+ 0.0)
+
+ w_off_orig = math.floor(w_off_orig)
+ h_off_orig = math.floor(h_off_orig)
+
+ # Figure out top left coordinates.
+ w_off = 0.0
+ h_off = 0.0
+ w_off = float(w_off_orig / image_width)
+ h_off = float(h_off_orig / image_height)
+ im_new = im[up:(up + size), left:(left + size), :]
+
+ sampled_bbox = bbox(w_off, h_off,
+ w_off + float(sample_bbox_size / image_width),
+ h_off + float(sample_bbox_size / image_height))
+ return sampled_bbox
+
+ box_size = max(box[2] - box[0], box[3] - box[1])
+ dist = np.abs(TARGET_BOX_SCALES - box_size)
+ nearest = np.argmin(dist)
+ target_ind = random.randrange(min(len(TARGET_BOX_SCALES), nearest + 2))
+ target_box_size = TARGET_BOX_SCALES[target_ind]
+ im_scale = float(target_box_size) / box_size
+ PRE_SCALES = [0.3, 0.45, 0.6, 0.8, 1.0]
+ _scale = random.choice(PRE_SCALES)
+ #_scale = np.random.uniform(PRE_SCALES[0], PRE_SCALES[-1])
+ size = int(np.round(_scale * np.min(im.shape[0:2])))
+ im_scale = float(SIZE) / size
+ #origin_im_scale = im_scale
+ #size = np.round(np.min(im.shape[0:2])*im_scale)
+ #im_scale *= (float(SIZE)/size)
+ origin_shape = im.shape
+ if _scale > 10.0: #avoid im.size < SIZE, should never happen
+ sizex = int(np.round(im.shape[1] * im_scale))
+ sizey = int(np.round(im.shape[0] * im_scale))
+ im = cv2.resize(im, (sizex, sizey), interpolation=cv2.INTER_LINEAR)
+ else:
+ im = cv2.resize(im,
+ None,
+ None,
+ fx=im_scale,
+ fy=im_scale,
+ interpolation=cv2.INTER_LINEAR)
+ assert im.shape[0] >= SIZE and im.shape[1] >= SIZE
+
+ new_rec = roi_rec.copy()
+ new_rec['boxes'] = roi_rec['boxes'].copy() * im_scale
+ if config.FACE_LANDMARK:
+ new_rec['landmarks'] = roi_rec['landmarks'].copy() * im_scale
+ retry = 0
+ LIMIT = 25
+ size = SIZE
+ while retry < LIMIT:
+ up, left = (np.random.randint(0, im.shape[0] - size + 1),
+ np.random.randint(0, im.shape[1] - size + 1))
+ boxes_new = new_rec['boxes'].copy()
+ im_new = im[up:(up + size), left:(left + size), :]
+ #print('crop', up, left, size, im_scale)
+ boxes_new[:, 0] -= left
+ boxes_new[:, 2] -= left
+ boxes_new[:, 1] -= up
+ boxes_new[:, 3] -= up
+ if config.FACE_LANDMARK:
+ landmarks_new = new_rec['landmarks'].copy()
+ for i in range(0, 10, 2):
+ landmarks_new[:, i] -= left
+ for i in range(1, 10, 2):
+ landmarks_new[:, i] -= up
+ valid_landmarks = []
+ #im_new = cv2.resize(im_new, (SIZE, SIZE), interpolation=cv2.INTER_LINEAR)
+ #boxes_new *= im_scale
+ #print(origin_shape, im_new.shape, im_scale)
+ valid = []
+ valid_boxes = []
+ for i in range(boxes_new.shape[0]):
+ box = boxes_new[i]
+ #center = np.array(([box[0], box[1]]+[box[2], box[3]]))/2
+ centerx = (box[0] + box[2]) / 2
+ centery = (box[1] + box[3]) / 2
+
+ #box[0] = max(0, box[0])
+ #box[1] = max(0, box[1])
+ #box[2] = min(im_new.shape[1], box[2])
+ #box[3] = min(im_new.shape[0], box[3])
+ box_size = max(box[2] - box[0], box[3] - box[1])
+
+ if centerx < 0 or centery < 0 or centerx >= im_new.shape[
+ 1] or centery >= im_new.shape[0]:
+ continue
+ if box_size < config.TRAIN.MIN_BOX_SIZE:
+ continue
+ #filter by landmarks? TODO
+ valid.append(i)
+ valid_boxes.append(box)
+ if config.FACE_LANDMARK:
+ valid_landmarks.append(landmarks_new[i])
+ if len(valid) > 0 or retry == LIMIT - 1:
+ im = im_new
+ new_rec['boxes'] = np.array(valid_boxes)
+ new_rec['gt_classes'] = new_rec['gt_classes'][valid]
+ if config.FACE_LANDMARK:
+ new_rec['landmarks'] = np.array(valid_landmarks)
+ break
+
+ retry += 1
+
+ if config.COLOR_JITTERING > 0.0:
+ im = im.astype(np.float32)
+ im = color_aug(im, config.COLOR_JITTERING)
+
+ #assert np.all(new_rec['landmarks'][:,10]>0.0)
+ global TMP_ID
+ if TMP_ID >= 0 and TMP_ID < 10:
+ tim = im.copy().astype(np.uint8)
+ for i in range(new_rec['boxes'].shape[0]):
+ box = new_rec['boxes'][i].copy().astype(np.int)
+ cv2.rectangle(tim, (box[0], box[1]), (box[2], box[3]),
+ (255, 0, 0), 1)
+ print('draw box:', box)
+ if config.FACE_LANDMARK:
+ for i in range(new_rec['landmarks'].shape[0]):
+ landmark = new_rec['landmarks'][i].copy()
+ if landmark[10] == 0.0:
+ print('zero', landmark)
+ continue
+ landmark = landmark.astype(np.int)
+ print('draw landmark', landmark)
+ for k in range(5):
+ color = (0, 0, 255)
+ if k == 0 or k == 3:
+ color = (0, 255, 0)
+ pp = (landmark[k * 2], landmark[1 + k * 2])
+ cv2.circle(tim, (pp[0], pp[1]), 1, color, 2)
+ filename = './trainimages/train%d.png' % TMP_ID
+ print('write', filename)
+ cv2.imwrite(filename, tim)
+ TMP_ID += 1
+
+ im_tensor = transform(im, config.PIXEL_MEANS, config.PIXEL_STDS,
+ config.PIXEL_SCALE)
+
+ processed_ims.append(im_tensor)
+ #print('boxes', new_rec['boxes'], file=sys.stderr)
+ im_info = [im_tensor.shape[2], im_tensor.shape[3], im_scale]
+ new_rec['im_info'] = np.array(im_info, dtype=np.float32)
+ processed_roidb.append(new_rec)
+ return processed_ims, processed_roidb
+
+
+def do_mixup(im1, roidb1, im2, roidb2):
+ im = (im1 + im2) / 2.0
+ roidb = {}
+ #print(roidb1.keys())
+ #for k in roidb1:
+ for k in ['boxes', 'landmarks', 'gt_classes', 'im_info']:
+ v1 = roidb1[k]
+ v2 = roidb2[k]
+ if k != 'im_info':
+ #print('try', k, v1.shape, v2.shape)
+ if v1.shape[0] > 0 and v2.shape[0] > 0:
+ v = np.concatenate((v1, v2), axis=0)
+ else:
+ v = v1
+ else:
+ v = v1
+ #print(k, v1.shape, v2.shape, v.shape)
+ roidb[k] = v
+ return im, roidb
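`do_mixup` averages the two image tensors and concatenates the per-box arrays, keeping `im_info` from the first record. With toy tensors:

```python
import numpy as np

im1 = np.zeros((1, 3, 4, 4))
im2 = np.ones((1, 3, 4, 4))
r1 = {'boxes': np.array([[0., 0., 2., 2.]]), 'gt_classes': np.array([1])}
r2 = {'boxes': np.array([[1., 1., 3., 3.]]), 'gt_classes': np.array([1])}

im = (im1 + im2) / 2.0                                  # pixel average
boxes = np.concatenate((r1['boxes'], r2['boxes']), axis=0)  # keep both labels
print(float(im[0, 0, 0, 0]), boxes.shape)
```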
+
+
+def get_crop_image(roidb):
+ ims, roidbs = get_crop_image1(roidb)
+ if config.MIXUP > 0.0 and np.random.random() < config.MIXUP:
+ for i in range(len(ims)):
+ im = ims[i]
+ roidb = roidbs[i]
+ j = np.random.randint(0, len(ims) - 1)
+ if j >= i:
+ j += 1
+ im, roidb = do_mixup(im, roidb, ims[j], roidbs[j])
+ ims[i] = im
+ roidbs[i] = roidb
+ return ims, roidbs
+
+
+def resize(im, target_size, max_size, stride=0, min_size=0):
+ """
+ only resize input image to target size and return scale
+ :param im: BGR image input by opencv
+ :param target_size: one dimensional size (the short side)
+ :param max_size: one dimensional max size (the long side)
+ :param stride: if given, pad the image to designated stride
+ :return:
+ """
+ im_shape = im.shape
+ im_size_min = np.min(im_shape[0:2])
+ im_size_max = np.max(im_shape[0:2])
+ im_scale = float(target_size) / float(im_size_min)
+ # prevent bigger axis from being more than max_size:
+ if np.round(im_scale * im_size_max) > max_size:
+ im_scale = float(max_size) / float(im_size_max)
+ if min_size > 0 and np.round(im_scale * im_size_min) < min_size:
+ im_scale = float(min_size) / float(im_size_min)
+ im = cv2.resize(im,
+ None,
+ None,
+ fx=im_scale,
+ fy=im_scale,
+ interpolation=cv2.INTER_LINEAR)
+
+ if stride == 0:
+ return im, im_scale
+ else:
+ # pad to product of stride
+ im_height = int(np.ceil(im.shape[0] / float(stride)) * stride)
+ im_width = int(np.ceil(im.shape[1] / float(stride)) * stride)
+ im_channel = im.shape[2]
+ padded_im = np.zeros((im_height, im_width, im_channel))
+ padded_im[:im.shape[0], :im.shape[1], :] = im
+ return padded_im, im_scale
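The scale/clamp/pad arithmetic in `resize` can be checked without OpenCV; the sketch below (hypothetical helper, illustrative sizes) mirrors its three steps: scale the short side to `target_size`, clamp so the long side stays within `max_size`, then round dimensions up to a multiple of `stride`.

```python
import numpy as np

def compute_scale_and_pad(h, w, target_size, max_size, stride=0):
    """Reproduce the scale/padding arithmetic of resize() without OpenCV."""
    im_scale = float(target_size) / min(h, w)      # scale the short side up
    if np.round(im_scale * max(h, w)) > max_size:  # clamp by the long side
        im_scale = float(max_size) / max(h, w)
    new_h, new_w = int(round(h * im_scale)), int(round(w * im_scale))
    if stride > 0:                                 # pad up to a multiple of stride
        new_h = int(np.ceil(new_h / float(stride)) * stride)
        new_w = int(np.ceil(new_w / float(stride)) * stride)
    return im_scale, new_h, new_w

scale, padded_h, padded_w = compute_scale_and_pad(300, 500, target_size=600,
                                                  max_size=800, stride=64)
print(scale, padded_h, padded_w)  # 1.6 512 832
```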
+
+
+def transform(im, pixel_means, pixel_stds, pixel_scale):
+ """
+    transform into mxnet tensor:
+    scale, subtract pixel means, divide by pixel stds and reorder BGR HWC to RGB CHW
+    :param im: [height, width, channel] in BGR
+    :param pixel_means: [B, G, R pixel means]
+    :param pixel_stds: [B, G, R pixel stds]
+    :param pixel_scale: divisor applied to pixel values before mean subtraction
+    :return: [batch, channel, height, width]
+ """
+ im_tensor = np.zeros((1, 3, im.shape[0], im.shape[1]))
+ for i in range(3):
+ im_tensor[0, i, :, :] = (im[:, :, 2 - i] / pixel_scale -
+ pixel_means[2 - i]) / pixel_stds[2 - i]
+ return im_tensor
+
+
+def transform_inverse(im_tensor, pixel_means):
+ """
+ transform from mxnet im_tensor to ordinary RGB image
+ im_tensor is limited to one image
+ :param im_tensor: [batch, channel, height, width]
+ :param pixel_means: [B, G, R pixel means]
+ :return: im [height, width, channel(RGB)]
+ """
+ assert im_tensor.shape[0] == 1
+ im_tensor = im_tensor.copy()
+ # put channel back
+ channel_swap = (0, 2, 3, 1)
+ im_tensor = im_tensor.transpose(channel_swap)
+ im = im_tensor[0]
+ assert im.shape[2] == 3
+ im += pixel_means[[2, 1, 0]]
+ im = im.astype(np.uint8)
+ return im
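`transform` and `transform_inverse` are near-inverses when `pixel_scale` is 1.0 and the stds are all 1.0 (the inverse does not re-apply stds or scale). A self-contained round-trip check with illustrative pixel means:

```python
import numpy as np

pixel_means = np.array([104.0, 117.0, 123.0])  # illustrative B, G, R means
pixel_scale = 1.0

im_bgr = np.random.randint(0, 256, size=(2, 2, 3)).astype(np.float64)

# forward, as in transform(): BGR HWC -> normalized RGB CHW (stds of 1.0)
tensor = np.zeros((1, 3, 2, 2))
for i in range(3):
    tensor[0, i] = im_bgr[:, :, 2 - i] / pixel_scale - pixel_means[2 - i]

# inverse, as in transform_inverse(): CHW -> RGB HWC, add means back
im_rgb = tensor.transpose(0, 2, 3, 1)[0] + pixel_means[[2, 1, 0]]

# the round trip recovers the RGB view of the original BGR image
print(np.allclose(im_rgb, im_bgr[:, :, ::-1]))  # True
```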
+
+
+def tensor_vstack(tensor_list, pad=0):
+ """
+ vertically stack tensors
+ :param tensor_list: list of tensor to be stacked vertically
+ :param pad: label to pad with
+ :return: tensor with max shape
+ """
+ ndim = len(tensor_list[0].shape)
+ dtype = tensor_list[0].dtype
+ islice = tensor_list[0].shape[0]
+ dimensions = []
+ first_dim = sum([tensor.shape[0] for tensor in tensor_list])
+ dimensions.append(first_dim)
+ for dim in range(1, ndim):
+ dimensions.append(max([tensor.shape[dim] for tensor in tensor_list]))
+ if pad == 0:
+ all_tensor = np.zeros(tuple(dimensions), dtype=dtype)
+ elif pad == 1:
+ all_tensor = np.ones(tuple(dimensions), dtype=dtype)
+ else:
+ all_tensor = np.full(tuple(dimensions), pad, dtype=dtype)
+ if ndim == 1:
+ for ind, tensor in enumerate(tensor_list):
+ all_tensor[ind * islice:(ind + 1) * islice] = tensor
+ elif ndim == 2:
+ for ind, tensor in enumerate(tensor_list):
+ all_tensor[ind * islice:(ind + 1) *
+ islice, :tensor.shape[1]] = tensor
+ elif ndim == 3:
+ for ind, tensor in enumerate(tensor_list):
+ all_tensor[ind * islice:(ind + 1) *
+ islice, :tensor.shape[1], :tensor.shape[2]] = tensor
+ elif ndim == 4:
+ for ind, tensor in enumerate(tensor_list):
+ all_tensor[ind * islice:(ind + 1) * islice, :tensor.
+ shape[1], :tensor.shape[2], :tensor.shape[3]] = tensor
+ elif ndim == 5:
+ for ind, tensor in enumerate(tensor_list):
+ all_tensor[ind * islice:(ind + 1) *
+ islice, :tensor.shape[1], :tensor.shape[2], :tensor.
+ shape[3], :tensor.shape[4]] = tensor
+ else:
+ print(tensor_list[0].shape)
+ raise Exception('Sorry, unimplemented.')
+ return all_tensor
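The padding behaviour of `tensor_vstack` in the 2-D case, shown with a simplified standalone reimplementation (not an import of the module): tensors of different widths stack into one array whose trailing dims take the max size, with the gap filled by `pad`.

```python
import numpy as np

def vstack_pad(tensor_list, pad=0):
    """Simplified tensor_vstack for 2-D inputs with equal first dims."""
    islice = tensor_list[0].shape[0]
    rows = sum(t.shape[0] for t in tensor_list)
    cols = max(t.shape[1] for t in tensor_list)
    out = np.full((rows, cols), pad, dtype=tensor_list[0].dtype)
    for ind, t in enumerate(tensor_list):
        # copy each tensor into its slot; the remainder keeps the pad value
        out[ind * islice:(ind + 1) * islice, :t.shape[1]] = t
    return out

a = np.array([[1, 2, 3]])
b = np.array([[4, 5]])
stacked = vstack_pad([a, b], pad=-1)
print(stacked)  # [[ 1  2  3]
                #  [ 4  5 -1]]
```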
diff --git a/insightface/detection/retinaface/rcnn/io/rcnn.py b/insightface/detection/retinaface/rcnn/io/rcnn.py
new file mode 100644
index 0000000000000000000000000000000000000000..1b3a443f6c101a83760f950553537441e6dda037
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/io/rcnn.py
@@ -0,0 +1,661 @@
+"""
+Fast R-CNN:
+data =
+ {'data': [num_images, c, h, w],
+ 'rois': [num_rois, 5]}
+label =
+ {'label': [num_rois],
+ 'bbox_target': [num_rois, 4 * num_classes],
+ 'bbox_weight': [num_rois, 4 * num_classes]}
+roidb extended format [image_index]
+ ['image', 'height', 'width', 'flipped',
+ 'boxes', 'gt_classes', 'gt_overlaps', 'max_classes', 'max_overlaps', 'bbox_targets']
+"""
+
+import numpy as np
+import numpy.random as npr
+
+from ..config import config
+from ..io.image import get_image, tensor_vstack
+from ..processing.bbox_transform import bbox_overlaps, bbox_transform
+from ..processing.bbox_regression import expand_bbox_regression_targets
+
+
+def get_rcnn_testbatch(roidb):
+ """
+ return a dict of testbatch
+ :param roidb: ['image', 'flipped'] + ['boxes']
+    :return: data, label
+ """
+ assert len(roidb) == 1, 'Single batch only'
+ imgs, roidb = get_image(roidb)
+ im_array = imgs[0]
+ im_info = np.array([roidb[0]['im_info']], dtype=np.float32)
+
+ im_rois = roidb[0]['boxes']
+ rois = im_rois
+ batch_index = 0 * np.ones((rois.shape[0], 1))
+ rois_array = np.hstack((batch_index, rois))[np.newaxis, :]
+
+ data = {'data': im_array, 'rois': rois_array, 'im_info': im_info}
+ label = {}
+
+ return data, label
+
+
+def get_rcnn_batch(roidb):
+ """
+ return a dict of multiple images
+ :param roidb: a list of dict, whose length controls batch size
+    ['image', 'flipped'] + ['gt_boxes', 'boxes', 'gt_overlaps'] => ['bbox_targets']
+ :return: data, label
+ """
+ num_images = len(roidb)
+ imgs, roidb = get_image(roidb)
+ im_array = tensor_vstack(imgs)
+
+ assert config.TRAIN.BATCH_ROIS % config.TRAIN.BATCH_IMAGES == 0, \
+        'BATCH_IMAGES {} must divide BATCH_ROIS {}'.format(config.TRAIN.BATCH_IMAGES, config.TRAIN.BATCH_ROIS)
+ rois_per_image = int(config.TRAIN.BATCH_ROIS / config.TRAIN.BATCH_IMAGES)
+ fg_rois_per_image = int(round(config.TRAIN.FG_FRACTION * rois_per_image))
+
+ rois_array = list()
+ labels_array = list()
+ bbox_targets_array = list()
+ bbox_weights_array = list()
+
+ for im_i in range(num_images):
+ roi_rec = roidb[im_i]
+
+ # infer num_classes from gt_overlaps
+ num_classes = roi_rec['gt_overlaps'].shape[1]
+
+ # label = class RoI has max overlap with
+ rois = roi_rec['boxes']
+ labels = roi_rec['max_classes']
+ overlaps = roi_rec['max_overlaps']
+ bbox_targets = roi_rec['bbox_targets']
+
+ im_rois, labels, bbox_targets, bbox_weights = \
+ sample_rois(rois, fg_rois_per_image, rois_per_image, num_classes,
+ labels, overlaps, bbox_targets)
+
+ # project im_rois
+ # do not round roi
+ rois = im_rois
+ batch_index = im_i * np.ones((rois.shape[0], 1))
+ rois_array_this_image = np.hstack((batch_index, rois))
+ rois_array.append(rois_array_this_image)
+
+ # add labels
+ labels_array.append(labels)
+ bbox_targets_array.append(bbox_targets)
+ bbox_weights_array.append(bbox_weights)
+
+ rois_array = np.array(rois_array)
+ labels_array = np.array(labels_array)
+ bbox_targets_array = np.array(bbox_targets_array)
+ bbox_weights_array = np.array(bbox_weights_array)
+
+ data = {'data': im_array, 'rois': rois_array}
+ label = {
+ 'label': labels_array,
+ 'bbox_target': bbox_targets_array,
+ 'bbox_weight': bbox_weights_array
+ }
+
+ return data, label
+
+
+def sample_rois(rois,
+ fg_rois_per_image,
+ rois_per_image,
+ num_classes,
+ labels=None,
+ overlaps=None,
+ bbox_targets=None,
+ gt_boxes=None):
+ """
+ generate random sample of ROIs comprising foreground and background examples
+ :param rois: all_rois [n, 4]; e2e: [n, 5] with batch_index
+ :param fg_rois_per_image: foreground roi number
+ :param rois_per_image: total roi number
+ :param num_classes: number of classes
+ :param labels: maybe precomputed
+ :param overlaps: maybe precomputed (max_overlaps)
+ :param bbox_targets: maybe precomputed
+ :param gt_boxes: optional for e2e [n, 5] (x1, y1, x2, y2, cls)
+    :return: (rois, labels, bbox_targets, bbox_weights)
+ """
+ if labels is None:
+        overlaps = bbox_overlaps(rois[:, 1:].astype(np.float64),
+                                 gt_boxes[:, :4].astype(np.float64))
+ gt_assignment = overlaps.argmax(axis=1)
+ overlaps = overlaps.max(axis=1)
+ labels = gt_boxes[gt_assignment, 4]
+
+ # foreground RoI with FG_THRESH overlap
+ fg_indexes = np.where(overlaps >= config.TRAIN.FG_THRESH)[0]
+ # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs
+ fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size)
+ # Sample foreground regions without replacement
+ if len(fg_indexes) > fg_rois_per_this_image:
+ fg_indexes = npr.choice(fg_indexes,
+ size=fg_rois_per_this_image,
+ replace=False)
+
+ # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)
+ bg_indexes = np.where((overlaps < config.TRAIN.BG_THRESH_HI)
+ & (overlaps >= config.TRAIN.BG_THRESH_LO))[0]
+ # Compute number of background RoIs to take from this image (guarding against there being fewer than desired)
+ bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image
+ bg_rois_per_this_image = np.minimum(bg_rois_per_this_image,
+ bg_indexes.size)
+    # Sample background regions without replacement
+ if len(bg_indexes) > bg_rois_per_this_image:
+ bg_indexes = npr.choice(bg_indexes,
+ size=bg_rois_per_this_image,
+ replace=False)
+
+ # indexes selected
+ keep_indexes = np.append(fg_indexes, bg_indexes)
+ neg_idx = np.where(overlaps < config.TRAIN.FG_THRESH)[0]
+ neg_rois = rois[neg_idx]
+ # pad more to ensure a fixed minibatch size
+ while keep_indexes.shape[0] < rois_per_image:
+ gap = np.minimum(len(neg_rois), rois_per_image - keep_indexes.shape[0])
+ gap_indexes = npr.choice(range(len(neg_rois)), size=gap, replace=False)
+ keep_indexes = np.append(keep_indexes, neg_idx[gap_indexes])
+
+ # select labels
+ labels = labels[keep_indexes]
+ # set labels of bg_rois to be 0
+ labels[fg_rois_per_this_image:] = 0
+ rois = rois[keep_indexes]
+
+ # load or compute bbox_target
+ if bbox_targets is not None:
+ bbox_target_data = bbox_targets[keep_indexes, :]
+ else:
+ targets = bbox_transform(rois[:, 1:],
+ gt_boxes[gt_assignment[keep_indexes], :4])
+ if config.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED:
+ targets = ((targets - np.array(config.TRAIN.BBOX_MEANS)) /
+ np.array(config.TRAIN.BBOX_STDS))
+ bbox_target_data = np.hstack((labels[:, np.newaxis], targets))
+
+ bbox_targets, bbox_weights = \
+ expand_bbox_regression_targets(bbox_target_data, num_classes)
+
+ return rois, labels, bbox_targets, bbox_weights
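Condensed, the fg/bg selection in `sample_rois`: take up to `fg_rois_per_image` RoIs whose max overlap clears `FG_THRESH`, then fill the remaining slots with backgrounds inside `[BG_THRESH_LO, BG_THRESH_HI)`. A self-contained sketch with illustrative thresholds (not the repo's config):

```python
import numpy as np
import numpy.random as npr

def split_fg_bg(overlaps, fg_per_image, rois_per_image,
                fg_thresh=0.5, bg_hi=0.5, bg_lo=0.0):
    """Return sampled foreground/background indexes following sample_rois."""
    fg = np.where(overlaps >= fg_thresh)[0]
    n_fg = min(fg_per_image, fg.size)
    if fg.size > n_fg:  # subsample foregrounds without replacement
        fg = npr.choice(fg, size=n_fg, replace=False)
    bg = np.where((overlaps < bg_hi) & (overlaps >= bg_lo))[0]
    n_bg = min(rois_per_image - n_fg, bg.size)
    if bg.size > n_bg:  # subsample backgrounds without replacement
        bg = npr.choice(bg, size=n_bg, replace=False)
    return fg, bg

overlaps = np.array([0.9, 0.7, 0.3, 0.1, 0.05])
fg, bg = split_fg_bg(overlaps, fg_per_image=1, rois_per_image=4)
print(len(fg), len(bg))  # 1 3
```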
+
+
+def get_fpn_rcnn_testbatch(roidb):
+ """
+ return a dict of testbatch
+ :param roidb: ['image', 'flipped'] + ['boxes']
+ :return: data, label, im_info
+ """
+ assert len(roidb) == 1, 'Single batch only'
+ imgs, roidb = get_image(roidb)
+ im_array = imgs[0]
+ im_info = np.array([roidb[0]['im_info']], dtype=np.float32)
+
+ im_rois = roidb[0]['boxes']
+ rois = im_rois
+
+ # assign rois
+ rois_area = np.sqrt((rois[:, 2] - rois[:, 0]) * (rois[:, 3] - rois[:, 1]))
+ area_threshold = {'P5': 448, 'P4': 224, 'P3': 112}
+ rois_p5 = rois[area_threshold['P5'] <= rois_area]
+ rois_p4 = rois[np.logical_and(area_threshold['P4'] <= rois_area,
+ rois_area < area_threshold['P5'])]
+ rois_p3 = rois[np.logical_and(area_threshold['P3'] <= rois_area,
+ rois_area < area_threshold['P4'])]
+ rois_p2 = rois[np.logical_and(0 < rois_area,
+ rois_area < area_threshold['P3'])]
+
+    # pad a virtual roi if no rois are assigned
+ if rois_p5.size == 0:
+ rois_p5 = np.array([[12, 34, 56, 78]])
+ if rois_p4.size == 0:
+ rois_p4 = np.array([[12, 34, 56, 78]])
+ if rois_p3.size == 0:
+ rois_p3 = np.array([[12, 34, 56, 78]])
+ if rois_p2.size == 0:
+ rois_p2 = np.array([[12, 34, 56, 78]])
+
+ p5_batch_index = 0 * np.ones((rois_p5.shape[0], 1))
+ rois_p5_array = np.hstack((p5_batch_index, rois_p5))[np.newaxis, :]
+
+ p4_batch_index = 0 * np.ones((rois_p4.shape[0], 1))
+ rois_p4_array = np.hstack((p4_batch_index, rois_p4))[np.newaxis, :]
+
+ p3_batch_index = 0 * np.ones((rois_p3.shape[0], 1))
+ rois_p3_array = np.hstack((p3_batch_index, rois_p3))[np.newaxis, :]
+
+ p2_batch_index = 0 * np.ones((rois_p2.shape[0], 1))
+ rois_p2_array = np.hstack((p2_batch_index, rois_p2))[np.newaxis, :]
+
+ data = {
+ 'data': im_array,
+ 'rois_stride32': rois_p5_array,
+ 'rois_stride16': rois_p4_array,
+ 'rois_stride8': rois_p3_array,
+ 'rois_stride4': rois_p2_array
+ }
+ label = {}
+
+ return data, label, im_info
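The area-based routing in `get_fpn_rcnn_testbatch` sends each RoI to one pyramid level by `sqrt(w * h)` against the `{P5: 448, P4: 224, P3: 112}` thresholds; condensed into a standalone sketch:

```python
import numpy as np

def assign_level(rois):
    """Map RoIs (x1, y1, x2, y2) to pyramid levels by sqrt-area,
    using the thresholds from get_fpn_rcnn_testbatch."""
    area = np.sqrt((rois[:, 2] - rois[:, 0]) * (rois[:, 3] - rois[:, 1]))
    levels = np.full(len(rois), 'P2', dtype=object)
    levels[area >= 112] = 'P3'
    levels[area >= 224] = 'P4'
    levels[area >= 448] = 'P5'
    return levels

rois = np.array([[0, 0, 50, 50],     # sqrt-area 50  -> P2
                 [0, 0, 150, 150],   # sqrt-area 150 -> P3
                 [0, 0, 500, 500]])  # sqrt-area 500 -> P5
print(assign_level(rois))  # ['P2' 'P3' 'P5']
```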
+
+
+def get_fpn_maskrcnn_batch(roidb):
+ """
+ return a dictionary that contains raw data.
+ """
+ num_images = len(roidb)
+ imgs, roidb = get_image(roidb, scale=config.TRAIN.SCALE) #TODO
+ #imgs, roidb = get_image(roidb)
+ im_array = tensor_vstack(imgs)
+
+ assert config.TRAIN.BATCH_ROIS % config.TRAIN.BATCH_IMAGES == 0, \
+        'BATCH_IMAGES {} must divide BATCH_ROIS {}'.format(config.TRAIN.BATCH_IMAGES, config.TRAIN.BATCH_ROIS)
+    rois_per_image = config.TRAIN.BATCH_ROIS // config.TRAIN.BATCH_IMAGES
+ fg_rois_per_image = np.round(config.TRAIN.FG_FRACTION *
+ rois_per_image).astype(int)
+
+ rois_on_imgs = dict()
+ labels_on_imgs = dict()
+ bbox_targets_on_imgs = dict()
+ bbox_weights_on_imgs = dict()
+ mask_targets_on_imgs = dict()
+ mask_weights_on_imgs = dict()
+ for s in config.RCNN_FEAT_STRIDE:
+ rois_on_imgs.update({'stride%s' % s: list()})
+ labels_on_imgs.update({'stride%s' % s: list()})
+ bbox_targets_on_imgs.update({'stride%s' % s: list()})
+ bbox_weights_on_imgs.update({'stride%s' % s: list()})
+ mask_targets_on_imgs.update({'stride%s' % s: list()})
+ mask_weights_on_imgs.update({'stride%s' % s: list()})
+
+ # Sample rois
+ level_related_data_on_imgs = {}
+ for im_i in range(num_images):
+ roi_rec = roidb[im_i]
+ # infer num_classes from gt_overlaps
+ num_classes = roi_rec['gt_overlaps'].shape[1]
+ # label = class RoI has max overlap with
+ rois = roi_rec['boxes']
+ labels = roi_rec['max_classes']
+ overlaps = roi_rec['max_overlaps']
+ bbox_targets = roi_rec['bbox_targets']
+ im_info = roi_rec['im_info']
+
+ mask_targets = roi_rec['mask_targets']
+ mask_labels = roi_rec['mask_labels']
+ mask_inds = roi_rec['mask_inds']
+
+ assign_levels = roi_rec['assign_levels']
+
+ im_rois_on_levels, labels_on_levels, bbox_targets_on_levels, bbox_weights_on_levels, mask_targets_on_levels, mask_weights_on_levels = \
+ sample_rois_fpn(rois, assign_levels, fg_rois_per_image, rois_per_image, num_classes,
+ labels, overlaps, bbox_targets, mask_targets=mask_targets, mask_labels=mask_labels, mask_inds=mask_inds, im_info=im_info)
+
+ level_related_data_on_imgs.update({
+ 'img_%s' % im_i: {
+ 'rois_on_levels': im_rois_on_levels,
+ 'labels_on_levels': labels_on_levels,
+ 'bbox_targets_on_levels': bbox_targets_on_levels,
+ 'bbox_weights_on_levels': bbox_weights_on_levels,
+ 'mask_targets_on_levels': mask_targets_on_levels,
+ 'mask_weights_on_levels': mask_weights_on_levels,
+ }
+ })
+
+ return im_array, level_related_data_on_imgs
+
+
+def sample_rois(rois,
+ fg_rois_per_image,
+ rois_per_image,
+ num_classes,
+ labels=None,
+ overlaps=None,
+ bbox_targets=None,
+ gt_boxes=None,
+ mask_targets=None,
+ mask_labels=None,
+ mask_inds=None):
+ """
+ generate random sample of ROIs comprising foreground and background examples
+ :param rois: all_rois [n, 4]; e2e: [n, 5] with batch_index
+ :param fg_rois_per_image: foreground roi number
+ :param rois_per_image: total roi number
+ :param num_classes: number of classes
+ :param labels: maybe precomputed
+ :param overlaps: maybe precomputed (max_overlaps)
+ :param bbox_targets: maybe precomputed
+ :param gt_boxes: optional for e2e [n, 5] (x1, y1, x2, y2, cls)
+ :return: (rois, labels, bbox_targets, bbox_weights)
+ """
+ if labels is None:
+ if len(gt_boxes) == 0:
+ gt_boxes = np.zeros((1, 5))
+ gt_assignment = np.zeros((len(rois), ), dtype=np.int32)
+ overlaps = np.zeros((len(rois), ))
+ labels = np.zeros((len(rois), ))
+ else:
+            overlaps = bbox_overlaps(rois[:, 1:].astype(np.float64),
+                                     gt_boxes[:, :4].astype(np.float64))
+ gt_assignment = overlaps.argmax(axis=1)
+ overlaps = overlaps.max(axis=1)
+ labels = gt_boxes[gt_assignment, 4]
+
+ num_rois = rois.shape[0]
+ # foreground RoI with FG_THRESH overlap
+ fg_indexes = np.where(overlaps >= config.TRAIN.FG_THRESH)[0]
+ # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs
+ fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size)
+ # Sample foreground regions without replacement
+ if len(fg_indexes) > fg_rois_per_this_image:
+ fg_indexes = npr.choice(fg_indexes,
+ size=fg_rois_per_this_image,
+ replace=False)
+
+ # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)
+ bg_indexes = np.where((overlaps < config.TRAIN.BG_THRESH_HI)
+ & (overlaps >= config.TRAIN.BG_THRESH_LO))[0]
+ # Compute number of background RoIs to take from this image (guarding against there being fewer than desired)
+ bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image
+ bg_rois_per_this_image = np.minimum(bg_rois_per_this_image,
+ bg_indexes.size)
+    # Sample background regions without replacement
+ if len(bg_indexes) > bg_rois_per_this_image:
+ bg_indexes = npr.choice(bg_indexes,
+ size=bg_rois_per_this_image,
+ replace=False)
+
+ # indexes selected
+ keep_indexes = np.append(fg_indexes, bg_indexes)
+
+ neg_idx = np.where(overlaps < config.TRAIN.FG_THRESH)[0]
+ neg_rois = rois[neg_idx]
+
+ # pad more to ensure a fixed minibatch size
+ while keep_indexes.shape[0] < rois_per_image:
+ gap = np.minimum(len(neg_rois), rois_per_image - keep_indexes.shape[0])
+ gap_indexes = npr.choice(range(len(neg_rois)), size=gap, replace=False)
+ keep_indexes = np.append(keep_indexes, neg_idx[gap_indexes])
+
+ # select labels
+ labels = labels[keep_indexes]
+ # set labels of bg_rois to be 0
+ labels[fg_rois_per_this_image:] = 0
+ rois = rois[keep_indexes]
+ if mask_targets is not None:
+ assert mask_labels is not None
+ assert mask_inds is not None
+
+ def _mask_umap(mask_targets, mask_labels, mask_inds):
+ _mask_targets = np.zeros((num_rois, num_classes, 28, 28),
+ dtype=np.int8)
+ _mask_weights = np.zeros((num_rois, num_classes, 28, 28),
+ dtype=np.int8)
+ _mask_targets[mask_inds, mask_labels] = mask_targets
+ _mask_weights[mask_inds, mask_labels] = 1
+ _mask_weights[:, 0] = 0 # set background mask weight to zeros
+ return _mask_targets, _mask_weights # [num_rois, num_classes, 28, 28]
+
+ mask_targets, mask_weights = _mask_umap(mask_targets, mask_labels,
+ mask_inds)
+ mask_targets = mask_targets[keep_indexes]
+ mask_weights = mask_weights[keep_indexes]
+
+ # load or compute bbox_target
+ if bbox_targets is not None:
+ bbox_target_data = bbox_targets[keep_indexes, :]
+ else:
+ targets = bbox_transform(rois[:, 1:],
+ gt_boxes[gt_assignment[keep_indexes], :4])
+ if config.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED:
+ targets = ((targets - np.array(config.TRAIN.BBOX_MEANS)) /
+ np.array(config.TRAIN.BBOX_STDS))
+ bbox_target_data = np.hstack((labels[:, np.newaxis], targets))
+
+ bbox_targets, bbox_weights = \
+ expand_bbox_regression_targets(bbox_target_data, num_classes)
+
+ if mask_targets is not None:
+ return rois, labels, bbox_targets, bbox_weights, mask_targets, mask_weights
+ else:
+ return rois, labels, bbox_targets, bbox_weights
+
+
+def sample_rois_fpn(rois,
+ assign_levels,
+ fg_rois_per_image,
+ rois_per_image,
+ num_classes,
+ labels=None,
+ overlaps=None,
+ bbox_targets=None,
+ mask_targets=None,
+ mask_labels=None,
+ mask_inds=None,
+ gt_boxes=None,
+ im_info=None):
+ """
+ generate random sample of ROIs comprising foreground and background examples
+ :param rois: all_rois [n, 4]; e2e: [n, 5] with batch_index
+ :param assign_levels: [n]
+ :param fg_rois_per_image: foreground roi number
+ :param rois_per_image: total roi number
+ :param num_classes: number of classes
+ :param labels: maybe precomputed
+ :param overlaps: maybe precomputed (max_overlaps)
+ :param bbox_targets: maybe precomputed
+ :param gt_boxes: optional for e2e [n, 5] (x1, y1, x2, y2, cls)
+ :return: (rois, labels, bbox_targets, bbox_weights)
+ """
+ DEBUG = False
+ if labels is None:
+ if len(gt_boxes) == 0:
+ gt_boxes = np.zeros((1, 5))
+ gt_assignment = np.zeros((len(rois), ), dtype=np.int32)
+ overlaps = np.zeros((len(rois), ))
+ labels = np.zeros((len(rois), ))
+ else:
+            overlaps = bbox_overlaps(rois[:, 1:].astype(np.float64),
+                                     gt_boxes[:, :4].astype(np.float64))
+ gt_assignment = overlaps.argmax(axis=1)
+ overlaps = overlaps.max(axis=1)
+ labels = gt_boxes[gt_assignment, 4]
+
+ num_rois = rois.shape[0]
+ # foreground RoI with FG_THRESH overlap
+ fg_indexes = np.where(overlaps >= config.TRAIN.FG_THRESH)[0]
+ # guard against the case when an image has fewer than fg_rois_per_image foreground RoIs
+ fg_rois_per_this_image = np.minimum(fg_rois_per_image, fg_indexes.size)
+
+ if DEBUG:
+        print('fg total num:', len(fg_indexes))
+
+ # Sample foreground regions without replacement
+ if len(fg_indexes) > fg_rois_per_this_image:
+ fg_indexes = npr.choice(fg_indexes,
+ size=fg_rois_per_this_image,
+ replace=False)
+
+ # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)
+ bg_indexes = np.where((overlaps < config.TRAIN.BG_THRESH_HI)
+ & (overlaps >= config.TRAIN.BG_THRESH_LO))[0]
+ if DEBUG:
+        print('bg total num:', len(bg_indexes))
+ # Compute number of background RoIs to take from this image (guarding against there being fewer than desired)
+ bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image
+ bg_rois_per_this_image = np.minimum(bg_rois_per_this_image,
+ bg_indexes.size)
+    # Sample background regions without replacement
+ if len(bg_indexes) > bg_rois_per_this_image:
+ bg_indexes = npr.choice(bg_indexes,
+ size=bg_rois_per_this_image,
+ replace=False)
+ if DEBUG:
+        print('fg num:', len(fg_indexes))
+        print('bg num:', len(bg_indexes))
+
+ # bg rois statistics
+ if DEBUG:
+ bg_assign = assign_levels[bg_indexes]
+ bg_rois_on_levels = dict()
+ for i, s in enumerate(config.RCNN_FEAT_STRIDE):
+ bg_rois_on_levels.update(
+ {'stride%s' % s: len(np.where(bg_assign == s)[0])})
+        print(bg_rois_on_levels)
+
+ # indexes selected
+ keep_indexes = np.append(fg_indexes, bg_indexes)
+
+ neg_idx = np.where(overlaps < config.TRAIN.FG_THRESH)[0]
+ neg_rois = rois[neg_idx]
+
+ # pad more to ensure a fixed minibatch size
+ while keep_indexes.shape[0] < rois_per_image:
+ gap = np.minimum(len(neg_rois), rois_per_image - keep_indexes.shape[0])
+ gap_indexes = npr.choice(range(len(neg_rois)), size=gap, replace=False)
+ keep_indexes = np.append(keep_indexes, neg_idx[gap_indexes])
+
+ # select labels
+ labels = labels[keep_indexes]
+ # set labels of bg_rois to be 0
+ labels[fg_rois_per_this_image:] = 0
+ rois = rois[keep_indexes]
+ assign_levels = assign_levels[keep_indexes]
+
+ if mask_targets is not None:
+ assert mask_labels is not None
+ assert mask_inds is not None
+
+ def _mask_umap(mask_targets, mask_labels, mask_inds):
+ _mask_targets = np.zeros((num_rois, num_classes, 28, 28),
+ dtype=np.int8)
+ _mask_weights = np.zeros((num_rois, num_classes, 1, 1),
+ dtype=np.int8)
+ _mask_targets[mask_inds, mask_labels] = mask_targets
+ _mask_weights[mask_inds, mask_labels] = 1
+            return _mask_targets, _mask_weights  # targets [num_rois, num_classes, 28, 28], weights [num_rois, num_classes, 1, 1]
+
+ mask_targets, mask_weights = _mask_umap(mask_targets, mask_labels,
+ mask_inds)
+ mask_targets = mask_targets[keep_indexes]
+ mask_weights = mask_weights[keep_indexes]
+
+ # load or compute bbox_target
+ if bbox_targets is not None:
+ bbox_target_data = bbox_targets[keep_indexes, :]
+ else:
+ targets = bbox_transform(rois[:, 1:],
+ gt_boxes[gt_assignment[keep_indexes], :4])
+ if config.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED:
+ targets = ((targets - np.array(config.TRAIN.BBOX_MEANS)) /
+ np.array(config.TRAIN.BBOX_STDS))
+ bbox_target_data = np.hstack((labels[:, np.newaxis], targets))
+
+ bbox_targets, bbox_weights = \
+ expand_bbox_regression_targets(bbox_target_data, num_classes)
+
+ # Assign to levels
+ rois_on_levels = dict()
+ labels_on_levels = dict()
+ bbox_targets_on_levels = dict()
+ bbox_weights_on_levels = dict()
+ if mask_targets is not None:
+ mask_targets_on_levels = dict()
+ mask_weights_on_levels = dict()
+ for i, s in enumerate(config.RCNN_FEAT_STRIDE):
+ index = np.where(assign_levels == s)
+ _rois = rois[index]
+ _labels = labels[index]
+ _bbox_targets = bbox_targets[index]
+ _bbox_weights = bbox_weights[index]
+ if mask_targets is not None:
+ _mask_targets = mask_targets[index]
+ _mask_weights = mask_weights[index]
+
+ rois_on_levels.update({'stride%s' % s: _rois})
+ labels_on_levels.update({'stride%s' % s: _labels})
+ bbox_targets_on_levels.update({'stride%s' % s: _bbox_targets})
+ bbox_weights_on_levels.update({'stride%s' % s: _bbox_weights})
+ if mask_targets is not None:
+ mask_targets_on_levels.update({'stride%s' % s: _mask_targets})
+ mask_weights_on_levels.update({'stride%s' % s: _mask_weights})
+
+ if mask_targets is not None:
+ return rois_on_levels, labels_on_levels, bbox_targets_on_levels, bbox_weights_on_levels, mask_targets_on_levels, mask_weights_on_levels
+ else:
+ return rois_on_levels, labels_on_levels, bbox_targets_on_levels, bbox_weights_on_levels
+
+
+def get_rois(rois,
+ rois_per_image,
+ num_classes,
+ labels=None,
+ overlaps=None,
+ bbox_targets=None,
+ gt_boxes=None):
+ """
+ get top N ROIs, used in online hard example mining
+ :param rois: all_rois [n, 4]; e2e: [n, 5] with batch_index
+ :param rois_per_image: total roi number
+ :param num_classes: number of classes
+ :param labels: maybe precomputed
+ :param overlaps: maybe precomputed (max_overlaps)
+ :param bbox_targets: maybe precomputed
+ :param gt_boxes: optional for e2e [n, 5] (x1, y1, x2, y2, cls)
+ :return: (rois, labels, bbox_targets, bbox_weights)
+ """
+ if labels is None:
+ if len(gt_boxes) == 0:
+ gt_boxes = np.array([[1, 1, 1, 1, 0]])
+        overlaps = bbox_overlaps(rois[:, 1:].astype(np.float64),
+                                 gt_boxes[:, :4].astype(np.float64))
+ gt_assignment = overlaps.argmax(axis=1)
+ overlaps = overlaps.max(axis=1)
+ labels = gt_boxes[gt_assignment, 4]
+
+ # select indices
+ keep_indexes = np.arange(rois.shape[0])
+ if keep_indexes.shape[0] > rois_per_image:
+ keep_indexes = npr.choice(keep_indexes,
+ size=rois_per_image,
+ replace=False)
+
+ # if not enough, pad until rois_per_image is satisfied
+ while keep_indexes.shape[0] < rois_per_image:
+ gap = np.minimum(rois_per_image - keep_indexes.shape[0], len(rois))
+ gap_indexes = npr.choice(range(len(rois)), size=gap, replace=False)
+ keep_indexes = np.append(keep_indexes, gap_indexes)
+
+ # suppress any bg defined by overlap
+ bg_indexes = np.where((overlaps < config.TRAIN.BG_THRESH_HI)
+ & (overlaps >= config.TRAIN.BG_THRESH_LO))[0]
+ labels[bg_indexes] = 0
+
+ labels = labels[keep_indexes]
+ rois = rois[keep_indexes]
+
+ # load or compute bbox_target
+ if bbox_targets is not None:
+ bbox_target_data = bbox_targets[keep_indexes, :]
+ else:
+ targets = bbox_transform(rois[:, 1:],
+ gt_boxes[gt_assignment[keep_indexes], :4])
+ if config.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED:
+ targets = ((targets - np.array(config.TRAIN.BBOX_MEANS)) /
+ np.array(config.TRAIN.BBOX_STDS))
+ bbox_target_data = np.hstack((labels[:, np.newaxis], targets))
+
+ bbox_targets, bbox_weights = \
+ expand_bbox_regression_targets(bbox_target_data, num_classes)
+
+ return rois, labels, bbox_targets, bbox_weights
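Every sampler above shares the `BBOX_NORMALIZATION_PRECOMPUTED` branch, which whitens regression targets with fixed means/stds; the prediction path must undo it. A sketch with illustrative statistics (not the repo's config values):

```python
import numpy as np

# illustrative normalization statistics, not the repo's config values
BBOX_MEANS = np.array([0.0, 0.0, 0.0, 0.0])
BBOX_STDS = np.array([0.1, 0.1, 0.2, 0.2])

targets = np.array([[0.05, -0.1, 0.4, 0.2]])

# normalize, as in the BBOX_NORMALIZATION_PRECOMPUTED branches above
norm = (targets - BBOX_MEANS) / BBOX_STDS
# invert at prediction time to recover raw regression targets
recovered = norm * BBOX_STDS + BBOX_MEANS

assert np.allclose(norm, [[0.5, -1.0, 2.0, 1.0]])
assert np.allclose(recovered, targets)
```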
diff --git a/insightface/detection/retinaface/rcnn/io/rpn.py b/insightface/detection/retinaface/rcnn/io/rpn.py
new file mode 100644
index 0000000000000000000000000000000000000000..998fad8c85685517618b8fb2f34fddf3c5f992fa
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/io/rpn.py
@@ -0,0 +1,887 @@
+"""
+RPN:
+data =
+ {'data': [num_images, c, h, w],
+ 'im_info': [num_images, 4] (optional)}
+label =
+ {'gt_boxes': [num_boxes, 5] (optional),
+ 'label': [batch_size, 1] <- [batch_size, num_anchors, feat_height, feat_width],
+ 'bbox_target': [batch_size, num_anchors, feat_height, feat_width],
+ 'bbox_weight': [batch_size, num_anchors, feat_height, feat_width]}
+"""
+
+from __future__ import print_function
+import sys
+import logging
+import datetime
+import numpy as np
+import numpy.random as npr
+
+from ..logger import logger
+from ..config import config
+from .image import get_image, tensor_vstack, get_crop_image
+from ..processing.generate_anchor import generate_anchors, anchors_plane
+from ..processing.bbox_transform import bbox_overlaps, bbox_transform, landmark_transform
+
+STAT = {0: 0, 8: 0, 16: 0, 32: 0}
+
+
+def get_rpn_testbatch(roidb):
+ """
+ return a dict of testbatch
+ :param roidb: ['image', 'flipped']
+ :return: data, label, im_info
+ """
+ assert len(roidb) == 1, 'Single batch only'
+ imgs, roidb = get_image(roidb)
+ im_array = imgs[0]
+ im_info = np.array([roidb[0]['im_info']], dtype=np.float32)
+
+ data = {'data': im_array, 'im_info': im_info}
+ label = {}
+
+ return data, label, im_info
+
+
+def get_rpn_batch(roidb):
+ """
+ prototype for rpn batch: data, im_info, gt_boxes
+ :param roidb: ['image', 'flipped'] + ['gt_boxes', 'boxes', 'gt_classes']
+ :return: data, label
+ """
+ assert len(roidb) == 1, 'Single batch only'
+ imgs, roidb = get_image(roidb)
+ im_array = imgs[0]
+ im_info = np.array([roidb[0]['im_info']], dtype=np.float32)
+
+ # gt boxes: (x1, y1, x2, y2, cls)
+ if roidb[0]['gt_classes'].size > 0:
+ gt_inds = np.where(roidb[0]['gt_classes'] != 0)[0]
+        gt_boxes = np.empty((len(gt_inds), 5), dtype=np.float32)
+ gt_boxes[:, 0:4] = roidb[0]['boxes'][gt_inds, :]
+ gt_boxes[:, 4] = roidb[0]['gt_classes'][gt_inds]
+ else:
+ gt_boxes = np.empty((0, 5), dtype=np.float32)
+
+ data = {'data': im_array, 'im_info': im_info}
+ label = {'gt_boxes': gt_boxes}
+
+ return data, label
+
+
+def get_crop_batch(roidb):
+ """
+ prototype for rpn batch: data, im_info, gt_boxes
+ :param roidb: ['image', 'flipped'] + ['gt_boxes', 'boxes', 'gt_classes']
+ :return: data, label
+ """
+ #assert len(roidb) == 1, 'Single batch only'
+ data_list = []
+ label_list = []
+ imgs, roidb = get_crop_image(roidb)
+ assert len(imgs) == len(roidb)
+ for i in range(len(imgs)):
+ im_array = imgs[i]
+ im_info = np.array([roidb[i]['im_info']], dtype=np.float32)
+
+ # gt boxes: (x1, y1, x2, y2, cls)
+ if roidb[i]['gt_classes'].size > 0:
+ gt_inds = np.where(roidb[i]['gt_classes'] != 0)[0]
+            gt_boxes = np.empty((len(gt_inds), 5), dtype=np.float32)
+ gt_boxes[:, 0:4] = roidb[i]['boxes'][gt_inds, :]
+ gt_boxes[:, 4] = roidb[i]['gt_classes'][gt_inds]
+ if config.USE_BLUR:
+ gt_blur = roidb[i]['blur']
+ if config.FACE_LANDMARK:
+ #gt_landmarks = np.empty((roidb[i]['landmarks'].shape[0], 11), dtype=np.float32)
+ gt_landmarks = roidb[i]['landmarks'][gt_inds, :, :]
+ if config.HEAD_BOX:
+                gt_boxes_head = np.empty((len(gt_inds), 5), dtype=np.float32)
+ gt_boxes_head[:, 0:4] = roidb[i]['boxes_head'][gt_inds, :]
+ gt_boxes_head[:, 4] = roidb[i]['gt_classes'][gt_inds]
+ else:
+ gt_boxes = np.empty((0, 5), dtype=np.float32)
+ if config.USE_BLUR:
+ gt_blur = np.empty((0, ), dtype=np.float32)
+ if config.FACE_LANDMARK:
+ gt_landmarks = np.empty((0, 5, 3), dtype=np.float32)
+ if config.HEAD_BOX:
+ gt_boxes_head = np.empty((0, 5), dtype=np.float32)
+
+ data = {'data': im_array, 'im_info': im_info}
+ label = {'gt_boxes': gt_boxes}
+ if config.USE_BLUR:
+ label['gt_blur'] = gt_blur
+ if config.FACE_LANDMARK:
+ label['gt_landmarks'] = gt_landmarks
+ if config.HEAD_BOX:
+ label['gt_boxes_head'] = gt_boxes_head
+ data_list.append(data)
+ label_list.append(label)
+
+ return data_list, label_list
+
+
+def assign_anchor_fpn(feat_shape,
+ gt_label,
+ im_info,
+ landmark=False,
+ prefix='face',
+ select_stride=0):
+ """
+ assign ground truth boxes to anchor positions
+ :param feat_shape: infer output shape
+ :param gt_boxes: assign ground truth
+ :param im_info: filter out anchors overlapped with edges
+ :return: tuple
+ labels: of shape (batch_size, 1) <- (batch_size, num_anchors, feat_height, feat_width)
+ bbox_targets: of shape (batch_size, num_anchors * 4, feat_height, feat_width)
+ bbox_weights: mark the assigned anchors
+ """
+ def _unmap(data, count, inds, fill=0):
+        """ unmap a subset inds of data into original data of size count """
+ if len(data.shape) == 1:
+ ret = np.empty((count, ), dtype=np.float32)
+ ret.fill(fill)
+ ret[inds] = data
+ else:
+ ret = np.empty((count, ) + data.shape[1:], dtype=np.float32)
+ ret.fill(fill)
+ ret[inds, :] = data
+ return ret
+
+ global STAT
+ DEBUG = False
+
+ im_info = im_info[0]
+ gt_boxes = gt_label['gt_boxes']
+ # clean up boxes
+ nonneg = np.where(gt_boxes[:, 4] != -1)[0]
+ gt_boxes = gt_boxes[nonneg]
+ if config.USE_BLUR:
+ gt_blur = gt_label['gt_blur']
+ gt_blur = gt_blur[nonneg]
+ if landmark:
+ gt_landmarks = gt_label['gt_landmarks']
+ gt_landmarks = gt_landmarks[nonneg]
+ assert gt_boxes.shape[0] == gt_landmarks.shape[0]
+ #scales = np.array(scales, dtype=np.float32)
+ feat_strides = config.RPN_FEAT_STRIDE
+ bbox_pred_len = 4
+ landmark_pred_len = 10
+ if config.USE_BLUR:
+ gt_boxes[:, 4] = gt_blur
+ bbox_pred_len = 5
+ if config.USE_OCCLUSION:
+ landmark_pred_len = 15
+
+ anchors_list = []
+ anchors_num_list = []
+ inds_inside_list = []
+ feat_infos = []
+ A_list = []
+ for i in range(len(feat_strides)):
+ stride = feat_strides[i]
+ sstride = str(stride)
+ base_size = config.RPN_ANCHOR_CFG[sstride]['BASE_SIZE']
+ allowed_border = config.RPN_ANCHOR_CFG[sstride]['ALLOWED_BORDER']
+ ratios = config.RPN_ANCHOR_CFG[sstride]['RATIOS']
+ scales = config.RPN_ANCHOR_CFG[sstride]['SCALES']
+ base_anchors = generate_anchors(base_size=base_size,
+ ratios=list(ratios),
+ scales=np.array(scales,
+ dtype=np.float32),
+ stride=stride,
+ dense_anchor=config.DENSE_ANCHOR)
+ num_anchors = base_anchors.shape[0]
+ feat_height, feat_width = feat_shape[i][-2:]
+ feat_stride = feat_strides[i]
+ feat_infos.append([feat_height, feat_width])
+
+ A = num_anchors
+ A_list.append(A)
+ K = feat_height * feat_width
+
+ all_anchors = anchors_plane(feat_height, feat_width, feat_stride,
+ base_anchors)
+ all_anchors = all_anchors.reshape((K * A, 4))
+ #print('anchor0', stride, all_anchors[0])
+
+ total_anchors = int(K * A)
+ anchors_num_list.append(total_anchors)
+ # only keep anchors inside the image
+ inds_inside = np.where(
+ (all_anchors[:, 0] >= -allowed_border)
+ & (all_anchors[:, 1] >= -allowed_border)
+ & (all_anchors[:, 2] < im_info[1] + allowed_border)
+ & (all_anchors[:, 3] < im_info[0] + allowed_border))[0]
+ if DEBUG:
+ print('total_anchors', total_anchors)
+ print('inds_inside', len(inds_inside))
+
+ # keep only inside anchors
+ anchors = all_anchors[inds_inside, :]
+ #print('AA', anchors.shape, len(inds_inside))
+
+ anchors_list.append(anchors)
+ inds_inside_list.append(inds_inside)
+
+ # Concat anchors from each level
+ anchors = np.concatenate(anchors_list)
+ for i in range(1, len(inds_inside_list)):
+ inds_inside_list[i] = inds_inside_list[i] + sum(anchors_num_list[:i])
+ inds_inside = np.concatenate(inds_inside_list)
+ total_anchors = sum(anchors_num_list)
+ #print('total_anchors', anchors.shape[0], len(inds_inside), file=sys.stderr)
+
+ # label: 1 is positive, 0 is negative, -1 is don't care
+ labels = np.empty((len(inds_inside), ), dtype=np.float32)
+ labels.fill(-1)
+ #print('BB', anchors.shape, len(inds_inside))
+ #print('gt_boxes', gt_boxes.shape, file=sys.stderr)
+
+ if gt_boxes.size > 0:
+ # overlap between the anchors and the gt boxes
+ # overlaps (ex, gt)
+ overlaps = bbox_overlaps(anchors.astype(np.float64),
+ gt_boxes.astype(np.float64))
+ argmax_overlaps = overlaps.argmax(axis=1)
+ #print('AAA', argmax_overlaps.shape)
+ max_overlaps = overlaps[np.arange(len(inds_inside)), argmax_overlaps]
+ gt_argmax_overlaps = overlaps.argmax(axis=0)
+ gt_max_overlaps = overlaps[gt_argmax_overlaps,
+ np.arange(overlaps.shape[1])]
+ gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0]
+
+ if not config.TRAIN.RPN_CLOBBER_POSITIVES:
+ # assign bg labels first so that positive labels can clobber them
+ labels[max_overlaps < config.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
+
+ # fg label: for each gt, anchor with highest overlap
+ if config.TRAIN.RPN_FORCE_POSITIVE:
+ labels[gt_argmax_overlaps] = 1
+
+ # fg label: above threshold IoU
+ labels[max_overlaps >= config.TRAIN.RPN_POSITIVE_OVERLAP] = 1
+
+ if config.TRAIN.RPN_CLOBBER_POSITIVES:
+ # assign bg labels last so that negative labels can clobber positives
+ labels[max_overlaps < config.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
+ else:
+ labels[:] = 0
+ fg_inds = np.where(labels == 1)[0]
+ #print('fg count', len(fg_inds))
+
+ # subsample positive labels if we have too many
+ if config.TRAIN.RPN_ENABLE_OHEM == 0:
+ fg_inds = np.where(labels == 1)[0]
+ num_fg = int(config.TRAIN.RPN_FG_FRACTION *
+ config.TRAIN.RPN_BATCH_SIZE)
+ if len(fg_inds) > num_fg:
+ disable_inds = npr.choice(fg_inds,
+ size=(len(fg_inds) - num_fg),
+ replace=False)
+ if DEBUG:
+ disable_inds = fg_inds[:(len(fg_inds) - num_fg)]
+ labels[disable_inds] = -1
+
+ # subsample negative labels if we have too many
+ num_bg = config.TRAIN.RPN_BATCH_SIZE - np.sum(labels == 1)
+ bg_inds = np.where(labels == 0)[0]
+ if len(bg_inds) > num_bg:
+ disable_inds = npr.choice(bg_inds,
+ size=(len(bg_inds) - num_bg),
+ replace=False)
+ if DEBUG:
+ disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
+ labels[disable_inds] = -1
+
+ #fg_inds = np.where(labels == 1)[0]
+ #num_fg = len(fg_inds)
+ #num_bg = num_fg*int(1.0/config.TRAIN.RPN_FG_FRACTION-1)
+
+ #bg_inds = np.where(labels == 0)[0]
+ #if len(bg_inds) > num_bg:
+ # disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False)
+ # if DEBUG:
+ # disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
+ # labels[disable_inds] = -1
+ else:
+ fg_inds = np.where(labels == 1)[0]
+ num_fg = len(fg_inds)
+ bg_inds = np.where(labels == 0)[0]
+ num_bg = len(bg_inds)
+
+ #print('anchor stat', num_fg, num_bg)
+
+ bbox_targets = np.zeros((len(inds_inside), bbox_pred_len),
+ dtype=np.float32)
+ if gt_boxes.size > 0:
+ #print('GT', gt_boxes.shape, gt_boxes[argmax_overlaps, :4].shape)
+ bbox_targets[:, :] = bbox_transform(anchors,
+ gt_boxes[argmax_overlaps, :])
+ #bbox_targets[:,4] = gt_blur
+
+ bbox_weights = np.zeros((len(inds_inside), bbox_pred_len),
+ dtype=np.float32)
+ #bbox_weights[labels == 1, :] = np.array(config.TRAIN.RPN_BBOX_WEIGHTS)
+ bbox_weights[labels == 1, 0:4] = 1.0
+ if bbox_pred_len > 4:
+ bbox_weights[labels == 1, 4:bbox_pred_len] = 0.1
+
+ if landmark:
+ landmark_targets = np.zeros((len(inds_inside), landmark_pred_len),
+ dtype=np.float32)
+ #landmark_weights = np.zeros((len(inds_inside), 10), dtype=np.float32)
+ landmark_weights = np.zeros((len(inds_inside), landmark_pred_len),
+ dtype=np.float32)
+ #landmark_weights[labels == 1, :] = np.array(config.TRAIN.RPN_LANDMARK_WEIGHTS)
+ if landmark_pred_len == 10:
+ landmark_weights[labels == 1, :] = 1.0
+ elif landmark_pred_len == 15:
+ v = [1.0, 1.0, 0.1] * 5
+ assert len(v) == 15
+ landmark_weights[labels == 1, :] = np.array(v)
+ else:
+ assert False
+ #TODO here
+ if gt_landmarks.size > 0:
+ #print('AAA',argmax_overlaps)
+ a_landmarks = gt_landmarks[argmax_overlaps, :, :]
+ landmark_targets[:] = landmark_transform(anchors, a_landmarks)
+ invalid = np.where(a_landmarks[:, 0, 2] < 0.0)[0]
+ #assert len(invalid)==0
+ #landmark_weights[invalid, :] = np.array(config.TRAIN.RPN_INVALID_LANDMARK_WEIGHTS)
+ landmark_weights[invalid, :] = 0.0
+
+ #if DEBUG:
+ # _sums = bbox_targets[labels == 1, :].sum(axis=0)
+ # _squared_sums = (bbox_targets[labels == 1, :] ** 2).sum(axis=0)
+ # _counts = np.sum(labels == 1)
+ # means = _sums / (_counts + 1e-14)
+ # stds = np.sqrt(_squared_sums / _counts - means ** 2)
+ # print 'means', means
+ # print 'stdevs', stds
+ # map up to original set of anchors
+ #print(labels.shape, total_anchors, inds_inside.shape, inds_inside[0], inds_inside[-1])
+ labels = _unmap(labels, total_anchors, inds_inside, fill=-1)
+ bbox_targets = _unmap(bbox_targets, total_anchors, inds_inside, fill=0)
+ bbox_weights = _unmap(bbox_weights, total_anchors, inds_inside, fill=0)
+ if landmark:
+ landmark_targets = _unmap(landmark_targets,
+ total_anchors,
+ inds_inside,
+ fill=0)
+ landmark_weights = _unmap(landmark_weights,
+ total_anchors,
+ inds_inside,
+ fill=0)
+ #print('CC', anchors.shape, len(inds_inside))
+
+ #if DEBUG:
+ # if gt_boxes.size > 0:
+ # print 'rpn: max max_overlaps', np.max(max_overlaps)
+ # print 'rpn: num_positives', np.sum(labels == 1)
+ # print 'rpn: num_negatives', np.sum(labels == 0)
+ # _fg_sum = np.sum(labels == 1)
+ # _bg_sum = np.sum(labels == 0)
+ # _count = 1
+ # print 'rpn: num_positive avg', _fg_sum / _count
+ # print 'rpn: num_negative avg', _bg_sum / _count
+
+ # reshape
+ label_list = list()
+ bbox_target_list = list()
+ bbox_weight_list = list()
+ if landmark:
+ landmark_target_list = list()
+ landmark_weight_list = list()
+ anchors_num_range = [0] + anchors_num_list
+ label = {}
+ for i in range(len(feat_strides)):
+ stride = feat_strides[i]
+ feat_height, feat_width = feat_infos[i]
+ A = A_list[i]
+ _label = labels[sum(anchors_num_range[:i +
+ 1]):sum(anchors_num_range[:i +
+ 1]) +
+ anchors_num_range[i + 1]]
+ if select_stride > 0 and stride != select_stride:
+ #print('set', stride, select_stride)
+ _label[:] = -1
+ #print('_label', _label.shape, select_stride)
+ #_fg_inds = np.where(_label == 1)[0]
+ #n_fg = len(_fg_inds)
+ #STAT[0]+=1
+ #STAT[stride]+=n_fg
+ #if STAT[0]%100==0:
+ # print('rpn_stat', STAT, file=sys.stderr)
+ bbox_target = bbox_targets[sum(anchors_num_range[:i + 1]
+ ):sum(anchors_num_range[:i + 1]) +
+ anchors_num_range[i + 1]]
+ bbox_weight = bbox_weights[sum(anchors_num_range[:i + 1]
+ ):sum(anchors_num_range[:i + 1]) +
+ anchors_num_range[i + 1]]
+ if landmark:
+ landmark_target = landmark_targets[
+ sum(anchors_num_range[:i + 1]):sum(anchors_num_range[:i + 1]) +
+ anchors_num_range[i + 1]]
+ landmark_weight = landmark_weights[
+ sum(anchors_num_range[:i + 1]):sum(anchors_num_range[:i + 1]) +
+ anchors_num_range[i + 1]]
+
+ _label = _label.reshape(
+ (1, feat_height, feat_width, A)).transpose(0, 3, 1, 2)
+ _label = _label.reshape((1, A * feat_height * feat_width))
+ bbox_target = bbox_target.reshape(
+ (1, feat_height * feat_width,
+ A * bbox_pred_len)).transpose(0, 2, 1)
+ bbox_weight = bbox_weight.reshape(
+ (1, feat_height * feat_width, A * bbox_pred_len)).transpose(
+ (0, 2, 1))
+ label['%s_label_stride%d' % (prefix, stride)] = _label
+ label['%s_bbox_target_stride%d' % (prefix, stride)] = bbox_target
+ label['%s_bbox_weight_stride%d' % (prefix, stride)] = bbox_weight
+ if landmark:
+ landmark_target = landmark_target.reshape(
+ (1, feat_height * feat_width,
+ A * landmark_pred_len)).transpose(0, 2, 1)
+ landmark_weight = landmark_weight.reshape(
+ (1, feat_height * feat_width,
+ A * landmark_pred_len)).transpose((0, 2, 1))
+ label['%s_landmark_target_stride%d' %
+ (prefix, stride)] = landmark_target
+ label['%s_landmark_weight_stride%d' %
+ (prefix, stride)] = landmark_weight
+ #print('in_rpn', stride,_label.shape, bbox_target.shape, bbox_weight.shape, file=sys.stderr)
+ label_list.append(_label)
+ #print('DD', _label.shape)
+ bbox_target_list.append(bbox_target)
+ bbox_weight_list.append(bbox_weight)
+ if landmark:
+ landmark_target_list.append(landmark_target)
+ landmark_weight_list.append(landmark_weight)
+
+ label_concat = np.concatenate(label_list, axis=1)
+ bbox_target_concat = np.concatenate(bbox_target_list, axis=2)
+ bbox_weight_concat = np.concatenate(bbox_weight_list, axis=2)
+ #fg_inds = np.where(label_concat[0] == 1)[0]
+ #print('fg_inds_in_rpn2', fg_inds, file=sys.stderr)
+
+ label.update({
+ '%s_label' % prefix: label_concat,
+ '%s_bbox_target' % prefix: bbox_target_concat,
+ '%s_bbox_weight' % prefix: bbox_weight_concat
+ })
+ if landmark:
+ landmark_target_concat = np.concatenate(landmark_target_list, axis=2)
+ landmark_weight_concat = np.concatenate(landmark_weight_list, axis=2)
+ label['%s_landmark_target' % prefix] = landmark_target_concat
+ label['%s_landmark_weight' % prefix] = landmark_weight_concat
+ return label
+
+
+class AA:
+ def __init__(self, feat_shape):
+ self.feat_shape = feat_shape
+ feat_strides = config.RPN_FEAT_STRIDE
+ anchors_list = []
+ anchors_num_list = []
+ inds_inside_list = []
+ feat_infos = []
+ A_list = []
+ DEBUG = False
+ for i in range(len(feat_strides)):
+ stride = feat_strides[i]
+ sstride = str(stride)
+ base_size = config.RPN_ANCHOR_CFG[sstride]['BASE_SIZE']
+ allowed_border = config.RPN_ANCHOR_CFG[sstride]['ALLOWED_BORDER']
+ ratios = config.RPN_ANCHOR_CFG[sstride]['RATIOS']
+ scales = config.RPN_ANCHOR_CFG[sstride]['SCALES']
+ base_anchors = generate_anchors(base_size=base_size,
+ ratios=list(ratios),
+ scales=np.array(scales,
+ dtype=np.float32),
+ stride=stride,
+ dense_anchor=config.DENSE_ANCHOR)
+ num_anchors = base_anchors.shape[0]
+ feat_height, feat_width = feat_shape[i][-2:]
+ feat_stride = feat_strides[i]
+ feat_infos.append([feat_height, feat_width])
+
+ A = num_anchors
+ A_list.append(A)
+ K = feat_height * feat_width
+
+ all_anchors = anchors_plane(feat_height, feat_width, feat_stride,
+ base_anchors)
+ all_anchors = all_anchors.reshape((K * A, 4))
+ #print('anchor0', stride, all_anchors[0])
+
+ total_anchors = int(K * A)
+ anchors_num_list.append(total_anchors)
+ # only keep anchors inside the image
+ inds_inside = np.where(
+ (all_anchors[:, 0] >= -allowed_border)
+ & (all_anchors[:, 1] >= -allowed_border)
+ & (all_anchors[:, 2] < config.SCALES[0][1] + allowed_border) &
+ (all_anchors[:, 3] < config.SCALES[0][1] + allowed_border))[0]
+ if DEBUG:
+ print('total_anchors', total_anchors)
+ print('inds_inside', len(inds_inside))
+
+ # keep only inside anchors
+ anchors = all_anchors[inds_inside, :]
+ #print('AA', anchors.shape, len(inds_inside))
+
+ anchors_list.append(anchors)
+ inds_inside_list.append(inds_inside)
+ anchors = np.concatenate(anchors_list)
+ for i in range(1, len(inds_inside_list)):
+ inds_inside_list[i] = inds_inside_list[i] + sum(
+ anchors_num_list[:i])
+ inds_inside = np.concatenate(inds_inside_list)
+ #self.anchors_list = anchors_list
+ #self.inds_inside_list = inds_inside_list
+ self.anchors = anchors
+ self.inds_inside = inds_inside
+ self.anchors_num_list = anchors_num_list
+ self.feat_infos = feat_infos
+ self.A_list = A_list
+ self._times = [0.0, 0.0, 0.0, 0.0]
+
+ @staticmethod
+ def _unmap(data, count, inds, fill=0):
+ """ unmap a subset inds of data into original data of size count """
+ if len(data.shape) == 1:
+ ret = np.empty((count, ), dtype=np.float32)
+ ret.fill(fill)
+ ret[inds] = data
+ else:
+ ret = np.empty((count, ) + data.shape[1:], dtype=np.float32)
+ ret.fill(fill)
+ ret[inds, :] = data
+ return ret
+
+ def assign_anchor_fpn(self,
+ gt_label,
+ im_info,
+ landmark=False,
+ prefix='face',
+ select_stride=0):
+
+ #ta = datetime.datetime.now()
+ im_info = im_info[0]
+ gt_boxes = gt_label['gt_boxes']
+ # clean up boxes
+ nonneg = np.where(gt_boxes[:, 4] != -1)[0]
+ gt_boxes = gt_boxes[nonneg]
+ if config.USE_BLUR:
+ gt_blur = gt_label['gt_blur']
+ gt_blur = gt_blur[nonneg]
+ if landmark:
+ gt_landmarks = gt_label['gt_landmarks']
+ gt_landmarks = gt_landmarks[nonneg]
+ assert gt_boxes.shape[0] == gt_landmarks.shape[0]
+ #scales = np.array(scales, dtype=np.float32)
+ feat_strides = config.RPN_FEAT_STRIDE
+ bbox_pred_len = 4
+ landmark_pred_len = 10
+ if config.USE_BLUR:
+ gt_boxes[:, 4] = gt_blur
+ bbox_pred_len = 5
+ if config.USE_OCCLUSION:
+ landmark_pred_len = 15
+
+ #anchors_list = self.anchors_list
+ #inds_inside_list = self.inds_inside_list
+ anchors = self.anchors
+ inds_inside = self.inds_inside
+ anchors_num_list = self.anchors_num_list
+ feat_infos = self.feat_infos
+ A_list = self.A_list
+
+ total_anchors = sum(anchors_num_list)
+ #print('total_anchors', anchors.shape[0], len(inds_inside), file=sys.stderr)
+
+ # label: 1 is positive, 0 is negative, -1 is don't care
+ labels = np.empty((len(inds_inside), ), dtype=np.float32)
+ labels.fill(-1)
+ #print('BB', anchors.shape, len(inds_inside))
+ #print('gt_boxes', gt_boxes.shape, file=sys.stderr)
+ #tb = datetime.datetime.now()
+ #self._times[0] += (tb-ta).total_seconds()
+ #ta = datetime.datetime.now()
+
+ if gt_boxes.size > 0:
+ # overlap between the anchors and the gt boxes
+ # overlaps (ex, gt)
+ overlaps = bbox_overlaps(anchors.astype(np.float64),
+ gt_boxes.astype(np.float64))
+ argmax_overlaps = overlaps.argmax(axis=1)
+ #print('AAA', argmax_overlaps.shape)
+ max_overlaps = overlaps[np.arange(len(inds_inside)),
+ argmax_overlaps]
+ gt_argmax_overlaps = overlaps.argmax(axis=0)
+ gt_max_overlaps = overlaps[gt_argmax_overlaps,
+ np.arange(overlaps.shape[1])]
+ gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0]
+
+ if not config.TRAIN.RPN_CLOBBER_POSITIVES:
+ # assign bg labels first so that positive labels can clobber them
+ labels[max_overlaps < config.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
+
+ # fg label: for each gt, anchor with highest overlap
+ if config.TRAIN.RPN_FORCE_POSITIVE:
+ labels[gt_argmax_overlaps] = 1
+
+ # fg label: above threshold IoU
+ labels[max_overlaps >= config.TRAIN.RPN_POSITIVE_OVERLAP] = 1
+
+ if config.TRAIN.RPN_CLOBBER_POSITIVES:
+ # assign bg labels last so that negative labels can clobber positives
+ labels[max_overlaps < config.TRAIN.RPN_NEGATIVE_OVERLAP] = 0
+ else:
+ labels[:] = 0
+ fg_inds = np.where(labels == 1)[0]
+ #print('fg count', len(fg_inds))
+
+ # subsample positive labels if we have too many
+ if config.TRAIN.RPN_ENABLE_OHEM == 0:
+ fg_inds = np.where(labels == 1)[0]
+ num_fg = int(config.TRAIN.RPN_FG_FRACTION *
+ config.TRAIN.RPN_BATCH_SIZE)
+ if len(fg_inds) > num_fg:
+ disable_inds = npr.choice(fg_inds,
+ size=(len(fg_inds) - num_fg),
+ replace=False)
+ if DEBUG:
+ disable_inds = fg_inds[:(len(fg_inds) - num_fg)]
+ labels[disable_inds] = -1
+
+ # subsample negative labels if we have too many
+ num_bg = config.TRAIN.RPN_BATCH_SIZE - np.sum(labels == 1)
+ bg_inds = np.where(labels == 0)[0]
+ if len(bg_inds) > num_bg:
+ disable_inds = npr.choice(bg_inds,
+ size=(len(bg_inds) - num_bg),
+ replace=False)
+ if DEBUG:
+ disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
+ labels[disable_inds] = -1
+
+ #fg_inds = np.where(labels == 1)[0]
+ #num_fg = len(fg_inds)
+ #num_bg = num_fg*int(1.0/config.TRAIN.RPN_FG_FRACTION-1)
+
+ #bg_inds = np.where(labels == 0)[0]
+ #if len(bg_inds) > num_bg:
+ # disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False)
+ # if DEBUG:
+ # disable_inds = bg_inds[:(len(bg_inds) - num_bg)]
+ # labels[disable_inds] = -1
+ else:
+ fg_inds = np.where(labels == 1)[0]
+ num_fg = len(fg_inds)
+ bg_inds = np.where(labels == 0)[0]
+ num_bg = len(bg_inds)
+
+ #print('anchor stat', num_fg, num_bg)
+
+ bbox_targets = np.zeros((len(inds_inside), bbox_pred_len),
+ dtype=np.float32)
+ if gt_boxes.size > 0:
+ #print('GT', gt_boxes.shape, gt_boxes[argmax_overlaps, :4].shape)
+ bbox_targets[:, :] = bbox_transform(anchors,
+ gt_boxes[argmax_overlaps, :])
+ #bbox_targets[:,4] = gt_blur
+ #tb = datetime.datetime.now()
+ #self._times[1] += (tb-ta).total_seconds()
+ #ta = datetime.datetime.now()
+
+ bbox_weights = np.zeros((len(inds_inside), bbox_pred_len),
+ dtype=np.float32)
+ #bbox_weights[labels == 1, :] = np.array(config.TRAIN.RPN_BBOX_WEIGHTS)
+ bbox_weights[labels == 1, 0:4] = 1.0
+ if bbox_pred_len > 4:
+ bbox_weights[labels == 1, 4:bbox_pred_len] = 0.1
+
+ if landmark:
+ landmark_targets = np.zeros((len(inds_inside), landmark_pred_len),
+ dtype=np.float32)
+ #landmark_weights = np.zeros((len(inds_inside), 10), dtype=np.float32)
+ landmark_weights = np.zeros((len(inds_inside), landmark_pred_len),
+ dtype=np.float32)
+ #landmark_weights[labels == 1, :] = np.array(config.TRAIN.RPN_LANDMARK_WEIGHTS)
+ if landmark_pred_len == 10:
+ landmark_weights[labels == 1, :] = 1.0
+ elif landmark_pred_len == 15:
+ v = [1.0, 1.0, 0.1] * 5
+ assert len(v) == 15
+ landmark_weights[labels == 1, :] = np.array(v)
+ else:
+ assert False
+ #TODO here
+ if gt_landmarks.size > 0:
+ #print('AAA',argmax_overlaps)
+ a_landmarks = gt_landmarks[argmax_overlaps, :, :]
+ landmark_targets[:] = landmark_transform(anchors, a_landmarks)
+ invalid = np.where(a_landmarks[:, 0, 2] < 0.0)[0]
+ #assert len(invalid)==0
+ #landmark_weights[invalid, :] = np.array(config.TRAIN.RPN_INVALID_LANDMARK_WEIGHTS)
+ landmark_weights[invalid, :] = 0.0
+ #tb = datetime.datetime.now()
+ #self._times[2] += (tb-ta).total_seconds()
+ #ta = datetime.datetime.now()
+
+ #if DEBUG:
+ # _sums = bbox_targets[labels == 1, :].sum(axis=0)
+ # _squared_sums = (bbox_targets[labels == 1, :] ** 2).sum(axis=0)
+ # _counts = np.sum(labels == 1)
+ # means = _sums / (_counts + 1e-14)
+ # stds = np.sqrt(_squared_sums / _counts - means ** 2)
+ # print 'means', means
+ # print 'stdevs', stds
+ # map up to original set of anchors
+ #print(labels.shape, total_anchors, inds_inside.shape, inds_inside[0], inds_inside[-1])
+ labels = AA._unmap(labels, total_anchors, inds_inside, fill=-1)
+ bbox_targets = AA._unmap(bbox_targets,
+ total_anchors,
+ inds_inside,
+ fill=0)
+ bbox_weights = AA._unmap(bbox_weights,
+ total_anchors,
+ inds_inside,
+ fill=0)
+ if landmark:
+ landmark_targets = AA._unmap(landmark_targets,
+ total_anchors,
+ inds_inside,
+ fill=0)
+ landmark_weights = AA._unmap(landmark_weights,
+ total_anchors,
+ inds_inside,
+ fill=0)
+ #print('CC', anchors.shape, len(inds_inside))
+
+ bbox_targets[:,
+ 0::4] = bbox_targets[:, 0::4] / config.TRAIN.BBOX_STDS[0]
+ bbox_targets[:,
+ 1::4] = bbox_targets[:, 1::4] / config.TRAIN.BBOX_STDS[1]
+ bbox_targets[:,
+ 2::4] = bbox_targets[:, 2::4] / config.TRAIN.BBOX_STDS[2]
+ bbox_targets[:,
+ 3::4] = bbox_targets[:, 3::4] / config.TRAIN.BBOX_STDS[3]
+ if landmark:
+ landmark_targets /= config.TRAIN.LANDMARK_STD
+ #print('applied STD')
+
+ #if DEBUG:
+ # if gt_boxes.size > 0:
+ # print 'rpn: max max_overlaps', np.max(max_overlaps)
+ # print 'rpn: num_positives', np.sum(labels == 1)
+ # print 'rpn: num_negatives', np.sum(labels == 0)
+ # _fg_sum = np.sum(labels == 1)
+ # _bg_sum = np.sum(labels == 0)
+ # _count = 1
+ # print 'rpn: num_positive avg', _fg_sum / _count
+ # print 'rpn: num_negative avg', _bg_sum / _count
+
+ # reshape
+ label_list = list()
+ bbox_target_list = list()
+ bbox_weight_list = list()
+ if landmark:
+ landmark_target_list = list()
+ landmark_weight_list = list()
+ anchors_num_range = [0] + anchors_num_list
+ label = {}
+ for i in range(len(feat_strides)):
+ stride = feat_strides[i]
+ feat_height, feat_width = feat_infos[i]
+ A = A_list[i]
+ _label = labels[sum(anchors_num_range[:i + 1]
+ ):sum(anchors_num_range[:i + 1]) +
+ anchors_num_range[i + 1]]
+ if select_stride > 0 and stride != select_stride:
+ #print('set', stride, select_stride)
+ _label[:] = -1
+ #print('_label', _label.shape, select_stride)
+ #_fg_inds = np.where(_label == 1)[0]
+ #n_fg = len(_fg_inds)
+ #STAT[0]+=1
+ #STAT[stride]+=n_fg
+ #if STAT[0]%100==0:
+ # print('rpn_stat', STAT, file=sys.stderr)
+ bbox_target = bbox_targets[sum(anchors_num_range[:i + 1]
+ ):sum(anchors_num_range[:i + 1]) +
+ anchors_num_range[i + 1]]
+ bbox_weight = bbox_weights[sum(anchors_num_range[:i + 1]
+ ):sum(anchors_num_range[:i + 1]) +
+ anchors_num_range[i + 1]]
+ if landmark:
+ landmark_target = landmark_targets[
+ sum(anchors_num_range[:i +
+ 1]):sum(anchors_num_range[:i + 1]) +
+ anchors_num_range[i + 1]]
+ landmark_weight = landmark_weights[
+ sum(anchors_num_range[:i +
+ 1]):sum(anchors_num_range[:i + 1]) +
+ anchors_num_range[i + 1]]
+
+ _label = _label.reshape(
+ (1, feat_height, feat_width, A)).transpose(0, 3, 1, 2)
+ _label = _label.reshape((1, A * feat_height * feat_width))
+ bbox_target = bbox_target.reshape(
+ (1, feat_height * feat_width,
+ A * bbox_pred_len)).transpose(0, 2, 1)
+ bbox_weight = bbox_weight.reshape(
+ (1, feat_height * feat_width, A * bbox_pred_len)).transpose(
+ (0, 2, 1))
+ label['%s_label_stride%d' % (prefix, stride)] = _label
+ label['%s_bbox_target_stride%d' % (prefix, stride)] = bbox_target
+ label['%s_bbox_weight_stride%d' % (prefix, stride)] = bbox_weight
+ if landmark:
+ landmark_target = landmark_target.reshape(
+ (1, feat_height * feat_width,
+ A * landmark_pred_len)).transpose(0, 2, 1)
+ landmark_weight = landmark_weight.reshape(
+ (1, feat_height * feat_width,
+ A * landmark_pred_len)).transpose((0, 2, 1))
+ label['%s_landmark_target_stride%d' %
+ (prefix, stride)] = landmark_target
+ label['%s_landmark_weight_stride%d' %
+ (prefix, stride)] = landmark_weight
+ #print('in_rpn', stride,_label.shape, bbox_target.shape, bbox_weight.shape, file=sys.stderr)
+ label_list.append(_label)
+ #print('DD', _label.shape)
+ bbox_target_list.append(bbox_target)
+ bbox_weight_list.append(bbox_weight)
+ if landmark:
+ landmark_target_list.append(landmark_target)
+ landmark_weight_list.append(landmark_weight)
+
+ label_concat = np.concatenate(label_list, axis=1)
+ bbox_target_concat = np.concatenate(bbox_target_list, axis=2)
+ bbox_weight_concat = np.concatenate(bbox_weight_list, axis=2)
+ #fg_inds = np.where(label_concat[0] == 1)[0]
+ #print('fg_inds_in_rpn2', fg_inds, file=sys.stderr)
+
+ label.update({
+ '%s_label' % prefix: label_concat,
+ '%s_bbox_target' % prefix: bbox_target_concat,
+ '%s_bbox_weight' % prefix: bbox_weight_concat
+ })
+ if landmark:
+ landmark_target_concat = np.concatenate(landmark_target_list,
+ axis=2)
+ landmark_weight_concat = np.concatenate(landmark_weight_list,
+ axis=2)
+ label['%s_landmark_target' % prefix] = landmark_target_concat
+ label['%s_landmark_weight' % prefix] = landmark_weight_concat
+ #tb = datetime.datetime.now()
+ #self._times[3] += (tb-ta).total_seconds()
+ #ta = datetime.datetime.now()
+ #print(self._times)
+ return label
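For reference outside the patch context, the IoU-threshold labeling rule implemented in `assign_anchor_fpn` above can be sketched standalone. The 0.5 / 0.3 thresholds below are illustrative stand-ins for `config.TRAIN.RPN_POSITIVE_OVERLAP` / `RPN_NEGATIVE_OVERLAP`, and `bbox_overlaps` is re-implemented in plain NumPy (the repo uses a compiled version), so this is a sketch rather than the exact library code:

```python
import numpy as np

# Illustrative thresholds; the diff reads them from config.TRAIN.*
POS_OVERLAP, NEG_OVERLAP = 0.5, 0.3

def bbox_overlaps(boxes, gt):
    # IoU matrix of shape (N, K), using the same +1 pixel convention as the diff
    ious = np.zeros((boxes.shape[0], gt.shape[0]), dtype=np.float64)
    for k in range(gt.shape[0]):
        gx1, gy1, gx2, gy2 = gt[k, :4]
        garea = (gx2 - gx1 + 1) * (gy2 - gy1 + 1)
        for n in range(boxes.shape[0]):
            iw = min(boxes[n, 2], gx2) - max(boxes[n, 0], gx1) + 1
            ih = min(boxes[n, 3], gy2) - max(boxes[n, 1], gy1) + 1
            if iw > 0 and ih > 0:
                barea = (boxes[n, 2] - boxes[n, 0] + 1) * (boxes[n, 3] - boxes[n, 1] + 1)
                ious[n, k] = iw * ih / (barea + garea - iw * ih)
    return ious

def label_anchors(anchors, gt_boxes):
    # 1 = positive, 0 = negative, -1 = don't care, mirroring the diff's ordering:
    # background first, then best-anchor-per-gt, then above-threshold anchors
    overlaps = bbox_overlaps(anchors, gt_boxes)
    max_overlaps = overlaps.max(axis=1)
    labels = np.full(anchors.shape[0], -1, dtype=np.float32)
    labels[max_overlaps < NEG_OVERLAP] = 0
    labels[overlaps.argmax(axis=0)] = 1   # force-positive: best anchor for each gt
    labels[max_overlaps >= POS_OVERLAP] = 1
    return labels
```

The diff additionally subsamples positives/negatives to `RPN_BATCH_SIZE` and unmaps labels back to the full anchor grid; this sketch covers only the threshold logic.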
diff --git a/insightface/detection/retinaface/rcnn/logger.py b/insightface/detection/retinaface/rcnn/logger.py
new file mode 100644
index 0000000000000000000000000000000000000000..2806e1add180b4530956387e112ed07a566ce869
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/logger.py
@@ -0,0 +1,6 @@
+import logging
+
+# set up logger
+logging.basicConfig()
+logger = logging.getLogger()
+logger.setLevel(logging.INFO)
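The `_unmap` helper that appears twice in the hunk above (once as a nested function, once as `AA._unmap`) scatters per-inside-anchor values back into the full anchor array, filling the remaining slots. A minimal standalone sketch (the function name and example values are illustrative):

```python
import numpy as np

def unmap(data, count, inds, fill=0):
    # Scatter `data`, indexed by `inds`, back into an array of length `count`;
    # slots with no corresponding entry receive `fill` (e.g. -1 = "don't care").
    if data.ndim == 1:
        ret = np.full((count,), fill, dtype=np.float32)
        ret[inds] = data
    else:
        ret = np.full((count,) + data.shape[1:], fill, dtype=np.float32)
        ret[inds, :] = data
    return ret

# Two labeled anchors (indices 1 and 3) mapped back into a 5-anchor grid
full = unmap(np.array([1.0, 0.0]), 5, np.array([1, 3]), fill=-1)
# full is [-1, 1, -1, 0, -1]
```

In the diff this runs once per target tensor (labels, bbox targets/weights, landmark targets/weights) before the per-stride reshape.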
diff --git a/insightface/detection/retinaface/rcnn/processing/__init__.py b/insightface/detection/retinaface/rcnn/processing/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/insightface/detection/retinaface/rcnn/processing/assign_levels.py b/insightface/detection/retinaface/rcnn/processing/assign_levels.py
new file mode 100755
index 0000000000000000000000000000000000000000..012d73d2134cc50aee3aba73641c520084538621
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/processing/assign_levels.py
@@ -0,0 +1,36 @@
+from rcnn.config import config
+import numpy as np
+
+
+def compute_assign_targets(rois, threshold):
+ rois_area = np.sqrt(
+ (rois[:, 2] - rois[:, 0] + 1) * (rois[:, 3] - rois[:, 1] + 1))
+ num_rois = np.shape(rois)[0]
+ assign_levels = np.zeros(num_rois, dtype=np.uint8)
+ for i, stride in enumerate(config.RCNN_FEAT_STRIDE):
+ thd = threshold[i]
+ idx = np.logical_and(thd[1] <= rois_area, rois_area < thd[0])
+ assign_levels[idx] = stride
+
+ assert 0 not in assign_levels, "All rois should be assigned to a specific level."
+ return assign_levels
+
+
+def add_assign_targets(roidb):
+ """
+ given roidb, add ['assign_levels']
+ :param roidb: roidb to be processed. must have gone through imdb.prepare_roidb
+ """
+ print('add assign targets')
+ assert len(roidb) > 0
+ assert 'boxes' in roidb[0]
+
+ area_threshold = [[np.inf, 448], [448, 224], [224, 112], [112, 0]]
+
+ assert len(config.RCNN_FEAT_STRIDE) == len(area_threshold)
+
+ num_images = len(roidb)
+ for im_i in range(num_images):
+ rois = roidb[im_i]['boxes']
+ roidb[im_i]['assign_levels'] = compute_assign_targets(
+ rois, area_threshold)
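The sqrt-area rule in `compute_assign_targets` above maps each ROI to one FPN stride. A standalone sketch follows, with the strides and thresholds hard-coded; in the repo they come from `config.RCNN_FEAT_STRIDE` and the `area_threshold` table, and the `[32, 16, 8, 4]` ordering here is an assumption chosen to match the largest-to-smallest threshold bands:

```python
import numpy as np

RCNN_FEAT_STRIDE = [32, 16, 8, 4]                              # assumed ordering
AREA_THRESHOLD = [[np.inf, 448], [448, 224], [224, 112], [112, 0]]

def compute_assign_targets(rois, threshold=AREA_THRESHOLD):
    # sqrt of box area, with the +1 pixel convention used in the diff
    rois_area = np.sqrt((rois[:, 2] - rois[:, 0] + 1) *
                        (rois[:, 3] - rois[:, 1] + 1))
    assign_levels = np.zeros(rois.shape[0], dtype=np.uint8)
    for stride, thd in zip(RCNN_FEAT_STRIDE, threshold):
        # a roi lands on this stride when thd[1] <= sqrt(area) < thd[0]
        idx = np.logical_and(thd[1] <= rois_area, rois_area < thd[0])
        assign_levels[idx] = stride
    return assign_levels

rois = np.array([[0, 0, 500, 500],    # sqrt-area ~501 -> coarsest stride
                 [0, 0, 100, 100]],   # sqrt-area ~101 -> finest stride
                dtype=np.float32)
levels = compute_assign_targets(rois)
```

Large boxes go to coarse (large-stride) levels and small boxes to fine levels, which is why the assert in the diff insists that no ROI is left with level 0.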
diff --git a/insightface/detection/retinaface/rcnn/processing/bbox_regression.py b/insightface/detection/retinaface/rcnn/processing/bbox_regression.py
new file mode 100644
index 0000000000000000000000000000000000000000..0eaf917a6f3a2282c3fabb929b270441329b5198
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/processing/bbox_regression.py
@@ -0,0 +1,263 @@
+"""
+Functions for generating bounding box regression targets
+"""
+
+from ..pycocotools.mask import encode
+import numpy as np
+
+from ..logger import logger
+from .bbox_transform import bbox_overlaps, bbox_transform
+from rcnn.config import config
+import math
+import cv2
+import PIL.Image as Image
+import threading
+try:
+    import Queue
+except ImportError:  # Python 3
+    import queue as Queue
+
+
+def compute_bbox_regression_targets(rois, overlaps, labels):
+ """
+ given rois, overlaps, gt labels, compute bounding box regression targets
+ :param rois: roidb[i]['boxes'] k * 4
+ :param overlaps: roidb[i]['max_overlaps'] k * 1
+ :param labels: roidb[i]['max_classes'] k * 1
+ :return: targets[i][class, dx, dy, dw, dh] k * 5
+ """
+ # Ensure ROIs are floats
+ rois = rois.astype(np.float64, copy=False)
+
+ # Sanity check
+ if len(rois) != len(overlaps):
+ logger.warning('bbox regression: len(rois) != len(overlaps)')
+
+ # Indices of ground-truth ROIs
+ gt_inds = np.where(overlaps == 1)[0]
+ if len(gt_inds) == 0:
+ logger.warning('bbox regression: len(gt_inds) == 0')
+
+ # Indices of examples for which we try to make predictions
+ ex_inds = np.where(overlaps >= config.TRAIN.BBOX_REGRESSION_THRESH)[0]
+
+ # Get IoU overlap between each ex ROI and gt ROI
+ ex_gt_overlaps = bbox_overlaps(rois[ex_inds, :], rois[gt_inds, :])
+
+ # Find which gt ROI each ex ROI has max overlap with:
+ # this will be the ex ROI's gt target
+ gt_assignment = ex_gt_overlaps.argmax(axis=1)
+ gt_rois = rois[gt_inds[gt_assignment], :]
+ ex_rois = rois[ex_inds, :]
+
+ targets = np.zeros((rois.shape[0], 5), dtype=np.float32)
+ targets[ex_inds, 0] = labels[ex_inds]
+ targets[ex_inds, 1:] = bbox_transform(ex_rois, gt_rois)
+ return targets
+
+
+def add_bbox_regression_targets(roidb):
+ """
+ given roidb, add ['bbox_targets'] and normalize bounding box regression targets
+ :param roidb: roidb to be processed. must have gone through imdb.prepare_roidb
+ :return: means, std variances of targets
+ """
+ logger.info('bbox regression: add bounding box regression targets')
+ assert len(roidb) > 0
+ assert 'max_classes' in roidb[0]
+
+ num_images = len(roidb)
+ num_classes = roidb[0]['gt_overlaps'].shape[1]
+ for im_i in range(num_images):
+ rois = roidb[im_i]['boxes']
+ max_overlaps = roidb[im_i]['max_overlaps']
+ max_classes = roidb[im_i]['max_classes']
+ roidb[im_i]['bbox_targets'] = compute_bbox_regression_targets(
+ rois, max_overlaps, max_classes)
+
+ if config.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED:
+ # use fixed / precomputed means and stds instead of empirical values
+ means = np.tile(np.array(config.TRAIN.BBOX_MEANS), (num_classes, 1))
+ stds = np.tile(np.array(config.TRAIN.BBOX_STDS), (num_classes, 1))
+ else:
+ # compute mean, std values
+ class_counts = np.zeros((num_classes, 1)) + 1e-14
+ sums = np.zeros((num_classes, 4))
+ squared_sums = np.zeros((num_classes, 4))
+ for im_i in range(num_images):
+ targets = roidb[im_i]['bbox_targets']
+ for cls in range(1, num_classes):
+ cls_indexes = np.where(targets[:, 0] == cls)[0]
+ if cls_indexes.size > 0:
+ class_counts[cls] += cls_indexes.size
+ sums[cls, :] += targets[cls_indexes, 1:].sum(axis=0)
+ squared_sums[cls, :] += (targets[cls_indexes,
+ 1:]**2).sum(axis=0)
+
+ means = sums / class_counts
+ # var(x) = E(x^2) - E(x)^2
+ stds = np.sqrt(squared_sums / class_counts - means**2)
+
+ # normalized targets
+ for im_i in range(num_images):
+ targets = roidb[im_i]['bbox_targets']
+ for cls in range(1, num_classes):
+ cls_indexes = np.where(targets[:, 0] == cls)[0]
+ roidb[im_i]['bbox_targets'][cls_indexes, 1:] -= means[cls, :]
+ roidb[im_i]['bbox_targets'][cls_indexes, 1:] /= stds[cls, :]
+
+ return means.ravel(), stds.ravel()
+
+
+def expand_bbox_regression_targets(bbox_targets_data, num_classes):
+ """
+ expand from 5 to 4 * num_classes; only the right class has non-zero bbox regression targets
+ :param bbox_targets_data: [k * 5]
+ :param num_classes: number of classes
+ :return: bbox targets processed [k, 4 * num_classes]
+ bbox_weights ! only foreground boxes have bbox regression computation!
+ """
+ classes = bbox_targets_data[:, 0]
+ bbox_targets = np.zeros((classes.size, 4 * num_classes), dtype=np.float32)
+ bbox_weights = np.zeros(bbox_targets.shape, dtype=np.float32)
+ indexes = np.where(classes > 0)[0]
+ for index in indexes:
+ cls = classes[index]
+ start = int(4 * cls)
+ end = start + 4
+ bbox_targets[index, start:end] = bbox_targets_data[index, 1:]
+ bbox_weights[index, start:end] = config.TRAIN.BBOX_WEIGHTS
+ return bbox_targets, bbox_weights
+
+
+def compute_mask_and_label(ex_rois, ex_labels, seg, flipped):
+ # assert os.path.exists(seg_gt), 'Path does not exist: {}'.format(seg_gt)
+ # im = Image.open(seg_gt)
+ # pixel = list(im.getdata())
+ # pixel = np.array(pixel).reshape([im.size[1], im.size[0]])
+ im = Image.open(seg)
+ pixel = list(im.getdata())
+ ins_seg = np.array(pixel).reshape([im.size[1], im.size[0]])
+ if flipped:
+ ins_seg = ins_seg[:, ::-1]
+ rois = ex_rois
+ n_rois = ex_rois.shape[0]
+ label = ex_labels
+ class_id = config.CLASS_ID
+ mask_target = np.zeros((n_rois, 28, 28), dtype=np.int8)
+ mask_label = np.zeros((n_rois), dtype=np.int8)
+ for n in range(n_rois):
+ target = ins_seg[int(rois[n, 1]):int(rois[n, 3]),
+ int(rois[n, 0]):int(rois[n, 2])]
+ ids = np.unique(target)
+ ins_id = 0
+ max_count = 0
+ for id in ids:
+ if math.floor(id / 1000) == class_id[int(label[int(n)])]:
+ px = np.where(ins_seg == int(id))
+ x_min = np.min(px[1])
+ y_min = np.min(px[0])
+ x_max = np.max(px[1])
+ y_max = np.max(px[0])
+ x1 = max(rois[n, 0], x_min)
+ y1 = max(rois[n, 1], y_min)
+ x2 = min(rois[n, 2], x_max)
+ y2 = min(rois[n, 3], y_max)
+                    # intersection area of the roi and the instance bbox,
+                    # then IoU = inter / (roi area + instance area - inter)
+                    inter = (x2 - x1) * (y2 - y1)
+                    iou = inter / ((rois[n, 2] - rois[n, 0]) *
+                                   (rois[n, 3] - rois[n, 1]) + (x_max - x_min) *
+                                   (y_max - y_min) - inter)
+ if iou > max_count:
+ ins_id = id
+ max_count = iou
+
+ if max_count == 0:
+ continue
+ # print max_count
+ mask = np.zeros(target.shape)
+ idx = np.where(target == ins_id)
+ mask[idx] = 1
+ mask = cv2.resize(mask, (28, 28), interpolation=cv2.INTER_NEAREST)
+
+ mask_target[n] = mask
+ mask_label[n] = label[int(n)]
+ return mask_target, mask_label
+
+
+def compute_bbox_mask_targets_and_label(rois, overlaps, labels, seg, flipped):
+    """
+    given rois, overlaps, gt labels and an instance segmentation map, compute mask targets
+    :param rois: roidb[i]['boxes'] [k, 4]
+    :param overlaps: roidb[i]['max_overlaps'] [k]
+    :param labels: roidb[i]['max_classes'] [k]
+    :return: mask_targets [n, 28, 28], mask_label [n], ex_inds
+    """
+ # Ensure ROIs are floats
+    rois = rois.astype(np.float64, copy=False)
+
+ # Sanity check
+ if len(rois) != len(overlaps):
+        print('bbox regression: this should not happen')
+
+ # Indices of ground-truth ROIs
+ gt_inds = np.where(overlaps == 1)[0]
+ if len(gt_inds) == 0:
+        print('something wrong: zero ground-truth rois')
+ # Indices of examples for which we try to make predictions
+ ex_inds = np.where(overlaps >= config.TRAIN.BBOX_REGRESSION_THRESH)[0]
+
+ # Get IoU overlap between each ex ROI and gt ROI
+ ex_gt_overlaps = bbox_overlaps(rois[ex_inds, :], rois[gt_inds, :])
+
+ # Find which gt ROI each ex ROI has max overlap with:
+ # this will be the ex ROI's gt target
+ gt_assignment = ex_gt_overlaps.argmax(axis=1)
+ gt_rois = rois[gt_inds[gt_assignment], :]
+ ex_rois = rois[ex_inds, :]
+
+ mask_targets, mask_label = compute_mask_and_label(ex_rois, labels[ex_inds],
+ seg, flipped)
+ return mask_targets, mask_label, ex_inds
+
+
+def add_mask_targets(roidb):
+ """
+ given roidb, add ['bbox_targets'] and normalize bounding box regression targets
+ :param roidb: roidb to be processed. must have gone through imdb.prepare_roidb
+ :return: means, std variances of targets
+ """
+    print('add bounding box mask targets')
+ assert len(roidb) > 0
+ assert 'max_classes' in roidb[0]
+
+ num_images = len(roidb)
+
+ # Multi threads processing
+    im_queue = Queue.Queue(maxsize=0)
+    for im_i in range(num_images):
+        im_queue.put(im_i)
+
+    def process():
+        while not im_queue.empty():
+            im_i = im_queue.get()
+            print('-----process img {}'.format(im_i))
+ rois = roidb[im_i]['boxes']
+ max_overlaps = roidb[im_i]['max_overlaps']
+ max_classes = roidb[im_i]['max_classes']
+ ins_seg = roidb[im_i]['ins_seg']
+ flipped = roidb[im_i]['flipped']
+ roidb[im_i]['mask_targets'], roidb[im_i]['mask_labels'], roidb[im_i]['mask_inds'] = \
+ compute_bbox_mask_targets_and_label(rois, max_overlaps, max_classes, ins_seg, flipped)
+
+ threads = [threading.Thread(target=process, args=()) for i in range(10)]
+ for t in threads:
+ t.start()
+ for t in threads:
+ t.join()
+ # Single thread
+ # for im_i in range(num_images):
+ # print "-----processing img {}".format(im_i)
+ # rois = roidb[im_i]['boxes']
+ # max_overlaps = roidb[im_i]['max_overlaps']
+ # max_classes = roidb[im_i]['max_classes']
+ # ins_seg = roidb[im_i]['ins_seg']
+ # # roidb[im_i]['mask_targets'] = compute_bbox_mask_targets(rois, max_overlaps, max_classes, ins_seg)
+ # roidb[im_i]['mask_targets'], roidb[im_i]['mask_labels'], roidb[im_i]['mask_inds'] = \
+ # compute_bbox_mask_targets_and_label(rois, max_overlaps, max_classes, ins_seg)
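The first hunk above normalizes per-class bbox regression targets with the identity var(x) = E(x^2) - E(x)^2. A minimal standalone numpy sketch (my own toy data and variable names, not the repo code) shows the running-sum statistics agree with `np.mean`/`np.std`:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 3
# fake targets: column 0 is the class label, columns 1:5 are (dx, dy, dw, dh)
targets = np.hstack([rng.integers(1, num_classes, size=(100, 1)).astype(float),
                     rng.normal(size=(100, 4))])

sums = np.zeros((num_classes, 4))
squared_sums = np.zeros((num_classes, 4))
counts = np.zeros((num_classes, 1))
for cls in range(1, num_classes):
    idx = np.where(targets[:, 0] == cls)[0]
    counts[cls] = len(idx)
    sums[cls] += targets[idx, 1:].sum(axis=0)
    squared_sums[cls] += (targets[idx, 1:] ** 2).sum(axis=0)

means = sums / np.maximum(counts, 1)
# var(x) = E(x^2) - E(x)^2, exactly as in the hunk above
stds = np.sqrt(squared_sums / np.maximum(counts, 1) - means ** 2)

# cross-check against numpy's own statistics for class 1
idx = np.where(targets[:, 0] == 1)[0]
assert np.allclose(means[1], targets[idx, 1:].mean(axis=0))
assert np.allclose(stds[1], targets[idx, 1:].std(axis=0))
```

The `np.maximum(counts, 1)` guard (for classes with no samples) is my addition for the sketch; the repo code divides by the raw counts.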
diff --git a/insightface/detection/retinaface/rcnn/processing/bbox_transform.py b/insightface/detection/retinaface/rcnn/processing/bbox_transform.py
new file mode 100644
index 0000000000000000000000000000000000000000..5ee3646ab3bea9ed7b0c5f378d388ef84ac857fb
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/processing/bbox_transform.py
@@ -0,0 +1,223 @@
+import numpy as np
+from ..cython.bbox import bbox_overlaps_cython
+#from rcnn.config import config
+
+
+def bbox_overlaps(boxes, query_boxes):
+ return bbox_overlaps_cython(boxes, query_boxes)
+
+
+def bbox_overlaps_py(boxes, query_boxes):
+ """
+ determine overlaps between boxes and query_boxes
+ :param boxes: n * 4 bounding boxes
+ :param query_boxes: k * 4 bounding boxes
+ :return: overlaps: n * k overlaps
+ """
+ n_ = boxes.shape[0]
+ k_ = query_boxes.shape[0]
+    overlaps = np.zeros((n_, k_), dtype=np.float64)
+ for k in range(k_):
+ query_box_area = (query_boxes[k, 2] - query_boxes[k, 0] +
+ 1) * (query_boxes[k, 3] - query_boxes[k, 1] + 1)
+ for n in range(n_):
+ iw = min(boxes[n, 2], query_boxes[k, 2]) - max(
+ boxes[n, 0], query_boxes[k, 0]) + 1
+ if iw > 0:
+ ih = min(boxes[n, 3], query_boxes[k, 3]) - max(
+ boxes[n, 1], query_boxes[k, 1]) + 1
+ if ih > 0:
+ box_area = (boxes[n, 2] - boxes[n, 0] +
+ 1) * (boxes[n, 3] - boxes[n, 1] + 1)
+ all_area = float(box_area + query_box_area - iw * ih)
+ overlaps[n, k] = iw * ih / all_area
+ return overlaps
+
+
+def clip_boxes(boxes, im_shape):
+ """
+ Clip boxes to image boundaries.
+ :param boxes: [N, 4* num_classes]
+ :param im_shape: tuple of 2
+ :return: [N, 4* num_classes]
+ """
+ # x1 >= 0
+ boxes[:, 0::4] = np.maximum(np.minimum(boxes[:, 0::4], im_shape[1] - 1), 0)
+ # y1 >= 0
+ boxes[:, 1::4] = np.maximum(np.minimum(boxes[:, 1::4], im_shape[0] - 1), 0)
+ # x2 < im_shape[1]
+ boxes[:, 2::4] = np.maximum(np.minimum(boxes[:, 2::4], im_shape[1] - 1), 0)
+ # y2 < im_shape[0]
+ boxes[:, 3::4] = np.maximum(np.minimum(boxes[:, 3::4], im_shape[0] - 1), 0)
+ return boxes
+
+
+def nonlinear_transform(ex_rois, gt_rois):
+ """
+ compute bounding box regression targets from ex_rois to gt_rois
+ :param ex_rois: [N, 4]
+ :param gt_rois: [N, 4]
+ :return: [N, 4]
+ """
+ assert ex_rois.shape[0] == gt_rois.shape[0], 'inconsistent rois number'
+
+ ex_widths = ex_rois[:, 2] - ex_rois[:, 0] + 1.0
+ ex_heights = ex_rois[:, 3] - ex_rois[:, 1] + 1.0
+ ex_ctr_x = ex_rois[:, 0] + 0.5 * (ex_widths - 1.0)
+ ex_ctr_y = ex_rois[:, 1] + 0.5 * (ex_heights - 1.0)
+
+ gt_widths = gt_rois[:, 2] - gt_rois[:, 0] + 1.0
+ gt_heights = gt_rois[:, 3] - gt_rois[:, 1] + 1.0
+ gt_ctr_x = gt_rois[:, 0] + 0.5 * (gt_widths - 1.0)
+ gt_ctr_y = gt_rois[:, 1] + 0.5 * (gt_heights - 1.0)
+
+ targets_dx = (gt_ctr_x - ex_ctr_x) / (ex_widths + 1e-14)
+ targets_dy = (gt_ctr_y - ex_ctr_y) / (ex_heights + 1e-14)
+ targets_dw = np.log(gt_widths / ex_widths)
+ targets_dh = np.log(gt_heights / ex_heights)
+
+ if gt_rois.shape[1] <= 4:
+ targets = np.vstack(
+ (targets_dx, targets_dy, targets_dw, targets_dh)).transpose()
+ return targets
+ else:
+ targets = [targets_dx, targets_dy, targets_dw, targets_dh]
+ #if config.USE_BLUR:
+ # for i in range(4, gt_rois.shape[1]):
+ # t = gt_rois[:,i]
+ # targets.append(t)
+ targets = np.vstack(targets).transpose()
+ return targets
+
+
+def landmark_transform(ex_rois, gt_rois):
+
+ assert ex_rois.shape[0] == gt_rois.shape[0], 'inconsistent rois number'
+
+ ex_widths = ex_rois[:, 2] - ex_rois[:, 0] + 1.0
+ ex_heights = ex_rois[:, 3] - ex_rois[:, 1] + 1.0
+ ex_ctr_x = ex_rois[:, 0] + 0.5 * (ex_widths - 1.0)
+ ex_ctr_y = ex_rois[:, 1] + 0.5 * (ex_heights - 1.0)
+
+ targets = []
+ for i in range(gt_rois.shape[1]):
+ for j in range(gt_rois.shape[2]):
+ #if not config.USE_OCCLUSION and j==2:
+ # continue
+ if j == 2:
+ continue
+ if j == 0: #w
+ target = (gt_rois[:, i, j] - ex_ctr_x) / (ex_widths + 1e-14)
+ elif j == 1: #h
+ target = (gt_rois[:, i, j] - ex_ctr_y) / (ex_heights + 1e-14)
+ else: #visibile
+ target = gt_rois[:, i, j]
+ targets.append(target)
+
+ targets = np.vstack(targets).transpose()
+ return targets
+
+
+def nonlinear_pred(boxes, box_deltas):
+ """
+ Transform the set of class-agnostic boxes into class-specific boxes
+ by applying the predicted offsets (box_deltas)
+ :param boxes: !important [N 4]
+ :param box_deltas: [N, 4 * num_classes]
+ :return: [N 4 * num_classes]
+ """
+ if boxes.shape[0] == 0:
+ return np.zeros((0, box_deltas.shape[1]))
+
+    boxes = boxes.astype(np.float64, copy=False)
+ widths = boxes[:, 2] - boxes[:, 0] + 1.0
+ heights = boxes[:, 3] - boxes[:, 1] + 1.0
+ ctr_x = boxes[:, 0] + 0.5 * (widths - 1.0)
+ ctr_y = boxes[:, 1] + 0.5 * (heights - 1.0)
+
+ dx = box_deltas[:, 0::4]
+ dy = box_deltas[:, 1::4]
+ dw = box_deltas[:, 2::4]
+ dh = box_deltas[:, 3::4]
+
+ pred_ctr_x = dx * widths[:, np.newaxis] + ctr_x[:, np.newaxis]
+ pred_ctr_y = dy * heights[:, np.newaxis] + ctr_y[:, np.newaxis]
+ pred_w = np.exp(dw) * widths[:, np.newaxis]
+ pred_h = np.exp(dh) * heights[:, np.newaxis]
+
+ pred_boxes = np.zeros(box_deltas.shape)
+ # x1
+ pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * (pred_w - 1.0)
+ # y1
+ pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * (pred_h - 1.0)
+ # x2
+ pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * (pred_w - 1.0)
+ # y2
+ pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * (pred_h - 1.0)
+
+ return pred_boxes
+
+
+def landmark_pred(boxes, landmark_deltas):
+ if boxes.shape[0] == 0:
+ return np.zeros((0, landmark_deltas.shape[1]))
+    boxes = boxes.astype(np.float64, copy=False)
+ widths = boxes[:, 2] - boxes[:, 0] + 1.0
+ heights = boxes[:, 3] - boxes[:, 1] + 1.0
+ ctr_x = boxes[:, 0] + 0.5 * (widths - 1.0)
+ ctr_y = boxes[:, 1] + 0.5 * (heights - 1.0)
+ preds = []
+ for i in range(landmark_deltas.shape[1]):
+ if i % 2 == 0:
+ pred = (landmark_deltas[:, i] * widths + ctr_x)
+ else:
+ pred = (landmark_deltas[:, i] * heights + ctr_y)
+ preds.append(pred)
+ preds = np.vstack(preds).transpose()
+ return preds
+
+
+def iou_transform(ex_rois, gt_rois):
+ """ return bbox targets, IoU loss uses gt_rois as gt """
+ assert ex_rois.shape[0] == gt_rois.shape[0], 'inconsistent rois number'
+ return gt_rois
+
+
+def iou_pred(boxes, box_deltas):
+ """
+ Transform the set of class-agnostic boxes into class-specific boxes
+ by applying the predicted offsets (box_deltas)
+ :param boxes: !important [N 4]
+ :param box_deltas: [N, 4 * num_classes]
+ :return: [N 4 * num_classes]
+ """
+ if boxes.shape[0] == 0:
+ return np.zeros((0, box_deltas.shape[1]))
+
+    boxes = boxes.astype(np.float64, copy=False)
+ x1 = boxes[:, 0]
+ y1 = boxes[:, 1]
+ x2 = boxes[:, 2]
+ y2 = boxes[:, 3]
+
+ dx1 = box_deltas[:, 0::4]
+ dy1 = box_deltas[:, 1::4]
+ dx2 = box_deltas[:, 2::4]
+ dy2 = box_deltas[:, 3::4]
+
+ pred_boxes = np.zeros(box_deltas.shape)
+ # x1
+ pred_boxes[:, 0::4] = dx1 + x1[:, np.newaxis]
+ # y1
+ pred_boxes[:, 1::4] = dy1 + y1[:, np.newaxis]
+ # x2
+ pred_boxes[:, 2::4] = dx2 + x2[:, np.newaxis]
+ # y2
+ pred_boxes[:, 3::4] = dy2 + y2[:, np.newaxis]
+
+ return pred_boxes
+
+
+# define bbox_transform and bbox_pred
+bbox_transform = nonlinear_transform
+bbox_pred = nonlinear_pred
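A quick sanity check on the encode/decode pair in `bbox_transform.py`: `nonlinear_pred` is the exact inverse of `nonlinear_transform`. The sketch below mirrors those formulas in plain numpy (my own condensed version with the `1e-14` epsilon dropped, so the round trip is exact), rather than importing the module:

```python
import numpy as np

def encode(ex, gt):
    # nonlinear_transform: (dx, dy) relative to center, (dw, dh) in log-space
    ew = ex[:, 2] - ex[:, 0] + 1.0
    eh = ex[:, 3] - ex[:, 1] + 1.0
    ecx = ex[:, 0] + 0.5 * (ew - 1.0)
    ecy = ex[:, 1] + 0.5 * (eh - 1.0)
    gw = gt[:, 2] - gt[:, 0] + 1.0
    gh = gt[:, 3] - gt[:, 1] + 1.0
    gcx = gt[:, 0] + 0.5 * (gw - 1.0)
    gcy = gt[:, 1] + 0.5 * (gh - 1.0)
    return np.stack([(gcx - ecx) / ew, (gcy - ecy) / eh,
                     np.log(gw / ew), np.log(gh / eh)], axis=1)

def decode(ex, deltas):
    # nonlinear_pred: apply the deltas back onto the reference boxes
    ew = ex[:, 2] - ex[:, 0] + 1.0
    eh = ex[:, 3] - ex[:, 1] + 1.0
    cx = ex[:, 0] + 0.5 * (ew - 1.0) + deltas[:, 0] * ew
    cy = ex[:, 1] + 0.5 * (eh - 1.0) + deltas[:, 1] * eh
    pw = np.exp(deltas[:, 2]) * ew
    ph = np.exp(deltas[:, 3]) * eh
    return np.stack([cx - 0.5 * (pw - 1.0), cy - 0.5 * (ph - 1.0),
                     cx + 0.5 * (pw - 1.0), cy + 0.5 * (ph - 1.0)], axis=1)

anchors = np.array([[0., 0., 15., 15.], [10., 10., 40., 30.]])
gt = np.array([[2., 3., 20., 18.], [12., 8., 45., 35.]])
# encoding gt against anchors and decoding again recovers gt exactly
assert np.allclose(decode(anchors, encode(anchors, gt)), gt)
```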
diff --git a/insightface/detection/retinaface/rcnn/processing/generate_anchor.py b/insightface/detection/retinaface/rcnn/processing/generate_anchor.py
new file mode 100644
index 0000000000000000000000000000000000000000..83c5ada2a635186d31911cf1faa7df0e6a65d7ff
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/processing/generate_anchor.py
@@ -0,0 +1,135 @@
+"""
+Generate base anchors on index 0
+"""
+from __future__ import print_function
+import sys
+from builtins import range
+import numpy as np
+from ..cython.anchors import anchors_cython
+#from ..config import config
+
+
+def anchors_plane(feat_h, feat_w, stride, base_anchor):
+ return anchors_cython(feat_h, feat_w, stride, base_anchor)
+
+
+def generate_anchors(base_size=16,
+ ratios=[0.5, 1, 2],
+ scales=2**np.arange(3, 6),
+ stride=16,
+ dense_anchor=False):
+ """
+ Generate anchor (reference) windows by enumerating aspect ratios X
+ scales wrt a reference (0, 0, 15, 15) window.
+ """
+
+ base_anchor = np.array([1, 1, base_size, base_size]) - 1
+ ratio_anchors = _ratio_enum(base_anchor, ratios)
+ anchors = np.vstack([
+ _scale_enum(ratio_anchors[i, :], scales)
+ for i in range(ratio_anchors.shape[0])
+ ])
+ if dense_anchor:
+ assert stride % 2 == 0
+ anchors2 = anchors.copy()
+ anchors2[:, :] += int(stride / 2)
+ anchors = np.vstack((anchors, anchors2))
+ #print('GA',base_anchor.shape, ratio_anchors.shape, anchors.shape)
+ return anchors
+
+
+#def generate_anchors_fpn(base_size=[64,32,16,8,4], ratios=[0.5, 1, 2], scales=8):
+# """
+# Generate anchor (reference) windows by enumerating aspect ratios X
+# scales wrt a reference (0, 0, 15, 15) window.
+# """
+# anchors = []
+# _ratios = ratios.reshape( (len(base_size), -1) )
+# _scales = scales.reshape( (len(base_size), -1) )
+# for i,bs in enumerate(base_size):
+# __ratios = _ratios[i]
+# __scales = _scales[i]
+# #print('anchors_fpn', bs, __ratios, __scales, file=sys.stderr)
+# r = generate_anchors(bs, __ratios, __scales)
+# #print('anchors_fpn', r.shape, file=sys.stderr)
+# anchors.append(r)
+# return anchors
+
+
+def generate_anchors_fpn(dense_anchor=False, cfg=None):
+ #assert(False)
+ """
+ Generate anchor (reference) windows by enumerating aspect ratios X
+ scales wrt a reference (0, 0, 15, 15) window.
+ """
+ if cfg is None:
+ from ..config import config
+ cfg = config.RPN_ANCHOR_CFG
+ RPN_FEAT_STRIDE = []
+ for k in cfg:
+ RPN_FEAT_STRIDE.append(int(k))
+ RPN_FEAT_STRIDE = sorted(RPN_FEAT_STRIDE, reverse=True)
+ anchors = []
+ for k in RPN_FEAT_STRIDE:
+ v = cfg[str(k)]
+ bs = v['BASE_SIZE']
+ __ratios = np.array(v['RATIOS'])
+ __scales = np.array(v['SCALES'])
+ stride = int(k)
+ #print('anchors_fpn', bs, __ratios, __scales, file=sys.stderr)
+ r = generate_anchors(bs, __ratios, __scales, stride, dense_anchor)
+ #print('anchors_fpn', r.shape, file=sys.stderr)
+ anchors.append(r)
+
+ return anchors
+
+
+def _whctrs(anchor):
+ """
+ Return width, height, x center, and y center for an anchor (window).
+ """
+
+ w = anchor[2] - anchor[0] + 1
+ h = anchor[3] - anchor[1] + 1
+ x_ctr = anchor[0] + 0.5 * (w - 1)
+ y_ctr = anchor[1] + 0.5 * (h - 1)
+ return w, h, x_ctr, y_ctr
+
+
+def _mkanchors(ws, hs, x_ctr, y_ctr):
+ """
+ Given a vector of widths (ws) and heights (hs) around a center
+ (x_ctr, y_ctr), output a set of anchors (windows).
+ """
+
+ ws = ws[:, np.newaxis]
+ hs = hs[:, np.newaxis]
+ anchors = np.hstack((x_ctr - 0.5 * (ws - 1), y_ctr - 0.5 * (hs - 1),
+ x_ctr + 0.5 * (ws - 1), y_ctr + 0.5 * (hs - 1)))
+ return anchors
+
+
+def _ratio_enum(anchor, ratios):
+ """
+ Enumerate a set of anchors for each aspect ratio wrt an anchor.
+ """
+
+ w, h, x_ctr, y_ctr = _whctrs(anchor)
+ size = w * h
+ size_ratios = size / ratios
+ ws = np.round(np.sqrt(size_ratios))
+ hs = np.round(ws * ratios)
+ anchors = _mkanchors(ws, hs, x_ctr, y_ctr)
+ return anchors
+
+
+def _scale_enum(anchor, scales):
+ """
+ Enumerate a set of anchors for each scale wrt an anchor.
+ """
+
+ w, h, x_ctr, y_ctr = _whctrs(anchor)
+ ws = w * scales
+ hs = h * scales
+ anchors = _mkanchors(ws, hs, x_ctr, y_ctr)
+ return anchors
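To make the ratio/scale enumeration above concrete, here is a condensed standalone sketch (my own rewrite of `_ratio_enum`/`_scale_enum`, not an import of the module): for `base_size=16`, ratios `[0.5, 1, 2]` and scales `[8, 16, 32]` it yields the classic nine Faster R-CNN anchors, all centered at (7.5, 7.5):

```python
import numpy as np

def make_anchors(base_size=16, ratios=(0.5, 1, 2), scales=(8, 16, 32)):
    w = h = float(base_size)
    x_ctr = y_ctr = 0.5 * (base_size - 1)
    out = []
    for r in ratios:
        # keep the area, change the aspect ratio (rounded like _ratio_enum)
        ws = np.round(np.sqrt(w * h / r))
        hs = np.round(ws * r)
        for s in scales:
            # scale widths/heights about the fixed center (like _scale_enum)
            sw, sh = ws * s, hs * s
            out.append([x_ctr - 0.5 * (sw - 1), y_ctr - 0.5 * (sh - 1),
                        x_ctr + 0.5 * (sw - 1), y_ctr + 0.5 * (sh - 1)])
    return np.array(out)

anchors = make_anchors()
assert anchors.shape == (9, 4)
# every anchor is centered on the base window: x1 + x2 == 2 * 7.5
assert np.allclose(anchors[:, 0] + anchors[:, 2], 15.0)
```

`generate_anchors_fpn` then repeats this per FPN stride with the per-stride `BASE_SIZE`/`RATIOS`/`SCALES` from `RPN_ANCHOR_CFG`.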
diff --git a/insightface/detection/retinaface/rcnn/processing/nms.py b/insightface/detection/retinaface/rcnn/processing/nms.py
new file mode 100644
index 0000000000000000000000000000000000000000..b32d92d0ff738f7ad4f8ecc180ec04423a9a0a73
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/processing/nms.py
@@ -0,0 +1,67 @@
+import numpy as np
+from ..cython.cpu_nms import cpu_nms
+try:
+ from ..cython.gpu_nms import gpu_nms
+except ImportError:
+ gpu_nms = None
+
+
+def py_nms_wrapper(thresh):
+ def _nms(dets):
+ return nms(dets, thresh)
+
+ return _nms
+
+
+def cpu_nms_wrapper(thresh):
+ def _nms(dets):
+ return cpu_nms(dets, thresh)
+
+ return _nms
+
+
+def gpu_nms_wrapper(thresh, device_id):
+ def _nms(dets):
+ return gpu_nms(dets, thresh, device_id)
+
+ if gpu_nms is not None:
+ return _nms
+ else:
+ return cpu_nms_wrapper(thresh)
+
+
+def nms(dets, thresh):
+ """
+    greedy non-maximum suppression: keep boxes in descending score order,
+    suppressing any box whose IoU with an already-kept box exceeds thresh
+    :param dets: [[x1, y1, x2, y2, score]]
+    :param thresh: suppress boxes with IoU > thresh
+    :return: indexes to keep
+ """
+ x1 = dets[:, 0]
+ y1 = dets[:, 1]
+ x2 = dets[:, 2]
+ y2 = dets[:, 3]
+ scores = dets[:, 4]
+
+ areas = (x2 - x1 + 1) * (y2 - y1 + 1)
+ order = scores.argsort()[::-1]
+
+ keep = []
+ while order.size > 0:
+ i = order[0]
+ keep.append(i)
+ xx1 = np.maximum(x1[i], x1[order[1:]])
+ yy1 = np.maximum(y1[i], y1[order[1:]])
+ xx2 = np.minimum(x2[i], x2[order[1:]])
+ yy2 = np.minimum(y2[i], y2[order[1:]])
+
+ w = np.maximum(0.0, xx2 - xx1 + 1)
+ h = np.maximum(0.0, yy2 - yy1 + 1)
+ inter = w * h
+ ovr = inter / (areas[i] + areas[order[1:]] - inter)
+
+ inds = np.where(ovr <= thresh)[0]
+ order = order[inds + 1]
+
+ return keep
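A self-contained rerun of the greedy NMS above in plain numpy (same `+1`-pixel area convention; toy boxes are my own) shows the expected behavior, the lower-scoring of two heavily overlapping boxes is suppressed while a distant box survives:

```python
import numpy as np

def greedy_nms(dets, thresh):
    x1, y1, x2, y2, scores = dets.T
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]        # descending score order
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the kept box against all remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1 + 1) * np.maximum(0.0, yy2 - yy1 + 1)
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[np.where(ovr <= thresh)[0] + 1]
    return keep

dets = np.array([[0, 0, 10, 10, 0.9],
                 [1, 1, 11, 11, 0.8],    # IoU ~0.70 with box 0 -> suppressed
                 [50, 50, 60, 60, 0.7]], dtype=float)
assert greedy_nms(dets, 0.5) == [0, 2]
```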
diff --git a/insightface/detection/retinaface/rcnn/pycocotools/UPSTREAM_REV b/insightface/detection/retinaface/rcnn/pycocotools/UPSTREAM_REV
new file mode 100644
index 0000000000000000000000000000000000000000..9613b145b23779106bacd2a8e9bbe72dc39c6bbc
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/pycocotools/UPSTREAM_REV
@@ -0,0 +1 @@
+https://github.com/pdollar/coco/commit/336d2a27c91e3c0663d2dcf0b13574674d30f88e
diff --git a/insightface/detection/retinaface/rcnn/pycocotools/__init__.py b/insightface/detection/retinaface/rcnn/pycocotools/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/pycocotools/__init__.py
@@ -0,0 +1 @@
+__author__ = 'tylin'
diff --git a/insightface/detection/retinaface/rcnn/pycocotools/_mask.c b/insightface/detection/retinaface/rcnn/pycocotools/_mask.c
new file mode 100644
index 0000000000000000000000000000000000000000..0706a2fe4545a464645726310ccac117d5fb041c
--- /dev/null
+++ b/insightface/detection/retinaface/rcnn/pycocotools/_mask.c
@@ -0,0 +1,17234 @@
+/* Generated by Cython 0.28.5 */
+
+/* BEGIN: Cython Metadata
+{
+ "distutils": {
+ "depends": [
+ "/root/anaconda2/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h",
+ "/root/anaconda2/lib/python2.7/site-packages/numpy/core/include/numpy/ufuncobject.h",
+ "maskApi.h"
+ ],
+ "extra_compile_args": [
+ "-Wno-cpp",
+ "-Wno-unused-function",
+ "-std=c99"
+ ],
+ "include_dirs": [
+ "/root/anaconda2/lib/python2.7/site-packages/numpy/core/include"
+ ],
+ "language": "c",
+ "name": "_mask",
+ "sources": [
+ "_mask.pyx",
+ "maskApi.c"
+ ]
+ },
+ "module_name": "_mask"
+}
+END: Cython Metadata */
+
+#define PY_SSIZE_T_CLEAN
+#include "Python.h"
+#ifndef Py_PYTHON_H
+ #error Python headers needed to compile C extensions, please install development version of Python.
+#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)
+ #error Cython requires Python 2.6+ or Python 3.3+.
+#else
+#define CYTHON_ABI "0_28_5"
+#define CYTHON_FUTURE_DIVISION 0
+#include <stddef.h>
+#ifndef offsetof
+ #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )
+#endif
+#if !defined(WIN32) && !defined(MS_WINDOWS)
+ #ifndef __stdcall
+ #define __stdcall
+ #endif
+ #ifndef __cdecl
+ #define __cdecl
+ #endif
+ #ifndef __fastcall
+ #define __fastcall
+ #endif
+#endif
+#ifndef DL_IMPORT
+ #define DL_IMPORT(t) t
+#endif
+#ifndef DL_EXPORT
+ #define DL_EXPORT(t) t
+#endif
+#define __PYX_COMMA ,
+#ifndef HAVE_LONG_LONG
+ #if PY_VERSION_HEX >= 0x02070000
+ #define HAVE_LONG_LONG
+ #endif
+#endif
+#ifndef PY_LONG_LONG
+ #define PY_LONG_LONG LONG_LONG
+#endif
+#ifndef Py_HUGE_VAL
+ #define Py_HUGE_VAL HUGE_VAL
+#endif
+#ifdef PYPY_VERSION
+ #define CYTHON_COMPILING_IN_PYPY 1
+ #define CYTHON_COMPILING_IN_PYSTON 0
+ #define CYTHON_COMPILING_IN_CPYTHON 0
+ #undef CYTHON_USE_TYPE_SLOTS
+ #define CYTHON_USE_TYPE_SLOTS 0
+ #undef CYTHON_USE_PYTYPE_LOOKUP
+ #define CYTHON_USE_PYTYPE_LOOKUP 0
+ #if PY_VERSION_HEX < 0x03050000
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #elif !defined(CYTHON_USE_ASYNC_SLOTS)
+ #define CYTHON_USE_ASYNC_SLOTS 1
+ #endif
+ #undef CYTHON_USE_PYLIST_INTERNALS
+ #define CYTHON_USE_PYLIST_INTERNALS 0
+ #undef CYTHON_USE_UNICODE_INTERNALS
+ #define CYTHON_USE_UNICODE_INTERNALS 0
+ #undef CYTHON_USE_UNICODE_WRITER
+ #define CYTHON_USE_UNICODE_WRITER 0
+ #undef CYTHON_USE_PYLONG_INTERNALS
+ #define CYTHON_USE_PYLONG_INTERNALS 0
+ #undef CYTHON_AVOID_BORROWED_REFS
+ #define CYTHON_AVOID_BORROWED_REFS 1
+ #undef CYTHON_ASSUME_SAFE_MACROS
+ #define CYTHON_ASSUME_SAFE_MACROS 0
+ #undef CYTHON_UNPACK_METHODS
+ #define CYTHON_UNPACK_METHODS 0
+ #undef CYTHON_FAST_THREAD_STATE
+ #define CYTHON_FAST_THREAD_STATE 0
+ #undef CYTHON_FAST_PYCALL
+ #define CYTHON_FAST_PYCALL 0
+ #undef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT 0
+ #undef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE 0
+#elif defined(PYSTON_VERSION)
+ #define CYTHON_COMPILING_IN_PYPY 0
+ #define CYTHON_COMPILING_IN_PYSTON 1
+ #define CYTHON_COMPILING_IN_CPYTHON 0
+ #ifndef CYTHON_USE_TYPE_SLOTS
+ #define CYTHON_USE_TYPE_SLOTS 1
+ #endif
+ #undef CYTHON_USE_PYTYPE_LOOKUP
+ #define CYTHON_USE_PYTYPE_LOOKUP 0
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #undef CYTHON_USE_PYLIST_INTERNALS
+ #define CYTHON_USE_PYLIST_INTERNALS 0
+ #ifndef CYTHON_USE_UNICODE_INTERNALS
+ #define CYTHON_USE_UNICODE_INTERNALS 1
+ #endif
+ #undef CYTHON_USE_UNICODE_WRITER
+ #define CYTHON_USE_UNICODE_WRITER 0
+ #undef CYTHON_USE_PYLONG_INTERNALS
+ #define CYTHON_USE_PYLONG_INTERNALS 0
+ #ifndef CYTHON_AVOID_BORROWED_REFS
+ #define CYTHON_AVOID_BORROWED_REFS 0
+ #endif
+ #ifndef CYTHON_ASSUME_SAFE_MACROS
+ #define CYTHON_ASSUME_SAFE_MACROS 1
+ #endif
+ #ifndef CYTHON_UNPACK_METHODS
+ #define CYTHON_UNPACK_METHODS 1
+ #endif
+ #undef CYTHON_FAST_THREAD_STATE
+ #define CYTHON_FAST_THREAD_STATE 0
+ #undef CYTHON_FAST_PYCALL
+ #define CYTHON_FAST_PYCALL 0
+ #undef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT 0
+ #undef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE 0
+#else
+ #define CYTHON_COMPILING_IN_PYPY 0
+ #define CYTHON_COMPILING_IN_PYSTON 0
+ #define CYTHON_COMPILING_IN_CPYTHON 1
+ #ifndef CYTHON_USE_TYPE_SLOTS
+ #define CYTHON_USE_TYPE_SLOTS 1
+ #endif
+ #if PY_VERSION_HEX < 0x02070000
+ #undef CYTHON_USE_PYTYPE_LOOKUP
+ #define CYTHON_USE_PYTYPE_LOOKUP 0
+ #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)
+ #define CYTHON_USE_PYTYPE_LOOKUP 1
+ #endif
+ #if PY_MAJOR_VERSION < 3
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #elif !defined(CYTHON_USE_ASYNC_SLOTS)
+ #define CYTHON_USE_ASYNC_SLOTS 1
+ #endif
+ #if PY_VERSION_HEX < 0x02070000
+ #undef CYTHON_USE_PYLONG_INTERNALS
+ #define CYTHON_USE_PYLONG_INTERNALS 0
+ #elif !defined(CYTHON_USE_PYLONG_INTERNALS)
+ #define CYTHON_USE_PYLONG_INTERNALS 1
+ #endif
+ #ifndef CYTHON_USE_PYLIST_INTERNALS
+ #define CYTHON_USE_PYLIST_INTERNALS 1
+ #endif
+ #ifndef CYTHON_USE_UNICODE_INTERNALS
+ #define CYTHON_USE_UNICODE_INTERNALS 1
+ #endif
+ #if PY_VERSION_HEX < 0x030300F0
+ #undef CYTHON_USE_UNICODE_WRITER
+ #define CYTHON_USE_UNICODE_WRITER 0
+ #elif !defined(CYTHON_USE_UNICODE_WRITER)
+ #define CYTHON_USE_UNICODE_WRITER 1
+ #endif
+ #ifndef CYTHON_AVOID_BORROWED_REFS
+ #define CYTHON_AVOID_BORROWED_REFS 0
+ #endif
+ #ifndef CYTHON_ASSUME_SAFE_MACROS
+ #define CYTHON_ASSUME_SAFE_MACROS 1
+ #endif
+ #ifndef CYTHON_UNPACK_METHODS
+ #define CYTHON_UNPACK_METHODS 1
+ #endif
+ #ifndef CYTHON_FAST_THREAD_STATE
+ #define CYTHON_FAST_THREAD_STATE 1
+ #endif
+ #ifndef CYTHON_FAST_PYCALL
+ #define CYTHON_FAST_PYCALL 1
+ #endif
+ #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT (0 && PY_VERSION_HEX >= 0x03050000)
+ #endif
+ #ifndef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)
+ #endif
+#endif
+#if !defined(CYTHON_FAST_PYCCALL)
+#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)
+#endif
+#if CYTHON_USE_PYLONG_INTERNALS
+ #include "longintrepr.h"
+ #undef SHIFT
+ #undef BASE
+ #undef MASK
+#endif
+#ifndef __has_attribute
+ #define __has_attribute(x) 0
+#endif
+#ifndef __has_cpp_attribute
+ #define __has_cpp_attribute(x) 0
+#endif
+#ifndef CYTHON_RESTRICT
+ #if defined(__GNUC__)
+ #define CYTHON_RESTRICT __restrict__
+ #elif defined(_MSC_VER) && _MSC_VER >= 1400
+ #define CYTHON_RESTRICT __restrict
+ #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define CYTHON_RESTRICT restrict
+ #else
+ #define CYTHON_RESTRICT
+ #endif
+#endif
+#ifndef CYTHON_UNUSED
+# if defined(__GNUC__)
+# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
+# define CYTHON_UNUSED __attribute__ ((__unused__))
+# else
+# define CYTHON_UNUSED
+# endif
+# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
+# define CYTHON_UNUSED __attribute__ ((__unused__))
+# else
+# define CYTHON_UNUSED
+# endif
+#endif
+#ifndef CYTHON_MAYBE_UNUSED_VAR
+# if defined(__cplusplus)
+    template<typename T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
+# else
+# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
+# endif
+#endif
+#ifndef CYTHON_NCP_UNUSED
+# if CYTHON_COMPILING_IN_CPYTHON
+# define CYTHON_NCP_UNUSED
+# else
+# define CYTHON_NCP_UNUSED CYTHON_UNUSED
+# endif
+#endif
+#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
+#ifdef _MSC_VER
+ #ifndef _MSC_STDINT_H_
+ #if _MSC_VER < 1300
+ typedef unsigned char uint8_t;
+ typedef unsigned int uint32_t;
+ #else
+ typedef unsigned __int8 uint8_t;
+ typedef unsigned __int32 uint32_t;
+ #endif
+ #endif
+#else
+    #include <stdint.h>
+#endif
+#ifndef CYTHON_FALLTHROUGH
+ #if defined(__cplusplus) && __cplusplus >= 201103L
+ #if __has_cpp_attribute(fallthrough)
+ #define CYTHON_FALLTHROUGH [[fallthrough]]
+ #elif __has_cpp_attribute(clang::fallthrough)
+ #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
+ #elif __has_cpp_attribute(gnu::fallthrough)
+ #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]
+ #endif
+ #endif
+ #ifndef CYTHON_FALLTHROUGH
+ #if __has_attribute(fallthrough)
+ #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
+ #else
+ #define CYTHON_FALLTHROUGH
+ #endif
+ #endif
+ #if defined(__clang__ ) && defined(__apple_build_version__)
+ #if __apple_build_version__ < 7000000
+ #undef CYTHON_FALLTHROUGH
+ #define CYTHON_FALLTHROUGH
+ #endif
+ #endif
+#endif
+
+#ifndef CYTHON_INLINE
+ #if defined(__clang__)
+ #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
+ #elif defined(__GNUC__)
+ #define CYTHON_INLINE __inline__
+ #elif defined(_MSC_VER)
+ #define CYTHON_INLINE __inline
+ #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define CYTHON_INLINE inline
+ #else
+ #define CYTHON_INLINE
+ #endif
+#endif
+
+#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)
+ #define Py_OptimizeFlag 0
+#endif
+#define __PYX_BUILD_PY_SSIZE_T "n"
+#define CYTHON_FORMAT_SSIZE_T "z"
+#if PY_MAJOR_VERSION < 3
+ #define __Pyx_BUILTIN_MODULE_NAME "__builtin__"
+ #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
+ PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
+ #define __Pyx_DefaultClassType PyClass_Type
+#else
+ #define __Pyx_BUILTIN_MODULE_NAME "builtins"
+ #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
+ PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
+ #define __Pyx_DefaultClassType PyType_Type
+#endif
+#ifndef Py_TPFLAGS_CHECKTYPES
+ #define Py_TPFLAGS_CHECKTYPES 0
+#endif
+#ifndef Py_TPFLAGS_HAVE_INDEX
+ #define Py_TPFLAGS_HAVE_INDEX 0
+#endif
+#ifndef Py_TPFLAGS_HAVE_NEWBUFFER
+ #define Py_TPFLAGS_HAVE_NEWBUFFER 0
+#endif
+#ifndef Py_TPFLAGS_HAVE_FINALIZE
+ #define Py_TPFLAGS_HAVE_FINALIZE 0
+#endif
+#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)
+ #ifndef METH_FASTCALL
+ #define METH_FASTCALL 0x80
+ #endif
+ typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);
+ typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,
+ Py_ssize_t nargs, PyObject *kwnames);
+#else
+ #define __Pyx_PyCFunctionFast _PyCFunctionFast
+ #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords
+#endif
+#if CYTHON_FAST_PYCCALL
+#define __Pyx_PyFastCFunction_Check(func)\
+ ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS)))))
+#else
+#define __Pyx_PyFastCFunction_Check(func) 0
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
+ #define PyObject_Malloc(s) PyMem_Malloc(s)
+ #define PyObject_Free(p) PyMem_Free(p)
+ #define PyObject_Realloc(p) PyMem_Realloc(p)
+#endif
+#if CYTHON_COMPILING_IN_PYSTON
+ #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
+ #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
+#else
+ #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
+ #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
+#endif
+#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000
+ #define __Pyx_PyThreadState_Current PyThreadState_GET()
+#elif PY_VERSION_HEX >= 0x03060000
+ #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
+#elif PY_VERSION_HEX >= 0x03000000
+ #define __Pyx_PyThreadState_Current PyThreadState_GET()
+#else
+ #define __Pyx_PyThreadState_Current _PyThreadState_Current
+#endif
+#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)
+#include "pythread.h"
+#define Py_tss_NEEDS_INIT 0
+typedef int Py_tss_t;
+static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
+ *key = PyThread_create_key();
+ return 0; // PyThread_create_key reports success always
+}
+static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
+ Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));
+ *key = Py_tss_NEEDS_INIT;
+ return key;
+}
+static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
+ PyObject_Free(key);
+}
+static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
+ return *key != Py_tss_NEEDS_INIT;
+}
+static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
+ PyThread_delete_key(*key);
+ *key = Py_tss_NEEDS_INIT;
+}
+static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
+ return PyThread_set_key_value(*key, value);
+}
+static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
+ return PyThread_get_key_value(*key);
+}
+#endif // TSS (Thread Specific Storage) API
+#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)
+#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))
+#else
+#define __Pyx_PyDict_NewPresized(n) PyDict_New()
+#endif
+#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION
+ #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
+ #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
+#else
+ #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
+ #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)
+#endif
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS
+#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)
+#else
+#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name)
+#endif
+#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)
+ #define CYTHON_PEP393_ENABLED 1
+ #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\
+ 0 : _PyUnicode_Ready((PyObject *)(op)))
+ #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u)
+ #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)
+ #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u)
+ #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u)
+ #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u)
+ #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i)
+ #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch)
+ #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))
+#else
+ #define CYTHON_PEP393_ENABLED 0
+ #define PyUnicode_1BYTE_KIND 1
+ #define PyUnicode_2BYTE_KIND 2
+ #define PyUnicode_4BYTE_KIND 4
+ #define __Pyx_PyUnicode_READY(op) (0)
+ #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u)
+ #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))
+ #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)
+ #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE))
+ #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u))
+ #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))
+ #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch)
+ #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u))
+#endif
+#if CYTHON_COMPILING_IN_PYPY
+ #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b)
+ #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b)
+#else
+ #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b)
+ #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\
+ PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)
+ #define PyUnicode_Contains(u, s) PySequence_Contains(u, s)
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)
+ #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type)
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)
+ #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt)
+#endif
+#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
+#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
+#if PY_MAJOR_VERSION >= 3
+ #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b)
+#else
+ #define __Pyx_PyString_Format(a, b) PyString_Format(a, b)
+#endif
+#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)
+ #define PyObject_ASCII(o) PyObject_Repr(o)
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define PyBaseString_Type PyUnicode_Type
+ #define PyStringObject PyUnicodeObject
+ #define PyString_Type PyUnicode_Type
+ #define PyString_Check PyUnicode_Check
+ #define PyString_CheckExact PyUnicode_CheckExact
+ #define PyObject_Unicode PyObject_Str
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)
+ #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)
+#else
+ #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))
+ #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))
+#endif
+#ifndef PySet_CheckExact
+ #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type)
+#endif
+#if CYTHON_ASSUME_SAFE_MACROS
+ #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq)
+#else
+ #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq)
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define PyIntObject PyLongObject
+ #define PyInt_Type PyLong_Type
+ #define PyInt_Check(op) PyLong_Check(op)
+ #define PyInt_CheckExact(op) PyLong_CheckExact(op)
+ #define PyInt_FromString PyLong_FromString
+ #define PyInt_FromUnicode PyLong_FromUnicode
+ #define PyInt_FromLong PyLong_FromLong
+ #define PyInt_FromSize_t PyLong_FromSize_t
+ #define PyInt_FromSsize_t PyLong_FromSsize_t
+ #define PyInt_AsLong PyLong_AsLong
+ #define PyInt_AS_LONG PyLong_AS_LONG
+ #define PyInt_AsSsize_t PyLong_AsSsize_t
+ #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
+ #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
+ #define PyNumber_Int PyNumber_Long
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define PyBoolObject PyLongObject
+#endif
+#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY
+ #ifndef PyUnicode_InternFromString
+ #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)
+ #endif
+#endif
+#if PY_VERSION_HEX < 0x030200A4
+ typedef long Py_hash_t;
+ #define __Pyx_PyInt_FromHash_t PyInt_FromLong
+ #define __Pyx_PyInt_AsHash_t PyInt_AsLong
+#else
+ #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t
+ #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : (Py_INCREF(func), func))
+#else
+ #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
+#endif
+#if CYTHON_USE_ASYNC_SLOTS
+ #if PY_VERSION_HEX >= 0x030500B1
+ #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
+ #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
+ #else
+ #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
+ #endif
+#else
+ #define __Pyx_PyType_AsAsync(obj) NULL
+#endif
+#ifndef __Pyx_PyAsyncMethodsStruct
+ typedef struct {
+ unaryfunc am_await;
+ unaryfunc am_aiter;
+ unaryfunc am_anext;
+ } __Pyx_PyAsyncMethodsStruct;
+#endif
+
+#if defined(WIN32) || defined(MS_WINDOWS)
+ #define _USE_MATH_DEFINES
+#endif
+#include <math.h>
+#ifdef NAN
+#define __PYX_NAN() ((float) NAN)
+#else
+static CYTHON_INLINE float __PYX_NAN() {
+ float value;
+ memset(&value, 0xFF, sizeof(value));
+ return value;
+}
+#endif
+#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
+#define __Pyx_truncl trunc
+#else
+#define __Pyx_truncl truncl
+#endif
+
+
+#define __PYX_ERR(f_index, lineno, Ln_error) \
+{ \
+ __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \
+}
+
+#ifndef __PYX_EXTERN_C
+ #ifdef __cplusplus
+ #define __PYX_EXTERN_C extern "C"
+ #else
+ #define __PYX_EXTERN_C extern
+ #endif
+#endif
+
+#define __PYX_HAVE___mask
+#define __PYX_HAVE_API___mask
+/* Early includes */
+#include <string.h>
+#include <stdio.h>
+#include "numpy/arrayobject.h"
+#include "numpy/ufuncobject.h"
+#include <stdlib.h>
+#include "maskApi.h"
+#ifdef _OPENMP
+#include <omp.h>
+#endif /* _OPENMP */
+
+#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)
+#define CYTHON_WITHOUT_ASSERTIONS
+#endif
+
+typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;
+ const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;
+
+#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0
+#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0
+#define __PYX_DEFAULT_STRING_ENCODING ""
+#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString
+#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
+#define __Pyx_uchar_cast(c) ((unsigned char)c)
+#define __Pyx_long_cast(x) ((long)x)
+#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\
+ (sizeof(type) < sizeof(Py_ssize_t)) ||\
+ (sizeof(type) > sizeof(Py_ssize_t) &&\
+ likely(v < (type)PY_SSIZE_T_MAX ||\
+ v == (type)PY_SSIZE_T_MAX) &&\
+ (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\
+ v == (type)PY_SSIZE_T_MIN))) ||\
+ (sizeof(type) == sizeof(Py_ssize_t) &&\
+ (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\
+ v == (type)PY_SSIZE_T_MAX))) )
+#if defined (__cplusplus) && __cplusplus >= 201103L
+ #include <cstdlib>
+ #define __Pyx_sst_abs(value) std::abs(value)
+#elif SIZEOF_INT >= SIZEOF_SIZE_T
+ #define __Pyx_sst_abs(value) abs(value)
+#elif SIZEOF_LONG >= SIZEOF_SIZE_T
+ #define __Pyx_sst_abs(value) labs(value)
+#elif defined (_MSC_VER)
+ #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))
+#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define __Pyx_sst_abs(value) llabs(value)
+#elif defined (__GNUC__)
+ #define __Pyx_sst_abs(value) __builtin_llabs(value)
+#else
+ #define __Pyx_sst_abs(value) ((value<0) ? -value : value)
+#endif
+static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);
+static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);
+#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))
+#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)
+#define __Pyx_PyBytes_FromString PyBytes_FromString
+#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize
+static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);
+#if PY_MAJOR_VERSION < 3
+ #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString
+ #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
+#else
+ #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString
+ #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
+#endif
+#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s)
+#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s)
+#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s)
+#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s)
+#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)
+static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
+ const Py_UNICODE *u_end = u;
+ while (*u_end++) ;
+ return (size_t)(u_end - u - 1);
+}
+#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))
+#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode
+#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode
+#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)
+#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)
+static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b);
+static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);
+static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);
+#define __Pyx_PySequence_Tuple(obj)\
+ (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))
+static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);
+static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);
+#if CYTHON_ASSUME_SAFE_MACROS
+#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))
+#else
+#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)
+#endif
+#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))
+#if PY_MAJOR_VERSION >= 3
+#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))
+#else
+#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))
+#endif
+#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x))
+#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
+static int __Pyx_sys_getdefaultencoding_not_ascii;
+static int __Pyx_init_sys_getdefaultencoding_params(void) {
+ PyObject* sys;
+ PyObject* default_encoding = NULL;
+ PyObject* ascii_chars_u = NULL;
+ PyObject* ascii_chars_b = NULL;
+ const char* default_encoding_c;
+ sys = PyImport_ImportModule("sys");
+ if (!sys) goto bad;
+ default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL);
+ Py_DECREF(sys);
+ if (!default_encoding) goto bad;
+ default_encoding_c = PyBytes_AsString(default_encoding);
+ if (!default_encoding_c) goto bad;
+ if (strcmp(default_encoding_c, "ascii") == 0) {
+ __Pyx_sys_getdefaultencoding_not_ascii = 0;
+ } else {
+ char ascii_chars[128];
+ int c;
+ for (c = 0; c < 128; c++) {
+ ascii_chars[c] = c;
+ }
+ __Pyx_sys_getdefaultencoding_not_ascii = 1;
+ ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);
+ if (!ascii_chars_u) goto bad;
+ ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);
+ if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {
+ PyErr_Format(
+ PyExc_ValueError,
+ "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.",
+ default_encoding_c);
+ goto bad;
+ }
+ Py_DECREF(ascii_chars_u);
+ Py_DECREF(ascii_chars_b);
+ }
+ Py_DECREF(default_encoding);
+ return 0;
+bad:
+ Py_XDECREF(default_encoding);
+ Py_XDECREF(ascii_chars_u);
+ Py_XDECREF(ascii_chars_b);
+ return -1;
+}
+#endif
+#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3
+#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL)
+#else
+#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)
+#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
+static char* __PYX_DEFAULT_STRING_ENCODING;
+static int __Pyx_init_sys_getdefaultencoding_params(void) {
+ PyObject* sys;
+ PyObject* default_encoding = NULL;
+ char* default_encoding_c;
+ sys = PyImport_ImportModule("sys");
+ if (!sys) goto bad;
+ default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL);
+ Py_DECREF(sys);
+ if (!default_encoding) goto bad;
+ default_encoding_c = PyBytes_AsString(default_encoding);
+ if (!default_encoding_c) goto bad;
+ __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);
+ if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;
+ strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);
+ Py_DECREF(default_encoding);
+ return 0;
+bad:
+ Py_XDECREF(default_encoding);
+ return -1;
+}
+#endif
+#endif
+
+
+/* Test for GCC > 2.95 */
+#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))
+ #define likely(x) __builtin_expect(!!(x), 1)
+ #define unlikely(x) __builtin_expect(!!(x), 0)
+#else /* !__GNUC__ or GCC < 2.95 */
+ #define likely(x) (x)
+ #define unlikely(x) (x)
+#endif /* __GNUC__ */
+static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }
+
+static PyObject *__pyx_m = NULL;
+static PyObject *__pyx_d;
+static PyObject *__pyx_b;
+static PyObject *__pyx_cython_runtime = NULL;
+static PyObject *__pyx_empty_tuple;
+static PyObject *__pyx_empty_bytes;
+static PyObject *__pyx_empty_unicode;
+static int __pyx_lineno;
+static int __pyx_clineno = 0;
+static const char * __pyx_cfilenm= __FILE__;
+static const char *__pyx_filename;
+
+/* Header.proto */
+#if !defined(CYTHON_CCOMPLEX)
+ #if defined(__cplusplus)
+ #define CYTHON_CCOMPLEX 1
+ #elif defined(_Complex_I)
+ #define CYTHON_CCOMPLEX 1
+ #else
+ #define CYTHON_CCOMPLEX 0
+ #endif
+#endif
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ #include <complex>
+ #else
+ #include <complex.h>
+ #endif
+#endif
+#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__)
+ #undef _Complex_I
+ #define _Complex_I 1.0fj
+#endif
+
+
+static const char *__pyx_f[] = {
+ "_mask.pyx",
+ "stringsource",
+ "__init__.pxd",
+ "type.pxd",
+};
+/* BufferFormatStructs.proto */
+#define IS_UNSIGNED(type) (((type) -1) > 0)
+struct __Pyx_StructField_;
+#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0)
+typedef struct {
+ const char* name;
+ struct __Pyx_StructField_* fields;
+ size_t size;
+ size_t arraysize[8];
+ int ndim;
+ char typegroup;
+ char is_unsigned;
+ int flags;
+} __Pyx_TypeInfo;
+typedef struct __Pyx_StructField_ {
+ __Pyx_TypeInfo* type;
+ const char* name;
+ size_t offset;
+} __Pyx_StructField;
+typedef struct {
+ __Pyx_StructField* field;
+ size_t parent_offset;
+} __Pyx_BufFmt_StackElem;
+typedef struct {
+ __Pyx_StructField root;
+ __Pyx_BufFmt_StackElem* head;
+ size_t fmt_offset;
+ size_t new_count, enc_count;
+ size_t struct_alignment;
+ int is_complex;
+ char enc_type;
+ char new_packmode;
+ char enc_packmode;
+ char is_valid_array;
+} __Pyx_BufFmt_Context;
+
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":730
+ * # in Cython to enable them only on the right systems.
+ *
+ * ctypedef npy_int8 int8_t # <<<<<<<<<<<<<<
+ * ctypedef npy_int16 int16_t
+ * ctypedef npy_int32 int32_t
+ */
+typedef npy_int8 __pyx_t_5numpy_int8_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":731
+ *
+ * ctypedef npy_int8 int8_t
+ * ctypedef npy_int16 int16_t # <<<<<<<<<<<<<<
+ * ctypedef npy_int32 int32_t
+ * ctypedef npy_int64 int64_t
+ */
+typedef npy_int16 __pyx_t_5numpy_int16_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":732
+ * ctypedef npy_int8 int8_t
+ * ctypedef npy_int16 int16_t
+ * ctypedef npy_int32 int32_t # <<<<<<<<<<<<<<
+ * ctypedef npy_int64 int64_t
+ * #ctypedef npy_int96 int96_t
+ */
+typedef npy_int32 __pyx_t_5numpy_int32_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":733
+ * ctypedef npy_int16 int16_t
+ * ctypedef npy_int32 int32_t
+ * ctypedef npy_int64 int64_t # <<<<<<<<<<<<<<
+ * #ctypedef npy_int96 int96_t
+ * #ctypedef npy_int128 int128_t
+ */
+typedef npy_int64 __pyx_t_5numpy_int64_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":737
+ * #ctypedef npy_int128 int128_t
+ *
+ * ctypedef npy_uint8 uint8_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uint16 uint16_t
+ * ctypedef npy_uint32 uint32_t
+ */
+typedef npy_uint8 __pyx_t_5numpy_uint8_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":738
+ *
+ * ctypedef npy_uint8 uint8_t
+ * ctypedef npy_uint16 uint16_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uint32 uint32_t
+ * ctypedef npy_uint64 uint64_t
+ */
+typedef npy_uint16 __pyx_t_5numpy_uint16_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":739
+ * ctypedef npy_uint8 uint8_t
+ * ctypedef npy_uint16 uint16_t
+ * ctypedef npy_uint32 uint32_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uint64 uint64_t
+ * #ctypedef npy_uint96 uint96_t
+ */
+typedef npy_uint32 __pyx_t_5numpy_uint32_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":740
+ * ctypedef npy_uint16 uint16_t
+ * ctypedef npy_uint32 uint32_t
+ * ctypedef npy_uint64 uint64_t # <<<<<<<<<<<<<<
+ * #ctypedef npy_uint96 uint96_t
+ * #ctypedef npy_uint128 uint128_t
+ */
+typedef npy_uint64 __pyx_t_5numpy_uint64_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":744
+ * #ctypedef npy_uint128 uint128_t
+ *
+ * ctypedef npy_float32 float32_t # <<<<<<<<<<<<<<
+ * ctypedef npy_float64 float64_t
+ * #ctypedef npy_float80 float80_t
+ */
+typedef npy_float32 __pyx_t_5numpy_float32_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":745
+ *
+ * ctypedef npy_float32 float32_t
+ * ctypedef npy_float64 float64_t # <<<<<<<<<<<<<<
+ * #ctypedef npy_float80 float80_t
+ * #ctypedef npy_float128 float128_t
+ */
+typedef npy_float64 __pyx_t_5numpy_float64_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":754
+ * # The int types are mapped a bit surprising --
+ * # numpy.int corresponds to 'l' and numpy.long to 'q'
+ * ctypedef npy_long int_t # <<<<<<<<<<<<<<
+ * ctypedef npy_longlong long_t
+ * ctypedef npy_longlong longlong_t
+ */
+typedef npy_long __pyx_t_5numpy_int_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":755
+ * # numpy.int corresponds to 'l' and numpy.long to 'q'
+ * ctypedef npy_long int_t
+ * ctypedef npy_longlong long_t # <<<<<<<<<<<<<<
+ * ctypedef npy_longlong longlong_t
+ *
+ */
+typedef npy_longlong __pyx_t_5numpy_long_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":756
+ * ctypedef npy_long int_t
+ * ctypedef npy_longlong long_t
+ * ctypedef npy_longlong longlong_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_ulong uint_t
+ */
+typedef npy_longlong __pyx_t_5numpy_longlong_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":758
+ * ctypedef npy_longlong longlong_t
+ *
+ * ctypedef npy_ulong uint_t # <<<<<<<<<<<<<<
+ * ctypedef npy_ulonglong ulong_t
+ * ctypedef npy_ulonglong ulonglong_t
+ */
+typedef npy_ulong __pyx_t_5numpy_uint_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":759
+ *
+ * ctypedef npy_ulong uint_t
+ * ctypedef npy_ulonglong ulong_t # <<<<<<<<<<<<<<
+ * ctypedef npy_ulonglong ulonglong_t
+ *
+ */
+typedef npy_ulonglong __pyx_t_5numpy_ulong_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":760
+ * ctypedef npy_ulong uint_t
+ * ctypedef npy_ulonglong ulong_t
+ * ctypedef npy_ulonglong ulonglong_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_intp intp_t
+ */
+typedef npy_ulonglong __pyx_t_5numpy_ulonglong_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":762
+ * ctypedef npy_ulonglong ulonglong_t
+ *
+ * ctypedef npy_intp intp_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uintp uintp_t
+ *
+ */
+typedef npy_intp __pyx_t_5numpy_intp_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":763
+ *
+ * ctypedef npy_intp intp_t
+ * ctypedef npy_uintp uintp_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_double float_t
+ */
+typedef npy_uintp __pyx_t_5numpy_uintp_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":765
+ * ctypedef npy_uintp uintp_t
+ *
+ * ctypedef npy_double float_t # <<<<<<<<<<<<<<
+ * ctypedef npy_double double_t
+ * ctypedef npy_longdouble longdouble_t
+ */
+typedef npy_double __pyx_t_5numpy_float_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":766
+ *
+ * ctypedef npy_double float_t
+ * ctypedef npy_double double_t # <<<<<<<<<<<<<<
+ * ctypedef npy_longdouble longdouble_t
+ *
+ */
+typedef npy_double __pyx_t_5numpy_double_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":767
+ * ctypedef npy_double float_t
+ * ctypedef npy_double double_t
+ * ctypedef npy_longdouble longdouble_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_cfloat cfloat_t
+ */
+typedef npy_longdouble __pyx_t_5numpy_longdouble_t;
+/* Declarations.proto */
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ typedef ::std::complex< float > __pyx_t_float_complex;
+ #else
+ typedef float _Complex __pyx_t_float_complex;
+ #endif
+#else
+ typedef struct { float real, imag; } __pyx_t_float_complex;
+#endif
+static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float);
+
+/* Declarations.proto */
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ typedef ::std::complex< double > __pyx_t_double_complex;
+ #else
+ typedef double _Complex __pyx_t_double_complex;
+ #endif
+#else
+ typedef struct { double real, imag; } __pyx_t_double_complex;
+#endif
+static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double);
+
+
+/*--- Type declarations ---*/
+struct __pyx_obj_5_mask_RLEs;
+struct __pyx_obj_5_mask_Masks;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":769
+ * ctypedef npy_longdouble longdouble_t
+ *
+ * ctypedef npy_cfloat cfloat_t # <<<<<<<<<<<<<<
+ * ctypedef npy_cdouble cdouble_t
+ * ctypedef npy_clongdouble clongdouble_t
+ */
+typedef npy_cfloat __pyx_t_5numpy_cfloat_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":770
+ *
+ * ctypedef npy_cfloat cfloat_t
+ * ctypedef npy_cdouble cdouble_t # <<<<<<<<<<<<<<
+ * ctypedef npy_clongdouble clongdouble_t
+ *
+ */
+typedef npy_cdouble __pyx_t_5numpy_cdouble_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":771
+ * ctypedef npy_cfloat cfloat_t
+ * ctypedef npy_cdouble cdouble_t
+ * ctypedef npy_clongdouble clongdouble_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_cdouble complex_t
+ */
+typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t;
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":773
+ * ctypedef npy_clongdouble clongdouble_t
+ *
+ * ctypedef npy_cdouble complex_t # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew1(a):
+ */
+typedef npy_cdouble __pyx_t_5numpy_complex_t;
+
+/* "_mask.pyx":56
+ * # python class to wrap RLE array in C
+ * # the class handles the memory allocation and deallocation
+ * cdef class RLEs: # <<<<<<<<<<<<<<
+ * cdef RLE *_R
+ * cdef siz _n
+ */
+struct __pyx_obj_5_mask_RLEs {
+ PyObject_HEAD
+ RLE *_R;
+ siz _n;
+};
+
+
+/* "_mask.pyx":77
+ * # python class to wrap Mask array in C
+ * # the class handles the memory allocation and deallocation
+ * cdef class Masks: # <<<<<<<<<<<<<<
+ * cdef byte *_mask
+ * cdef siz _h
+ */
+struct __pyx_obj_5_mask_Masks {
+ PyObject_HEAD
+ byte *_mask;
+ siz _h;
+ siz _w;
+ siz _n;
+};
+
+
+/* --- Runtime support code (head) --- */
+/* Refnanny.proto */
+#ifndef CYTHON_REFNANNY
+ #define CYTHON_REFNANNY 0
+#endif
+#if CYTHON_REFNANNY
+ typedef struct {
+ void (*INCREF)(void*, PyObject*, int);
+ void (*DECREF)(void*, PyObject*, int);
+ void (*GOTREF)(void*, PyObject*, int);
+ void (*GIVEREF)(void*, PyObject*, int);
+ void* (*SetupContext)(const char*, int, const char*);
+ void (*FinishContext)(void**);
+ } __Pyx_RefNannyAPIStruct;
+ static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;
+ static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);
+ #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;
+#ifdef WITH_THREAD
+ #define __Pyx_RefNannySetupContext(name, acquire_gil)\
+ if (acquire_gil) {\
+ PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\
+ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
+ PyGILState_Release(__pyx_gilstate_save);\
+ } else {\
+ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
+ }
+#else
+ #define __Pyx_RefNannySetupContext(name, acquire_gil)\
+ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)
+#endif
+ #define __Pyx_RefNannyFinishContext()\
+ __Pyx_RefNanny->FinishContext(&__pyx_refnanny)
+ #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)
+ #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)
+ #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)
+ #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)
+#else
+ #define __Pyx_RefNannyDeclarations
+ #define __Pyx_RefNannySetupContext(name, acquire_gil)
+ #define __Pyx_RefNannyFinishContext()
+ #define __Pyx_INCREF(r) Py_INCREF(r)
+ #define __Pyx_DECREF(r) Py_DECREF(r)
+ #define __Pyx_GOTREF(r)
+ #define __Pyx_GIVEREF(r)
+ #define __Pyx_XINCREF(r) Py_XINCREF(r)
+ #define __Pyx_XDECREF(r) Py_XDECREF(r)
+ #define __Pyx_XGOTREF(r)
+ #define __Pyx_XGIVEREF(r)
+#endif
+#define __Pyx_XDECREF_SET(r, v) do {\
+ PyObject *tmp = (PyObject *) r;\
+ r = v; __Pyx_XDECREF(tmp);\
+ } while (0)
+#define __Pyx_DECREF_SET(r, v) do {\
+ PyObject *tmp = (PyObject *) r;\
+ r = v; __Pyx_DECREF(tmp);\
+ } while (0)
+#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)
+#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)
+
+/* PyObjectGetAttrStr.proto */
+#if CYTHON_USE_TYPE_SLOTS
+static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name);
+#else
+#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)
+#endif
+
+/* GetBuiltinName.proto */
+static PyObject *__Pyx_GetBuiltinName(PyObject *name);
+
+/* RaiseDoubleKeywords.proto */
+static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);
+
+/* ParseKeywords.proto */
+static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\
+ PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\
+ const char* function_name);
+
+/* RaiseArgTupleInvalid.proto */
+static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,
+ Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);
+
+/* IncludeStringH.proto */
+#include <string.h>
+
+/* BytesEquals.proto */
+static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals);
+
+/* UnicodeEquals.proto */
+static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals);
+
+/* StrEquals.proto */
+#if PY_MAJOR_VERSION >= 3
+#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals
+#else
+#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals
+#endif
+
+/* PyCFunctionFastCall.proto */
+#if CYTHON_FAST_PYCCALL
+static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);
+#else
+#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL)
+#endif
+
+/* PyFunctionFastCall.proto */
+#if CYTHON_FAST_PYCALL
+#define __Pyx_PyFunction_FastCall(func, args, nargs)\
+ __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)
+#if 1 || PY_VERSION_HEX < 0x030600B1
+static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs);
+#else
+#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)
+#endif
+#endif
+
+/* PyObjectCall.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);
+#else
+#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)
+#endif
+
+/* PyObjectCallMethO.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);
+#endif
+
+/* PyObjectCallOneArg.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);
+
+/* PyThreadStateGet.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate;
+#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current;
+#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type
+#else
+#define __Pyx_PyThreadState_declare
+#define __Pyx_PyThreadState_assign
+#define __Pyx_PyErr_Occurred() PyErr_Occurred()
+#endif
+
+/* PyErrFetchRestore.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)
+#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)
+#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)
+#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)
+#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)
+static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
+static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
+#if CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))
+#else
+#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
+#endif
+#else
+#define __Pyx_PyErr_Clear() PyErr_Clear()
+#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
+#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb)
+#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb)
+#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb)
+#endif
+
+/* RaiseException.proto */
+static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);
+
+/* ExtTypeTest.proto */
+static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);
+
+/* ArgTypeTest.proto */
+#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\
+ ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 1 :\
+ __Pyx__ArgTypeTest(obj, type, name, exact))
+static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact);
+
+/* ListAppend.proto */
+#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS
+static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) {
+ PyListObject* L = (PyListObject*) list;
+ Py_ssize_t len = Py_SIZE(list);
+ if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) {
+ Py_INCREF(x);
+ PyList_SET_ITEM(list, len, x);
+ Py_SIZE(list) = len+1;
+ return 0;
+ }
+ return PyList_Append(list, x);
+}
+#else
+#define __Pyx_PyList_Append(L,x) PyList_Append(L,x)
+#endif
+
+/* PyIntBinop.proto */
+#if !CYTHON_COMPILING_IN_PYPY
+static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace);
+#else
+#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace)\
+ (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2))
+#endif
+
+/* PyIntBinop.proto */
+#if !CYTHON_COMPILING_IN_PYPY
+static PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, long intval, int inplace);
+#else
+#define __Pyx_PyInt_EqObjC(op1, op2, intval, inplace)\
+ PyObject_RichCompare(op1, op2, Py_EQ)
+#endif
+
+/* GetModuleGlobalName.proto */
+static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name);
+
+/* DictGetItem.proto */
+#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY
+static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key);
+#define __Pyx_PyObject_Dict_GetItem(obj, name)\
+ (likely(PyDict_CheckExact(obj)) ?\
+ __Pyx_PyDict_GetItem(obj, name) : PyObject_GetItem(obj, name))
+#else
+#define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key)
+#define __Pyx_PyObject_Dict_GetItem(obj, name) PyObject_GetItem(obj, name)
+#endif
+
+/* GetItemInt.proto */
+#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
+ (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
+ __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\
+ (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\
+ __Pyx_GetItemInt_Generic(o, to_py_func(i))))
+#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
+ (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
+ __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
+ (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL))
+static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,
+ int wraparound, int boundscheck);
+#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\
+ (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\
+ __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\
+ (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL))
+static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,
+ int wraparound, int boundscheck);
+static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);
+static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i,
+ int is_list, int wraparound, int boundscheck);
+
+/* IsLittleEndian.proto */
+static CYTHON_INLINE int __Pyx_Is_Little_Endian(void);
+
+/* BufferFormatCheck.proto */
+static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);
+static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
+ __Pyx_BufFmt_StackElem* stack,
+ __Pyx_TypeInfo* type);
+
+/* BufferGetAndValidate.proto */
+#define __Pyx_GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack)\
+ ((obj == Py_None || obj == NULL) ?\
+ (__Pyx_ZeroBuffer(buf), 0) :\
+ __Pyx__GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack))
+static int __Pyx__GetBufferAndValidate(Py_buffer* buf, PyObject* obj,
+ __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack);
+static void __Pyx_ZeroBuffer(Py_buffer* buf);
+static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info);
+static Py_ssize_t __Pyx_minusones[] = { -1, -1, -1, -1, -1, -1, -1, -1 };
+static Py_ssize_t __Pyx_zeros[] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+
+/* ListCompAppend.proto */
+#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS
+static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) {
+ PyListObject* L = (PyListObject*) list;
+ Py_ssize_t len = Py_SIZE(list);
+ if (likely(L->allocated > len)) {
+ Py_INCREF(x);
+ PyList_SET_ITEM(list, len, x);
+ Py_SIZE(list) = len+1;
+ return 0;
+ }
+ return PyList_Append(list, x);
+}
+#else
+#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x)
+#endif
+
+/* FetchCommonType.proto */
+static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type);
+
+/* CythonFunction.proto */
+#define __Pyx_CyFunction_USED 1
+#define __Pyx_CYFUNCTION_STATICMETHOD 0x01
+#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02
+#define __Pyx_CYFUNCTION_CCLASS 0x04
+#define __Pyx_CyFunction_GetClosure(f)\
+ (((__pyx_CyFunctionObject *) (f))->func_closure)
+#define __Pyx_CyFunction_GetClassObj(f)\
+ (((__pyx_CyFunctionObject *) (f))->func_classobj)
+#define __Pyx_CyFunction_Defaults(type, f)\
+ ((type *)(((__pyx_CyFunctionObject *) (f))->defaults))
+#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\
+ ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g)
+typedef struct {
+ PyCFunctionObject func;
+#if PY_VERSION_HEX < 0x030500A0
+ PyObject *func_weakreflist;
+#endif
+ PyObject *func_dict;
+ PyObject *func_name;
+ PyObject *func_qualname;
+ PyObject *func_doc;
+ PyObject *func_globals;
+ PyObject *func_code;
+ PyObject *func_closure;
+ PyObject *func_classobj;
+ void *defaults;
+ int defaults_pyobjects;
+ int flags;
+ PyObject *defaults_tuple;
+ PyObject *defaults_kwdict;
+ PyObject *(*defaults_getter)(PyObject *);
+ PyObject *func_annotations;
+} __pyx_CyFunctionObject;
+static PyTypeObject *__pyx_CyFunctionType = 0;
+#define __Pyx_CyFunction_NewEx(ml, flags, qualname, self, module, globals, code)\
+ __Pyx_CyFunction_New(__pyx_CyFunctionType, ml, flags, qualname, self, module, globals, code)
+static PyObject *__Pyx_CyFunction_New(PyTypeObject *, PyMethodDef *ml,
+ int flags, PyObject* qualname,
+ PyObject *self,
+ PyObject *module, PyObject *globals,
+ PyObject* code);
+static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m,
+ size_t size,
+ int pyobjects);
+static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m,
+ PyObject *tuple);
+static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m,
+ PyObject *dict);
+static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m,
+ PyObject *dict);
+static int __pyx_CyFunction_init(void);
+
+/* BufferFallbackError.proto */
+static void __Pyx_RaiseBufferFallbackError(void);
+
+/* None.proto */
+static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t);
+
+/* BufferIndexError.proto */
+static void __Pyx_RaiseBufferIndexError(int axis);
+
+#define __Pyx_BufPtrStrided1d(type, buf, i0, s0) (type)((char*)buf + i0 * s0)
+/* PySequenceContains.proto */
+static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) {
+ int result = PySequence_Contains(seq, item);
+ return unlikely(result < 0) ? result : (result == (eq == Py_EQ));
+}
+
+/* RaiseTooManyValuesToUnpack.proto */
+static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);
+
+/* RaiseNeedMoreValuesToUnpack.proto */
+static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);
+
+/* RaiseNoneIterError.proto */
+static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);
+
+/* SaveResetException.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)
+static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
+#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)
+static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
+#else
+#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb)
+#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb)
+#endif
+
+/* PyErrExceptionMatches.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)
+static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);
+#else
+#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err)
+#endif
+
+/* GetException.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb)
+static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
+#else
+static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);
+#endif
+
+/* PyObject_GenericGetAttrNoDict.proto */
+#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
+static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name);
+#else
+#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr
+#endif
+
+/* PyObject_GenericGetAttr.proto */
+#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
+static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name);
+#else
+#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr
+#endif
+
+/* SetupReduce.proto */
+static int __Pyx_setup_reduce(PyObject* type_obj);
+
+/* Import.proto */
+static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);
+
+/* CLineInTraceback.proto */
+#ifdef CYTHON_CLINE_IN_TRACEBACK
+#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0)
+#else
+static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);
+#endif
+
+/* CodeObjectCache.proto */
+typedef struct {
+ PyCodeObject* code_object;
+ int code_line;
+} __Pyx_CodeObjectCacheEntry;
+struct __Pyx_CodeObjectCache {
+ int count;
+ int max_count;
+ __Pyx_CodeObjectCacheEntry* entries;
+};
+static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};
+static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);
+static PyCodeObject *__pyx_find_code_object(int code_line);
+static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);
+
+/* AddTraceback.proto */
+static void __Pyx_AddTraceback(const char *funcname, int c_line,
+ int py_line, const char *filename);
+
+/* BufferStructDeclare.proto */
+typedef struct {
+ Py_ssize_t shape, strides, suboffsets;
+} __Pyx_Buf_DimInfo;
+typedef struct {
+ size_t refcount;
+ Py_buffer pybuffer;
+} __Pyx_Buffer;
+typedef struct {
+ __Pyx_Buffer *rcbuffer;
+ char *data;
+ __Pyx_Buf_DimInfo diminfo[8];
+} __Pyx_LocalBuf_ND;
+
+#if PY_MAJOR_VERSION < 3
+ static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);
+ static void __Pyx_ReleaseBuffer(Py_buffer *view);
+#else
+ #define __Pyx_GetBuffer PyObject_GetBuffer
+ #define __Pyx_ReleaseBuffer PyBuffer_Release
+#endif
+
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_siz(siz value);
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_Py_intptr_t(Py_intptr_t value);
+
+/* RealImag.proto */
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ #define __Pyx_CREAL(z) ((z).real())
+ #define __Pyx_CIMAG(z) ((z).imag())
+ #else
+ #define __Pyx_CREAL(z) (__real__(z))
+ #define __Pyx_CIMAG(z) (__imag__(z))
+ #endif
+#else
+ #define __Pyx_CREAL(z) ((z).real)
+ #define __Pyx_CIMAG(z) ((z).imag)
+#endif
+#if defined(__cplusplus) && CYTHON_CCOMPLEX\
+ && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103)
+ #define __Pyx_SET_CREAL(z,x) ((z).real(x))
+ #define __Pyx_SET_CIMAG(z,y) ((z).imag(y))
+#else
+ #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x)
+ #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y)
+#endif
+
+/* Arithmetic.proto */
+#if CYTHON_CCOMPLEX
+ #define __Pyx_c_eq_float(a, b) ((a)==(b))
+ #define __Pyx_c_sum_float(a, b) ((a)+(b))
+ #define __Pyx_c_diff_float(a, b) ((a)-(b))
+ #define __Pyx_c_prod_float(a, b) ((a)*(b))
+ #define __Pyx_c_quot_float(a, b) ((a)/(b))
+ #define __Pyx_c_neg_float(a) (-(a))
+ #ifdef __cplusplus
+ #define __Pyx_c_is_zero_float(z) ((z)==(float)0)
+ #define __Pyx_c_conj_float(z) (::std::conj(z))
+ #if 1
+ #define __Pyx_c_abs_float(z) (::std::abs(z))
+ #define __Pyx_c_pow_float(a, b) (::std::pow(a, b))
+ #endif
+ #else
+ #define __Pyx_c_is_zero_float(z) ((z)==0)
+ #define __Pyx_c_conj_float(z) (conjf(z))
+ #if 1
+ #define __Pyx_c_abs_float(z) (cabsf(z))
+ #define __Pyx_c_pow_float(a, b) (cpowf(a, b))
+ #endif
+ #endif
+#else
+ static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex);
+ static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex);
+ #if 1
+ static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ #endif
+#endif
+
+/* Arithmetic.proto */
+#if CYTHON_CCOMPLEX
+ #define __Pyx_c_eq_double(a, b) ((a)==(b))
+ #define __Pyx_c_sum_double(a, b) ((a)+(b))
+ #define __Pyx_c_diff_double(a, b) ((a)-(b))
+ #define __Pyx_c_prod_double(a, b) ((a)*(b))
+ #define __Pyx_c_quot_double(a, b) ((a)/(b))
+ #define __Pyx_c_neg_double(a) (-(a))
+ #ifdef __cplusplus
+ #define __Pyx_c_is_zero_double(z) ((z)==(double)0)
+ #define __Pyx_c_conj_double(z) (::std::conj(z))
+ #if 1
+ #define __Pyx_c_abs_double(z) (::std::abs(z))
+ #define __Pyx_c_pow_double(a, b) (::std::pow(a, b))
+ #endif
+ #else
+ #define __Pyx_c_is_zero_double(z) ((z)==0)
+ #define __Pyx_c_conj_double(z) (conj(z))
+ #if 1
+ #define __Pyx_c_abs_double(z) (cabs(z))
+ #define __Pyx_c_pow_double(a, b) (cpow(a, b))
+ #endif
+ #endif
+#else
+ static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex);
+ static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex);
+ #if 1
+ static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ #endif
+#endif
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE siz __Pyx_PyInt_As_siz(PyObject *);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);
+
+/* FastTypeChecks.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)
+static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);
+#else
+#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
+#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)
+#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))
+#endif
+#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
+
+/* CheckBinaryVersion.proto */
+static int __Pyx_check_binary_version(void);
+
+/* PyIdentifierFromString.proto */
+#if !defined(__Pyx_PyIdentifier_FromString)
+#if PY_MAJOR_VERSION < 3
+ #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s)
+#else
+ #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s)
+#endif
+#endif
+
+/* ModuleImport.proto */
+static PyObject *__Pyx_ImportModule(const char *name);
+
+/* TypeImport.proto */
+static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict);
+
+/* InitStrings.proto */
+static int __Pyx_InitStrings(__Pyx_StringTabEntry *t);
+
+
+/* Module declarations from 'cpython.buffer' */
+
+/* Module declarations from 'libc.string' */
+
+/* Module declarations from 'libc.stdio' */
+
+/* Module declarations from '__builtin__' */
+
+/* Module declarations from 'cpython.type' */
+static PyTypeObject *__pyx_ptype_7cpython_4type_type = 0;
+
+/* Module declarations from 'cpython' */
+
+/* Module declarations from 'cpython.object' */
+
+/* Module declarations from 'cpython.ref' */
+
+/* Module declarations from 'cpython.mem' */
+
+/* Module declarations from 'numpy' */
+
+/* Module declarations from 'numpy' */
+static PyTypeObject *__pyx_ptype_5numpy_dtype = 0;
+static PyTypeObject *__pyx_ptype_5numpy_flatiter = 0;
+static PyTypeObject *__pyx_ptype_5numpy_broadcast = 0;
+static PyTypeObject *__pyx_ptype_5numpy_ndarray = 0;
+static PyTypeObject *__pyx_ptype_5numpy_ufunc = 0;
+static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/
+static CYTHON_INLINE int __pyx_f_5numpy_import_array(void); /*proto*/
+
+/* Module declarations from 'libc.stdlib' */
+
+/* Module declarations from '_mask' */
+static PyTypeObject *__pyx_ptype_5_mask_RLEs = 0;
+static PyTypeObject *__pyx_ptype_5_mask_Masks = 0;
+static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t = { "uint8_t", NULL, sizeof(__pyx_t_5numpy_uint8_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_uint8_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_uint8_t), 0 };
+static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_double_t = { "double_t", NULL, sizeof(__pyx_t_5numpy_double_t), { 0 }, 0, 'R', 0, 0 };
+static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_uint32_t = { "uint32_t", NULL, sizeof(__pyx_t_5numpy_uint32_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_uint32_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_uint32_t), 0 };
+#define __Pyx_MODULE_NAME "_mask"
+extern int __pyx_module_is_main__mask;
+int __pyx_module_is_main__mask = 0;
+
+/* Implementation of '_mask' */
+static PyObject *__pyx_builtin_range;
+static PyObject *__pyx_builtin_AttributeError;
+static PyObject *__pyx_builtin_TypeError;
+static PyObject *__pyx_builtin_enumerate;
+static PyObject *__pyx_builtin_ValueError;
+static PyObject *__pyx_builtin_RuntimeError;
+static PyObject *__pyx_builtin_ImportError;
+static const char __pyx_k_F[] = "F";
+static const char __pyx_k_N[] = "N";
+static const char __pyx_k_R[] = "R";
+static const char __pyx_k_a[] = "_a";
+static const char __pyx_k_h[] = "h";
+static const char __pyx_k_i[] = "i";
+static const char __pyx_k_j[] = "j";
+static const char __pyx_k_m[] = "m";
+static const char __pyx_k_n[] = "n";
+static const char __pyx_k_p[] = "p";
+static const char __pyx_k_w[] = "w";
+static const char __pyx_k_Rs[] = "Rs";
+static const char __pyx_k_bb[] = "bb";
+static const char __pyx_k_dt[] = "dt";
+static const char __pyx_k_gt[] = "gt";
+static const char __pyx_k_np[] = "np";
+static const char __pyx_k_a_2[] = "a";
+static const char __pyx_k_all[] = "all";
+static const char __pyx_k_iou[] = "_iou";
+static const char __pyx_k_len[] = "_len";
+static const char __pyx_k_obj[] = "obj";
+static const char __pyx_k_sys[] = "sys";
+static const char __pyx_k_area[] = "area";
+static const char __pyx_k_bb_2[] = "_bb";
+static const char __pyx_k_cnts[] = "cnts";
+static const char __pyx_k_data[] = "data";
+static const char __pyx_k_main[] = "__main__";
+static const char __pyx_k_mask[] = "_mask";
+static const char __pyx_k_name[] = "__name__";
+static const char __pyx_k_objs[] = "objs";
+static const char __pyx_k_poly[] = "poly";
+static const char __pyx_k_size[] = "size";
+static const char __pyx_k_test[] = "__test__";
+static const char __pyx_k_utf8[] = "utf8";
+static const char __pyx_k_array[] = "array";
+static const char __pyx_k_bbIou[] = "_bbIou";
+static const char __pyx_k_dtype[] = "dtype";
+static const char __pyx_k_iou_2[] = "iou";
+static const char __pyx_k_isbox[] = "isbox";
+static const char __pyx_k_isrle[] = "isrle";
+static const char __pyx_k_masks[] = "masks";
+static const char __pyx_k_merge[] = "merge";
+static const char __pyx_k_numpy[] = "numpy";
+static const char __pyx_k_order[] = "order";
+static const char __pyx_k_pyobj[] = "pyobj";
+static const char __pyx_k_range[] = "range";
+static const char __pyx_k_shape[] = "shape";
+static const char __pyx_k_uint8[] = "uint8";
+static const char __pyx_k_zeros[] = "zeros";
+static const char __pyx_k_astype[] = "astype";
+static const char __pyx_k_author[] = "__author__";
+static const char __pyx_k_counts[] = "counts";
+static const char __pyx_k_decode[] = "decode";
+static const char __pyx_k_double[] = "double";
+static const char __pyx_k_encode[] = "encode";
+static const char __pyx_k_frBbox[] = "frBbox";
+static const char __pyx_k_frPoly[] = "frPoly";
+static const char __pyx_k_import[] = "__import__";
+static const char __pyx_k_iouFun[] = "_iouFun";
+static const char __pyx_k_mask_2[] = "mask";
+static const char __pyx_k_reduce[] = "__reduce__";
+static const char __pyx_k_rleIou[] = "_rleIou";
+static const char __pyx_k_toBbox[] = "toBbox";
+static const char __pyx_k_ucRles[] = "ucRles";
+static const char __pyx_k_uint32[] = "uint32";
+static const char __pyx_k_iscrowd[] = "iscrowd";
+static const char __pyx_k_np_poly[] = "np_poly";
+static const char __pyx_k_preproc[] = "_preproc";
+static const char __pyx_k_reshape[] = "reshape";
+static const char __pyx_k_rleObjs[] = "rleObjs";
+static const char __pyx_k_tsungyi[] = "tsungyi";
+static const char __pyx_k_c_string[] = "c_string";
+static const char __pyx_k_frString[] = "_frString";
+static const char __pyx_k_getstate[] = "__getstate__";
+static const char __pyx_k_mask_pyx[] = "_mask.pyx";
+static const char __pyx_k_setstate[] = "__setstate__";
+static const char __pyx_k_toString[] = "_toString";
+static const char __pyx_k_TypeError[] = "TypeError";
+static const char __pyx_k_enumerate[] = "enumerate";
+static const char __pyx_k_intersect[] = "intersect";
+static const char __pyx_k_py_string[] = "py_string";
+static const char __pyx_k_pyiscrowd[] = "pyiscrowd";
+static const char __pyx_k_reduce_ex[] = "__reduce_ex__";
+static const char __pyx_k_ValueError[] = "ValueError";
+static const char __pyx_k_ImportError[] = "ImportError";
+static const char __pyx_k_frPyObjects[] = "frPyObjects";
+static const char __pyx_k_RuntimeError[] = "RuntimeError";
+static const char __pyx_k_version_info[] = "version_info";
+static const char __pyx_k_reduce_cython[] = "__reduce_cython__";
+static const char __pyx_k_AttributeError[] = "AttributeError";
+static const char __pyx_k_PYTHON_VERSION[] = "PYTHON_VERSION";
+static const char __pyx_k_iou_locals__len[] = "iou.._len";
+static const char __pyx_k_setstate_cython[] = "__setstate_cython__";
+static const char __pyx_k_frUncompressedRLE[] = "frUncompressedRLE";
+static const char __pyx_k_iou_locals__bbIou[] = "iou.._bbIou";
+static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback";
+static const char __pyx_k_iou_locals__rleIou[] = "iou.._rleIou";
+static const char __pyx_k_iou_locals__preproc[] = "iou.._preproc";
+static const char __pyx_k_input_data_type_not_allowed[] = "input data type not allowed.";
+static const char __pyx_k_input_type_is_not_supported[] = "input type is not supported.";
+static const char __pyx_k_ndarray_is_not_C_contiguous[] = "ndarray is not C contiguous";
+static const char __pyx_k_Python_version_must_be_2_or_3[] = "Python version must be 2 or 3";
+static const char __pyx_k_numpy_core_multiarray_failed_to[] = "numpy.core.multiarray failed to import";
+static const char __pyx_k_numpy_ndarray_input_is_only_for[] = "numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension";
+static const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = "unknown dtype code in numpy.pxd (%d)";
+static const char __pyx_k_unrecognized_type_The_following[] = "unrecognized type. The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.";
+static const char __pyx_k_Format_string_allocated_too_shor[] = "Format string allocated too short, see comment in numpy.pxd";
+static const char __pyx_k_Non_native_byte_order_not_suppor[] = "Non-native byte order not supported";
+static const char __pyx_k_The_dt_and_gt_should_have_the_sa[] = "The dt and gt should have the same data type, either RLEs, list or np.ndarray";
+static const char __pyx_k_list_input_can_be_bounding_box_N[] = "list input can be bounding box (Nx4) or RLEs ([RLE])";
+static const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = "ndarray is not Fortran contiguous";
+static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__";
+static const char __pyx_k_numpy_core_umath_failed_to_impor[] = "numpy.core.umath failed to import";
+static const char __pyx_k_Format_string_allocated_too_shor_2[] = "Format string allocated too short.";
+static PyObject *__pyx_n_s_AttributeError;
+static PyObject *__pyx_n_s_F;
+static PyObject *__pyx_kp_u_Format_string_allocated_too_shor;
+static PyObject *__pyx_kp_u_Format_string_allocated_too_shor_2;
+static PyObject *__pyx_n_s_ImportError;
+static PyObject *__pyx_n_s_N;
+static PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor;
+static PyObject *__pyx_n_s_PYTHON_VERSION;
+static PyObject *__pyx_kp_s_Python_version_must_be_2_or_3;
+static PyObject *__pyx_n_s_R;
+static PyObject *__pyx_n_s_Rs;
+static PyObject *__pyx_n_s_RuntimeError;
+static PyObject *__pyx_kp_s_The_dt_and_gt_should_have_the_sa;
+static PyObject *__pyx_n_s_TypeError;
+static PyObject *__pyx_n_s_ValueError;
+static PyObject *__pyx_n_s_a;
+static PyObject *__pyx_n_s_a_2;
+static PyObject *__pyx_n_s_all;
+static PyObject *__pyx_n_s_area;
+static PyObject *__pyx_n_s_array;
+static PyObject *__pyx_n_s_astype;
+static PyObject *__pyx_n_s_author;
+static PyObject *__pyx_n_s_bb;
+static PyObject *__pyx_n_s_bbIou;
+static PyObject *__pyx_n_s_bb_2;
+static PyObject *__pyx_n_s_c_string;
+static PyObject *__pyx_n_s_cline_in_traceback;
+static PyObject *__pyx_n_s_cnts;
+static PyObject *__pyx_n_s_counts;
+static PyObject *__pyx_n_s_data;
+static PyObject *__pyx_n_s_decode;
+static PyObject *__pyx_n_s_double;
+static PyObject *__pyx_n_s_dt;
+static PyObject *__pyx_n_s_dtype;
+static PyObject *__pyx_n_s_encode;
+static PyObject *__pyx_n_s_enumerate;
+static PyObject *__pyx_n_s_frBbox;
+static PyObject *__pyx_n_s_frPoly;
+static PyObject *__pyx_n_s_frPyObjects;
+static PyObject *__pyx_n_s_frString;
+static PyObject *__pyx_n_s_frUncompressedRLE;
+static PyObject *__pyx_n_s_getstate;
+static PyObject *__pyx_n_s_gt;
+static PyObject *__pyx_n_s_h;
+static PyObject *__pyx_n_s_i;
+static PyObject *__pyx_n_s_import;
+static PyObject *__pyx_kp_s_input_data_type_not_allowed;
+static PyObject *__pyx_kp_s_input_type_is_not_supported;
+static PyObject *__pyx_n_s_intersect;
+static PyObject *__pyx_n_s_iou;
+static PyObject *__pyx_n_s_iouFun;
+static PyObject *__pyx_n_s_iou_2;
+static PyObject *__pyx_n_s_iou_locals__bbIou;
+static PyObject *__pyx_n_s_iou_locals__len;
+static PyObject *__pyx_n_s_iou_locals__preproc;
+static PyObject *__pyx_n_s_iou_locals__rleIou;
+static PyObject *__pyx_n_s_isbox;
+static PyObject *__pyx_n_s_iscrowd;
+static PyObject *__pyx_n_s_isrle;
+static PyObject *__pyx_n_s_j;
+static PyObject *__pyx_n_s_len;
+static PyObject *__pyx_kp_s_list_input_can_be_bounding_box_N;
+static PyObject *__pyx_n_s_m;
+static PyObject *__pyx_n_s_main;
+static PyObject *__pyx_n_s_mask;
+static PyObject *__pyx_n_s_mask_2;
+static PyObject *__pyx_kp_s_mask_pyx;
+static PyObject *__pyx_n_s_masks;
+static PyObject *__pyx_n_s_merge;
+static PyObject *__pyx_n_s_n;
+static PyObject *__pyx_n_s_name;
+static PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous;
+static PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou;
+static PyObject *__pyx_kp_s_no_default___reduce___due_to_non;
+static PyObject *__pyx_n_s_np;
+static PyObject *__pyx_n_s_np_poly;
+static PyObject *__pyx_n_s_numpy;
+static PyObject *__pyx_kp_s_numpy_core_multiarray_failed_to;
+static PyObject *__pyx_kp_s_numpy_core_umath_failed_to_impor;
+static PyObject *__pyx_kp_s_numpy_ndarray_input_is_only_for;
+static PyObject *__pyx_n_s_obj;
+static PyObject *__pyx_n_s_objs;
+static PyObject *__pyx_n_s_order;
+static PyObject *__pyx_n_s_p;
+static PyObject *__pyx_n_s_poly;
+static PyObject *__pyx_n_s_preproc;
+static PyObject *__pyx_n_s_py_string;
+static PyObject *__pyx_n_s_pyiscrowd;
+static PyObject *__pyx_n_s_pyobj;
+static PyObject *__pyx_n_s_range;
+static PyObject *__pyx_n_s_reduce;
+static PyObject *__pyx_n_s_reduce_cython;
+static PyObject *__pyx_n_s_reduce_ex;
+static PyObject *__pyx_n_s_reshape;
+static PyObject *__pyx_n_s_rleIou;
+static PyObject *__pyx_n_s_rleObjs;
+static PyObject *__pyx_n_s_setstate;
+static PyObject *__pyx_n_s_setstate_cython;
+static PyObject *__pyx_n_s_shape;
+static PyObject *__pyx_n_s_size;
+static PyObject *__pyx_n_s_sys;
+static PyObject *__pyx_n_s_test;
+static PyObject *__pyx_n_s_toBbox;
+static PyObject *__pyx_n_s_toString;
+static PyObject *__pyx_n_s_tsungyi;
+static PyObject *__pyx_n_s_ucRles;
+static PyObject *__pyx_n_s_uint32;
+static PyObject *__pyx_n_s_uint8;
+static PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd;
+static PyObject *__pyx_kp_s_unrecognized_type_The_following;
+static PyObject *__pyx_n_s_utf8;
+static PyObject *__pyx_n_s_version_info;
+static PyObject *__pyx_n_s_w;
+static PyObject *__pyx_n_s_zeros;
+static int __pyx_pf_5_mask_4RLEs___cinit__(struct __pyx_obj_5_mask_RLEs *__pyx_v_self, siz __pyx_v_n); /* proto */
+static void __pyx_pf_5_mask_4RLEs_2__dealloc__(struct __pyx_obj_5_mask_RLEs *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_5_mask_4RLEs_4__getattr__(struct __pyx_obj_5_mask_RLEs *__pyx_v_self, PyObject *__pyx_v_key); /* proto */
+static PyObject *__pyx_pf_5_mask_4RLEs_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_5_mask_RLEs *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_5_mask_4RLEs_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_5_mask_RLEs *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
+static int __pyx_pf_5_mask_5Masks___cinit__(struct __pyx_obj_5_mask_Masks *__pyx_v_self, PyObject *__pyx_v_h, PyObject *__pyx_v_w, PyObject *__pyx_v_n); /* proto */
+static PyObject *__pyx_pf_5_mask_5Masks_2__array__(struct __pyx_obj_5_mask_Masks *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_5_mask_5Masks_4__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_5_mask_Masks *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_5_mask_5Masks_6__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_5_mask_Masks *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
+static PyObject *__pyx_pf_5_mask__toString(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs); /* proto */
+static PyObject *__pyx_pf_5_mask_2_frString(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */
+static PyObject *__pyx_pf_5_mask_4encode(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_mask); /* proto */
+static PyObject *__pyx_pf_5_mask_6decode(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */
+static PyObject *__pyx_pf_5_mask_8merge(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs, PyObject *__pyx_v_intersect); /* proto */
+static PyObject *__pyx_pf_5_mask_10area(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */
+static PyObject *__pyx_pf_5_mask_3iou__preproc(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_objs); /* proto */
+static PyObject *__pyx_pf_5_mask_3iou_2_rleIou(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_5_mask_RLEs *__pyx_v_dt, struct __pyx_obj_5_mask_RLEs *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou); /* proto */
+static PyObject *__pyx_pf_5_mask_3iou_4_bbIou(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dt, PyArrayObject *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou); /* proto */
+static PyObject *__pyx_pf_5_mask_3iou_6_len(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_obj); /* proto */
+static PyObject *__pyx_pf_5_mask_12iou(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_dt, PyObject *__pyx_v_gt, PyObject *__pyx_v_pyiscrowd); /* proto */
+static PyObject *__pyx_pf_5_mask_14toBbox(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */
+static PyObject *__pyx_pf_5_mask_16frBbox(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_bb, siz __pyx_v_h, siz __pyx_v_w); /* proto */
+static PyObject *__pyx_pf_5_mask_18frPoly(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_poly, siz __pyx_v_h, siz __pyx_v_w); /* proto */
+static PyObject *__pyx_pf_5_mask_20frUncompressedRLE(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_ucRles, CYTHON_UNUSED siz __pyx_v_h, CYTHON_UNUSED siz __pyx_v_w); /* proto */
+static PyObject *__pyx_pf_5_mask_22frPyObjects(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pyobj, PyObject *__pyx_v_h, PyObject *__pyx_v_w); /* proto */
+static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */
+static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */
+static PyObject *__pyx_tp_new_5_mask_RLEs(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
+static PyObject *__pyx_tp_new_5_mask_Masks(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
+static PyObject *__pyx_int_0;
+static PyObject *__pyx_int_1;
+static PyObject *__pyx_int_2;
+static PyObject *__pyx_int_3;
+static PyObject *__pyx_int_4;
+static PyObject *__pyx_tuple_;
+static PyObject *__pyx_tuple__2;
+static PyObject *__pyx_tuple__3;
+static PyObject *__pyx_tuple__4;
+static PyObject *__pyx_tuple__5;
+static PyObject *__pyx_tuple__6;
+static PyObject *__pyx_tuple__7;
+static PyObject *__pyx_tuple__8;
+static PyObject *__pyx_tuple__9;
+static PyObject *__pyx_tuple__10;
+static PyObject *__pyx_tuple__11;
+static PyObject *__pyx_tuple__13;
+static PyObject *__pyx_tuple__15;
+static PyObject *__pyx_tuple__17;
+static PyObject *__pyx_tuple__19;
+static PyObject *__pyx_tuple__20;
+static PyObject *__pyx_tuple__21;
+static PyObject *__pyx_tuple__22;
+static PyObject *__pyx_tuple__23;
+static PyObject *__pyx_tuple__24;
+static PyObject *__pyx_tuple__25;
+static PyObject *__pyx_tuple__26;
+static PyObject *__pyx_tuple__27;
+static PyObject *__pyx_tuple__28;
+static PyObject *__pyx_tuple__29;
+static PyObject *__pyx_tuple__30;
+static PyObject *__pyx_tuple__31;
+static PyObject *__pyx_tuple__32;
+static PyObject *__pyx_tuple__34;
+static PyObject *__pyx_tuple__36;
+static PyObject *__pyx_tuple__38;
+static PyObject *__pyx_tuple__40;
+static PyObject *__pyx_tuple__42;
+static PyObject *__pyx_tuple__44;
+static PyObject *__pyx_tuple__46;
+static PyObject *__pyx_tuple__48;
+static PyObject *__pyx_tuple__50;
+static PyObject *__pyx_tuple__52;
+static PyObject *__pyx_tuple__54;
+static PyObject *__pyx_codeobj__12;
+static PyObject *__pyx_codeobj__14;
+static PyObject *__pyx_codeobj__16;
+static PyObject *__pyx_codeobj__18;
+static PyObject *__pyx_codeobj__33;
+static PyObject *__pyx_codeobj__35;
+static PyObject *__pyx_codeobj__37;
+static PyObject *__pyx_codeobj__39;
+static PyObject *__pyx_codeobj__41;
+static PyObject *__pyx_codeobj__43;
+static PyObject *__pyx_codeobj__45;
+static PyObject *__pyx_codeobj__47;
+static PyObject *__pyx_codeobj__49;
+static PyObject *__pyx_codeobj__51;
+static PyObject *__pyx_codeobj__53;
+static PyObject *__pyx_codeobj__55;
+/* Late includes */
+
+/* "_mask.pyx":60
+ * cdef siz _n
+ *
+ * def __cinit__(self, siz n =0): # <<<<<<<<<<<<<<
+ * rlesInit(&self._R, n)
+ * self._n = n
+ */
+
+/* Python wrapper */
+static int __pyx_pw_5_mask_4RLEs_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static int __pyx_pw_5_mask_4RLEs_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ siz __pyx_v_n;
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_n,0};
+ PyObject* values[1] = {0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_n);
+ if (value) { values[0] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(0, 60, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ if (values[0]) {
+ __pyx_v_n = __Pyx_PyInt_As_siz(values[0]); if (unlikely((__pyx_v_n == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 60, __pyx_L3_error)
+ } else {
+ __pyx_v_n = ((siz)0);
+ }
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 60, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.RLEs.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return -1;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_5_mask_4RLEs___cinit__(((struct __pyx_obj_5_mask_RLEs *)__pyx_v_self), __pyx_v_n);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static int __pyx_pf_5_mask_4RLEs___cinit__(struct __pyx_obj_5_mask_RLEs *__pyx_v_self, siz __pyx_v_n) {
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__cinit__", 0);
+
+ /* "_mask.pyx":61
+ *
+ * def __cinit__(self, siz n =0):
+ * rlesInit(&self._R, n) # <<<<<<<<<<<<<<
+ * self._n = n
+ *
+ */
+ rlesInit((&__pyx_v_self->_R), __pyx_v_n);
+
+ /* "_mask.pyx":62
+ * def __cinit__(self, siz n =0):
+ * rlesInit(&self._R, n)
+ * self._n = n # <<<<<<<<<<<<<<
+ *
+ * # free the RLE array here
+ */
+ __pyx_v_self->_n = __pyx_v_n;
+
+ /* "_mask.pyx":60
+ * cdef siz _n
+ *
+ * def __cinit__(self, siz n =0): # <<<<<<<<<<<<<<
+ * rlesInit(&self._R, n)
+ * self._n = n
+ */
+
+ /* function exit code */
+ __pyx_r = 0;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":65
+ *
+ * # free the RLE array here
+ * def __dealloc__(self): # <<<<<<<<<<<<<<
+ * if self._R is not NULL:
+ * for i in range(self._n):
+ */
+
+/* Python wrapper */
+static void __pyx_pw_5_mask_4RLEs_3__dealloc__(PyObject *__pyx_v_self); /*proto*/
+static void __pyx_pw_5_mask_4RLEs_3__dealloc__(PyObject *__pyx_v_self) {
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
+ __pyx_pf_5_mask_4RLEs_2__dealloc__(((struct __pyx_obj_5_mask_RLEs *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+static void __pyx_pf_5_mask_4RLEs_2__dealloc__(struct __pyx_obj_5_mask_RLEs *__pyx_v_self) {
+ siz __pyx_v_i;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ siz __pyx_t_2;
+ siz __pyx_t_3;
+ siz __pyx_t_4;
+ __Pyx_RefNannySetupContext("__dealloc__", 0);
+
+ /* "_mask.pyx":66
+ * # free the RLE array here
+ * def __dealloc__(self):
+ * if self._R is not NULL: # <<<<<<<<<<<<<<
+ * for i in range(self._n):
+ * free(self._R[i].cnts)
+ */
+ __pyx_t_1 = ((__pyx_v_self->_R != NULL) != 0);
+ if (__pyx_t_1) {
+
+ /* "_mask.pyx":67
+ * def __dealloc__(self):
+ * if self._R is not NULL:
+ * for i in range(self._n): # <<<<<<<<<<<<<<
+ * free(self._R[i].cnts)
+ * free(self._R)
+ */
+ __pyx_t_2 = __pyx_v_self->_n;
+ __pyx_t_3 = __pyx_t_2;
+ for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
+ __pyx_v_i = __pyx_t_4;
+
+ /* "_mask.pyx":68
+ * if self._R is not NULL:
+ * for i in range(self._n):
+ * free(self._R[i].cnts) # <<<<<<<<<<<<<<
+ * free(self._R)
+ * def __getattr__(self, key):
+ */
+ free((__pyx_v_self->_R[__pyx_v_i]).cnts);
+ }
+
+ /* "_mask.pyx":69
+ * for i in range(self._n):
+ * free(self._R[i].cnts)
+ * free(self._R) # <<<<<<<<<<<<<<
+ * def __getattr__(self, key):
+ * if key == 'n':
+ */
+ free(__pyx_v_self->_R);
+
+ /* "_mask.pyx":66
+ * # free the RLE array here
+ * def __dealloc__(self):
+ * if self._R is not NULL: # <<<<<<<<<<<<<<
+ * for i in range(self._n):
+ * free(self._R[i].cnts)
+ */
+ }
+
+ /* "_mask.pyx":65
+ *
+ * # free the RLE array here
+ * def __dealloc__(self): # <<<<<<<<<<<<<<
+ * if self._R is not NULL:
+ * for i in range(self._n):
+ */
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+/* "_mask.pyx":70
+ * free(self._R[i].cnts)
+ * free(self._R)
+ * def __getattr__(self, key): # <<<<<<<<<<<<<<
+ * if key == 'n':
+ * return self._n
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_4RLEs_5__getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/
+static PyObject *__pyx_pw_5_mask_4RLEs_5__getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_4RLEs_4__getattr__(((struct __pyx_obj_5_mask_RLEs *)__pyx_v_self), ((PyObject *)__pyx_v_key));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_4RLEs_4__getattr__(struct __pyx_obj_5_mask_RLEs *__pyx_v_self, PyObject *__pyx_v_key) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ __Pyx_RefNannySetupContext("__getattr__", 0);
+
+ /* "_mask.pyx":71
+ * free(self._R)
+ * def __getattr__(self, key):
+ * if key == 'n': # <<<<<<<<<<<<<<
+ * return self._n
+ * raise AttributeError(key)
+ */
+ __pyx_t_1 = (__Pyx_PyString_Equals(__pyx_v_key, __pyx_n_s_n, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(0, 71, __pyx_L1_error)
+ if (__pyx_t_1) {
+
+ /* "_mask.pyx":72
+ * def __getattr__(self, key):
+ * if key == 'n':
+ * return self._n # <<<<<<<<<<<<<<
+ * raise AttributeError(key)
+ *
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_2 = __Pyx_PyInt_From_siz(__pyx_v_self->_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 72, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_r = __pyx_t_2;
+ __pyx_t_2 = 0;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":71
+ * free(self._R)
+ * def __getattr__(self, key):
+ * if key == 'n': # <<<<<<<<<<<<<<
+ * return self._n
+ * raise AttributeError(key)
+ */
+ }
+
+ /* "_mask.pyx":73
+ * if key == 'n':
+ * return self._n
+ * raise AttributeError(key) # <<<<<<<<<<<<<<
+ *
+ * # python class to wrap Mask array in C
+ */
+ __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_AttributeError, __pyx_v_key); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 73, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_Raise(__pyx_t_2, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __PYX_ERR(0, 73, __pyx_L1_error)
+
+ /* "_mask.pyx":70
+ * free(self._R[i].cnts)
+ * free(self._R)
+ * def __getattr__(self, key): # <<<<<<<<<<<<<<
+ * if key == 'n':
+ * return self._n
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_AddTraceback("_mask.RLEs.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "(tree fragment)":1
+ * def __reduce_cython__(self): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_4RLEs_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static PyObject *__pyx_pw_5_mask_4RLEs_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_4RLEs_6__reduce_cython__(((struct __pyx_obj_5_mask_RLEs *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_4RLEs_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_5_mask_RLEs *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__reduce_cython__", 0);
+
+ /* "(tree fragment)":2
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
+ * def __setstate_cython__(self, __pyx_state):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple_, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(1, 2, __pyx_L1_error)
+
+ /* "(tree fragment)":1
+ * def __reduce_cython__(self): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("_mask.RLEs.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "(tree fragment)":3
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_4RLEs_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
+static PyObject *__pyx_pw_5_mask_4RLEs_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_4RLEs_8__setstate_cython__(((struct __pyx_obj_5_mask_RLEs *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_4RLEs_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_5_mask_RLEs *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__setstate_cython__", 0);
+
+ /* "(tree fragment)":4
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(1, 4, __pyx_L1_error)
+
+ /* "(tree fragment)":3
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("_mask.RLEs.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":83
+ * cdef siz _n
+ *
+ * def __cinit__(self, h, w, n): # <<<<<<<<<<<<<<
+ * self._mask = malloc(h*w*n* sizeof(byte))
+ * self._h = h
+ */
+
+/* Python wrapper */
+static int __pyx_pw_5_mask_5Masks_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static int __pyx_pw_5_mask_5Masks_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_v_w = 0;
+ PyObject *__pyx_v_n = 0;
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_h,&__pyx_n_s_w,&__pyx_n_s_n,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("__cinit__", 1, 3, 3, 1); __PYX_ERR(0, 83, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_n)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("__cinit__", 1, 3, 3, 2); __PYX_ERR(0, 83, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(0, 83, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_h = values[0];
+ __pyx_v_w = values[1];
+ __pyx_v_n = values[2];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("__cinit__", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 83, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.Masks.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return -1;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_5_mask_5Masks___cinit__(((struct __pyx_obj_5_mask_Masks *)__pyx_v_self), __pyx_v_h, __pyx_v_w, __pyx_v_n);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static int __pyx_pf_5_mask_5Masks___cinit__(struct __pyx_obj_5_mask_Masks *__pyx_v_self, PyObject *__pyx_v_h, PyObject *__pyx_v_w, PyObject *__pyx_v_n) {
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ size_t __pyx_t_4;
+ siz __pyx_t_5;
+ __Pyx_RefNannySetupContext("__cinit__", 0);
+
+ /* "_mask.pyx":84
+ *
+ * def __cinit__(self, h, w, n):
+ * self._mask = malloc(h*w*n* sizeof(byte)) # <<<<<<<<<<<<<<
+ * self._h = h
+ * self._w = w
+ */
+ __pyx_t_1 = PyNumber_Multiply(__pyx_v_h, __pyx_v_w); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 84, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 84, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyInt_FromSize_t((sizeof(byte))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 84, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = PyNumber_Multiply(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 84, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_4 = __Pyx_PyInt_As_size_t(__pyx_t_3); if (unlikely((__pyx_t_4 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 84, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_self->_mask = ((byte *)malloc(__pyx_t_4));
+
+ /* "_mask.pyx":85
+ * def __cinit__(self, h, w, n):
+ * self._mask = malloc(h*w*n* sizeof(byte))
+ * self._h = h # <<<<<<<<<<<<<<
+ * self._w = w
+ * self._n = n
+ */
+ __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_v_h); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 85, __pyx_L1_error)
+ __pyx_v_self->_h = __pyx_t_5;
+
+ /* "_mask.pyx":86
+ * self._mask = malloc(h*w*n* sizeof(byte))
+ * self._h = h
+ * self._w = w # <<<<<<<<<<<<<<
+ * self._n = n
+ * # def __dealloc__(self):
+ */
+ __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_v_w); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 86, __pyx_L1_error)
+ __pyx_v_self->_w = __pyx_t_5;
+
+ /* "_mask.pyx":87
+ * self._h = h
+ * self._w = w
+ * self._n = n # <<<<<<<<<<<<<<
+ * # def __dealloc__(self):
+ * # the memory management of _mask has been passed to np.ndarray
+ */
+ __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_v_n); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 87, __pyx_L1_error)
+ __pyx_v_self->_n = __pyx_t_5;
+
+ /* "_mask.pyx":83
+ * cdef siz _n
+ *
+ * def __cinit__(self, h, w, n): # <<<<<<<<<<<<<<
+ * self._mask = malloc(h*w*n* sizeof(byte))
+ * self._h = h
+ */
+
+ /* function exit code */
+ __pyx_r = 0;
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_AddTraceback("_mask.Masks.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = -1;
+ __pyx_L0:;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":93
+ *
+ * # called when passing into np.array() and return an np.ndarray in column-major order
+ * def __array__(self): # <<<<<<<<<<<<<<
+ * cdef np.npy_intp shape[1]
+ * shape[0] = self._h*self._w*self._n
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_5Masks_3__array__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static PyObject *__pyx_pw_5_mask_5Masks_3__array__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__array__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_5Masks_2__array__(((struct __pyx_obj_5_mask_Masks *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_5Masks_2__array__(struct __pyx_obj_5_mask_Masks *__pyx_v_self) {
+ npy_intp __pyx_v_shape[1];
+ PyObject *__pyx_v_ndarray = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ __Pyx_RefNannySetupContext("__array__", 0);
+
+ /* "_mask.pyx":95
+ * def __array__(self):
+ * cdef np.npy_intp shape[1]
+ * shape[0] = self._h*self._w*self._n # <<<<<<<<<<<<<<
+ * # Create a 1D array, and reshape it to fortran/Matlab column-major array
+ * ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')
+ */
+ (__pyx_v_shape[0]) = ((((npy_intp)__pyx_v_self->_h) * __pyx_v_self->_w) * __pyx_v_self->_n);
+
+ /* "_mask.pyx":97
+ * shape[0] = self._h*self._w*self._n
+ * # Create a 1D array, and reshape it to fortran/Matlab column-major array
+ * ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F') # <<<<<<<<<<<<<<
+ * # The _mask allocated by Masks is now handled by ndarray
+ * PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)
+ */
+ __pyx_t_1 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_UINT8, __pyx_v_self->_mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 97, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_reshape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 97, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_self->_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 97, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = __Pyx_PyInt_From_siz(__pyx_v_self->_w); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 97, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_self->_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 97, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 97, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1);
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_4);
+ PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_4);
+ __pyx_t_1 = 0;
+ __pyx_t_3 = 0;
+ __pyx_t_4 = 0;
+ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 97, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 97, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_order, __pyx_n_s_F) < 0) __PYX_ERR(0, 97, __pyx_L1_error)
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 97, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_ndarray = __pyx_t_3;
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":99
+ * ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')
+ * # The _mask allocated by Masks is now handled by ndarray
+ * PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA) # <<<<<<<<<<<<<<
+ * return ndarray
+ *
+ */
+ if (!(likely(((__pyx_v_ndarray) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_ndarray, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 99, __pyx_L1_error)
+ PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_ndarray), NPY_OWNDATA);
+
+ /* "_mask.pyx":100
+ * # The _mask allocated by Masks is now handled by ndarray
+ * PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)
+ * return ndarray # <<<<<<<<<<<<<<
+ *
+ * # internal conversion from Python RLEs object to compressed RLE format
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_ndarray);
+ __pyx_r = __pyx_v_ndarray;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":93
+ *
+ * # called when passing into np.array() and return an np.ndarray in column-major order
+ * def __array__(self): # <<<<<<<<<<<<<<
+ * cdef np.npy_intp shape[1]
+ * shape[0] = self._h*self._w*self._n
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_AddTraceback("_mask.Masks.__array__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_ndarray);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "(tree fragment)":1
+ * def __reduce_cython__(self): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_5Masks_5__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static PyObject *__pyx_pw_5_mask_5Masks_5__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_5Masks_4__reduce_cython__(((struct __pyx_obj_5_mask_Masks *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_5Masks_4__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_5_mask_Masks *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__reduce_cython__", 0);
+
+ /* "(tree fragment)":2
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
+ * def __setstate_cython__(self, __pyx_state):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(1, 2, __pyx_L1_error)
+
+ /* "(tree fragment)":1
+ * def __reduce_cython__(self): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("_mask.Masks.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "(tree fragment)":3
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_5Masks_7__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
+static PyObject *__pyx_pw_5_mask_5Masks_7__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_5Masks_6__setstate_cython__(((struct __pyx_obj_5_mask_Masks *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_5Masks_6__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_5_mask_Masks *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__setstate_cython__", 0);
+
+ /* "(tree fragment)":4
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(1, 4, __pyx_L1_error)
+
+ /* "(tree fragment)":3
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("_mask.Masks.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":103
+ *
+ * # internal conversion from Python RLEs object to compressed RLE format
+ * def _toString(RLEs Rs): # <<<<<<<<<<<<<<
+ * cdef siz n = Rs.n
+ * cdef bytes py_string
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_1_toString(PyObject *__pyx_self, PyObject *__pyx_v_Rs); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_1_toString = {"_toString", (PyCFunction)__pyx_pw_5_mask_1_toString, METH_O, 0};
+static PyObject *__pyx_pw_5_mask_1_toString(PyObject *__pyx_self, PyObject *__pyx_v_Rs) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("_toString (wrapper)", 0);
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_Rs), __pyx_ptype_5_mask_RLEs, 1, "Rs", 0))) __PYX_ERR(0, 103, __pyx_L1_error)
+ __pyx_r = __pyx_pf_5_mask__toString(__pyx_self, ((struct __pyx_obj_5_mask_RLEs *)__pyx_v_Rs));
+
+ /* function exit code */
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask__toString(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs) {
+ siz __pyx_v_n;
+ PyObject *__pyx_v_py_string = 0;
+ char *__pyx_v_c_string;
+ PyObject *__pyx_v_objs = NULL;
+ siz __pyx_v_i;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ siz __pyx_t_2;
+ siz __pyx_t_3;
+ siz __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ PyObject *__pyx_t_7 = NULL;
+ int __pyx_t_8;
+ __Pyx_RefNannySetupContext("_toString", 0);
+
+ /* "_mask.pyx":104
+ * # internal conversion from Python RLEs object to compressed RLE format
+ * def _toString(RLEs Rs):
+ * cdef siz n = Rs.n # <<<<<<<<<<<<<<
+ * cdef bytes py_string
+ * cdef char* c_string
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_Rs), __pyx_n_s_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 104, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyInt_As_siz(__pyx_t_1); if (unlikely((__pyx_t_2 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 104, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_n = __pyx_t_2;
+
+ /* "_mask.pyx":107
+ * cdef bytes py_string
+ * cdef char* c_string
+ * objs = [] # <<<<<<<<<<<<<<
+ * for i in range(n):
+ * c_string = rleToString( &Rs._R[i] )
+ */
+ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 107, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v_objs = ((PyObject*)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":108
+ * cdef char* c_string
+ * objs = []
+ * for i in range(n): # <<<<<<<<<<<<<<
+ * c_string = rleToString( &Rs._R[i] )
+ * py_string = c_string
+ */
+ __pyx_t_2 = __pyx_v_n;
+ __pyx_t_3 = __pyx_t_2;
+ for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
+ __pyx_v_i = __pyx_t_4;
+
+ /* "_mask.pyx":109
+ * objs = []
+ * for i in range(n):
+ * c_string = rleToString( &Rs._R[i] ) # <<<<<<<<<<<<<<
+ * py_string = c_string
+ * objs.append({
+ */
+ __pyx_v_c_string = rleToString(((RLE *)(&(__pyx_v_Rs->_R[__pyx_v_i]))));
+
+ /* "_mask.pyx":110
+ * for i in range(n):
+ * c_string = rleToString( &Rs._R[i] )
+ * py_string = c_string # <<<<<<<<<<<<<<
+ * objs.append({
+ * 'size': [Rs._R[i].h, Rs._R[i].w],
+ */
+ __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_c_string); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 110, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_XDECREF_SET(__pyx_v_py_string, ((PyObject*)__pyx_t_1));
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":112
+ * py_string = c_string
+ * objs.append({
+ * 'size': [Rs._R[i].h, Rs._R[i].w], # <<<<<<<<<<<<<<
+ * 'counts': py_string
+ * })
+ */
+ __pyx_t_1 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 112, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_5 = __Pyx_PyInt_From_siz((__pyx_v_Rs->_R[__pyx_v_i]).h); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 112, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = __Pyx_PyInt_From_siz((__pyx_v_Rs->_R[__pyx_v_i]).w); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 112, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_7 = PyList_New(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 112, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyList_SET_ITEM(__pyx_t_7, 0, __pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_6);
+ PyList_SET_ITEM(__pyx_t_7, 1, __pyx_t_6);
+ __pyx_t_5 = 0;
+ __pyx_t_6 = 0;
+ if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_size, __pyx_t_7) < 0) __PYX_ERR(0, 112, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+
+ /* "_mask.pyx":113
+ * objs.append({
+ * 'size': [Rs._R[i].h, Rs._R[i].w],
+ * 'counts': py_string # <<<<<<<<<<<<<<
+ * })
+ * free(c_string)
+ */
+ if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_counts, __pyx_v_py_string) < 0) __PYX_ERR(0, 112, __pyx_L1_error)
+
+ /* "_mask.pyx":111
+ * c_string = rleToString( &Rs._R[i] )
+ * py_string = c_string
+ * objs.append({ # <<<<<<<<<<<<<<
+ * 'size': [Rs._R[i].h, Rs._R[i].w],
+ * 'counts': py_string
+ */
+ __pyx_t_8 = __Pyx_PyList_Append(__pyx_v_objs, __pyx_t_1); if (unlikely(__pyx_t_8 == ((int)-1))) __PYX_ERR(0, 111, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "_mask.pyx":115
+ * 'counts': py_string
+ * })
+ * free(c_string) # <<<<<<<<<<<<<<
+ * return objs
+ *
+ */
+ free(__pyx_v_c_string);
+ }
+
+ /* "_mask.pyx":116
+ * })
+ * free(c_string)
+ * return objs # <<<<<<<<<<<<<<
+ *
+ * # internal conversion from compressed RLE format to Python RLEs object
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_objs);
+ __pyx_r = __pyx_v_objs;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":103
+ *
+ * # internal conversion from Python RLEs object to compressed RLE format
+ * def _toString(RLEs Rs): # <<<<<<<<<<<<<<
+ * cdef siz n = Rs.n
+ * cdef bytes py_string
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_AddTraceback("_mask._toString", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_py_string);
+ __Pyx_XDECREF(__pyx_v_objs);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":119
+ *
+ * # internal conversion from compressed RLE format to Python RLEs object
+ * def _frString(rleObjs): # <<<<<<<<<<<<<<
+ * cdef siz n = len(rleObjs)
+ * Rs = RLEs(n)
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_3_frString(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_3_frString = {"_frString", (PyCFunction)__pyx_pw_5_mask_3_frString, METH_O, 0};
+static PyObject *__pyx_pw_5_mask_3_frString(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("_frString (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_2_frString(__pyx_self, ((PyObject *)__pyx_v_rleObjs));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_2_frString(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {
+ siz __pyx_v_n;
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs = NULL;
+ PyObject *__pyx_v_py_string = 0;
+ char *__pyx_v_c_string;
+ PyObject *__pyx_v_i = NULL;
+ PyObject *__pyx_v_obj = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ Py_ssize_t __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *(*__pyx_t_4)(PyObject *);
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ int __pyx_t_7;
+ PyObject *__pyx_t_8 = NULL;
+ PyObject *__pyx_t_9 = NULL;
+ PyObject *__pyx_t_10 = NULL;
+ PyObject *__pyx_t_11 = NULL;
+ char *__pyx_t_12;
+ Py_ssize_t __pyx_t_13;
+ siz __pyx_t_14;
+ siz __pyx_t_15;
+ __Pyx_RefNannySetupContext("_frString", 0);
+
+ /* "_mask.pyx":120
+ * # internal conversion from compressed RLE format to Python RLEs object
+ * def _frString(rleObjs):
+ * cdef siz n = len(rleObjs) # <<<<<<<<<<<<<<
+ * Rs = RLEs(n)
+ * cdef bytes py_string
+ */
+ __pyx_t_1 = PyObject_Length(__pyx_v_rleObjs); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 120, __pyx_L1_error)
+ __pyx_v_n = __pyx_t_1;
+
+ /* "_mask.pyx":121
+ * def _frString(rleObjs):
+ * cdef siz n = len(rleObjs)
+ * Rs = RLEs(n) # <<<<<<<<<<<<<<
+ * cdef bytes py_string
+ * cdef char* c_string
+ */
+ __pyx_t_2 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 121, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_5_mask_RLEs), __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 121, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_Rs = ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":124
+ * cdef bytes py_string
+ * cdef char* c_string
+ * for i, obj in enumerate(rleObjs): # <<<<<<<<<<<<<<
+ * if PYTHON_VERSION == 2:
+ * py_string = str(obj['counts']).encode('utf8')
+ */
+ __Pyx_INCREF(__pyx_int_0);
+ __pyx_t_3 = __pyx_int_0;
+ if (likely(PyList_CheckExact(__pyx_v_rleObjs)) || PyTuple_CheckExact(__pyx_v_rleObjs)) {
+ __pyx_t_2 = __pyx_v_rleObjs; __Pyx_INCREF(__pyx_t_2); __pyx_t_1 = 0;
+ __pyx_t_4 = NULL;
+ } else {
+ __pyx_t_1 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_rleObjs); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 124, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 124, __pyx_L1_error)
+ }
+ for (;;) {
+ if (likely(!__pyx_t_4)) {
+ if (likely(PyList_CheckExact(__pyx_t_2))) {
+ if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 124, __pyx_L1_error)
+ #else
+ __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 124, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ #endif
+ } else {
+ if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 124, __pyx_L1_error)
+ #else
+ __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 124, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ #endif
+ }
+ } else {
+ __pyx_t_5 = __pyx_t_4(__pyx_t_2);
+ if (unlikely(!__pyx_t_5)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 124, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_5);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_obj, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_3);
+ __pyx_t_5 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 124, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_3);
+ __pyx_t_3 = __pyx_t_5;
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":125
+ * cdef char* c_string
+ * for i, obj in enumerate(rleObjs):
+ * if PYTHON_VERSION == 2: # <<<<<<<<<<<<<<
+ * py_string = str(obj['counts']).encode('utf8')
+ * elif PYTHON_VERSION == 3:
+ */
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_PYTHON_VERSION); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 125, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = __Pyx_PyInt_EqObjC(__pyx_t_5, __pyx_int_2, 2, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 125, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 125, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (__pyx_t_7) {
+
+ /* "_mask.pyx":126
+ * for i, obj in enumerate(rleObjs):
+ * if PYTHON_VERSION == 2:
+ * py_string = str(obj['counts']).encode('utf8') # <<<<<<<<<<<<<<
+ * elif PYTHON_VERSION == 3:
+ * py_string = str.encode(obj['counts']) if type(obj['counts']) == str else obj['counts']
+ */
+ __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_v_obj, __pyx_n_s_counts); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 126, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_5 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyString_Type)), __pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 126, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_encode); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 126, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 126, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (!(likely(PyBytes_CheckExact(__pyx_t_5))||((__pyx_t_5) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_5)->tp_name), 0))) __PYX_ERR(0, 126, __pyx_L1_error)
+ __Pyx_XDECREF_SET(__pyx_v_py_string, ((PyObject*)__pyx_t_5));
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":125
+ * cdef char* c_string
+ * for i, obj in enumerate(rleObjs):
+ * if PYTHON_VERSION == 2: # <<<<<<<<<<<<<<
+ * py_string = str(obj['counts']).encode('utf8')
+ * elif PYTHON_VERSION == 3:
+ */
+ goto __pyx_L5;
+ }
+
+ /* "_mask.pyx":127
+ * if PYTHON_VERSION == 2:
+ * py_string = str(obj['counts']).encode('utf8')
+ * elif PYTHON_VERSION == 3: # <<<<<<<<<<<<<<
+ * py_string = str.encode(obj['counts']) if type(obj['counts']) == str else obj['counts']
+ * else:
+ */
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_PYTHON_VERSION); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 127, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = __Pyx_PyInt_EqObjC(__pyx_t_5, __pyx_int_3, 3, 0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 127, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 127, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (likely(__pyx_t_7)) {
+
+ /* "_mask.pyx":128
+ * py_string = str(obj['counts']).encode('utf8')
+ * elif PYTHON_VERSION == 3:
+ * py_string = str.encode(obj['counts']) if type(obj['counts']) == str else obj['counts'] # <<<<<<<<<<<<<<
+ * else:
+ * raise Exception('Python version must be 2 or 3')
+ */
+ __pyx_t_5 = __Pyx_PyObject_Dict_GetItem(__pyx_v_obj, __pyx_n_s_counts); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_8 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_t_5)), ((PyObject *)(&PyString_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_8); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ if (__pyx_t_7) {
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(((PyObject *)(&PyString_Type)), __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_9 = __Pyx_PyObject_Dict_GetItem(__pyx_v_obj, __pyx_n_s_counts); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __pyx_t_10 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {
+ __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_5);
+ if (likely(__pyx_t_10)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
+ __Pyx_INCREF(__pyx_t_10);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_5, function);
+ }
+ }
+ if (!__pyx_t_10) {
+ __pyx_t_8 = __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_t_9); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_GOTREF(__pyx_t_8);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_10, __pyx_t_9};
+ __pyx_t_8 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_10, __pyx_t_9};
+ __pyx_t_8 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_11 = PyTuple_New(1+1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __Pyx_GIVEREF(__pyx_t_10); PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_10); __pyx_t_10 = NULL;
+ __Pyx_GIVEREF(__pyx_t_9);
+ PyTuple_SET_ITEM(__pyx_t_11, 0+1, __pyx_t_9);
+ __pyx_t_9 = 0;
+ __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_11, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (!(likely(PyBytes_CheckExact(__pyx_t_8))||((__pyx_t_8) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_8)->tp_name), 0))) __PYX_ERR(0, 128, __pyx_L1_error)
+ __pyx_t_6 = __pyx_t_8;
+ __pyx_t_8 = 0;
+ } else {
+ __pyx_t_8 = __Pyx_PyObject_Dict_GetItem(__pyx_v_obj, __pyx_n_s_counts); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 128, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ if (!(likely(PyBytes_CheckExact(__pyx_t_8))||((__pyx_t_8) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_8)->tp_name), 0))) __PYX_ERR(0, 128, __pyx_L1_error)
+ __pyx_t_6 = __pyx_t_8;
+ __pyx_t_8 = 0;
+ }
+ __Pyx_XDECREF_SET(__pyx_v_py_string, ((PyObject*)__pyx_t_6));
+ __pyx_t_6 = 0;
+
+ /* "_mask.pyx":127
+ * if PYTHON_VERSION == 2:
+ * py_string = str(obj['counts']).encode('utf8')
+ * elif PYTHON_VERSION == 3: # <<<<<<<<<<<<<<
+ * py_string = str.encode(obj['counts']) if type(obj['counts']) == str else obj['counts']
+ * else:
+ */
+ goto __pyx_L5;
+ }
+
+ /* "_mask.pyx":130
+ * py_string = str.encode(obj['counts']) if type(obj['counts']) == str else obj['counts']
+ * else:
+ * raise Exception('Python version must be 2 or 3') # <<<<<<<<<<<<<<
+ * c_string = py_string
+ * rleFrString( &Rs._R[i], c_string, obj['size'][0], obj['size'][1] )
+ */
+ /*else*/ {
+ __pyx_t_6 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 130, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_Raise(__pyx_t_6, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __PYX_ERR(0, 130, __pyx_L1_error)
+ }
+ __pyx_L5:;
+
+ /* "_mask.pyx":131
+ * else:
+ * raise Exception('Python version must be 2 or 3')
+ * c_string = py_string # <<<<<<<<<<<<<<
+ * rleFrString( &Rs._R[i], c_string, obj['size'][0], obj['size'][1] )
+ * return Rs
+ */
+ if (unlikely(__pyx_v_py_string == Py_None)) {
+ PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found");
+ __PYX_ERR(0, 131, __pyx_L1_error)
+ }
+ __pyx_t_12 = __Pyx_PyBytes_AsWritableString(__pyx_v_py_string); if (unlikely((!__pyx_t_12) && PyErr_Occurred())) __PYX_ERR(0, 131, __pyx_L1_error)
+ __pyx_v_c_string = __pyx_t_12;
+
+ /* "_mask.pyx":132
+ * raise Exception('Python version must be 2 or 3')
+ * c_string = py_string
+ * rleFrString( &Rs._R[i], c_string, obj['size'][0], obj['size'][1] ) # <<<<<<<<<<<<<<
+ * return Rs
+ *
+ */
+ __pyx_t_13 = __Pyx_PyIndex_AsSsize_t(__pyx_v_i); if (unlikely((__pyx_t_13 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 132, __pyx_L1_error)
+ __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_v_obj, __pyx_n_s_size); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 132, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_8 = __Pyx_GetItemInt(__pyx_t_6, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 132, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_14 = __Pyx_PyInt_As_siz(__pyx_t_8); if (unlikely((__pyx_t_14 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 132, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = __Pyx_PyObject_Dict_GetItem(__pyx_v_obj, __pyx_n_s_size); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 132, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_6 = __Pyx_GetItemInt(__pyx_t_8, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 132, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_15 = __Pyx_PyInt_As_siz(__pyx_t_6); if (unlikely((__pyx_t_15 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 132, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ rleFrString(((RLE *)(&(__pyx_v_Rs->_R[__pyx_t_13]))), ((char *)__pyx_v_c_string), __pyx_t_14, __pyx_t_15);
+
+ /* "_mask.pyx":124
+ * cdef bytes py_string
+ * cdef char* c_string
+ * for i, obj in enumerate(rleObjs): # <<<<<<<<<<<<<<
+ * if PYTHON_VERSION == 2:
+ * py_string = str(obj['counts']).encode('utf8')
+ */
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+
+ /* "_mask.pyx":133
+ * c_string = py_string
+ * rleFrString( &Rs._R[i], c_string, obj['size'][0], obj['size'][1] )
+ * return Rs # <<<<<<<<<<<<<<
+ *
+ * # encode mask to RLEs objects
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(((PyObject *)__pyx_v_Rs));
+ __pyx_r = ((PyObject *)__pyx_v_Rs);
+ goto __pyx_L0;
+
+ /* "_mask.pyx":119
+ *
+ * # internal conversion from compressed RLE format to Python RLEs object
+ * def _frString(rleObjs): # <<<<<<<<<<<<<<
+ * cdef siz n = len(rleObjs)
+ * Rs = RLEs(n)
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_8);
+ __Pyx_XDECREF(__pyx_t_9);
+ __Pyx_XDECREF(__pyx_t_10);
+ __Pyx_XDECREF(__pyx_t_11);
+ __Pyx_AddTraceback("_mask._frString", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_Rs);
+ __Pyx_XDECREF(__pyx_v_py_string);
+ __Pyx_XDECREF(__pyx_v_i);
+ __Pyx_XDECREF(__pyx_v_obj);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":137
+ * # encode mask to RLEs objects
+ * # list of RLE string can be generated by RLEs member function
+ * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask): # <<<<<<<<<<<<<<
+ * h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]
+ * cdef RLEs Rs = RLEs(n)
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_5encode(PyObject *__pyx_self, PyObject *__pyx_v_mask); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_5encode = {"encode", (PyCFunction)__pyx_pw_5_mask_5encode, METH_O, 0};
+static PyObject *__pyx_pw_5_mask_5encode(PyObject *__pyx_self, PyObject *__pyx_v_mask) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("encode (wrapper)", 0);
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_mask), __pyx_ptype_5numpy_ndarray, 1, "mask", 0))) __PYX_ERR(0, 137, __pyx_L1_error)
+ __pyx_r = __pyx_pf_5_mask_4encode(__pyx_self, ((PyArrayObject *)__pyx_v_mask));
+
+ /* function exit code */
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_4encode(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_mask) {
+ npy_intp __pyx_v_h;
+ npy_intp __pyx_v_w;
+ npy_intp __pyx_v_n;
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs = 0;
+ PyObject *__pyx_v_objs = NULL;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_mask;
+ __Pyx_Buffer __pyx_pybuffer_mask;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ npy_intp __pyx_t_1;
+ npy_intp __pyx_t_2;
+ npy_intp __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ PyObject *__pyx_t_7 = NULL;
+ __Pyx_RefNannySetupContext("encode", 0);
+ __pyx_pybuffer_mask.pybuffer.buf = NULL;
+ __pyx_pybuffer_mask.refcount = 0;
+ __pyx_pybuffernd_mask.data = NULL;
+ __pyx_pybuffernd_mask.rcbuffer = &__pyx_pybuffer_mask;
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_mask.rcbuffer->pybuffer, (PyObject*)__pyx_v_mask, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_F_CONTIGUOUS, 3, 0, __pyx_stack) == -1)) __PYX_ERR(0, 137, __pyx_L1_error)
+ }
+ __pyx_pybuffernd_mask.diminfo[0].strides = __pyx_pybuffernd_mask.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_mask.diminfo[0].shape = __pyx_pybuffernd_mask.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_mask.diminfo[1].strides = __pyx_pybuffernd_mask.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_mask.diminfo[1].shape = __pyx_pybuffernd_mask.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_mask.diminfo[2].strides = __pyx_pybuffernd_mask.rcbuffer->pybuffer.strides[2]; __pyx_pybuffernd_mask.diminfo[2].shape = __pyx_pybuffernd_mask.rcbuffer->pybuffer.shape[2];
+
+ /* "_mask.pyx":138
+ * # list of RLE string can be generated by RLEs member function
+ * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):
+ * h, w, n = mask.shape[0], mask.shape[1], mask.shape[2] # <<<<<<<<<<<<<<
+ * cdef RLEs Rs = RLEs(n)
+ * rleEncode(Rs._R,mask.data,h,w,n)
+ */
+ __pyx_t_1 = (__pyx_v_mask->dimensions[0]);
+ __pyx_t_2 = (__pyx_v_mask->dimensions[1]);
+ __pyx_t_3 = (__pyx_v_mask->dimensions[2]);
+ __pyx_v_h = __pyx_t_1;
+ __pyx_v_w = __pyx_t_2;
+ __pyx_v_n = __pyx_t_3;
+
+ /* "_mask.pyx":139
+ * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):
+ * h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]
+ * cdef RLEs Rs = RLEs(n) # <<<<<<<<<<<<<<
+ * rleEncode(Rs._R,mask.data,h,w,n)
+ * objs = _toString(Rs)
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_Py_intptr_t(__pyx_v_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 139, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_5_mask_RLEs), __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 139, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_v_Rs = ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":140
+ * h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]
+ * cdef RLEs Rs = RLEs(n)
+ * rleEncode(Rs._R,mask.data,h,w,n) # <<<<<<<<<<<<<<
+ * objs = _toString(Rs)
+ * return objs
+ */
+ rleEncode(__pyx_v_Rs->_R, ((byte *)__pyx_v_mask->data), __pyx_v_h, __pyx_v_w, __pyx_v_n);
+
+ /* "_mask.pyx":141
+ * cdef RLEs Rs = RLEs(n)
+ * rleEncode(Rs._R,mask.data,h,w,n)
+ * objs = _toString(Rs) # <<<<<<<<<<<<<<
+ * return objs
+ *
+ */
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 141, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_6 = NULL;
+  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ if (!__pyx_t_6) {
+ __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_t_4, ((PyObject *)__pyx_v_Rs)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 141, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_6, ((PyObject *)__pyx_v_Rs)};
+ __pyx_t_5 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 141, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_6, ((PyObject *)__pyx_v_Rs)};
+ __pyx_t_5 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 141, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ } else
+ #endif
+ {
+ __pyx_t_7 = PyTuple_New(1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 141, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_6); __pyx_t_6 = NULL;
+ __Pyx_INCREF(((PyObject *)__pyx_v_Rs));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_Rs));
+ PyTuple_SET_ITEM(__pyx_t_7, 0+1, ((PyObject *)__pyx_v_Rs));
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_7, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 141, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_v_objs = __pyx_t_5;
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":142
+ * rleEncode(Rs._R,mask.data,h,w,n)
+ * objs = _toString(Rs)
+ * return objs # <<<<<<<<<<<<<<
+ *
+ * # decode mask from compressed list of RLE string or RLEs object
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_objs);
+ __pyx_r = __pyx_v_objs;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":137
+ * # encode mask to RLEs objects
+ * # list of RLE string can be generated by RLEs member function
+ * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask): # <<<<<<<<<<<<<<
+ * h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]
+ * cdef RLEs Rs = RLEs(n)
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_7);
+ { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_mask.rcbuffer->pybuffer);
+ __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}
+ __Pyx_AddTraceback("_mask.encode", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ goto __pyx_L2;
+ __pyx_L0:;
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_mask.rcbuffer->pybuffer);
+ __pyx_L2:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_Rs);
+ __Pyx_XDECREF(__pyx_v_objs);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":145
+ *
+ * # decode mask from compressed list of RLE string or RLEs object
+ * def decode(rleObjs): # <<<<<<<<<<<<<<
+ * cdef RLEs Rs = _frString(rleObjs)
+ * h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_7decode(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_7decode = {"decode", (PyCFunction)__pyx_pw_5_mask_7decode, METH_O, 0};
+static PyObject *__pyx_pw_5_mask_7decode(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("decode (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_6decode(__pyx_self, ((PyObject *)__pyx_v_rleObjs));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_6decode(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs = 0;
+ siz __pyx_v_h;
+ siz __pyx_v_w;
+ siz __pyx_v_n;
+ struct __pyx_obj_5_mask_Masks *__pyx_v_masks = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ siz __pyx_t_5;
+ siz __pyx_t_6;
+ siz __pyx_t_7;
+ __Pyx_RefNannySetupContext("decode", 0);
+
+ /* "_mask.pyx":146
+ * # decode mask from compressed list of RLE string or RLEs object
+ * def decode(rleObjs):
+ * cdef RLEs Rs = _frString(rleObjs) # <<<<<<<<<<<<<<
+ * h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n
+ * masks = Masks(h, w, n)
+ */
+ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 146, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ if (!__pyx_t_3) {
+ __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 146, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 146, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 146, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 146, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ __Pyx_INCREF(__pyx_v_rleObjs);
+ __Pyx_GIVEREF(__pyx_v_rleObjs);
+ PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_v_rleObjs);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 146, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5_mask_RLEs))))) __PYX_ERR(0, 146, __pyx_L1_error)
+ __pyx_v_Rs = ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":147
+ * def decode(rleObjs):
+ * cdef RLEs Rs = _frString(rleObjs)
+ * h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n # <<<<<<<<<<<<<<
+ * masks = Masks(h, w, n)
+ * rleDecode(Rs._R, masks._mask, n);
+ */
+ __pyx_t_5 = (__pyx_v_Rs->_R[0]).h;
+ __pyx_t_6 = (__pyx_v_Rs->_R[0]).w;
+ __pyx_t_7 = __pyx_v_Rs->_n;
+ __pyx_v_h = __pyx_t_5;
+ __pyx_v_w = __pyx_t_6;
+ __pyx_v_n = __pyx_t_7;
+
+ /* "_mask.pyx":148
+ * cdef RLEs Rs = _frString(rleObjs)
+ * h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n
+ * masks = Masks(h, w, n) # <<<<<<<<<<<<<<
+ * rleDecode(Rs._R, masks._mask, n);
+ * return np.array(masks)
+ */
+ __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyInt_From_siz(__pyx_v_w); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 148, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 148, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 148, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1);
+ __Pyx_GIVEREF(__pyx_t_2);
+ PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_4);
+ PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_4);
+ __pyx_t_1 = 0;
+ __pyx_t_2 = 0;
+ __pyx_t_4 = 0;
+ __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_5_mask_Masks), __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 148, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_masks = ((struct __pyx_obj_5_mask_Masks *)__pyx_t_4);
+ __pyx_t_4 = 0;
+
+ /* "_mask.pyx":149
+ * h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n
+ * masks = Masks(h, w, n)
+ * rleDecode(Rs._R, masks._mask, n); # <<<<<<<<<<<<<<
+ * return np.array(masks)
+ *
+ */
+ rleDecode(((RLE *)__pyx_v_Rs->_R), __pyx_v_masks->_mask, __pyx_v_n);
+
+ /* "_mask.pyx":150
+ * masks = Masks(h, w, n)
+ * rleDecode(Rs._R, masks._mask, n);
+ * return np.array(masks) # <<<<<<<<<<<<<<
+ *
+ * def merge(rleObjs, intersect=0):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 150, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 150, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ if (!__pyx_t_3) {
+ __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_masks)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 150, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_masks)};
+ __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 150, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_masks)};
+ __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 150, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ } else
+ #endif
+ {
+ __pyx_t_1 = PyTuple_New(1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 150, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ __Pyx_INCREF(((PyObject *)__pyx_v_masks));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_masks));
+ PyTuple_SET_ITEM(__pyx_t_1, 0+1, ((PyObject *)__pyx_v_masks));
+ __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 150, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_r = __pyx_t_4;
+ __pyx_t_4 = 0;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":145
+ *
+ * # decode mask from compressed list of RLE string or RLEs object
+ * def decode(rleObjs): # <<<<<<<<<<<<<<
+ * cdef RLEs Rs = _frString(rleObjs)
+ * h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("_mask.decode", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_Rs);
+ __Pyx_XDECREF((PyObject *)__pyx_v_masks);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":152
+ * return np.array(masks)
+ *
+ * def merge(rleObjs, intersect=0): # <<<<<<<<<<<<<<
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef RLEs R = RLEs(1)
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_9merge(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_9merge = {"merge", (PyCFunction)__pyx_pw_5_mask_9merge, METH_VARARGS|METH_KEYWORDS, 0};
+static PyObject *__pyx_pw_5_mask_9merge(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_rleObjs = 0;
+ PyObject *__pyx_v_intersect = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("merge (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_rleObjs,&__pyx_n_s_intersect,0};
+ PyObject* values[2] = {0,0};
+ values[1] = ((PyObject *)__pyx_int_0);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_rleObjs)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (kw_args > 0) {
+ PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_intersect);
+ if (value) { values[1] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "merge") < 0)) __PYX_ERR(0, 152, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_rleObjs = values[0];
+ __pyx_v_intersect = values[1];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("merge", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 152, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.merge", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_5_mask_8merge(__pyx_self, __pyx_v_rleObjs, __pyx_v_intersect);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_8merge(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs, PyObject *__pyx_v_intersect) {
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs = 0;
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_R = 0;
+ PyObject *__pyx_v_obj = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ int __pyx_t_5;
+ __Pyx_RefNannySetupContext("merge", 0);
+
+ /* "_mask.pyx":153
+ *
+ * def merge(rleObjs, intersect=0):
+ * cdef RLEs Rs = _frString(rleObjs) # <<<<<<<<<<<<<<
+ * cdef RLEs R = RLEs(1)
+ * rleMerge(Rs._R, R._R, Rs._n, intersect)
+ */
+ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 153, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ if (!__pyx_t_3) {
+ __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 153, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ __Pyx_INCREF(__pyx_v_rleObjs);
+ __Pyx_GIVEREF(__pyx_v_rleObjs);
+ PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_v_rleObjs);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 153, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5_mask_RLEs))))) __PYX_ERR(0, 153, __pyx_L1_error)
+ __pyx_v_Rs = ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":154
+ * def merge(rleObjs, intersect=0):
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef RLEs R = RLEs(1) # <<<<<<<<<<<<<<
+ * rleMerge(Rs._R, R._R, Rs._n, intersect)
+ * obj = _toString(R)[0]
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_5_mask_RLEs), __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 154, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v_R = ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":155
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef RLEs R = RLEs(1)
+ * rleMerge(Rs._R, R._R, Rs._n, intersect) # <<<<<<<<<<<<<<
+ * obj = _toString(R)[0]
+ * return obj
+ */
+ __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_v_intersect); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 155, __pyx_L1_error)
+ rleMerge(((RLE *)__pyx_v_Rs->_R), ((RLE *)__pyx_v_R->_R), ((siz)__pyx_v_Rs->_n), __pyx_t_5);
+
+ /* "_mask.pyx":156
+ * cdef RLEs R = RLEs(1)
+ * rleMerge(Rs._R, R._R, Rs._n, intersect)
+ * obj = _toString(R)[0] # <<<<<<<<<<<<<<
+ * return obj
+ *
+ */
+ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 156, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_4 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ if (!__pyx_t_4) {
+ __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_R)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 156, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_4, ((PyObject *)__pyx_v_R)};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 156, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_4, ((PyObject *)__pyx_v_R)};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 156, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 156, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); __pyx_t_4 = NULL;
+ __Pyx_INCREF(((PyObject *)__pyx_v_R));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_R));
+ PyTuple_SET_ITEM(__pyx_t_3, 0+1, ((PyObject *)__pyx_v_R));
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 156, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 156, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_obj = __pyx_t_2;
+ __pyx_t_2 = 0;
+
+ /* "_mask.pyx":157
+ * rleMerge(Rs._R, R._R, Rs._n, intersect)
+ * obj = _toString(R)[0]
+ * return obj # <<<<<<<<<<<<<<
+ *
+ * def area(rleObjs):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_obj);
+ __pyx_r = __pyx_v_obj;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":152
+ * return np.array(masks)
+ *
+ * def merge(rleObjs, intersect=0): # <<<<<<<<<<<<<<
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef RLEs R = RLEs(1)
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("_mask.merge", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_Rs);
+ __Pyx_XDECREF((PyObject *)__pyx_v_R);
+ __Pyx_XDECREF(__pyx_v_obj);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":159
+ * return obj
+ *
+ * def area(rleObjs): # <<<<<<<<<<<<<<
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef uint* _a = malloc(Rs._n* sizeof(uint))
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_11area(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_11area = {"area", (PyCFunction)__pyx_pw_5_mask_11area, METH_O, 0};
+static PyObject *__pyx_pw_5_mask_11area(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("area (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_10area(__pyx_self, ((PyObject *)__pyx_v_rleObjs));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_10area(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs = 0;
+ uint *__pyx_v__a;
+ npy_intp __pyx_v_shape[1];
+ PyObject *__pyx_v_a = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ __Pyx_RefNannySetupContext("area", 0);
+
+ /* "_mask.pyx":160
+ *
+ * def area(rleObjs):
+ * cdef RLEs Rs = _frString(rleObjs) # <<<<<<<<<<<<<<
+ * cdef uint* _a = malloc(Rs._n* sizeof(uint))
+ * rleArea(Rs._R, Rs._n, _a)
+ */
+ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 160, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ if (!__pyx_t_3) {
+ __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 160, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ __Pyx_INCREF(__pyx_v_rleObjs);
+ __Pyx_GIVEREF(__pyx_v_rleObjs);
+ PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_v_rleObjs);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5_mask_RLEs))))) __PYX_ERR(0, 160, __pyx_L1_error)
+ __pyx_v_Rs = ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":161
+ * def area(rleObjs):
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef uint* _a = malloc(Rs._n* sizeof(uint)) # <<<<<<<<<<<<<<
+ * rleArea(Rs._R, Rs._n, _a)
+ * cdef np.npy_intp shape[1]
+ */
+ __pyx_v__a = ((uint *)malloc((__pyx_v_Rs->_n * (sizeof(unsigned int)))));
+
+ /* "_mask.pyx":162
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef uint* _a = malloc(Rs._n* sizeof(uint))
+ * rleArea(Rs._R, Rs._n, _a) # <<<<<<<<<<<<<<
+ * cdef np.npy_intp shape[1]
+ * shape[0] = Rs._n
+ */
+ rleArea(__pyx_v_Rs->_R, __pyx_v_Rs->_n, __pyx_v__a);
+
+ /* "_mask.pyx":164
+ * rleArea(Rs._R, Rs._n, _a)
+ * cdef np.npy_intp shape[1]
+ * shape[0] = Rs._n # <<<<<<<<<<<<<<
+ * a = np.array((Rs._n, ), dtype=np.uint8)
+ * a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)
+ */
+ (__pyx_v_shape[0]) = ((npy_intp)__pyx_v_Rs->_n);
+
+ /* "_mask.pyx":165
+ * cdef np.npy_intp shape[1]
+ * shape[0] = Rs._n
+ * a = np.array((Rs._n, ), dtype=np.uint8) # <<<<<<<<<<<<<<
+ * a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)
+ * PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_Rs->_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1);
+ __pyx_t_1 = 0;
+ __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_GIVEREF(__pyx_t_4);
+ PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4);
+ __pyx_t_4 = 0;
+ __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_uint8); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_v_a = __pyx_t_5;
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":166
+ * shape[0] = Rs._n
+ * a = np.array((Rs._n, ), dtype=np.uint8)
+ * a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a) # <<<<<<<<<<<<<<
+ * PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)
+ * return a
+ */
+ __pyx_t_5 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_UINT32, __pyx_v__a); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 166, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF_SET(__pyx_v_a, __pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":167
+ * a = np.array((Rs._n, ), dtype=np.uint8)
+ * a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)
+ * PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA) # <<<<<<<<<<<<<<
+ * return a
+ *
+ */
+ if (!(likely(((__pyx_v_a) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_a, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 167, __pyx_L1_error)
+ PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_a), NPY_OWNDATA);
+
+ /* "_mask.pyx":168
+ * a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)
+ * PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)
+ * return a # <<<<<<<<<<<<<<
+ *
+ * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_a);
+ __pyx_r = __pyx_v_a;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":159
+ * return obj
+ *
+ * def area(rleObjs): # <<<<<<<<<<<<<<
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef uint* _a = malloc(Rs._n* sizeof(uint))
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_AddTraceback("_mask.area", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_Rs);
+ __Pyx_XDECREF(__pyx_v_a);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
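+
+/* Reader's note (an assumption, not part of the generated output): the
+ * functions above are Cython's C translation of the RLE mask utilities in
+ * pycocotools' _mask.pyx. At the Python level they behave roughly as:
+ *
+ *   import numpy as np
+ *   import _mask
+ *   m = np.asfortranarray(np.ones((4, 4, 1), dtype=np.uint8))  # HxWxN, Fortran order
+ *   rles = _mask.encode(m)                   # compressed RLE objects
+ *   decoded = _mask.decode(rles)             # back to the HxWxN uint8 array
+ *   merged = _mask.merge(rles, intersect=0)  # union (0) or intersection (1)
+ *   areas = _mask.area(rles)                 # per-mask pixel counts (uint32)
+ */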
+
+/* "_mask.pyx":171
+ *
+ * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).
+ * def iou( dt, gt, pyiscrowd ): # <<<<<<<<<<<<<<
+ * def _preproc(objs):
+ * if len(objs) == 0:
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_13iou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_13iou = {"iou", (PyCFunction)__pyx_pw_5_mask_13iou, METH_VARARGS|METH_KEYWORDS, 0};
+static PyObject *__pyx_pw_5_mask_13iou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_dt = 0;
+ PyObject *__pyx_v_gt = 0;
+ PyObject *__pyx_v_pyiscrowd = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("iou (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dt,&__pyx_n_s_gt,&__pyx_n_s_pyiscrowd,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dt)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_gt)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("iou", 1, 3, 3, 1); __PYX_ERR(0, 171, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyiscrowd)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("iou", 1, 3, 3, 2); __PYX_ERR(0, 171, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "iou") < 0)) __PYX_ERR(0, 171, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_dt = values[0];
+ __pyx_v_gt = values[1];
+ __pyx_v_pyiscrowd = values[2];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("iou", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 171, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.iou", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_5_mask_12iou(__pyx_self, __pyx_v_dt, __pyx_v_gt, __pyx_v_pyiscrowd);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":172
+ * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).
+ * def iou( dt, gt, pyiscrowd ):
+ * def _preproc(objs): # <<<<<<<<<<<<<<
+ * if len(objs) == 0:
+ * return objs
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_3iou_1_preproc(PyObject *__pyx_self, PyObject *__pyx_v_objs); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_3iou_1_preproc = {"_preproc", (PyCFunction)__pyx_pw_5_mask_3iou_1_preproc, METH_O, 0};
+static PyObject *__pyx_pw_5_mask_3iou_1_preproc(PyObject *__pyx_self, PyObject *__pyx_v_objs) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("_preproc (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_3iou__preproc(__pyx_self, ((PyObject *)__pyx_v_objs));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_3iou__preproc(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_objs) {
+ PyObject *__pyx_v_isbox = NULL;
+ PyObject *__pyx_v_isrle = NULL;
+ PyObject *__pyx_v_obj = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ Py_ssize_t __pyx_t_1;
+ int __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ PyObject *__pyx_t_7 = NULL;
+ int __pyx_t_8;
+ int __pyx_t_9;
+ PyObject *__pyx_t_10 = NULL;
+ PyObject *(*__pyx_t_11)(PyObject *);
+ PyObject *__pyx_t_12 = NULL;
+ Py_ssize_t __pyx_t_13;
+ PyObject *__pyx_t_14 = NULL;
+ __Pyx_RefNannySetupContext("_preproc", 0);
+ __Pyx_INCREF(__pyx_v_objs);
+
+ /* "_mask.pyx":173
+ * def iou( dt, gt, pyiscrowd ):
+ * def _preproc(objs):
+ * if len(objs) == 0: # <<<<<<<<<<<<<<
+ * return objs
+ * if type(objs) == np.ndarray:
+ */
+ __pyx_t_1 = PyObject_Length(__pyx_v_objs); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 173, __pyx_L1_error)
+ __pyx_t_2 = ((__pyx_t_1 == 0) != 0);
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":174
+ * def _preproc(objs):
+ * if len(objs) == 0:
+ * return objs # <<<<<<<<<<<<<<
+ * if type(objs) == np.ndarray:
+ * if len(objs.shape) == 1:
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_objs);
+ __pyx_r = __pyx_v_objs;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":173
+ * def iou( dt, gt, pyiscrowd ):
+ * def _preproc(objs):
+ * if len(objs) == 0: # <<<<<<<<<<<<<<
+ * return objs
+ * if type(objs) == np.ndarray:
+ */
+ }
+
+ /* "_mask.pyx":175
+ * if len(objs) == 0:
+ * return objs
+ * if type(objs) == np.ndarray: # <<<<<<<<<<<<<<
+ * if len(objs.shape) == 1:
+ * objs = objs.reshape((objs[0], 1))
+ */
+ __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_objs)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 175, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 175, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":176
+ * return objs
+ * if type(objs) == np.ndarray:
+ * if len(objs.shape) == 1: # <<<<<<<<<<<<<<
+ * objs = objs.reshape((objs[0], 1))
+ * # check if it's Nx4 bbox
+ */
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 176, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_1 = PyObject_Length(__pyx_t_3); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 176, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_2 = ((__pyx_t_1 == 1) != 0);
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":177
+ * if type(objs) == np.ndarray:
+ * if len(objs.shape) == 1:
+ * objs = objs.reshape((objs[0], 1)) # <<<<<<<<<<<<<<
+ * # check if it's Nx4 bbox
+ * if not len(objs.shape) == 2 or not objs.shape[1] == 4:
+ */
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_reshape); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_objs, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_5);
+ __Pyx_INCREF(__pyx_int_1);
+ __Pyx_GIVEREF(__pyx_int_1);
+ PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_int_1);
+ __pyx_t_5 = 0;
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ if (!__pyx_t_5) {
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_5, __pyx_t_6};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_5, __pyx_t_6};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_7 = PyTuple_New(1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5); __pyx_t_5 = NULL;
+ __Pyx_GIVEREF(__pyx_t_6);
+ PyTuple_SET_ITEM(__pyx_t_7, 0+1, __pyx_t_6);
+ __pyx_t_6 = 0;
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_7, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":176
+ * return objs
+ * if type(objs) == np.ndarray:
+ * if len(objs.shape) == 1: # <<<<<<<<<<<<<<
+ * objs = objs.reshape((objs[0], 1))
+ * # check if it's Nx4 bbox
+ */
+ }
+
+ /* "_mask.pyx":179
+ * objs = objs.reshape((objs[0], 1))
+ * # check if it's Nx4 bbox
+ * if not len(objs.shape) == 2 or not objs.shape[1] == 4: # <<<<<<<<<<<<<<
+ * raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')
+ * objs = objs.astype(np.double)
+ */
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 179, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_1 = PyObject_Length(__pyx_t_3); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 179, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_8 = ((!((__pyx_t_1 == 2) != 0)) != 0);
+ if (!__pyx_t_8) {
+ } else {
+ __pyx_t_2 = __pyx_t_8;
+ goto __pyx_L7_bool_binop_done;
+ }
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 179, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = __Pyx_GetItemInt(__pyx_t_3, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 179, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_3 = __Pyx_PyInt_EqObjC(__pyx_t_4, __pyx_int_4, 4, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 179, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 179, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_9 = ((!__pyx_t_8) != 0);
+ __pyx_t_2 = __pyx_t_9;
+ __pyx_L7_bool_binop_done:;
+ if (unlikely(__pyx_t_2)) {
+
+ /* "_mask.pyx":180
+ * # check if it's Nx4 bbox
+ * if not len(objs.shape) == 2 or not objs.shape[1] == 4:
+ * raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension') # <<<<<<<<<<<<<<
+ * objs = objs.astype(np.double)
+ * elif type(objs) == list:
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 180, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(0, 180, __pyx_L1_error)
+
+ /* "_mask.pyx":179
+ * objs = objs.reshape((objs[0], 1))
+ * # check if it's Nx4 bbox
+ * if not len(objs.shape) == 2 or not objs.shape[1] == 4: # <<<<<<<<<<<<<<
+ * raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')
+ * objs = objs.astype(np.double)
+ */
+ }
+
+ /* "_mask.pyx":181
+ * if not len(objs.shape) == 2 or not objs.shape[1] == 4:
+ * raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')
+ * objs = objs.astype(np.double) # <<<<<<<<<<<<<<
+ * elif type(objs) == list:
+ * # check if list is in box format and convert it to np.ndarray
+ */
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_astype); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 181, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 181, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_double); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 181, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ if (!__pyx_t_7) {
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 181, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_6};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 181, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_6};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 181, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_5 = PyTuple_New(1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 181, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_7); __pyx_t_7 = NULL;
+ __Pyx_GIVEREF(__pyx_t_6);
+ PyTuple_SET_ITEM(__pyx_t_5, 0+1, __pyx_t_6);
+ __pyx_t_6 = 0;
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 181, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":175
+ * if len(objs) == 0:
+ * return objs
+ * if type(objs) == np.ndarray: # <<<<<<<<<<<<<<
+ * if len(objs.shape) == 1:
+ * objs = objs.reshape((objs[0], 1))
+ */
+ goto __pyx_L4;
+ }
+
+ /* "_mask.pyx":182
+ * raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')
+ * objs = objs.astype(np.double)
+ * elif type(objs) == list: # <<<<<<<<<<<<<<
+ * # check if list is in box format and convert it to np.ndarray
+ * isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))
+ */
+ __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_objs)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 182, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 182, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (likely(__pyx_t_2)) {
+
+ /* "_mask.pyx":184
+ * elif type(objs) == list:
+ * # check if list is in box format and convert it to np.ndarray
+ * isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs])) # <<<<<<<<<<<<<<
+ * isrle = np.all(np.array([type(obj) == dict for obj in objs]))
+ * if isbox:
+ */
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_all); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_array); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_6 = PyList_New(0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (likely(PyList_CheckExact(__pyx_v_objs)) || PyTuple_CheckExact(__pyx_v_objs)) {
+ __pyx_t_10 = __pyx_v_objs; __Pyx_INCREF(__pyx_t_10); __pyx_t_1 = 0;
+ __pyx_t_11 = NULL;
+ } else {
+ __pyx_t_1 = -1; __pyx_t_10 = PyObject_GetIter(__pyx_v_objs); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __pyx_t_11 = Py_TYPE(__pyx_t_10)->tp_iternext; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 184, __pyx_L1_error)
+ }
+ for (;;) {
+ if (likely(!__pyx_t_11)) {
+ if (likely(PyList_CheckExact(__pyx_t_10))) {
+ if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_10)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_12 = PyList_GET_ITEM(__pyx_t_10, __pyx_t_1); __Pyx_INCREF(__pyx_t_12); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 184, __pyx_L1_error)
+ #else
+ __pyx_t_12 = PySequence_ITEM(__pyx_t_10, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ #endif
+ } else {
+ if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_10)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_12 = PyTuple_GET_ITEM(__pyx_t_10, __pyx_t_1); __Pyx_INCREF(__pyx_t_12); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 184, __pyx_L1_error)
+ #else
+ __pyx_t_12 = PySequence_ITEM(__pyx_t_10, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ #endif
+ }
+ } else {
+ __pyx_t_12 = __pyx_t_11(__pyx_t_10);
+ if (unlikely(!__pyx_t_12)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 184, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_12);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_obj, __pyx_t_12);
+ __pyx_t_12 = 0;
+ __pyx_t_13 = PyObject_Length(__pyx_v_obj); if (unlikely(__pyx_t_13 == ((Py_ssize_t)-1))) __PYX_ERR(0, 184, __pyx_L1_error)
+ __pyx_t_2 = (__pyx_t_13 == 4);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_14 = __Pyx_PyBool_FromLong(__pyx_t_2); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_14);
+ __pyx_t_12 = __pyx_t_14;
+ __pyx_t_14 = 0;
+ goto __pyx_L11_bool_binop_done;
+ }
+ __pyx_t_14 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_14); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_14); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 184, __pyx_L1_error)
+ if (!__pyx_t_2) {
+ __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;
+ } else {
+ __Pyx_INCREF(__pyx_t_14);
+ __pyx_t_12 = __pyx_t_14;
+ __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;
+ goto __pyx_L11_bool_binop_done;
+ }
+ __pyx_t_14 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_14); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_INCREF(__pyx_t_14);
+ __pyx_t_12 = __pyx_t_14;
+ __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;
+ __pyx_L11_bool_binop_done:;
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_6, (PyObject*)__pyx_t_12))) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __pyx_t_10 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) {
+ __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_7);
+ if (likely(__pyx_t_10)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);
+ __Pyx_INCREF(__pyx_t_10);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_7, function);
+ }
+ }
+ if (!__pyx_t_10) {
+ __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_7)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_10, __pyx_t_6};
+ __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_7)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_10, __pyx_t_6};
+ __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_12 = PyTuple_New(1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ __Pyx_GIVEREF(__pyx_t_10); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_10); __pyx_t_10 = NULL;
+ __Pyx_GIVEREF(__pyx_t_6);
+ PyTuple_SET_ITEM(__pyx_t_12, 0+1, __pyx_t_6);
+ __pyx_t_6 = 0;
+ __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_12, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_5, function);
+ }
+ }
+ if (!__pyx_t_7) {
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_4};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_4};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_12 = PyTuple_New(1+1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_7); __pyx_t_7 = NULL;
+ __Pyx_GIVEREF(__pyx_t_4);
+ PyTuple_SET_ITEM(__pyx_t_12, 0+1, __pyx_t_4);
+ __pyx_t_4 = 0;
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_12, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 184, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_isbox = __pyx_t_3;
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":185
+ * # check if list is in box format and convert it to np.ndarray
+ * isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))
+ * isrle = np.all(np.array([type(obj) == dict for obj in objs])) # <<<<<<<<<<<<<<
+ * if isbox:
+ * objs = np.array(objs, dtype=np.double)
+ */
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_all); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_array); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = PyList_New(0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ if (likely(PyList_CheckExact(__pyx_v_objs)) || PyTuple_CheckExact(__pyx_v_objs)) {
+ __pyx_t_6 = __pyx_v_objs; __Pyx_INCREF(__pyx_t_6); __pyx_t_1 = 0;
+ __pyx_t_11 = NULL;
+ } else {
+ __pyx_t_1 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_v_objs); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_11 = Py_TYPE(__pyx_t_6)->tp_iternext; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 185, __pyx_L1_error)
+ }
+ for (;;) {
+ if (likely(!__pyx_t_11)) {
+ if (likely(PyList_CheckExact(__pyx_t_6))) {
+ if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_6)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_10 = PyList_GET_ITEM(__pyx_t_6, __pyx_t_1); __Pyx_INCREF(__pyx_t_10); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 185, __pyx_L1_error)
+ #else
+ __pyx_t_10 = PySequence_ITEM(__pyx_t_6, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ #endif
+ } else {
+ if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_6)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_10 = PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_1); __Pyx_INCREF(__pyx_t_10); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 185, __pyx_L1_error)
+ #else
+ __pyx_t_10 = PySequence_ITEM(__pyx_t_6, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ #endif
+ }
+ } else {
+ __pyx_t_10 = __pyx_t_11(__pyx_t_6);
+ if (unlikely(!__pyx_t_10)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 185, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_10);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_obj, __pyx_t_10);
+ __pyx_t_10 = 0;
+ __pyx_t_10 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)(&PyDict_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 185, __pyx_L1_error)
+ if (unlikely(__Pyx_ListComp_Append(__pyx_t_4, (PyObject*)__pyx_t_10))) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_6 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_7))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_7);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_7, function);
+ }
+ }
+ if (!__pyx_t_6) {
+ __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_7)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_6, __pyx_t_4};
+ __pyx_t_5 = __Pyx_PyFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_7)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_6, __pyx_t_4};
+ __pyx_t_5 = __Pyx_PyCFunction_FastCall(__pyx_t_7, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_10 = PyTuple_New(1+1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_6); __pyx_t_6 = NULL;
+ __Pyx_GIVEREF(__pyx_t_4);
+ PyTuple_SET_ITEM(__pyx_t_10, 0+1, __pyx_t_4);
+ __pyx_t_4 = 0;
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_10, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_12))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_12);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_12);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_12, function);
+ }
+ }
+ if (!__pyx_t_7) {
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_12, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_12)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_5};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_12, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_12)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_5};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_12, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_10 = PyTuple_New(1+1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_7); __pyx_t_7 = NULL;
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_10, 0+1, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_t_10, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 185, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ __pyx_v_isrle = __pyx_t_3;
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":186
+ * isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))
+ * isrle = np.all(np.array([type(obj) == dict for obj in objs]))
+ * if isbox: # <<<<<<<<<<<<<<
+ * objs = np.array(objs, dtype=np.double)
+ * if len(objs.shape) == 1:
+ */
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_isbox); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 186, __pyx_L1_error)
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":187
+ * isrle = np.all(np.array([type(obj) == dict for obj in objs]))
+ * if isbox:
+ * objs = np.array(objs, dtype=np.double) # <<<<<<<<<<<<<<
+ * if len(objs.shape) == 1:
+ * objs = objs.reshape((1,objs.shape[0]))
+ */
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 187, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_12 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_array); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 187, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 187, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_INCREF(__pyx_v_objs);
+ __Pyx_GIVEREF(__pyx_v_objs);
+ PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_objs);
+ __pyx_t_10 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 187, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 187, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_double); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 187, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (PyDict_SetItem(__pyx_t_10, __pyx_n_s_dtype, __pyx_t_7) < 0) __PYX_ERR(0, 187, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_12, __pyx_t_3, __pyx_t_10); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 187, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_7);
+ __pyx_t_7 = 0;
+
+ /* "_mask.pyx":188
+ * if isbox:
+ * objs = np.array(objs, dtype=np.double)
+ * if len(objs.shape) == 1: # <<<<<<<<<<<<<<
+ * objs = objs.reshape((1,objs.shape[0]))
+ * elif isrle:
+ */
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 188, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_1 = PyObject_Length(__pyx_t_7); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 188, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_2 = ((__pyx_t_1 == 1) != 0);
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":189
+ * objs = np.array(objs, dtype=np.double)
+ * if len(objs.shape) == 1:
+ * objs = objs.reshape((1,objs.shape[0])) # <<<<<<<<<<<<<<
+ * elif isrle:
+ * objs = _frString(objs)
+ */
+ __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_reshape); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 189, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 189, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_12 = __Pyx_GetItemInt(__pyx_t_3, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 189, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_12);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 189, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_INCREF(__pyx_int_1);
+ __Pyx_GIVEREF(__pyx_int_1);
+ PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_int_1);
+ __Pyx_GIVEREF(__pyx_t_12);
+ PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_12);
+ __pyx_t_12 = 0;
+ __pyx_t_12 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_10))) {
+ __pyx_t_12 = PyMethod_GET_SELF(__pyx_t_10);
+ if (likely(__pyx_t_12)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_10);
+ __Pyx_INCREF(__pyx_t_12);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_10, function);
+ }
+ }
+ if (!__pyx_t_12) {
+ __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_10, __pyx_t_3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_7);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_10)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_12, __pyx_t_3};
+ __pyx_t_7 = __Pyx_PyFunction_FastCall(__pyx_t_10, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_10)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_12, __pyx_t_3};
+ __pyx_t_7 = __Pyx_PyCFunction_FastCall(__pyx_t_10, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_5 = PyTuple_New(1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 189, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_12); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_12); __pyx_t_12 = NULL;
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_5, 0+1, __pyx_t_3);
+ __pyx_t_3 = 0;
+ __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_10, __pyx_t_5, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 189, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_7);
+ __pyx_t_7 = 0;
+
+ /* "_mask.pyx":188
+ * if isbox:
+ * objs = np.array(objs, dtype=np.double)
+ * if len(objs.shape) == 1: # <<<<<<<<<<<<<<
+ * objs = objs.reshape((1,objs.shape[0]))
+ * elif isrle:
+ */
+ }
+
+ /* "_mask.pyx":186
+ * isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))
+ * isrle = np.all(np.array([type(obj) == dict for obj in objs]))
+ * if isbox: # <<<<<<<<<<<<<<
+ * objs = np.array(objs, dtype=np.double)
+ * if len(objs.shape) == 1:
+ */
+ goto __pyx_L16;
+ }
+
+ /* "_mask.pyx":190
+ * if len(objs.shape) == 1:
+ * objs = objs.reshape((1,objs.shape[0]))
+ * elif isrle: # <<<<<<<<<<<<<<
+ * objs = _frString(objs)
+ * else:
+ */
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_isrle); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 190, __pyx_L1_error)
+ if (likely(__pyx_t_2)) {
+
+ /* "_mask.pyx":191
+ * objs = objs.reshape((1,objs.shape[0]))
+ * elif isrle:
+ * objs = _frString(objs) # <<<<<<<<<<<<<<
+ * else:
+ * raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')
+ */
+ __pyx_t_10 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 191, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_10))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_10);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_10);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_10, function);
+ }
+ }
+ if (!__pyx_t_5) {
+ __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_10, __pyx_v_objs); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 191, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_10)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_5, __pyx_v_objs};
+ __pyx_t_7 = __Pyx_PyFunction_FastCall(__pyx_t_10, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 191, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_GOTREF(__pyx_t_7);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_10)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_5, __pyx_v_objs};
+ __pyx_t_7 = __Pyx_PyCFunction_FastCall(__pyx_t_10, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 191, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_GOTREF(__pyx_t_7);
+ } else
+ #endif
+ {
+ __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 191, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); __pyx_t_5 = NULL;
+ __Pyx_INCREF(__pyx_v_objs);
+ __Pyx_GIVEREF(__pyx_v_objs);
+ PyTuple_SET_ITEM(__pyx_t_3, 0+1, __pyx_v_objs);
+ __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_10, __pyx_t_3, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 191, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_7);
+ __pyx_t_7 = 0;
+
+ /* "_mask.pyx":190
+ * if len(objs.shape) == 1:
+ * objs = objs.reshape((1,objs.shape[0]))
+ * elif isrle: # <<<<<<<<<<<<<<
+ * objs = _frString(objs)
+ * else:
+ */
+ goto __pyx_L16;
+ }
+
+ /* "_mask.pyx":193
+ * objs = _frString(objs)
+ * else:
+ * raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])') # <<<<<<<<<<<<<<
+ * else:
+ * raise Exception('unrecognized type. The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')
+ */
+ /*else*/ {
+ __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 193, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_Raise(__pyx_t_7, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __PYX_ERR(0, 193, __pyx_L1_error)
+ }
+ __pyx_L16:;
+
+ /* "_mask.pyx":182
+ * raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')
+ * objs = objs.astype(np.double)
+ * elif type(objs) == list: # <<<<<<<<<<<<<<
+ * # check if list is in box format and convert it to np.ndarray
+ * isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))
+ */
+ goto __pyx_L4;
+ }
+
+ /* "_mask.pyx":195
+ * raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')
+ * else:
+ * raise Exception('unrecognized type. The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.') # <<<<<<<<<<<<<<
+ * return objs
+ * def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ */
+ /*else*/ {
+ __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 195, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_Raise(__pyx_t_7, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __PYX_ERR(0, 195, __pyx_L1_error)
+ }
+ __pyx_L4:;
+
+ /* "_mask.pyx":196
+ * else:
+ * raise Exception('unrecognized type. The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')
+ * return objs # <<<<<<<<<<<<<<
+ * def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data )
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_objs);
+ __pyx_r = __pyx_v_objs;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":172
+ * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).
+ * def iou( dt, gt, pyiscrowd ):
+ * def _preproc(objs): # <<<<<<<<<<<<<<
+ * if len(objs) == 0:
+ * return objs
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_XDECREF(__pyx_t_10);
+ __Pyx_XDECREF(__pyx_t_12);
+ __Pyx_XDECREF(__pyx_t_14);
+ __Pyx_AddTraceback("_mask.iou._preproc", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_isbox);
+ __Pyx_XDECREF(__pyx_v_isrle);
+ __Pyx_XDECREF(__pyx_v_obj);
+ __Pyx_XDECREF(__pyx_v_objs);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":197
+ * raise Exception('unrecognized type. The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')
+ * return objs
+ * def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou): # <<<<<<<<<<<<<<
+ * rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data )
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_3iou_3_rleIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_3iou_3_rleIou = {"_rleIou", (PyCFunction)__pyx_pw_5_mask_3iou_3_rleIou, METH_VARARGS|METH_KEYWORDS, 0};
+static PyObject *__pyx_pw_5_mask_3iou_3_rleIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_dt = 0;
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_gt = 0;
+ PyArrayObject *__pyx_v_iscrowd = 0;
+ siz __pyx_v_m;
+ siz __pyx_v_n;
+ PyArrayObject *__pyx_v__iou = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("_rleIou (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dt,&__pyx_n_s_gt,&__pyx_n_s_iscrowd,&__pyx_n_s_m,&__pyx_n_s_n,&__pyx_n_s_iou,0};
+ PyObject* values[6] = {0,0,0,0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dt)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_gt)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_rleIou", 1, 6, 6, 1); __PYX_ERR(0, 197, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_iscrowd)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_rleIou", 1, 6, 6, 2); __PYX_ERR(0, 197, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_m)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_rleIou", 1, 6, 6, 3); __PYX_ERR(0, 197, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 4:
+ if (likely((values[4] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_n)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_rleIou", 1, 6, 6, 4); __PYX_ERR(0, 197, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 5:
+ if (likely((values[5] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_iou)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_rleIou", 1, 6, 6, 5); __PYX_ERR(0, 197, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_rleIou") < 0)) __PYX_ERR(0, 197, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 6) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ }
+ __pyx_v_dt = ((struct __pyx_obj_5_mask_RLEs *)values[0]);
+ __pyx_v_gt = ((struct __pyx_obj_5_mask_RLEs *)values[1]);
+ __pyx_v_iscrowd = ((PyArrayObject *)values[2]);
+ __pyx_v_m = __Pyx_PyInt_As_siz(values[3]); if (unlikely((__pyx_v_m == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 197, __pyx_L3_error)
+ __pyx_v_n = __Pyx_PyInt_As_siz(values[4]); if (unlikely((__pyx_v_n == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 197, __pyx_L3_error)
+ __pyx_v__iou = ((PyArrayObject *)values[5]);
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("_rleIou", 1, 6, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 197, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.iou._rleIou", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_dt), __pyx_ptype_5_mask_RLEs, 1, "dt", 0))) __PYX_ERR(0, 197, __pyx_L1_error)
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_gt), __pyx_ptype_5_mask_RLEs, 1, "gt", 0))) __PYX_ERR(0, 197, __pyx_L1_error)
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_iscrowd), __pyx_ptype_5numpy_ndarray, 1, "iscrowd", 0))) __PYX_ERR(0, 197, __pyx_L1_error)
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v__iou), __pyx_ptype_5numpy_ndarray, 1, "_iou", 0))) __PYX_ERR(0, 197, __pyx_L1_error)
+ __pyx_r = __pyx_pf_5_mask_3iou_2_rleIou(__pyx_self, __pyx_v_dt, __pyx_v_gt, __pyx_v_iscrowd, __pyx_v_m, __pyx_v_n, __pyx_v__iou);
+
+ /* function exit code */
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_3iou_2_rleIou(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_5_mask_RLEs *__pyx_v_dt, struct __pyx_obj_5_mask_RLEs *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou) {
+ __Pyx_LocalBuf_ND __pyx_pybuffernd__iou;
+ __Pyx_Buffer __pyx_pybuffer__iou;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_iscrowd;
+ __Pyx_Buffer __pyx_pybuffer_iscrowd;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("_rleIou", 0);
+ __pyx_pybuffer_iscrowd.pybuffer.buf = NULL;
+ __pyx_pybuffer_iscrowd.refcount = 0;
+ __pyx_pybuffernd_iscrowd.data = NULL;
+ __pyx_pybuffernd_iscrowd.rcbuffer = &__pyx_pybuffer_iscrowd;
+ __pyx_pybuffer__iou.pybuffer.buf = NULL;
+ __pyx_pybuffer__iou.refcount = 0;
+ __pyx_pybuffernd__iou.data = NULL;
+ __pyx_pybuffernd__iou.rcbuffer = &__pyx_pybuffer__iou;
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer, (PyObject*)__pyx_v_iscrowd, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 197, __pyx_L1_error)
+ }
+ __pyx_pybuffernd_iscrowd.diminfo[0].strides = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_iscrowd.diminfo[0].shape = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.shape[0];
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd__iou.rcbuffer->pybuffer, (PyObject*)__pyx_v__iou, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 197, __pyx_L1_error)
+ }
+ __pyx_pybuffernd__iou.diminfo[0].strides = __pyx_pybuffernd__iou.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd__iou.diminfo[0].shape = __pyx_pybuffernd__iou.rcbuffer->pybuffer.shape[0];
+
+ /* "_mask.pyx":198
+ * return objs
+ * def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data ) # <<<<<<<<<<<<<<
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data )
+ */
+ rleIou(((RLE *)__pyx_v_dt->_R), ((RLE *)__pyx_v_gt->_R), __pyx_v_m, __pyx_v_n, ((byte *)__pyx_v_iscrowd->data), ((double *)__pyx_v__iou->data));
+
+ /* "_mask.pyx":197
+ * raise Exception('unrecognized type. The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')
+ * return objs
+ * def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou): # <<<<<<<<<<<<<<
+ * rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data )
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);
+ __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}
+ __Pyx_AddTraceback("_mask.iou._rleIou", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ goto __pyx_L2;
+ __pyx_L0:;
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);
+ __pyx_L2:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":199
+ * def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data )
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou): # <<<<<<<<<<<<<<
+ * bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data )
+ * def _len(obj):
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_3iou_5_bbIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_3iou_5_bbIou = {"_bbIou", (PyCFunction)__pyx_pw_5_mask_3iou_5_bbIou, METH_VARARGS|METH_KEYWORDS, 0};
+static PyObject *__pyx_pw_5_mask_3iou_5_bbIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyArrayObject *__pyx_v_dt = 0;
+ PyArrayObject *__pyx_v_gt = 0;
+ PyArrayObject *__pyx_v_iscrowd = 0;
+ siz __pyx_v_m;
+ siz __pyx_v_n;
+ PyArrayObject *__pyx_v__iou = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("_bbIou (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dt,&__pyx_n_s_gt,&__pyx_n_s_iscrowd,&__pyx_n_s_m,&__pyx_n_s_n,&__pyx_n_s_iou,0};
+ PyObject* values[6] = {0,0,0,0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dt)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_gt)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_bbIou", 1, 6, 6, 1); __PYX_ERR(0, 199, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_iscrowd)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_bbIou", 1, 6, 6, 2); __PYX_ERR(0, 199, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_m)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_bbIou", 1, 6, 6, 3); __PYX_ERR(0, 199, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 4:
+ if (likely((values[4] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_n)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_bbIou", 1, 6, 6, 4); __PYX_ERR(0, 199, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 5:
+ if (likely((values[5] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_iou)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("_bbIou", 1, 6, 6, 5); __PYX_ERR(0, 199, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_bbIou") < 0)) __PYX_ERR(0, 199, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 6) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ }
+ __pyx_v_dt = ((PyArrayObject *)values[0]);
+ __pyx_v_gt = ((PyArrayObject *)values[1]);
+ __pyx_v_iscrowd = ((PyArrayObject *)values[2]);
+ __pyx_v_m = __Pyx_PyInt_As_siz(values[3]); if (unlikely((__pyx_v_m == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 199, __pyx_L3_error)
+ __pyx_v_n = __Pyx_PyInt_As_siz(values[4]); if (unlikely((__pyx_v_n == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 199, __pyx_L3_error)
+ __pyx_v__iou = ((PyArrayObject *)values[5]);
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("_bbIou", 1, 6, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 199, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.iou._bbIou", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_dt), __pyx_ptype_5numpy_ndarray, 1, "dt", 0))) __PYX_ERR(0, 199, __pyx_L1_error)
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_gt), __pyx_ptype_5numpy_ndarray, 1, "gt", 0))) __PYX_ERR(0, 199, __pyx_L1_error)
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_iscrowd), __pyx_ptype_5numpy_ndarray, 1, "iscrowd", 0))) __PYX_ERR(0, 199, __pyx_L1_error)
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v__iou), __pyx_ptype_5numpy_ndarray, 1, "_iou", 0))) __PYX_ERR(0, 199, __pyx_L1_error)
+ __pyx_r = __pyx_pf_5_mask_3iou_4_bbIou(__pyx_self, __pyx_v_dt, __pyx_v_gt, __pyx_v_iscrowd, __pyx_v_m, __pyx_v_n, __pyx_v__iou);
+
+ /* function exit code */
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_3iou_4_bbIou(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dt, PyArrayObject *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou) {
+ __Pyx_LocalBuf_ND __pyx_pybuffernd__iou;
+ __Pyx_Buffer __pyx_pybuffer__iou;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_dt;
+ __Pyx_Buffer __pyx_pybuffer_dt;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_gt;
+ __Pyx_Buffer __pyx_pybuffer_gt;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_iscrowd;
+ __Pyx_Buffer __pyx_pybuffer_iscrowd;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("_bbIou", 0);
+ __pyx_pybuffer_dt.pybuffer.buf = NULL;
+ __pyx_pybuffer_dt.refcount = 0;
+ __pyx_pybuffernd_dt.data = NULL;
+ __pyx_pybuffernd_dt.rcbuffer = &__pyx_pybuffer_dt;
+ __pyx_pybuffer_gt.pybuffer.buf = NULL;
+ __pyx_pybuffer_gt.refcount = 0;
+ __pyx_pybuffernd_gt.data = NULL;
+ __pyx_pybuffernd_gt.rcbuffer = &__pyx_pybuffer_gt;
+ __pyx_pybuffer_iscrowd.pybuffer.buf = NULL;
+ __pyx_pybuffer_iscrowd.refcount = 0;
+ __pyx_pybuffernd_iscrowd.data = NULL;
+ __pyx_pybuffernd_iscrowd.rcbuffer = &__pyx_pybuffer_iscrowd;
+ __pyx_pybuffer__iou.pybuffer.buf = NULL;
+ __pyx_pybuffer__iou.refcount = 0;
+ __pyx_pybuffernd__iou.data = NULL;
+ __pyx_pybuffernd__iou.rcbuffer = &__pyx_pybuffer__iou;
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_dt.rcbuffer->pybuffer, (PyObject*)__pyx_v_dt, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 199, __pyx_L1_error)
+ }
+ __pyx_pybuffernd_dt.diminfo[0].strides = __pyx_pybuffernd_dt.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_dt.diminfo[0].shape = __pyx_pybuffernd_dt.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_dt.diminfo[1].strides = __pyx_pybuffernd_dt.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_dt.diminfo[1].shape = __pyx_pybuffernd_dt.rcbuffer->pybuffer.shape[1];
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_gt.rcbuffer->pybuffer, (PyObject*)__pyx_v_gt, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 199, __pyx_L1_error)
+ }
+ __pyx_pybuffernd_gt.diminfo[0].strides = __pyx_pybuffernd_gt.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_gt.diminfo[0].shape = __pyx_pybuffernd_gt.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_gt.diminfo[1].strides = __pyx_pybuffernd_gt.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_gt.diminfo[1].shape = __pyx_pybuffernd_gt.rcbuffer->pybuffer.shape[1];
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer, (PyObject*)__pyx_v_iscrowd, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 199, __pyx_L1_error)
+ }
+ __pyx_pybuffernd_iscrowd.diminfo[0].strides = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_iscrowd.diminfo[0].shape = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.shape[0];
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd__iou.rcbuffer->pybuffer, (PyObject*)__pyx_v__iou, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 199, __pyx_L1_error)
+ }
+ __pyx_pybuffernd__iou.diminfo[0].strides = __pyx_pybuffernd__iou.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd__iou.diminfo[0].shape = __pyx_pybuffernd__iou.rcbuffer->pybuffer.shape[0];
+
+ /* "_mask.pyx":200
+ * rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data )
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data ) # <<<<<<<<<<<<<<
+ * def _len(obj):
+ * cdef siz N = 0
+ */
+ bbIou(((BB)__pyx_v_dt->data), ((BB)__pyx_v_gt->data), __pyx_v_m, __pyx_v_n, ((byte *)__pyx_v_iscrowd->data), ((double *)__pyx_v__iou->data));
+
+ /* "_mask.pyx":199
+ * def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data )
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou): # <<<<<<<<<<<<<<
+ * bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data )
+ * def _len(obj):
+ */
+
+ /* function exit code */
+ __pyx_r = Py_None; __Pyx_INCREF(Py_None);
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dt.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_gt.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);
+ __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}
+ __Pyx_AddTraceback("_mask.iou._bbIou", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ goto __pyx_L2;
+ __pyx_L0:;
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dt.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_gt.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);
+ __pyx_L2:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":201
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data )
+ * def _len(obj): # <<<<<<<<<<<<<<
+ * cdef siz N = 0
+ * if type(obj) == RLEs:
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_3iou_7_len(PyObject *__pyx_self, PyObject *__pyx_v_obj); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_3iou_7_len = {"_len", (PyCFunction)__pyx_pw_5_mask_3iou_7_len, METH_O, 0};
+static PyObject *__pyx_pw_5_mask_3iou_7_len(PyObject *__pyx_self, PyObject *__pyx_v_obj) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("_len (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_3iou_6_len(__pyx_self, ((PyObject *)__pyx_v_obj));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_3iou_6_len(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_obj) {
+ siz __pyx_v_N;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ int __pyx_t_2;
+ siz __pyx_t_3;
+ Py_ssize_t __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ __Pyx_RefNannySetupContext("_len", 0);
+
+ /* "_mask.pyx":202
+ * bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data )
+ * def _len(obj):
+ * cdef siz N = 0 # <<<<<<<<<<<<<<
+ * if type(obj) == RLEs:
+ * N = obj.n
+ */
+ __pyx_v_N = 0;
+
+ /* "_mask.pyx":203
+ * def _len(obj):
+ * cdef siz N = 0
+ * if type(obj) == RLEs: # <<<<<<<<<<<<<<
+ * N = obj.n
+ * elif len(obj)==0:
+ */
+ __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)__pyx_ptype_5_mask_RLEs), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 203, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 203, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":204
+ * cdef siz N = 0
+ * if type(obj) == RLEs:
+ * N = obj.n # <<<<<<<<<<<<<<
+ * elif len(obj)==0:
+ * pass
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_obj, __pyx_n_s_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = __Pyx_PyInt_As_siz(__pyx_t_1); if (unlikely((__pyx_t_3 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_N = __pyx_t_3;
+
+ /* "_mask.pyx":203
+ * def _len(obj):
+ * cdef siz N = 0
+ * if type(obj) == RLEs: # <<<<<<<<<<<<<<
+ * N = obj.n
+ * elif len(obj)==0:
+ */
+ goto __pyx_L3;
+ }
+
+ /* "_mask.pyx":205
+ * if type(obj) == RLEs:
+ * N = obj.n
+ * elif len(obj)==0: # <<<<<<<<<<<<<<
+ * pass
+ * elif type(obj) == np.ndarray:
+ */
+ __pyx_t_4 = PyObject_Length(__pyx_v_obj); if (unlikely(__pyx_t_4 == ((Py_ssize_t)-1))) __PYX_ERR(0, 205, __pyx_L1_error)
+ __pyx_t_2 = ((__pyx_t_4 == 0) != 0);
+ if (__pyx_t_2) {
+ goto __pyx_L3;
+ }
+
+ /* "_mask.pyx":207
+ * elif len(obj)==0:
+ * pass
+ * elif type(obj) == np.ndarray: # <<<<<<<<<<<<<<
+ * N = obj.shape[0]
+ * return N
+ */
+ __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 207, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 207, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":208
+ * pass
+ * elif type(obj) == np.ndarray:
+ * N = obj.shape[0] # <<<<<<<<<<<<<<
+ * return N
+ * # convert iscrowd to numpy array
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_obj, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 208, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_5 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 208, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_3 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_3 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 208, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_N = __pyx_t_3;
+
+ /* "_mask.pyx":207
+ * elif len(obj)==0:
+ * pass
+ * elif type(obj) == np.ndarray: # <<<<<<<<<<<<<<
+ * N = obj.shape[0]
+ * return N
+ */
+ }
+ __pyx_L3:;
+
+ /* "_mask.pyx":209
+ * elif type(obj) == np.ndarray:
+ * N = obj.shape[0]
+ * return N # <<<<<<<<<<<<<<
+ * # convert iscrowd to numpy array
+ * cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8)
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_5 = __Pyx_PyInt_From_siz(__pyx_v_N); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 209, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_r = __pyx_t_5;
+ __pyx_t_5 = 0;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":201
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data )
+ * def _len(obj): # <<<<<<<<<<<<<<
+ * cdef siz N = 0
+ * if type(obj) == RLEs:
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_AddTraceback("_mask.iou._len", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":171
+ *
+ * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).
+ * def iou( dt, gt, pyiscrowd ): # <<<<<<<<<<<<<<
+ * def _preproc(objs):
+ * if len(objs) == 0:
+ */
+
+static PyObject *__pyx_pf_5_mask_12iou(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_dt, PyObject *__pyx_v_gt, PyObject *__pyx_v_pyiscrowd) {
+ PyObject *__pyx_v__preproc = 0;
+ PyObject *__pyx_v__rleIou = 0;
+ PyObject *__pyx_v__bbIou = 0;
+ PyObject *__pyx_v__len = 0;
+ PyArrayObject *__pyx_v_iscrowd = 0;
+ siz __pyx_v_m;
+ siz __pyx_v_n;
+ double *__pyx_v__iou;
+ npy_intp __pyx_v_shape[1];
+ PyObject *__pyx_v__iouFun = NULL;
+ PyObject *__pyx_v_iou = NULL;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_iscrowd;
+ __Pyx_Buffer __pyx_pybuffer_iscrowd;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ PyArrayObject *__pyx_t_6 = NULL;
+ siz __pyx_t_7;
+ int __pyx_t_8;
+ int __pyx_t_9;
+ int __pyx_t_10;
+ PyObject *__pyx_t_11 = NULL;
+ __Pyx_RefNannySetupContext("iou", 0);
+ __Pyx_INCREF(__pyx_v_dt);
+ __Pyx_INCREF(__pyx_v_gt);
+ __pyx_pybuffer_iscrowd.pybuffer.buf = NULL;
+ __pyx_pybuffer_iscrowd.refcount = 0;
+ __pyx_pybuffernd_iscrowd.data = NULL;
+ __pyx_pybuffernd_iscrowd.rcbuffer = &__pyx_pybuffer_iscrowd;
+
+ /* "_mask.pyx":172
+ * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).
+ * def iou( dt, gt, pyiscrowd ):
+ * def _preproc(objs): # <<<<<<<<<<<<<<
+ * if len(objs) == 0:
+ * return objs
+ */
+ __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_5_mask_3iou_1_preproc, 0, __pyx_n_s_iou_locals__preproc, NULL, __pyx_n_s_mask, __pyx_d, ((PyObject *)__pyx_codeobj__12)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 172, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v__preproc = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":197
+ * raise Exception('unrecognized type. The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')
+ * return objs
+ * def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou): # <<<<<<<<<<<<<<
+ * rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data )
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ */
+ __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_5_mask_3iou_3_rleIou, 0, __pyx_n_s_iou_locals__rleIou, NULL, __pyx_n_s_mask, __pyx_d, ((PyObject *)__pyx_codeobj__14)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 197, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v__rleIou = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":199
+ * def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * rleIou( dt._R, gt._R, m, n, iscrowd.data, _iou.data )
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou): # <<<<<<<<<<<<<<
+ * bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data )
+ * def _len(obj):
+ */
+ __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_5_mask_3iou_5_bbIou, 0, __pyx_n_s_iou_locals__bbIou, NULL, __pyx_n_s_mask, __pyx_d, ((PyObject *)__pyx_codeobj__16)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 199, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v__bbIou = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":201
+ * def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ * bbIou( dt.data, gt.data, m, n, iscrowd.data, _iou.data )
+ * def _len(obj): # <<<<<<<<<<<<<<
+ * cdef siz N = 0
+ * if type(obj) == RLEs:
+ */
+ __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_5_mask_3iou_7_len, 0, __pyx_n_s_iou_locals__len, NULL, __pyx_n_s_mask, __pyx_d, ((PyObject *)__pyx_codeobj__18)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 201, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_v__len = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":211
+ * return N
+ * # convert iscrowd to numpy array
+ * cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8) # <<<<<<<<<<<<<<
+ * # simple type checking
+ * cdef siz m, n
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_INCREF(__pyx_v_pyiscrowd);
+ __Pyx_GIVEREF(__pyx_v_pyiscrowd);
+ PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_pyiscrowd);
+ __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_uint8); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 211, __pyx_L1_error)
+ __pyx_t_6 = ((PyArrayObject *)__pyx_t_5);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ __pyx_v_iscrowd = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.buf = NULL;
+ __PYX_ERR(0, 211, __pyx_L1_error)
+ } else {__pyx_pybuffernd_iscrowd.diminfo[0].strides = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_iscrowd.diminfo[0].shape = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.shape[0];
+ }
+ }
+ __pyx_t_6 = 0;
+ __pyx_v_iscrowd = ((PyArrayObject *)__pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":214
+ * # simple type checking
+ * cdef siz m, n
+ * dt = _preproc(dt) # <<<<<<<<<<<<<<
+ * gt = _preproc(gt)
+ * m = _len(dt)
+ */
+ __pyx_t_5 = __pyx_pf_5_mask_3iou__preproc(__pyx_v__preproc, __pyx_v_dt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF_SET(__pyx_v_dt, __pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":215
+ * cdef siz m, n
+ * dt = _preproc(dt)
+ * gt = _preproc(gt) # <<<<<<<<<<<<<<
+ * m = _len(dt)
+ * n = _len(gt)
+ */
+ __pyx_t_5 = __pyx_pf_5_mask_3iou__preproc(__pyx_v__preproc, __pyx_v_gt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 215, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF_SET(__pyx_v_gt, __pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":216
+ * dt = _preproc(dt)
+ * gt = _preproc(gt)
+ * m = _len(dt) # <<<<<<<<<<<<<<
+ * n = _len(gt)
+ * if m == 0 or n == 0:
+ */
+ __pyx_t_5 = __pyx_pf_5_mask_3iou_6_len(__pyx_v__len, __pyx_v_dt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 216, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_7 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_7 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 216, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_m = __pyx_t_7;
+
+ /* "_mask.pyx":217
+ * gt = _preproc(gt)
+ * m = _len(dt)
+ * n = _len(gt) # <<<<<<<<<<<<<<
+ * if m == 0 or n == 0:
+ * return []
+ */
+ __pyx_t_5 = __pyx_pf_5_mask_3iou_6_len(__pyx_v__len, __pyx_v_gt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_7 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_7 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_n = __pyx_t_7;
+
+ /* "_mask.pyx":218
+ * m = _len(dt)
+ * n = _len(gt)
+ * if m == 0 or n == 0: # <<<<<<<<<<<<<<
+ * return []
+ * if not type(dt) == type(gt):
+ */
+ __pyx_t_9 = ((__pyx_v_m == 0) != 0);
+ if (!__pyx_t_9) {
+ } else {
+ __pyx_t_8 = __pyx_t_9;
+ goto __pyx_L4_bool_binop_done;
+ }
+ __pyx_t_9 = ((__pyx_v_n == 0) != 0);
+ __pyx_t_8 = __pyx_t_9;
+ __pyx_L4_bool_binop_done:;
+ if (__pyx_t_8) {
+
+ /* "_mask.pyx":219
+ * n = _len(gt)
+ * if m == 0 or n == 0:
+ * return [] # <<<<<<<<<<<<<<
+ * if not type(dt) == type(gt):
+ * raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_5 = PyList_New(0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 219, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_r = __pyx_t_5;
+ __pyx_t_5 = 0;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":218
+ * m = _len(dt)
+ * n = _len(gt)
+ * if m == 0 or n == 0: # <<<<<<<<<<<<<<
+ * return []
+ * if not type(dt) == type(gt):
+ */
+ }
+
+ /* "_mask.pyx":220
+ * if m == 0 or n == 0:
+ * return []
+ * if not type(dt) == type(gt): # <<<<<<<<<<<<<<
+ * raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')
+ *
+ */
+ __pyx_t_5 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_dt)), ((PyObject *)Py_TYPE(__pyx_v_gt)), Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_9 = ((!__pyx_t_8) != 0);
+ if (unlikely(__pyx_t_9)) {
+
+ /* "_mask.pyx":221
+ * return []
+ * if not type(dt) == type(gt):
+ * raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray') # <<<<<<<<<<<<<<
+ *
+ * # define local variables
+ */
+ __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 221, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_Raise(__pyx_t_5, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __PYX_ERR(0, 221, __pyx_L1_error)
+
+ /* "_mask.pyx":220
+ * if m == 0 or n == 0:
+ * return []
+ * if not type(dt) == type(gt): # <<<<<<<<<<<<<<
+ * raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')
+ *
+ */
+ }
+
+ /* "_mask.pyx":224
+ *
+ * # define local variables
+ * cdef double* _iou = 0 # <<<<<<<<<<<<<<
+ * cdef np.npy_intp shape[1]
+ * # check type and assign iou function
+ */
+ __pyx_v__iou = ((double *)0);
+
+ /* "_mask.pyx":227
+ * cdef np.npy_intp shape[1]
+ * # check type and assign iou function
+ * if type(dt) == RLEs: # <<<<<<<<<<<<<<
+ * _iouFun = _rleIou
+ * elif type(dt) == np.ndarray:
+ */
+ __pyx_t_5 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_dt)), ((PyObject *)__pyx_ptype_5_mask_RLEs), Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 227, __pyx_L1_error)
+ __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 227, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (__pyx_t_9) {
+
+ /* "_mask.pyx":228
+ * # check type and assign iou function
+ * if type(dt) == RLEs:
+ * _iouFun = _rleIou # <<<<<<<<<<<<<<
+ * elif type(dt) == np.ndarray:
+ * _iouFun = _bbIou
+ */
+ __Pyx_INCREF(__pyx_v__rleIou);
+ __pyx_v__iouFun = __pyx_v__rleIou;
+
+ /* "_mask.pyx":227
+ * cdef np.npy_intp shape[1]
+ * # check type and assign iou function
+ * if type(dt) == RLEs: # <<<<<<<<<<<<<<
+ * _iouFun = _rleIou
+ * elif type(dt) == np.ndarray:
+ */
+ goto __pyx_L7;
+ }
+
+ /* "_mask.pyx":229
+ * if type(dt) == RLEs:
+ * _iouFun = _rleIou
+ * elif type(dt) == np.ndarray: # <<<<<<<<<<<<<<
+ * _iouFun = _bbIou
+ * else:
+ */
+ __pyx_t_5 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_dt)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 229, __pyx_L1_error)
+ __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 229, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (likely(__pyx_t_9)) {
+
+ /* "_mask.pyx":230
+ * _iouFun = _rleIou
+ * elif type(dt) == np.ndarray:
+ * _iouFun = _bbIou # <<<<<<<<<<<<<<
+ * else:
+ * raise Exception('input data type not allowed.')
+ */
+ __Pyx_INCREF(__pyx_v__bbIou);
+ __pyx_v__iouFun = __pyx_v__bbIou;
+
+ /* "_mask.pyx":229
+ * if type(dt) == RLEs:
+ * _iouFun = _rleIou
+ * elif type(dt) == np.ndarray: # <<<<<<<<<<<<<<
+ * _iouFun = _bbIou
+ * else:
+ */
+ goto __pyx_L7;
+ }
+
+ /* "_mask.pyx":232
+ * _iouFun = _bbIou
+ * else:
+ * raise Exception('input data type not allowed.') # <<<<<<<<<<<<<<
+ * _iou = malloc(m*n* sizeof(double))
+ * iou = np.zeros((m*n, ), dtype=np.double)
+ */
+ /*else*/ {
+ __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__20, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 232, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_Raise(__pyx_t_5, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __PYX_ERR(0, 232, __pyx_L1_error)
+ }
+ __pyx_L7:;
+
+ /* "_mask.pyx":233
+ * else:
+ * raise Exception('input data type not allowed.')
+ * _iou = malloc(m*n* sizeof(double)) # <<<<<<<<<<<<<<
+ * iou = np.zeros((m*n, ), dtype=np.double)
+ * shape[0] = m*n
+ */
+ __pyx_v__iou = ((double *)malloc(((__pyx_v_m * __pyx_v_n) * (sizeof(double)))));
+
+ /* "_mask.pyx":234
+ * raise Exception('input data type not allowed.')
+ * _iou = malloc(m*n* sizeof(double))
+ * iou = np.zeros((m*n, ), dtype=np.double) # <<<<<<<<<<<<<<
+ * shape[0] = m*n
+ * iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)
+ */
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_zeros); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyInt_From_siz((__pyx_v_m * __pyx_v_n)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1);
+ __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_double); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_4) < 0) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_5, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 234, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_iou = __pyx_t_4;
+ __pyx_t_4 = 0;
+
+ /* "_mask.pyx":235
+ * _iou = malloc(m*n* sizeof(double))
+ * iou = np.zeros((m*n, ), dtype=np.double)
+ * shape[0] = m*n # <<<<<<<<<<<<<<
+ * iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)
+ * PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)
+ */
+ (__pyx_v_shape[0]) = (((npy_intp)__pyx_v_m) * __pyx_v_n);
+
+ /* "_mask.pyx":236
+ * iou = np.zeros((m*n, ), dtype=np.double)
+ * shape[0] = m*n
+ * iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou) # <<<<<<<<<<<<<<
+ * PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)
+ * _iouFun(dt, gt, iscrowd, m, n, iou)
+ */
+ __pyx_t_4 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_DOUBLE, __pyx_v__iou); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 236, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF_SET(__pyx_v_iou, __pyx_t_4);
+ __pyx_t_4 = 0;
+
+ /* "_mask.pyx":237
+ * shape[0] = m*n
+ * iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)
+ * PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA) # <<<<<<<<<<<<<<
+ * _iouFun(dt, gt, iscrowd, m, n, iou)
+ * return iou.reshape((m,n), order='F')
+ */
+ if (!(likely(((__pyx_v_iou) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_iou, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 237, __pyx_L1_error)
+ PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_iou), NPY_OWNDATA);
+
+ /* "_mask.pyx":238
+ * iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)
+ * PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)
+ * _iouFun(dt, gt, iscrowd, m, n, iou) # <<<<<<<<<<<<<<
+ * return iou.reshape((m,n), order='F')
+ *
+ */
+ __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_m); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 238, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_5 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 238, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_INCREF(__pyx_v__iouFun);
+ __pyx_t_3 = __pyx_v__iouFun; __pyx_t_2 = NULL;
+ __pyx_t_10 = 0;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_2)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_2);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ __pyx_t_10 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[7] = {__pyx_t_2, __pyx_v_dt, __pyx_v_gt, ((PyObject *)__pyx_v_iscrowd), __pyx_t_1, __pyx_t_5, __pyx_v_iou};
+ __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_10, 6+__pyx_t_10); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 238, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[7] = {__pyx_t_2, __pyx_v_dt, __pyx_v_gt, ((PyObject *)__pyx_v_iscrowd), __pyx_t_1, __pyx_t_5, __pyx_v_iou};
+ __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_10, 6+__pyx_t_10); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 238, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_11 = PyTuple_New(6+__pyx_t_10); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 238, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ if (__pyx_t_2) {
+ __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_2); __pyx_t_2 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_dt);
+ __Pyx_GIVEREF(__pyx_v_dt);
+ PyTuple_SET_ITEM(__pyx_t_11, 0+__pyx_t_10, __pyx_v_dt);
+ __Pyx_INCREF(__pyx_v_gt);
+ __Pyx_GIVEREF(__pyx_v_gt);
+ PyTuple_SET_ITEM(__pyx_t_11, 1+__pyx_t_10, __pyx_v_gt);
+ __Pyx_INCREF(((PyObject *)__pyx_v_iscrowd));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_iscrowd));
+ PyTuple_SET_ITEM(__pyx_t_11, 2+__pyx_t_10, ((PyObject *)__pyx_v_iscrowd));
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_11, 3+__pyx_t_10, __pyx_t_1);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_11, 4+__pyx_t_10, __pyx_t_5);
+ __Pyx_INCREF(__pyx_v_iou);
+ __Pyx_GIVEREF(__pyx_v_iou);
+ PyTuple_SET_ITEM(__pyx_t_11, 5+__pyx_t_10, __pyx_v_iou);
+ __pyx_t_1 = 0;
+ __pyx_t_5 = 0;
+ __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_11, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 238, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+
+ /* "_mask.pyx":239
+ * PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)
+ * _iouFun(dt, gt, iscrowd, m, n, iou)
+ * return iou.reshape((m,n), order='F') # <<<<<<<<<<<<<<
+ *
+ * def toBbox( rleObjs ):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_iou, __pyx_n_s_reshape); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 239, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = __Pyx_PyInt_From_siz(__pyx_v_m); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 239, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_11 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 239, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 239, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_11);
+ PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_11);
+ __pyx_t_3 = 0;
+ __pyx_t_11 = 0;
+ __pyx_t_11 = PyTuple_New(1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 239, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_11);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 239, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_order, __pyx_n_s_F) < 0) __PYX_ERR(0, 239, __pyx_L1_error)
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_11, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 239, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_r = __pyx_t_3;
+ __pyx_t_3 = 0;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":171
+ *
+ * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).
+ * def iou( dt, gt, pyiscrowd ): # <<<<<<<<<<<<<<
+ * def _preproc(objs):
+ * if len(objs) == 0:
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_11);
+ { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);
+ __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}
+ __Pyx_AddTraceback("_mask.iou", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ goto __pyx_L2;
+ __pyx_L0:;
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);
+ __pyx_L2:;
+ __Pyx_XDECREF(__pyx_v__preproc);
+ __Pyx_XDECREF(__pyx_v__rleIou);
+ __Pyx_XDECREF(__pyx_v__bbIou);
+ __Pyx_XDECREF(__pyx_v__len);
+ __Pyx_XDECREF((PyObject *)__pyx_v_iscrowd);
+ __Pyx_XDECREF(__pyx_v__iouFun);
+ __Pyx_XDECREF(__pyx_v_iou);
+ __Pyx_XDECREF(__pyx_v_dt);
+ __Pyx_XDECREF(__pyx_v_gt);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":241
+ * return iou.reshape((m,n), order='F')
+ *
+ * def toBbox( rleObjs ): # <<<<<<<<<<<<<<
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef siz n = Rs.n
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_15toBbox(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_15toBbox = {"toBbox", (PyCFunction)__pyx_pw_5_mask_15toBbox, METH_O, 0};
+static PyObject *__pyx_pw_5_mask_15toBbox(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("toBbox (wrapper)", 0);
+ __pyx_r = __pyx_pf_5_mask_14toBbox(__pyx_self, ((PyObject *)__pyx_v_rleObjs));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_14toBbox(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs = 0;
+ siz __pyx_v_n;
+ BB __pyx_v__bb;
+ npy_intp __pyx_v_shape[1];
+ PyObject *__pyx_v_bb = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ siz __pyx_t_5;
+ PyObject *__pyx_t_6 = NULL;
+ __Pyx_RefNannySetupContext("toBbox", 0);
+
+ /* "_mask.pyx":242
+ *
+ * def toBbox( rleObjs ):
+ * cdef RLEs Rs = _frString(rleObjs) # <<<<<<<<<<<<<<
+ * cdef siz n = Rs.n
+ * cdef BB _bb = malloc(4*n* sizeof(double))
+ */
+ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ if (!__pyx_t_3) {
+ __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, __pyx_v_rleObjs};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ __Pyx_INCREF(__pyx_v_rleObjs);
+ __Pyx_GIVEREF(__pyx_v_rleObjs);
+ PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_v_rleObjs);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5_mask_RLEs))))) __PYX_ERR(0, 242, __pyx_L1_error)
+ __pyx_v_Rs = ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":243
+ * def toBbox( rleObjs ):
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef siz n = Rs.n # <<<<<<<<<<<<<<
+ * cdef BB _bb = malloc(4*n* sizeof(double))
+ * rleToBbox( Rs._R, _bb, n )
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_Rs), __pyx_n_s_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 243, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_t_1); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 243, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_n = __pyx_t_5;
+
+ /* "_mask.pyx":244
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef siz n = Rs.n
+ * cdef BB _bb = malloc(4*n* sizeof(double)) # <<<<<<<<<<<<<<
+ * rleToBbox( Rs._R, _bb, n )
+ * cdef np.npy_intp shape[1]
+ */
+ __pyx_v__bb = ((BB)malloc(((4 * __pyx_v_n) * (sizeof(double)))));
+
+ /* "_mask.pyx":245
+ * cdef siz n = Rs.n
+ * cdef BB _bb = malloc(4*n* sizeof(double))
+ * rleToBbox( Rs._R, _bb, n ) # <<<<<<<<<<<<<<
+ * cdef np.npy_intp shape[1]
+ * shape[0] = 4*n
+ */
+ rleToBbox(((RLE const *)__pyx_v_Rs->_R), __pyx_v__bb, __pyx_v_n);
+
+ /* "_mask.pyx":247
+ * rleToBbox( Rs._R, _bb, n )
+ * cdef np.npy_intp shape[1]
+ * shape[0] = 4*n # <<<<<<<<<<<<<<
+ * bb = np.array((1,4*n), dtype=np.double)
+ * bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))
+ */
+ (__pyx_v_shape[0]) = (((npy_intp)4) * __pyx_v_n);
+
+ /* "_mask.pyx":248
+ * cdef np.npy_intp shape[1]
+ * shape[0] = 4*n
+ * bb = np.array((1,4*n), dtype=np.double) # <<<<<<<<<<<<<<
+ * bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))
+ * PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyInt_From_siz((4 * __pyx_v_n)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_INCREF(__pyx_int_1);
+ __Pyx_GIVEREF(__pyx_int_1);
+ PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_int_1);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1);
+ __pyx_t_1 = 0;
+ __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_GIVEREF(__pyx_t_4);
+ PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4);
+ __pyx_t_4 = 0;
+ __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_double); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_6) < 0) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 248, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_v_bb = __pyx_t_6;
+ __pyx_t_6 = 0;
+
+ /* "_mask.pyx":249
+ * shape[0] = 4*n
+ * bb = np.array((1,4*n), dtype=np.double)
+ * bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4)) # <<<<<<<<<<<<<<
+ * PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)
+ * return bb
+ */
+ __pyx_t_4 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_DOUBLE, __pyx_v__bb); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 249, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_reshape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 249, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 249, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 249, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_4);
+ PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4);
+ __Pyx_INCREF(__pyx_int_4);
+ __Pyx_GIVEREF(__pyx_int_4);
+ PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_int_4);
+ __pyx_t_4 = 0;
+ __pyx_t_4 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ }
+ }
+ if (!__pyx_t_4) {
+ __pyx_t_6 = __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 249, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_GOTREF(__pyx_t_6);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_2};
+ __pyx_t_6 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 249, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_2};
+ __pyx_t_6 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 249, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_3 = PyTuple_New(1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 249, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); __pyx_t_4 = NULL;
+ __Pyx_GIVEREF(__pyx_t_2);
+ PyTuple_SET_ITEM(__pyx_t_3, 0+1, __pyx_t_2);
+ __pyx_t_2 = 0;
+ __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 249, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF_SET(__pyx_v_bb, __pyx_t_6);
+ __pyx_t_6 = 0;
+
+ /* "_mask.pyx":250
+ * bb = np.array((1,4*n), dtype=np.double)
+ * bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))
+ * PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA) # <<<<<<<<<<<<<<
+ * return bb
+ *
+ */
+ if (!(likely(((__pyx_v_bb) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_bb, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 250, __pyx_L1_error)
+ PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_bb), NPY_OWNDATA);
+
+ /* "_mask.pyx":251
+ * bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))
+ * PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)
+ * return bb # <<<<<<<<<<<<<<
+ *
+ * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_bb);
+ __pyx_r = __pyx_v_bb;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":241
+ * return iou.reshape((m,n), order='F')
+ *
+ * def toBbox( rleObjs ): # <<<<<<<<<<<<<<
+ * cdef RLEs Rs = _frString(rleObjs)
+ * cdef siz n = Rs.n
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_AddTraceback("_mask.toBbox", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_Rs);
+ __Pyx_XDECREF(__pyx_v_bb);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":253
+ * return bb
+ *
+ * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ): # <<<<<<<<<<<<<<
+ * cdef siz n = bb.shape[0]
+ * Rs = RLEs(n)
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_17frBbox(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_17frBbox = {"frBbox", (PyCFunction)__pyx_pw_5_mask_17frBbox, METH_VARARGS|METH_KEYWORDS, 0};
+static PyObject *__pyx_pw_5_mask_17frBbox(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyArrayObject *__pyx_v_bb = 0;
+ siz __pyx_v_h;
+ siz __pyx_v_w;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("frBbox (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_bb,&__pyx_n_s_h,&__pyx_n_s_w,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_bb)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("frBbox", 1, 3, 3, 1); __PYX_ERR(0, 253, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("frBbox", 1, 3, 3, 2); __PYX_ERR(0, 253, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "frBbox") < 0)) __PYX_ERR(0, 253, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_bb = ((PyArrayObject *)values[0]);
+ __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 253, __pyx_L3_error)
+ __pyx_v_w = __Pyx_PyInt_As_siz(values[2]); if (unlikely((__pyx_v_w == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 253, __pyx_L3_error)
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("frBbox", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 253, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.frBbox", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_bb), __pyx_ptype_5numpy_ndarray, 1, "bb", 0))) __PYX_ERR(0, 253, __pyx_L1_error)
+ __pyx_r = __pyx_pf_5_mask_16frBbox(__pyx_self, __pyx_v_bb, __pyx_v_h, __pyx_v_w);
+
+ /* function exit code */
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_16frBbox(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_bb, siz __pyx_v_h, siz __pyx_v_w) {
+ siz __pyx_v_n;
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs = NULL;
+ PyObject *__pyx_v_objs = NULL;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_bb;
+ __Pyx_Buffer __pyx_pybuffer_bb;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ __Pyx_RefNannySetupContext("frBbox", 0);
+ __pyx_pybuffer_bb.pybuffer.buf = NULL;
+ __pyx_pybuffer_bb.refcount = 0;
+ __pyx_pybuffernd_bb.data = NULL;
+ __pyx_pybuffernd_bb.rcbuffer = &__pyx_pybuffer_bb;
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_bb.rcbuffer->pybuffer, (PyObject*)__pyx_v_bb, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 253, __pyx_L1_error)
+ }
+ __pyx_pybuffernd_bb.diminfo[0].strides = __pyx_pybuffernd_bb.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_bb.diminfo[0].shape = __pyx_pybuffernd_bb.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_bb.diminfo[1].strides = __pyx_pybuffernd_bb.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_bb.diminfo[1].shape = __pyx_pybuffernd_bb.rcbuffer->pybuffer.shape[1];
+
+ /* "_mask.pyx":254
+ *
+ * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):
+ * cdef siz n = bb.shape[0] # <<<<<<<<<<<<<<
+ * Rs = RLEs(n)
+ * rleFrBbox( Rs._R, bb.data, h, w, n )
+ */
+ __pyx_v_n = (__pyx_v_bb->dimensions[0]);
+
+ /* "_mask.pyx":255
+ * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):
+ * cdef siz n = bb.shape[0]
+ * Rs = RLEs(n) # <<<<<<<<<<<<<<
+ * rleFrBbox( Rs._R, bb.data, h, w, n )
+ * objs = _toString(Rs)
+ */
+ __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 255, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_2 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_5_mask_RLEs), __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 255, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_Rs = ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_2);
+ __pyx_t_2 = 0;
+
+ /* "_mask.pyx":256
+ * cdef siz n = bb.shape[0]
+ * Rs = RLEs(n)
+ * rleFrBbox( Rs._R, bb.data, h, w, n ) # <<<<<<<<<<<<<<
+ * objs = _toString(Rs)
+ * return objs
+ */
+ rleFrBbox(((RLE *)__pyx_v_Rs->_R), ((BB const )__pyx_v_bb->data), __pyx_v_h, __pyx_v_w, __pyx_v_n);
+
+ /* "_mask.pyx":257
+ * Rs = RLEs(n)
+ * rleFrBbox( Rs._R, bb.data, h, w, n )
+ * objs = _toString(Rs) # <<<<<<<<<<<<<<
+ * return objs
+ *
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_3 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_3)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ }
+ }
+ if (!__pyx_t_3) {
+ __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_t_1, ((PyObject *)__pyx_v_Rs)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_Rs)};
+ __pyx_t_2 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_3, ((PyObject *)__pyx_v_Rs)};
+ __pyx_t_2 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_GOTREF(__pyx_t_2);
+ } else
+ #endif
+ {
+ __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); __pyx_t_3 = NULL;
+ __Pyx_INCREF(((PyObject *)__pyx_v_Rs));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_Rs));
+ PyTuple_SET_ITEM(__pyx_t_4, 0+1, ((PyObject *)__pyx_v_Rs));
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_objs = __pyx_t_2;
+ __pyx_t_2 = 0;
+
+ /* "_mask.pyx":258
+ * rleFrBbox( Rs._R, bb.data, h, w, n )
+ * objs = _toString(Rs)
+ * return objs # <<<<<<<<<<<<<<
+ *
+ * def frPoly( poly, siz h, siz w ):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_objs);
+ __pyx_r = __pyx_v_objs;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":253
+ * return bb
+ *
+ * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ): # <<<<<<<<<<<<<<
+ * cdef siz n = bb.shape[0]
+ * Rs = RLEs(n)
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_bb.rcbuffer->pybuffer);
+ __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}
+ __Pyx_AddTraceback("_mask.frBbox", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ goto __pyx_L2;
+ __pyx_L0:;
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_bb.rcbuffer->pybuffer);
+ __pyx_L2:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_Rs);
+ __Pyx_XDECREF(__pyx_v_objs);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":260
+ * return objs
+ *
+ * def frPoly( poly, siz h, siz w ): # <<<<<<<<<<<<<<
+ * cdef np.ndarray[np.double_t, ndim=1] np_poly
+ * n = len(poly)
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_19frPoly(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_19frPoly = {"frPoly", (PyCFunction)__pyx_pw_5_mask_19frPoly, METH_VARARGS|METH_KEYWORDS, 0};
+static PyObject *__pyx_pw_5_mask_19frPoly(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_poly = 0;
+ siz __pyx_v_h;
+ siz __pyx_v_w;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("frPoly (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_poly,&__pyx_n_s_h,&__pyx_n_s_w,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_poly)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("frPoly", 1, 3, 3, 1); __PYX_ERR(0, 260, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("frPoly", 1, 3, 3, 2); __PYX_ERR(0, 260, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "frPoly") < 0)) __PYX_ERR(0, 260, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_poly = values[0];
+ __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 260, __pyx_L3_error)
+ __pyx_v_w = __Pyx_PyInt_As_siz(values[2]); if (unlikely((__pyx_v_w == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 260, __pyx_L3_error)
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("frPoly", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 260, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.frPoly", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_5_mask_18frPoly(__pyx_self, __pyx_v_poly, __pyx_v_h, __pyx_v_w);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_18frPoly(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_poly, siz __pyx_v_h, siz __pyx_v_w) {
+ PyArrayObject *__pyx_v_np_poly = 0;
+ Py_ssize_t __pyx_v_n;
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs = NULL;
+ PyObject *__pyx_v_i = NULL;
+ PyObject *__pyx_v_p = NULL;
+ PyObject *__pyx_v_objs = NULL;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_np_poly;
+ __Pyx_Buffer __pyx_pybuffer_np_poly;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ Py_ssize_t __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *(*__pyx_t_4)(PyObject *);
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ PyObject *__pyx_t_7 = NULL;
+ PyObject *__pyx_t_8 = NULL;
+ PyObject *__pyx_t_9 = NULL;
+ PyArrayObject *__pyx_t_10 = NULL;
+ int __pyx_t_11;
+ PyObject *__pyx_t_12 = NULL;
+ PyObject *__pyx_t_13 = NULL;
+ PyObject *__pyx_t_14 = NULL;
+ Py_ssize_t __pyx_t_15;
+ Py_ssize_t __pyx_t_16;
+ __Pyx_RefNannySetupContext("frPoly", 0);
+ __pyx_pybuffer_np_poly.pybuffer.buf = NULL;
+ __pyx_pybuffer_np_poly.refcount = 0;
+ __pyx_pybuffernd_np_poly.data = NULL;
+ __pyx_pybuffernd_np_poly.rcbuffer = &__pyx_pybuffer_np_poly;
+
+ /* "_mask.pyx":262
+ * def frPoly( poly, siz h, siz w ):
+ * cdef np.ndarray[np.double_t, ndim=1] np_poly
+ * n = len(poly) # <<<<<<<<<<<<<<
+ * Rs = RLEs(n)
+ * for i, p in enumerate(poly):
+ */
+ __pyx_t_1 = PyObject_Length(__pyx_v_poly); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 262, __pyx_L1_error)
+ __pyx_v_n = __pyx_t_1;
+
+ /* "_mask.pyx":263
+ * cdef np.ndarray[np.double_t, ndim=1] np_poly
+ * n = len(poly)
+ * Rs = RLEs(n) # <<<<<<<<<<<<<<
+ * for i, p in enumerate(poly):
+ * np_poly = np.array(p, dtype=np.double, order='F')
+ */
+ __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 263, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_5_mask_RLEs), __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 263, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_Rs = ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":264
+ * n = len(poly)
+ * Rs = RLEs(n)
+ * for i, p in enumerate(poly): # <<<<<<<<<<<<<<
+ * np_poly = np.array(p, dtype=np.double, order='F')
+ * rleFrPoly( &Rs._R[i], np_poly.data, int(len(p)/2), h, w )
+ */
+ __Pyx_INCREF(__pyx_int_0);
+ __pyx_t_3 = __pyx_int_0;
+ if (likely(PyList_CheckExact(__pyx_v_poly)) || PyTuple_CheckExact(__pyx_v_poly)) {
+ __pyx_t_2 = __pyx_v_poly; __Pyx_INCREF(__pyx_t_2); __pyx_t_1 = 0;
+ __pyx_t_4 = NULL;
+ } else {
+ __pyx_t_1 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_poly); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 264, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 264, __pyx_L1_error)
+ }
+ for (;;) {
+ if (likely(!__pyx_t_4)) {
+ if (likely(PyList_CheckExact(__pyx_t_2))) {
+ if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 264, __pyx_L1_error)
+ #else
+ __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 264, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ #endif
+ } else {
+ if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_2)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 264, __pyx_L1_error)
+ #else
+ __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 264, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ #endif
+ }
+ } else {
+ __pyx_t_5 = __pyx_t_4(__pyx_t_2);
+ if (unlikely(!__pyx_t_5)) {
+ PyObject* exc_type = PyErr_Occurred();
+ if (exc_type) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();
+ else __PYX_ERR(0, 264, __pyx_L1_error)
+ }
+ break;
+ }
+ __Pyx_GOTREF(__pyx_t_5);
+ }
+ __Pyx_XDECREF_SET(__pyx_v_p, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_3);
+ __pyx_t_5 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 264, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_3);
+ __pyx_t_3 = __pyx_t_5;
+ __pyx_t_5 = 0;
+
+ /* "_mask.pyx":265
+ * Rs = RLEs(n)
+ * for i, p in enumerate(poly):
+ * np_poly = np.array(p, dtype=np.double, order='F') # <<<<<<<<<<<<<<
+ * rleFrPoly( &Rs._R[i], np_poly.data, int(len(p)/2), h, w )
+ * objs = _toString(Rs)
+ */
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 265, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_array); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 265, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 265, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_INCREF(__pyx_v_p);
+ __Pyx_GIVEREF(__pyx_v_p);
+ PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_p);
+ __pyx_t_7 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 265, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_8 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 265, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_double); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 265, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_dtype, __pyx_t_9) < 0) __PYX_ERR(0, 265, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
+ if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_order, __pyx_n_s_F) < 0) __PYX_ERR(0, 265, __pyx_L1_error)
+ __pyx_t_9 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_5, __pyx_t_7); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 265, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_9);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (!(likely(((__pyx_t_9) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_9, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 265, __pyx_L1_error)
+ __pyx_t_10 = ((PyArrayObject *)__pyx_t_9);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer);
+ __pyx_t_11 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer, (PyObject*)__pyx_t_10, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);
+ if (unlikely(__pyx_t_11 < 0)) {
+ PyErr_Fetch(&__pyx_t_12, &__pyx_t_13, &__pyx_t_14);
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer, (PyObject*)__pyx_v_np_poly, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ Py_XDECREF(__pyx_t_12); Py_XDECREF(__pyx_t_13); Py_XDECREF(__pyx_t_14);
+ __Pyx_RaiseBufferFallbackError();
+ } else {
+ PyErr_Restore(__pyx_t_12, __pyx_t_13, __pyx_t_14);
+ }
+ __pyx_t_12 = __pyx_t_13 = __pyx_t_14 = 0;
+ }
+ __pyx_pybuffernd_np_poly.diminfo[0].strides = __pyx_pybuffernd_np_poly.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_np_poly.diminfo[0].shape = __pyx_pybuffernd_np_poly.rcbuffer->pybuffer.shape[0];
+ if (unlikely(__pyx_t_11 < 0)) __PYX_ERR(0, 265, __pyx_L1_error)
+ }
+ __pyx_t_10 = 0;
+ __Pyx_XDECREF_SET(__pyx_v_np_poly, ((PyArrayObject *)__pyx_t_9));
+ __pyx_t_9 = 0;
+
+ /* "_mask.pyx":266
+ * for i, p in enumerate(poly):
+ * np_poly = np.array(p, dtype=np.double, order='F')
+ * rleFrPoly( &Rs._R[i], np_poly.data, int(len(p)/2), h, w ) # <<<<<<<<<<<<<<
+ * objs = _toString(Rs)
+ * return objs
+ */
+ __pyx_t_15 = __Pyx_PyIndex_AsSsize_t(__pyx_v_i); if (unlikely((__pyx_t_15 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 266, __pyx_L1_error)
+ __pyx_t_16 = PyObject_Length(__pyx_v_p); if (unlikely(__pyx_t_16 == ((Py_ssize_t)-1))) __PYX_ERR(0, 266, __pyx_L1_error)
+ rleFrPoly(((RLE *)(&(__pyx_v_Rs->_R[__pyx_t_15]))), ((double const *)__pyx_v_np_poly->data), ((siz)__Pyx_div_Py_ssize_t(__pyx_t_16, 2)), __pyx_v_h, __pyx_v_w);
+
+ /* "_mask.pyx":264
+ * n = len(poly)
+ * Rs = RLEs(n)
+ * for i, p in enumerate(poly): # <<<<<<<<<<<<<<
+ * np_poly = np.array(p, dtype=np.double, order='F')
+ * rleFrPoly( &Rs._R[i], np_poly.data, int(len(p)/2), h, w )
+ */
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+
+ /* "_mask.pyx":267
+ * np_poly = np.array(p, dtype=np.double, order='F')
+ * rleFrPoly( &Rs._R[i], np_poly.data, int(len(p)/2), h, w )
+ * objs = _toString(Rs) # <<<<<<<<<<<<<<
+ * return objs
+ *
+ */
+ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 267, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_9 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_9)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_9);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ if (!__pyx_t_9) {
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_Rs)); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 267, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_9, ((PyObject *)__pyx_v_Rs)};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 267, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_9, ((PyObject *)__pyx_v_Rs)};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 267, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else
+ #endif
+ {
+ __pyx_t_7 = PyTuple_New(1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 267, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_GIVEREF(__pyx_t_9); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_9); __pyx_t_9 = NULL;
+ __Pyx_INCREF(((PyObject *)__pyx_v_Rs));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_Rs));
+ PyTuple_SET_ITEM(__pyx_t_7, 0+1, ((PyObject *)__pyx_v_Rs));
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_7, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 267, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_v_objs = __pyx_t_3;
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":268
+ * rleFrPoly( &Rs._R[i], np_poly.data, int(len(p)/2), h, w )
+ * objs = _toString(Rs)
+ * return objs # <<<<<<<<<<<<<<
+ *
+ * def frUncompressedRLE(ucRles, siz h, siz w):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_objs);
+ __pyx_r = __pyx_v_objs;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":260
+ * return objs
+ *
+ * def frPoly( poly, siz h, siz w ): # <<<<<<<<<<<<<<
+ * cdef np.ndarray[np.double_t, ndim=1] np_poly
+ * n = len(poly)
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_XDECREF(__pyx_t_8);
+ __Pyx_XDECREF(__pyx_t_9);
+ { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer);
+ __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}
+ __Pyx_AddTraceback("_mask.frPoly", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ goto __pyx_L2;
+ __pyx_L0:;
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer);
+ __pyx_L2:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_np_poly);
+ __Pyx_XDECREF((PyObject *)__pyx_v_Rs);
+ __Pyx_XDECREF(__pyx_v_i);
+ __Pyx_XDECREF(__pyx_v_p);
+ __Pyx_XDECREF(__pyx_v_objs);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":270
+ * return objs
+ *
+ * def frUncompressedRLE(ucRles, siz h, siz w): # <<<<<<<<<<<<<<
+ * cdef np.ndarray[np.uint32_t, ndim=1] cnts
+ * cdef RLE R
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_21frUncompressedRLE(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_21frUncompressedRLE = {"frUncompressedRLE", (PyCFunction)__pyx_pw_5_mask_21frUncompressedRLE, METH_VARARGS|METH_KEYWORDS, 0};
+static PyObject *__pyx_pw_5_mask_21frUncompressedRLE(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_ucRles = 0;
+ CYTHON_UNUSED siz __pyx_v_h;
+ CYTHON_UNUSED siz __pyx_v_w;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("frUncompressedRLE (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_ucRles,&__pyx_n_s_h,&__pyx_n_s_w,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_ucRles)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("frUncompressedRLE", 1, 3, 3, 1); __PYX_ERR(0, 270, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("frUncompressedRLE", 1, 3, 3, 2); __PYX_ERR(0, 270, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "frUncompressedRLE") < 0)) __PYX_ERR(0, 270, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_ucRles = values[0];
+ __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 270, __pyx_L3_error)
+ __pyx_v_w = __Pyx_PyInt_As_siz(values[2]); if (unlikely((__pyx_v_w == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 270, __pyx_L3_error)
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("frUncompressedRLE", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 270, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.frUncompressedRLE", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_5_mask_20frUncompressedRLE(__pyx_self, __pyx_v_ucRles, __pyx_v_h, __pyx_v_w);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_20frUncompressedRLE(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_ucRles, CYTHON_UNUSED siz __pyx_v_h, CYTHON_UNUSED siz __pyx_v_w) {
+ PyArrayObject *__pyx_v_cnts = 0;
+ RLE __pyx_v_R;
+ uint *__pyx_v_data;
+ Py_ssize_t __pyx_v_n;
+ PyObject *__pyx_v_objs = NULL;
+ Py_ssize_t __pyx_v_i;
+ struct __pyx_obj_5_mask_RLEs *__pyx_v_Rs = NULL;
+ Py_ssize_t __pyx_v_j;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_cnts;
+ __Pyx_Buffer __pyx_pybuffer_cnts;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ Py_ssize_t __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ Py_ssize_t __pyx_t_3;
+ Py_ssize_t __pyx_t_4;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ PyObject *__pyx_t_7 = NULL;
+ PyObject *__pyx_t_8 = NULL;
+ PyArrayObject *__pyx_t_9 = NULL;
+ int __pyx_t_10;
+ PyObject *__pyx_t_11 = NULL;
+ PyObject *__pyx_t_12 = NULL;
+ PyObject *__pyx_t_13 = NULL;
+ Py_ssize_t __pyx_t_14;
+ Py_ssize_t __pyx_t_15;
+ Py_ssize_t __pyx_t_16;
+ Py_ssize_t __pyx_t_17;
+ RLE __pyx_t_18;
+ siz __pyx_t_19;
+ int __pyx_t_20;
+ __Pyx_RefNannySetupContext("frUncompressedRLE", 0);
+ __pyx_pybuffer_cnts.pybuffer.buf = NULL;
+ __pyx_pybuffer_cnts.refcount = 0;
+ __pyx_pybuffernd_cnts.data = NULL;
+ __pyx_pybuffernd_cnts.rcbuffer = &__pyx_pybuffer_cnts;
+
+ /* "_mask.pyx":274
+ * cdef RLE R
+ * cdef uint *data
+ * n = len(ucRles) # <<<<<<<<<<<<<<
+ * objs = []
+ * for i in range(n):
+ */
+ __pyx_t_1 = PyObject_Length(__pyx_v_ucRles); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 274, __pyx_L1_error)
+ __pyx_v_n = __pyx_t_1;
+
+ /* "_mask.pyx":275
+ * cdef uint *data
+ * n = len(ucRles)
+ * objs = [] # <<<<<<<<<<<<<<
+ * for i in range(n):
+ * Rs = RLEs(1)
+ */
+ __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 275, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_v_objs = ((PyObject*)__pyx_t_2);
+ __pyx_t_2 = 0;
+
+ /* "_mask.pyx":276
+ * n = len(ucRles)
+ * objs = []
+ * for i in range(n): # <<<<<<<<<<<<<<
+ * Rs = RLEs(1)
+ * cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)
+ */
+ __pyx_t_1 = __pyx_v_n;
+ __pyx_t_3 = __pyx_t_1;
+ for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {
+ __pyx_v_i = __pyx_t_4;
+
+ /* "_mask.pyx":277
+ * objs = []
+ * for i in range(n):
+ * Rs = RLEs(1) # <<<<<<<<<<<<<<
+ * cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)
+ * # time for malloc can be saved here but it's fine
+ */
+ __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_5_mask_RLEs), __pyx_tuple__21, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 277, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_XDECREF_SET(__pyx_v_Rs, ((struct __pyx_obj_5_mask_RLEs *)__pyx_t_2));
+ __pyx_t_2 = 0;
+
+ /* "_mask.pyx":278
+ * for i in range(n):
+ * Rs = RLEs(1)
+ * cnts = np.array(ucRles[i]['counts'], dtype=np.uint32) # <<<<<<<<<<<<<<
+ * # time for malloc can be saved here but it's fine
+ * data = malloc(len(cnts)* sizeof(uint))
+ */
+ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_array); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_ucRles, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_t_2, __pyx_n_s_counts); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_6);
+ PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_6);
+ __pyx_t_6 = 0;
+ __pyx_t_6 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_uint32); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (PyDict_SetItem(__pyx_t_6, __pyx_n_s_dtype, __pyx_t_8) < 0) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_2, __pyx_t_6); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 278, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (!(likely(((__pyx_t_8) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_8, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 278, __pyx_L1_error)
+ __pyx_t_9 = ((PyArrayObject *)__pyx_t_8);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer);
+ __pyx_t_10 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer, (PyObject*)__pyx_t_9, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);
+ if (unlikely(__pyx_t_10 < 0)) {
+ PyErr_Fetch(&__pyx_t_11, &__pyx_t_12, &__pyx_t_13);
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer, (PyObject*)__pyx_v_cnts, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ Py_XDECREF(__pyx_t_11); Py_XDECREF(__pyx_t_12); Py_XDECREF(__pyx_t_13);
+ __Pyx_RaiseBufferFallbackError();
+ } else {
+ PyErr_Restore(__pyx_t_11, __pyx_t_12, __pyx_t_13);
+ }
+ __pyx_t_11 = __pyx_t_12 = __pyx_t_13 = 0;
+ }
+ __pyx_pybuffernd_cnts.diminfo[0].strides = __pyx_pybuffernd_cnts.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_cnts.diminfo[0].shape = __pyx_pybuffernd_cnts.rcbuffer->pybuffer.shape[0];
+ if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 278, __pyx_L1_error)
+ }
+ __pyx_t_9 = 0;
+ __Pyx_XDECREF_SET(__pyx_v_cnts, ((PyArrayObject *)__pyx_t_8));
+ __pyx_t_8 = 0;
+
+ /* "_mask.pyx":280
+ * cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)
+ * # time for malloc can be saved here but it's fine
+ * data = malloc(len(cnts)* sizeof(uint)) # <<<<<<<<<<<<<<
+ * for j in range(len(cnts)):
+ * data[j] = cnts[j]
+ */
+ __pyx_t_14 = PyObject_Length(((PyObject *)__pyx_v_cnts)); if (unlikely(__pyx_t_14 == ((Py_ssize_t)-1))) __PYX_ERR(0, 280, __pyx_L1_error)
+ __pyx_v_data = ((uint *)malloc((__pyx_t_14 * (sizeof(unsigned int)))));
+
+ /* "_mask.pyx":281
+ * # time for malloc can be saved here but it's fine
+ * data = malloc(len(cnts)* sizeof(uint))
+ * for j in range(len(cnts)): # <<<<<<<<<<<<<<
+ * data[j] = cnts[j]
+ * R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), data)
+ */
+ __pyx_t_14 = PyObject_Length(((PyObject *)__pyx_v_cnts)); if (unlikely(__pyx_t_14 == ((Py_ssize_t)-1))) __PYX_ERR(0, 281, __pyx_L1_error)
+ __pyx_t_15 = __pyx_t_14;
+ for (__pyx_t_16 = 0; __pyx_t_16 < __pyx_t_15; __pyx_t_16+=1) {
+ __pyx_v_j = __pyx_t_16;
+
+ /* "_mask.pyx":282
+ * data = malloc(len(cnts)* sizeof(uint))
+ * for j in range(len(cnts)):
+ * data[j] = cnts[j] # <<<<<<<<<<<<<<
+ * R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), data)
+ * Rs._R[0] = R
+ */
+ __pyx_t_17 = __pyx_v_j;
+ __pyx_t_10 = -1;
+ if (__pyx_t_17 < 0) {
+ __pyx_t_17 += __pyx_pybuffernd_cnts.diminfo[0].shape;
+ if (unlikely(__pyx_t_17 < 0)) __pyx_t_10 = 0;
+ } else if (unlikely(__pyx_t_17 >= __pyx_pybuffernd_cnts.diminfo[0].shape)) __pyx_t_10 = 0;
+ if (unlikely(__pyx_t_10 != -1)) {
+ __Pyx_RaiseBufferIndexError(__pyx_t_10);
+ __PYX_ERR(0, 282, __pyx_L1_error)
+ }
+ (__pyx_v_data[__pyx_v_j]) = ((uint)(*__Pyx_BufPtrStrided1d(__pyx_t_5numpy_uint32_t *, __pyx_pybuffernd_cnts.rcbuffer->pybuffer.buf, __pyx_t_17, __pyx_pybuffernd_cnts.diminfo[0].strides)));
+ }
+
+ /* "_mask.pyx":283
+ * for j in range(len(cnts)):
+ * data[j] = cnts[j]
+ * R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), data) # <<<<<<<<<<<<<<
+ * Rs._R[0] = R
+ * objs.append(_toString(Rs)[0])
+ */
+ __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_ucRles, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 283, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_t_8, __pyx_n_s_size); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 283, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = __Pyx_GetItemInt(__pyx_t_6, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 283, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_19 = __Pyx_PyInt_As_siz(__pyx_t_8); if (unlikely((__pyx_t_19 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 283, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_18.h = __pyx_t_19;
+ __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_ucRles, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 283, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_t_8, __pyx_n_s_size); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 283, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = __Pyx_GetItemInt(__pyx_t_6, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 283, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_19 = __Pyx_PyInt_As_siz(__pyx_t_8); if (unlikely((__pyx_t_19 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 283, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_18.w = __pyx_t_19;
+ __pyx_t_14 = PyObject_Length(((PyObject *)__pyx_v_cnts)); if (unlikely(__pyx_t_14 == ((Py_ssize_t)-1))) __PYX_ERR(0, 283, __pyx_L1_error)
+ __pyx_t_18.m = __pyx_t_14;
+ __pyx_t_18.cnts = ((uint *)__pyx_v_data);
+ __pyx_v_R = __pyx_t_18;
+
+ /* "_mask.pyx":284
+ * data[j] = cnts[j]
+ * R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), data)
+ * Rs._R[0] = R # <<<<<<<<<<<<<<
+ * objs.append(_toString(Rs)[0])
+ * return objs
+ */
+ (__pyx_v_Rs->_R[0]) = __pyx_v_R;
+
+ /* "_mask.pyx":285
+ * R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), data)
+ * Rs._R[0] = R
+ * objs.append(_toString(Rs)[0]) # <<<<<<<<<<<<<<
+ * return objs
+ *
+ */
+ __pyx_t_6 = __Pyx_GetModuleGlobalName(__pyx_n_s_toString); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 285, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_2 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) {
+ __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_6);
+ if (likely(__pyx_t_2)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);
+ __Pyx_INCREF(__pyx_t_2);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_6, function);
+ }
+ }
+ if (!__pyx_t_2) {
+ __pyx_t_8 = __Pyx_PyObject_CallOneArg(__pyx_t_6, ((PyObject *)__pyx_v_Rs)); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 285, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_6)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_2, ((PyObject *)__pyx_v_Rs)};
+ __pyx_t_8 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 285, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_GOTREF(__pyx_t_8);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_2, ((PyObject *)__pyx_v_Rs)};
+ __pyx_t_8 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 285, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_GOTREF(__pyx_t_8);
+ } else
+ #endif
+ {
+ __pyx_t_5 = PyTuple_New(1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 285, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); __pyx_t_2 = NULL;
+ __Pyx_INCREF(((PyObject *)__pyx_v_Rs));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_Rs));
+ PyTuple_SET_ITEM(__pyx_t_5, 0+1, ((PyObject *)__pyx_v_Rs));
+ __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_5, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 285, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_6 = __Pyx_GetItemInt(__pyx_t_8, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 285, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_20 = __Pyx_PyList_Append(__pyx_v_objs, __pyx_t_6); if (unlikely(__pyx_t_20 == ((int)-1))) __PYX_ERR(0, 285, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+
+ /* "_mask.pyx":286
+ * Rs._R[0] = R
+ * objs.append(_toString(Rs)[0])
+ * return objs # <<<<<<<<<<<<<<
+ *
+ * def frPyObjects(pyobj, h, w):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_objs);
+ __pyx_r = __pyx_v_objs;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":270
+ * return objs
+ *
+ * def frUncompressedRLE(ucRles, siz h, siz w): # <<<<<<<<<<<<<<
+ * cdef np.ndarray[np.uint32_t, ndim=1] cnts
+ * cdef RLE R
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_XDECREF(__pyx_t_8);
+ { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer);
+ __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}
+ __Pyx_AddTraceback("_mask.frUncompressedRLE", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ goto __pyx_L2;
+ __pyx_L0:;
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer);
+ __pyx_L2:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_cnts);
+ __Pyx_XDECREF(__pyx_v_objs);
+ __Pyx_XDECREF((PyObject *)__pyx_v_Rs);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "_mask.pyx":288
+ * return objs
+ *
+ * def frPyObjects(pyobj, h, w): # <<<<<<<<<<<<<<
+ * # encode rle from a list of python objects
+ * if type(pyobj) == np.ndarray:
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_5_mask_23frPyObjects(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static PyMethodDef __pyx_mdef_5_mask_23frPyObjects = {"frPyObjects", (PyCFunction)__pyx_pw_5_mask_23frPyObjects, METH_VARARGS|METH_KEYWORDS, 0};
+static PyObject *__pyx_pw_5_mask_23frPyObjects(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyObject *__pyx_v_pyobj = 0;
+ PyObject *__pyx_v_h = 0;
+ PyObject *__pyx_v_w = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("frPyObjects (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyobj,&__pyx_n_s_h,&__pyx_n_s_w,0};
+ PyObject* values[3] = {0,0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyobj)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("frPyObjects", 1, 3, 3, 1); __PYX_ERR(0, 288, __pyx_L3_error)
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;
+ else {
+ __Pyx_RaiseArgtupleInvalid("frPyObjects", 1, 3, 3, 2); __PYX_ERR(0, 288, __pyx_L3_error)
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "frPyObjects") < 0)) __PYX_ERR(0, 288, __pyx_L3_error)
+ }
+ } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
+ goto __pyx_L5_argtuple_error;
+ } else {
+ values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ }
+ __pyx_v_pyobj = values[0];
+ __pyx_v_h = values[1];
+ __pyx_v_w = values[2];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("frPyObjects", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 288, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("_mask.frPyObjects", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ __pyx_r = __pyx_pf_5_mask_22frPyObjects(__pyx_self, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w);
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_5_mask_22frPyObjects(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pyobj, PyObject *__pyx_v_h, PyObject *__pyx_v_w) {
+ PyObject *__pyx_v_objs = NULL;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ int __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ int __pyx_t_5;
+ PyObject *__pyx_t_6 = NULL;
+ int __pyx_t_7;
+ Py_ssize_t __pyx_t_8;
+ int __pyx_t_9;
+ PyObject *__pyx_t_10 = NULL;
+ __Pyx_RefNannySetupContext("frPyObjects", 0);
+
+ /* "_mask.pyx":290
+ * def frPyObjects(pyobj, h, w):
+ * # encode rle from a list of python objects
+ * if type(pyobj) == np.ndarray: # <<<<<<<<<<<<<<
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) == 4:
+ */
+ __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 290, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 290, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":291
+ * # encode rle from a list of python objects
+ * if type(pyobj) == np.ndarray:
+ * objs = frBbox(pyobj, h, w) # <<<<<<<<<<<<<<
+ * elif type(pyobj) == list and len(pyobj[0]) == 4:
+ * objs = frBbox(pyobj, h, w)
+ */
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_frBbox); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 291, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 291, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 291, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_6 = PyTuple_New(3+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 291, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (__pyx_t_4) {
+ __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_4); __pyx_t_4 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_pyobj);
+ __Pyx_GIVEREF(__pyx_v_pyobj);
+ PyTuple_SET_ITEM(__pyx_t_6, 0+__pyx_t_5, __pyx_v_pyobj);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_6, 1+__pyx_t_5, __pyx_v_h);
+ __Pyx_INCREF(__pyx_v_w);
+ __Pyx_GIVEREF(__pyx_v_w);
+ PyTuple_SET_ITEM(__pyx_t_6, 2+__pyx_t_5, __pyx_v_w);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 291, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_objs = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":290
+ * def frPyObjects(pyobj, h, w):
+ * # encode rle from a list of python objects
+ * if type(pyobj) == np.ndarray: # <<<<<<<<<<<<<<
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) == 4:
+ */
+ goto __pyx_L3;
+ }
+
+ /* "_mask.pyx":292
+ * if type(pyobj) == np.ndarray:
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) == 4: # <<<<<<<<<<<<<<
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) > 4:
+ */
+ __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 292, __pyx_L1_error)
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 292, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_7) {
+ } else {
+ __pyx_t_2 = __pyx_t_7;
+ goto __pyx_L4_bool_binop_done;
+ }
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 292, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_8 = PyObject_Length(__pyx_t_1); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(0, 292, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_7 = ((__pyx_t_8 == 4) != 0);
+ __pyx_t_2 = __pyx_t_7;
+ __pyx_L4_bool_binop_done:;
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":293
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) == 4:
+ * objs = frBbox(pyobj, h, w) # <<<<<<<<<<<<<<
+ * elif type(pyobj) == list and len(pyobj[0]) > 4:
+ * objs = frPoly(pyobj, h, w)
+ */
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_frBbox); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 293, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_6 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_6, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 293, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_6, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 293, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_4 = PyTuple_New(3+__pyx_t_5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 293, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ if (__pyx_t_6) {
+ __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_6); __pyx_t_6 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_pyobj);
+ __Pyx_GIVEREF(__pyx_v_pyobj);
+ PyTuple_SET_ITEM(__pyx_t_4, 0+__pyx_t_5, __pyx_v_pyobj);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_4, 1+__pyx_t_5, __pyx_v_h);
+ __Pyx_INCREF(__pyx_v_w);
+ __Pyx_GIVEREF(__pyx_v_w);
+ PyTuple_SET_ITEM(__pyx_t_4, 2+__pyx_t_5, __pyx_v_w);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 293, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_objs = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":292
+ * if type(pyobj) == np.ndarray:
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) == 4: # <<<<<<<<<<<<<<
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) > 4:
+ */
+ goto __pyx_L3;
+ }
+
+ /* "_mask.pyx":294
+ * elif type(pyobj) == list and len(pyobj[0]) == 4:
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) > 4: # <<<<<<<<<<<<<<
+ * objs = frPoly(pyobj, h, w)
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \
+ */
+ __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 294, __pyx_L1_error)
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 294, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_7) {
+ } else {
+ __pyx_t_2 = __pyx_t_7;
+ goto __pyx_L6_bool_binop_done;
+ }
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 294, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_8 = PyObject_Length(__pyx_t_1); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(0, 294, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_7 = ((__pyx_t_8 > 4) != 0);
+ __pyx_t_2 = __pyx_t_7;
+ __pyx_L6_bool_binop_done:;
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":295
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) > 4:
+ * objs = frPoly(pyobj, h, w) # <<<<<<<<<<<<<<
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \
+ * and 'counts' in pyobj[0] and 'size' in pyobj[0]:
+ */
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_frPoly); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 295, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 295, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 295, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ } else
+ #endif
+ {
+ __pyx_t_6 = PyTuple_New(3+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 295, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (__pyx_t_4) {
+ __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_4); __pyx_t_4 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_pyobj);
+ __Pyx_GIVEREF(__pyx_v_pyobj);
+ PyTuple_SET_ITEM(__pyx_t_6, 0+__pyx_t_5, __pyx_v_pyobj);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_6, 1+__pyx_t_5, __pyx_v_h);
+ __Pyx_INCREF(__pyx_v_w);
+ __Pyx_GIVEREF(__pyx_v_w);
+ PyTuple_SET_ITEM(__pyx_t_6, 2+__pyx_t_5, __pyx_v_w);
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 295, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_objs = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":294
+ * elif type(pyobj) == list and len(pyobj[0]) == 4:
+ * objs = frBbox(pyobj, h, w)
+ * elif type(pyobj) == list and len(pyobj[0]) > 4: # <<<<<<<<<<<<<<
+ * objs = frPoly(pyobj, h, w)
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \
+ */
+ goto __pyx_L3;
+ }
+
+ /* "_mask.pyx":296
+ * elif type(pyobj) == list and len(pyobj[0]) > 4:
+ * objs = frPoly(pyobj, h, w)
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \ # <<<<<<<<<<<<<<
+ * and 'counts' in pyobj[0] and 'size' in pyobj[0]:
+ * objs = frUncompressedRLE(pyobj, h, w)
+ */
+ __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 296, __pyx_L1_error)
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 296, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_7) {
+ } else {
+ __pyx_t_2 = __pyx_t_7;
+ goto __pyx_L8_bool_binop_done;
+ }
+
+ /* "_mask.pyx":297
+ * objs = frPoly(pyobj, h, w)
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \
+ * and 'counts' in pyobj[0] and 'size' in pyobj[0]: # <<<<<<<<<<<<<<
+ * objs = frUncompressedRLE(pyobj, h, w)
+ * # encode rle from single python object
+ */
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 296, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+
+ /* "_mask.pyx":296
+ * elif type(pyobj) == list and len(pyobj[0]) > 4:
+ * objs = frPoly(pyobj, h, w)
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \ # <<<<<<<<<<<<<<
+ * and 'counts' in pyobj[0] and 'size' in pyobj[0]:
+ * objs = frUncompressedRLE(pyobj, h, w)
+ */
+ __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_t_1)), ((PyObject *)(&PyDict_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 296, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 296, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_7) {
+ } else {
+ __pyx_t_2 = __pyx_t_7;
+ goto __pyx_L8_bool_binop_done;
+ }
+
+ /* "_mask.pyx":297
+ * objs = frPoly(pyobj, h, w)
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \
+ * and 'counts' in pyobj[0] and 'size' in pyobj[0]: # <<<<<<<<<<<<<<
+ * objs = frUncompressedRLE(pyobj, h, w)
+ * # encode rle from single python object
+ */
+ __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 297, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_7 = (__Pyx_PySequence_ContainsTF(__pyx_n_s_counts, __pyx_t_3, Py_EQ)); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 297, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_9 = (__pyx_t_7 != 0);
+ if (__pyx_t_9) {
+ } else {
+ __pyx_t_2 = __pyx_t_9;
+ goto __pyx_L8_bool_binop_done;
+ }
+ __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 297, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_9 = (__Pyx_PySequence_ContainsTF(__pyx_n_s_size, __pyx_t_3, Py_EQ)); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 297, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_7 = (__pyx_t_9 != 0);
+ __pyx_t_2 = __pyx_t_7;
+ __pyx_L8_bool_binop_done:;
+
+ /* "_mask.pyx":296
+ * elif type(pyobj) == list and len(pyobj[0]) > 4:
+ * objs = frPoly(pyobj, h, w)
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \ # <<<<<<<<<<<<<<
+ * and 'counts' in pyobj[0] and 'size' in pyobj[0]:
+ * objs = frUncompressedRLE(pyobj, h, w)
+ */
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":298
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \
+ * and 'counts' in pyobj[0] and 'size' in pyobj[0]:
+ * objs = frUncompressedRLE(pyobj, h, w) # <<<<<<<<<<<<<<
+ * # encode rle from single python object
+ * elif type(pyobj) == list and len(pyobj) == 4:
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_frUncompressedRLE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 298, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_6 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_6, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 298, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_6, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 298, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ } else
+ #endif
+ {
+ __pyx_t_4 = PyTuple_New(3+__pyx_t_5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 298, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ if (__pyx_t_6) {
+ __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_6); __pyx_t_6 = NULL;
+ }
+ __Pyx_INCREF(__pyx_v_pyobj);
+ __Pyx_GIVEREF(__pyx_v_pyobj);
+ PyTuple_SET_ITEM(__pyx_t_4, 0+__pyx_t_5, __pyx_v_pyobj);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_4, 1+__pyx_t_5, __pyx_v_h);
+ __Pyx_INCREF(__pyx_v_w);
+ __Pyx_GIVEREF(__pyx_v_w);
+ PyTuple_SET_ITEM(__pyx_t_4, 2+__pyx_t_5, __pyx_v_w);
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 298, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_objs = __pyx_t_3;
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":296
+ * elif type(pyobj) == list and len(pyobj[0]) > 4:
+ * objs = frPoly(pyobj, h, w)
+ * elif type(pyobj) == list and type(pyobj[0]) == dict \ # <<<<<<<<<<<<<<
+ * and 'counts' in pyobj[0] and 'size' in pyobj[0]:
+ * objs = frUncompressedRLE(pyobj, h, w)
+ */
+ goto __pyx_L3;
+ }
+
+ /* "_mask.pyx":300
+ * objs = frUncompressedRLE(pyobj, h, w)
+ * # encode rle from single python object
+ * elif type(pyobj) == list and len(pyobj) == 4: # <<<<<<<<<<<<<<
+ * objs = frBbox([pyobj], h, w)[0]
+ * elif type(pyobj) == list and len(pyobj) > 4:
+ */
+ __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 300, __pyx_L1_error)
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 300, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_7) {
+ } else {
+ __pyx_t_2 = __pyx_t_7;
+ goto __pyx_L12_bool_binop_done;
+ }
+ __pyx_t_8 = PyObject_Length(__pyx_v_pyobj); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(0, 300, __pyx_L1_error)
+ __pyx_t_7 = ((__pyx_t_8 == 4) != 0);
+ __pyx_t_2 = __pyx_t_7;
+ __pyx_L12_bool_binop_done:;
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":301
+ * # encode rle from single python object
+ * elif type(pyobj) == list and len(pyobj) == 4:
+ * objs = frBbox([pyobj], h, w)[0] # <<<<<<<<<<<<<<
+ * elif type(pyobj) == list and len(pyobj) > 4:
+ * objs = frPoly([pyobj], h, w)[0]
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_frBbox); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 301, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_4 = PyList_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 301, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_INCREF(__pyx_v_pyobj);
+ __Pyx_GIVEREF(__pyx_v_pyobj);
+ PyList_SET_ITEM(__pyx_t_4, 0, __pyx_v_pyobj);
+ __pyx_t_6 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_6)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_6);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_6, __pyx_t_4, __pyx_v_h, __pyx_v_w};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 301, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_6, __pyx_t_4, __pyx_v_h, __pyx_v_w};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 301, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_10 = PyTuple_New(3+__pyx_t_5); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 301, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ if (__pyx_t_6) {
+ __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_6); __pyx_t_6 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_4);
+ PyTuple_SET_ITEM(__pyx_t_10, 0+__pyx_t_5, __pyx_t_4);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_10, 1+__pyx_t_5, __pyx_v_h);
+ __Pyx_INCREF(__pyx_v_w);
+ __Pyx_GIVEREF(__pyx_v_w);
+ PyTuple_SET_ITEM(__pyx_t_10, 2+__pyx_t_5, __pyx_v_w);
+ __pyx_t_4 = 0;
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_10, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 301, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_3, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 301, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_objs = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":300
+ * objs = frUncompressedRLE(pyobj, h, w)
+ * # encode rle from single python object
+ * elif type(pyobj) == list and len(pyobj) == 4: # <<<<<<<<<<<<<<
+ * objs = frBbox([pyobj], h, w)[0]
+ * elif type(pyobj) == list and len(pyobj) > 4:
+ */
+ goto __pyx_L3;
+ }
+
+ /* "_mask.pyx":302
+ * elif type(pyobj) == list and len(pyobj) == 4:
+ * objs = frBbox([pyobj], h, w)[0]
+ * elif type(pyobj) == list and len(pyobj) > 4: # <<<<<<<<<<<<<<
+ * objs = frPoly([pyobj], h, w)[0]
+ * elif type(pyobj) == dict and 'counts' in pyobj and 'size' in pyobj:
+ */
+ __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 302, __pyx_L1_error)
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 302, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_7) {
+ } else {
+ __pyx_t_2 = __pyx_t_7;
+ goto __pyx_L14_bool_binop_done;
+ }
+ __pyx_t_8 = PyObject_Length(__pyx_v_pyobj); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(0, 302, __pyx_L1_error)
+ __pyx_t_7 = ((__pyx_t_8 > 4) != 0);
+ __pyx_t_2 = __pyx_t_7;
+ __pyx_L14_bool_binop_done:;
+ if (__pyx_t_2) {
+
+ /* "_mask.pyx":303
+ * objs = frBbox([pyobj], h, w)[0]
+ * elif type(pyobj) == list and len(pyobj) > 4:
+ * objs = frPoly([pyobj], h, w)[0] # <<<<<<<<<<<<<<
+ * elif type(pyobj) == dict and 'counts' in pyobj and 'size' in pyobj:
+ * objs = frUncompressedRLE([pyobj], h, w)[0]
+ */
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_frPoly); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 303, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_10 = PyList_New(1); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 303, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_10);
+ __Pyx_INCREF(__pyx_v_pyobj);
+ __Pyx_GIVEREF(__pyx_v_pyobj);
+ PyList_SET_ITEM(__pyx_t_10, 0, __pyx_v_pyobj);
+ __pyx_t_4 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_3, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_t_10, __pyx_v_h, __pyx_v_w};
+ __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 303, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_t_10, __pyx_v_h, __pyx_v_w};
+ __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 303, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_6 = PyTuple_New(3+__pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 303, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ if (__pyx_t_4) {
+ __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_4); __pyx_t_4 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_10);
+ PyTuple_SET_ITEM(__pyx_t_6, 0+__pyx_t_5, __pyx_t_10);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_6, 1+__pyx_t_5, __pyx_v_h);
+ __Pyx_INCREF(__pyx_v_w);
+ __Pyx_GIVEREF(__pyx_v_w);
+ PyTuple_SET_ITEM(__pyx_t_6, 2+__pyx_t_5, __pyx_v_w);
+ __pyx_t_10 = 0;
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 303, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_3 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 303, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_objs = __pyx_t_3;
+ __pyx_t_3 = 0;
+
+ /* "_mask.pyx":302
+ * elif type(pyobj) == list and len(pyobj) == 4:
+ * objs = frBbox([pyobj], h, w)[0]
+ * elif type(pyobj) == list and len(pyobj) > 4: # <<<<<<<<<<<<<<
+ * objs = frPoly([pyobj], h, w)[0]
+ * elif type(pyobj) == dict and 'counts' in pyobj and 'size' in pyobj:
+ */
+ goto __pyx_L3;
+ }
+
+ /* "_mask.pyx":304
+ * elif type(pyobj) == list and len(pyobj) > 4:
+ * objs = frPoly([pyobj], h, w)[0]
+ * elif type(pyobj) == dict and 'counts' in pyobj and 'size' in pyobj: # <<<<<<<<<<<<<<
+ * objs = frUncompressedRLE([pyobj], h, w)[0]
+ * else:
+ */
+ __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyDict_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 304, __pyx_L1_error)
+ __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 304, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_7) {
+ } else {
+ __pyx_t_2 = __pyx_t_7;
+ goto __pyx_L16_bool_binop_done;
+ }
+ __pyx_t_7 = (__Pyx_PySequence_ContainsTF(__pyx_n_s_counts, __pyx_v_pyobj, Py_EQ)); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 304, __pyx_L1_error)
+ __pyx_t_9 = (__pyx_t_7 != 0);
+ if (__pyx_t_9) {
+ } else {
+ __pyx_t_2 = __pyx_t_9;
+ goto __pyx_L16_bool_binop_done;
+ }
+ __pyx_t_9 = (__Pyx_PySequence_ContainsTF(__pyx_n_s_size, __pyx_v_pyobj, Py_EQ)); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 304, __pyx_L1_error)
+ __pyx_t_7 = (__pyx_t_9 != 0);
+ __pyx_t_2 = __pyx_t_7;
+ __pyx_L16_bool_binop_done:;
+ if (likely(__pyx_t_2)) {
+
+ /* "_mask.pyx":305
+ * objs = frPoly([pyobj], h, w)[0]
+ * elif type(pyobj) == dict and 'counts' in pyobj and 'size' in pyobj:
+ * objs = frUncompressedRLE([pyobj], h, w)[0] # <<<<<<<<<<<<<<
+ * else:
+ * raise Exception('input type is not supported.')
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_frUncompressedRLE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 305, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_6 = PyList_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 305, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_INCREF(__pyx_v_pyobj);
+ __Pyx_GIVEREF(__pyx_v_pyobj);
+ PyList_SET_ITEM(__pyx_t_6, 0, __pyx_v_pyobj);
+ __pyx_t_10 = NULL;
+ __pyx_t_5 = 0;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_10)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_10);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ __pyx_t_5 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_10, __pyx_t_6, __pyx_v_h, __pyx_v_w};
+ __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 305, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[4] = {__pyx_t_10, __pyx_t_6, __pyx_v_h, __pyx_v_w};
+ __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_5, 3+__pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 305, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_4 = PyTuple_New(3+__pyx_t_5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 305, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ if (__pyx_t_10) {
+ __Pyx_GIVEREF(__pyx_t_10); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_10); __pyx_t_10 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_6);
+ PyTuple_SET_ITEM(__pyx_t_4, 0+__pyx_t_5, __pyx_t_6);
+ __Pyx_INCREF(__pyx_v_h);
+ __Pyx_GIVEREF(__pyx_v_h);
+ PyTuple_SET_ITEM(__pyx_t_4, 1+__pyx_t_5, __pyx_v_h);
+ __Pyx_INCREF(__pyx_v_w);
+ __Pyx_GIVEREF(__pyx_v_w);
+ PyTuple_SET_ITEM(__pyx_t_4, 2+__pyx_t_5, __pyx_v_w);
+ __pyx_t_6 = 0;
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 305, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_3, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 305, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_v_objs = __pyx_t_1;
+ __pyx_t_1 = 0;
+
+ /* "_mask.pyx":304
+ * elif type(pyobj) == list and len(pyobj) > 4:
+ * objs = frPoly([pyobj], h, w)[0]
+ * elif type(pyobj) == dict and 'counts' in pyobj and 'size' in pyobj: # <<<<<<<<<<<<<<
+ * objs = frUncompressedRLE([pyobj], h, w)[0]
+ * else:
+ */
+ goto __pyx_L3;
+ }
+
+ /* "_mask.pyx":307
+ * objs = frUncompressedRLE([pyobj], h, w)[0]
+ * else:
+ * raise Exception('input type is not supported.') # <<<<<<<<<<<<<<
+ * return objs
+ */
+ /*else*/ {
+ __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 307, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(0, 307, __pyx_L1_error)
+ }
+ __pyx_L3:;
+
+ /* "_mask.pyx":308
+ * else:
+ * raise Exception('input type is not supported.')
+ * return objs # <<<<<<<<<<<<<<
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_v_objs);
+ __pyx_r = __pyx_v_objs;
+ goto __pyx_L0;
+
+ /* "_mask.pyx":288
+ * return objs
+ *
+ * def frPyObjects(pyobj, h, w): # <<<<<<<<<<<<<<
+ * # encode rle from a list of python objects
+ * if type(pyobj) == np.ndarray:
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_XDECREF(__pyx_t_10);
+ __Pyx_AddTraceback("_mask.frPyObjects", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF(__pyx_v_objs);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":215
+ * # experimental exception made for __getbuffer__ and __releasebuffer__
+ * # -- the details of this may change.
+ * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<<
+ * # This implementation of getbuffer is geared towards Cython
+ * # requirements, and does not yet fulfill the PEP.
+ */
+
+/* Python wrapper */
+static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
+static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
+ int __pyx_v_i;
+ int __pyx_v_ndim;
+ int __pyx_v_endian_detector;
+ int __pyx_v_little_endian;
+ int __pyx_v_t;
+ char *__pyx_v_f;
+ PyArray_Descr *__pyx_v_descr = 0;
+ int __pyx_v_offset;
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ int __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ int __pyx_t_4;
+ int __pyx_t_5;
+ int __pyx_t_6;
+ PyObject *__pyx_t_7 = NULL;
+ char *__pyx_t_8;
+ if (__pyx_v_info == NULL) {
+ PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete");
+ return -1;
+ }
+ __Pyx_RefNannySetupContext("__getbuffer__", 0);
+ __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);
+ __Pyx_GIVEREF(__pyx_v_info->obj);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":222
+ *
+ * cdef int i, ndim
+ * cdef int endian_detector = 1 # <<<<<<<<<<<<<<
+ * cdef bint little_endian = ((&endian_detector)[0] != 0)
+ *
+ */
+ __pyx_v_endian_detector = 1;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":223
+ * cdef int i, ndim
+ * cdef int endian_detector = 1
+ * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<<
+ *
+ * ndim = PyArray_NDIM(self)
+ */
+ __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":225
+ * cdef bint little_endian = ((&endian_detector)[0] != 0)
+ *
+ * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<<
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
+ */
+ __pyx_v_ndim = PyArray_NDIM(__pyx_v_self);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":227
+ * ndim = PyArray_NDIM(self)
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous")
+ */
+ __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L4_bool_binop_done;
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":228
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ */
+ __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_C_CONTIGUOUS) != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L4_bool_binop_done:;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":227
+ * ndim = PyArray_NDIM(self)
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous")
+ */
+ if (unlikely(__pyx_t_1)) {
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":229
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<<
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 229, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 229, __pyx_L1_error)
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":227
+ * ndim = PyArray_NDIM(self)
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous")
+ */
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":231
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ */
+ __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L7_bool_binop_done;
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":232
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ *
+ */
+ __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_F_CONTIGUOUS) != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L7_bool_binop_done:;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":231
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ */
+ if (unlikely(__pyx_t_1)) {
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":233
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<<
+ *
+ * info.buf = PyArray_DATA(self)
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 233, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 233, __pyx_L1_error)
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":231
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ */
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":235
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ *
+ * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<<
+ * info.ndim = ndim
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ */
+ __pyx_v_info->buf = PyArray_DATA(__pyx_v_self);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":236
+ *
+ * info.buf = PyArray_DATA(self)
+ * info.ndim = ndim # <<<<<<<<<<<<<<
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ * # Allocate new buffer for strides and shape info.
+ */
+ __pyx_v_info->ndim = __pyx_v_ndim;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":237
+ * info.buf = PyArray_DATA(self)
+ * info.ndim = ndim
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * # Allocate new buffer for strides and shape info.
+ * # This is allocated as one block, strides first.
+ */
+ __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":240
+ * # Allocate new buffer for strides and shape info.
+ * # This is allocated as one block, strides first.
+ * info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim) # <<<<<<<<<<<<<<
+ * info.shape = info.strides + ndim
+ * for i in range(ndim):
+ */
+ __pyx_v_info->strides = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * 2) * ((size_t)__pyx_v_ndim))));
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":241
+ * # This is allocated as one block, strides first.
+ * info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim)
+ * info.shape = info.strides + ndim # <<<<<<<<<<<<<<
+ * for i in range(ndim):
+ * info.strides[i] = PyArray_STRIDES(self)[i]
+ */
+ __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":242
+ * info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim)
+ * info.shape = info.strides + ndim
+ * for i in range(ndim): # <<<<<<<<<<<<<<
+ * info.strides[i] = PyArray_STRIDES(self)[i]
+ * info.shape[i] = PyArray_DIMS(self)[i]
+ */
+ __pyx_t_4 = __pyx_v_ndim;
+ __pyx_t_5 = __pyx_t_4;
+ for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) {
+ __pyx_v_i = __pyx_t_6;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":243
+ * info.shape = info.strides + ndim
+ * for i in range(ndim):
+ * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<<
+ * info.shape[i] = PyArray_DIMS(self)[i]
+ * else:
+ */
+ (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":244
+ * for i in range(ndim):
+ * info.strides[i] = PyArray_STRIDES(self)[i]
+ * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<<
+ * else:
+ * info.strides = PyArray_STRIDES(self)
+ */
+ (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]);
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":237
+ * info.buf = PyArray_DATA(self)
+ * info.ndim = ndim
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * # Allocate new buffer for strides and shape info.
+ * # This is allocated as one block, strides first.
+ */
+ goto __pyx_L9;
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":246
+ * info.shape[i] = PyArray_DIMS(self)[i]
+ * else:
+ * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<<
+ * info.shape = PyArray_DIMS(self)
+ * info.suboffsets = NULL
+ */
+ /*else*/ {
+ __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(__pyx_v_self));
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":247
+ * else:
+ * info.strides = PyArray_STRIDES(self)
+ * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<<
+ * info.suboffsets = NULL
+ * info.itemsize = PyArray_ITEMSIZE(self)
+ */
+ __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self));
+ }
+ __pyx_L9:;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":248
+ * info.strides = PyArray_STRIDES(self)
+ * info.shape = PyArray_DIMS(self)
+ * info.suboffsets = NULL # <<<<<<<<<<<<<<
+ * info.itemsize = PyArray_ITEMSIZE(self)
+ * info.readonly = not PyArray_ISWRITEABLE(self)
+ */
+ __pyx_v_info->suboffsets = NULL;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":249
+ * info.shape = PyArray_DIMS(self)
+ * info.suboffsets = NULL
+ * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<<
+ * info.readonly = not PyArray_ISWRITEABLE(self)
+ *
+ */
+ __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":250
+ * info.suboffsets = NULL
+ * info.itemsize = PyArray_ITEMSIZE(self)
+ * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<<
+ *
+ * cdef int t
+ */
+ __pyx_v_info->readonly = (!(PyArray_ISWRITEABLE(__pyx_v_self) != 0));
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":253
+ *
+ * cdef int t
+ * cdef char* f = NULL # <<<<<<<<<<<<<<
+ * cdef dtype descr = self.descr
+ * cdef int offset
+ */
+ __pyx_v_f = NULL;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":254
+ * cdef int t
+ * cdef char* f = NULL
+ * cdef dtype descr = self.descr # <<<<<<<<<<<<<<
+ * cdef int offset
+ *
+ */
+ __pyx_t_3 = ((PyObject *)__pyx_v_self->descr);
+ __Pyx_INCREF(__pyx_t_3);
+ __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":257
+ * cdef int offset
+ *
+ * info.obj = self # <<<<<<<<<<<<<<
+ *
+ * if not PyDataType_HASFIELDS(descr):
+ */
+ __Pyx_INCREF(((PyObject *)__pyx_v_self));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
+ __Pyx_GOTREF(__pyx_v_info->obj);
+ __Pyx_DECREF(__pyx_v_info->obj);
+ __pyx_v_info->obj = ((PyObject *)__pyx_v_self);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":259
+ * info.obj = self
+ *
+ * if not PyDataType_HASFIELDS(descr): # <<<<<<<<<<<<<<
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or
+ */
+ __pyx_t_1 = ((!(PyDataType_HASFIELDS(__pyx_v_descr) != 0)) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":260
+ *
+ * if not PyDataType_HASFIELDS(descr):
+ * t = descr.type_num # <<<<<<<<<<<<<<
+ * if ((descr.byteorder == c'>' and little_endian) or
+ * (descr.byteorder == c'<' and not little_endian)):
+ */
+ __pyx_t_4 = __pyx_v_descr->type_num;
+ __pyx_v_t = __pyx_t_4;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":261
+ * if not PyDataType_HASFIELDS(descr):
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0);
+ if (!__pyx_t_2) {
+ goto __pyx_L15_next_or;
+ } else {
+ }
+ __pyx_t_2 = (__pyx_v_little_endian != 0);
+ if (!__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L14_bool_binop_done;
+ }
+ __pyx_L15_next_or:;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":262
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or
+ * (descr.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"Non-native byte order not supported")
+ * if t == NPY_BYTE: f = "b"
+ */
+ __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L14_bool_binop_done;
+ }
+ __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L14_bool_binop_done:;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":261
+ * if not PyDataType_HASFIELDS(descr):
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ if (unlikely(__pyx_t_1)) {
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":263
+ * if ((descr.byteorder == c'>' and little_endian) or
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<<
+ * if t == NPY_BYTE: f = "b"
+ * elif t == NPY_UBYTE: f = "B"
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__25, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 263, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 263, __pyx_L1_error)
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":261
+ * if not PyDataType_HASFIELDS(descr):
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":264
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<<
+ * elif t == NPY_UBYTE: f = "B"
+ * elif t == NPY_SHORT: f = "h"
+ */
+ switch (__pyx_v_t) {
+ case NPY_BYTE:
+ __pyx_v_f = ((char *)"b");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":265
+ * raise ValueError(u"Non-native byte order not supported")
+ * if t == NPY_BYTE: f = "b"
+ * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<<
+ * elif t == NPY_SHORT: f = "h"
+ * elif t == NPY_USHORT: f = "H"
+ */
+ case NPY_UBYTE:
+ __pyx_v_f = ((char *)"B");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":266
+ * if t == NPY_BYTE: f = "b"
+ * elif t == NPY_UBYTE: f = "B"
+ * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<<
+ * elif t == NPY_USHORT: f = "H"
+ * elif t == NPY_INT: f = "i"
+ */
+ case NPY_SHORT:
+ __pyx_v_f = ((char *)"h");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":267
+ * elif t == NPY_UBYTE: f = "B"
+ * elif t == NPY_SHORT: f = "h"
+ * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<<
+ * elif t == NPY_INT: f = "i"
+ * elif t == NPY_UINT: f = "I"
+ */
+ case NPY_USHORT:
+ __pyx_v_f = ((char *)"H");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":268
+ * elif t == NPY_SHORT: f = "h"
+ * elif t == NPY_USHORT: f = "H"
+ * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<<
+ * elif t == NPY_UINT: f = "I"
+ * elif t == NPY_LONG: f = "l"
+ */
+ case NPY_INT:
+ __pyx_v_f = ((char *)"i");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":269
+ * elif t == NPY_USHORT: f = "H"
+ * elif t == NPY_INT: f = "i"
+ * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONG: f = "l"
+ * elif t == NPY_ULONG: f = "L"
+ */
+ case NPY_UINT:
+ __pyx_v_f = ((char *)"I");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":270
+ * elif t == NPY_INT: f = "i"
+ * elif t == NPY_UINT: f = "I"
+ * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<<
+ * elif t == NPY_ULONG: f = "L"
+ * elif t == NPY_LONGLONG: f = "q"
+ */
+ case NPY_LONG:
+ __pyx_v_f = ((char *)"l");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":271
+ * elif t == NPY_UINT: f = "I"
+ * elif t == NPY_LONG: f = "l"
+ * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONGLONG: f = "q"
+ * elif t == NPY_ULONGLONG: f = "Q"
+ */
+ case NPY_ULONG:
+ __pyx_v_f = ((char *)"L");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":272
+ * elif t == NPY_LONG: f = "l"
+ * elif t == NPY_ULONG: f = "L"
+ * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<<
+ * elif t == NPY_ULONGLONG: f = "Q"
+ * elif t == NPY_FLOAT: f = "f"
+ */
+ case NPY_LONGLONG:
+ __pyx_v_f = ((char *)"q");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":273
+ * elif t == NPY_ULONG: f = "L"
+ * elif t == NPY_LONGLONG: f = "q"
+ * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<<
+ * elif t == NPY_FLOAT: f = "f"
+ * elif t == NPY_DOUBLE: f = "d"
+ */
+ case NPY_ULONGLONG:
+ __pyx_v_f = ((char *)"Q");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":274
+ * elif t == NPY_LONGLONG: f = "q"
+ * elif t == NPY_ULONGLONG: f = "Q"
+ * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<<
+ * elif t == NPY_DOUBLE: f = "d"
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ */
+ case NPY_FLOAT:
+ __pyx_v_f = ((char *)"f");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":275
+ * elif t == NPY_ULONGLONG: f = "Q"
+ * elif t == NPY_FLOAT: f = "f"
+ * elif t == NPY_DOUBLE: f = "d" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ * elif t == NPY_CFLOAT: f = "Zf"
+ */
+ case NPY_DOUBLE:
+ __pyx_v_f = ((char *)"d");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":276
+ * elif t == NPY_FLOAT: f = "f"
+ * elif t == NPY_DOUBLE: f = "d"
+ * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<<
+ * elif t == NPY_CFLOAT: f = "Zf"
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ */
+ case NPY_LONGDOUBLE:
+ __pyx_v_f = ((char *)"g");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":277
+ * elif t == NPY_DOUBLE: f = "d"
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<<
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ * elif t == NPY_CLONGDOUBLE: f = "Zg"
+ */
+ case NPY_CFLOAT:
+ __pyx_v_f = ((char *)"Zf");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":278
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ * elif t == NPY_CFLOAT: f = "Zf"
+ * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<<
+ * elif t == NPY_CLONGDOUBLE: f = "Zg"
+ * elif t == NPY_OBJECT: f = "O"
+ */
+ case NPY_CDOUBLE:
+ __pyx_v_f = ((char *)"Zd");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":279
+ * elif t == NPY_CFLOAT: f = "Zf"
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<<
+ * elif t == NPY_OBJECT: f = "O"
+ * else:
+ */
+ case NPY_CLONGDOUBLE:
+ __pyx_v_f = ((char *)"Zg");
+ break;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":280
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ * elif t == NPY_CLONGDOUBLE: f = "Zg"
+ * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<<
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ */
+ case NPY_OBJECT:
+ __pyx_v_f = ((char *)"O");
+ break;
+ default:
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":282
+ * elif t == NPY_OBJECT: f = "O"
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<<
+ * info.format = f
+ * return
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 282, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_7 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_7)) __PYX_ERR(2, 282, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_7); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 282, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 282, __pyx_L1_error)
+ break;
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":283
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ * info.format = f # <<<<<<<<<<<<<<
+ * return
+ * else:
+ */
+ __pyx_v_info->format = __pyx_v_f;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":284
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ * info.format = f
+ * return # <<<<<<<<<<<<<<
+ * else:
+ * info.format = PyObject_Malloc(_buffer_format_string_len)
+ */
+ __pyx_r = 0;
+ goto __pyx_L0;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":259
+ * info.obj = self
+ *
+ * if not PyDataType_HASFIELDS(descr): # <<<<<<<<<<<<<<
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or
+ */
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":286
+ * return
+ * else:
+ * info.format = PyObject_Malloc(_buffer_format_string_len) # <<<<<<<<<<<<<<
+ * info.format[0] = c'^' # Native data types, manual alignment
+ * offset = 0
+ */
+ /*else*/ {
+ __pyx_v_info->format = ((char *)PyObject_Malloc(0xFF));
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":287
+ * else:
+ * info.format = PyObject_Malloc(_buffer_format_string_len)
+ * info.format[0] = c'^' # Native data types, manual alignment # <<<<<<<<<<<<<<
+ * offset = 0
+ * f = _util_dtypestring(descr, info.format + 1,
+ */
+ (__pyx_v_info->format[0]) = '^';
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":288
+ * info.format = PyObject_Malloc(_buffer_format_string_len)
+ * info.format[0] = c'^' # Native data types, manual alignment
+ * offset = 0 # <<<<<<<<<<<<<<
+ * f = _util_dtypestring(descr, info.format + 1,
+ * info.format + _buffer_format_string_len,
+ */
+ __pyx_v_offset = 0;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":289
+ * info.format[0] = c'^' # Native data types, manual alignment
+ * offset = 0
+ * f = _util_dtypestring(descr, info.format + 1, # <<<<<<<<<<<<<<
+ * info.format + _buffer_format_string_len,
+ * &offset)
+ */
+ __pyx_t_8 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 0xFF), (&__pyx_v_offset)); if (unlikely(__pyx_t_8 == ((char *)NULL))) __PYX_ERR(2, 289, __pyx_L1_error)
+ __pyx_v_f = __pyx_t_8;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":292
+ * info.format + _buffer_format_string_len,
+ * &offset)
+ * f[0] = c'\0' # Terminate format string # <<<<<<<<<<<<<<
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info):
+ */
+ (__pyx_v_f[0]) = '\x00';
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":215
+ * # experimental exception made for __getbuffer__ and __releasebuffer__
+ * # -- the details of this may change.
+ * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<<
+ * # This implementation of getbuffer is geared towards Cython
+ * # requirements, and does not yet fulfill the PEP.
+ */
+
+ /* function exit code */
+ __pyx_r = 0;
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_AddTraceback("numpy.ndarray.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = -1;
+ if (__pyx_v_info->obj != NULL) {
+ __Pyx_GOTREF(__pyx_v_info->obj);
+ __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
+ }
+ goto __pyx_L2;
+ __pyx_L0:;
+ if (__pyx_v_info->obj == Py_None) {
+ __Pyx_GOTREF(__pyx_v_info->obj);
+ __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;
+ }
+ __pyx_L2:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_descr);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":294
+ * f[0] = c'\0' # Terminate format string
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<<
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format)
+ */
+
+/* Python wrapper */
+static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/
+static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) {
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__releasebuffer__ (wrapper)", 0);
+ __pyx_pf_5numpy_7ndarray_2__releasebuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info) {
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ __Pyx_RefNannySetupContext("__releasebuffer__", 0);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":295
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info):
+ * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<<
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ */
+ __pyx_t_1 = (PyArray_HASFIELDS(__pyx_v_self) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":296
+ * def __releasebuffer__(ndarray self, Py_buffer* info):
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format) # <<<<<<<<<<<<<<
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ * PyObject_Free(info.strides)
+ */
+ PyObject_Free(__pyx_v_info->format);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":295
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info):
+ * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<<
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ */
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":297
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * PyObject_Free(info.strides)
+ * # info.shape was stored after info.strides in the same block
+ */
+ __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":298
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ * PyObject_Free(info.strides) # <<<<<<<<<<<<<<
+ * # info.shape was stored after info.strides in the same block
+ *
+ */
+ PyObject_Free(__pyx_v_info->strides);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":297
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * PyObject_Free(info.strides)
+ * # info.shape was stored after info.strides in the same block
+ */
+ }
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":294
+ * f[0] = c'\0' # Terminate format string
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<<
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format)
+ */
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":775
+ * ctypedef npy_cdouble complex_t
+ *
+ * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(1, a)
+ *
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("PyArray_MultiIterNew1", 0);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":776
+ *
+ * cdef inline object PyArray_MultiIterNew1(a):
+ * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew2(a, b):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 776, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":775
+ * ctypedef npy_cdouble complex_t
+ *
+ * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(1, a)
+ *
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("numpy.PyArray_MultiIterNew1", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = 0;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":778
+ * return PyArray_MultiIterNew(1, a)
+ *
+ * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(2, a, b)
+ *
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("PyArray_MultiIterNew2", 0);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":779
+ *
+ * cdef inline object PyArray_MultiIterNew2(a, b):
+ * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew3(a, b, c):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 779, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":778
+ * return PyArray_MultiIterNew(1, a)
+ *
+ * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(2, a, b)
+ *
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("numpy.PyArray_MultiIterNew2", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = 0;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":781
+ * return PyArray_MultiIterNew(2, a, b)
+ *
+ * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(3, a, b, c)
+ *
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("PyArray_MultiIterNew3", 0);
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":782
+ *
+ * cdef inline object PyArray_MultiIterNew3(a, b, c):
+ * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew4(a, b, c, d):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 782, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "../../../../../../../root/anaconda2/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd":781
+ * return PyArray_MultiIterNew(2, a, b)
+ *
+ * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(3,