bluemellophone committed
Commit 4bd9b37 · unverified · 1 Parent(s): 27890e3

Allow custom WIC batch sizes, added new documentation, and GitHub Docker build and publish
.dockerignore CHANGED
@@ -1,14 +1,3 @@
-examples/0d4e4df2-7b69-91b1-1985-c8421f2f3253.jpg
-examples/18cef191-74ed-2b5e-55a5-f58bd3d483ff.jpg
-examples/1be4d40a-6fd0-42ce-da6c-294e45781f41.jpg
-examples/1d3c85e9-ee24-f290-e7e1-6e338f2eaebb.jpg
-examples/3e043302-af1c-75a7-4057-3a2f25c123bf.jpg
-examples/43ecc08d-502a-7a51-9d68-3e40a76439a2.jpg
-examples/479058af-e774-e6aa-a2b0-9a42dd6ff8b1.jpg
-examples/7c910b87-ae3a-f580-d431-03cd89793803.jpg
-examples/8fa04489-cd94-7d8f-7e2e-5f0fe2f7ae76.jpg
-examples/bb7b4345-b98a-c727-4c94-6090f0aa4355.jpg
-
 docs/
 tests/
.github/workflows/docker-publish.yaml ADDED
@@ -0,0 +1,81 @@
+name: Docker
+
+on:
+  pull_request:
+    branches:
+      - main
+  push:
+    branches:
+      - main
+    tags:
+      - v*
+  schedule:
+    - cron: '0 16 * * *'  # Every day at 16:00 UTC (~09:00 PT)
+
+jobs:
+  # Push container image to GitHub Packages and Docker Hub.
+  # See also https://docs.docker.com/docker-hub/builds/
+  deploy:
+    name: Docker image build
+    runs-on: ubuntu-latest
+
+    env:
+      DOCKER_BUILDKIT: 1
+      DOCKER_CLI_EXPERIMENTAL: enabled
+
+    steps:
+      - uses: actions/checkout@v2
+        if: github.event_name == 'schedule'
+        with:
+          ref: main
+
+      - uses: actions/checkout@v2
+        if: github.event_name != 'schedule'
+
+      - uses: docker/setup-qemu-action@v1
+        name: Set up QEMU
+        id: qemu
+        with:
+          image: tonistiigi/binfmt:latest
+          platforms: all
+
+      - uses: docker/setup-buildx-action@v1
+        name: Set up Docker Buildx
+        id: buildx
+
+      - name: Available platforms
+        run: echo ${{ steps.buildx.outputs.platforms }}
+
+      # Log into container registries
+      - name: Login to DockerHub
+        uses: docker/login-action@v1
+        with:
+          username: wildmebot
+          password: ${{ secrets.WBIA_WILDMEBOT_DOCKER_HUB_TOKEN }}
+
+      # Push tagged image (version tag + latest) to registries
+      - name: Tagged Docker Hub
+        if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v') }}
+        run: |
+          echo "IMAGE_TAG=latest" >> $GITHUB_ENV
+
+      # Push bleeding-edge image (main tag) to registries
+      - name: Bleeding Edge Docker Hub
+        if: github.ref == 'refs/heads/main'
+        run: |
+          echo "IMAGE_TAG=main" >> $GITHUB_ENV
+
+      # Push nightly image (nightly tag) to registries
+      - name: Nightly Docker Hub
+        if: github.event_name == 'schedule'
+        run: |
+          echo "IMAGE_TAG=nightly" >> $GITHUB_ENV
+
+      # Build images
+      - name: Build Codex
+        run: |
+          docker buildx build \
+            -t wildme/scoutbot:${{ env.IMAGE_TAG }} \
+            --platform linux/amd64 \
+            --push \
+            .
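The `IMAGE_TAG` steps in the workflow above pass a value to the later build step through the `$GITHUB_ENV` file, which the runner loads between steps. A minimal sketch of that mechanism in plain Python (the temp file stands in for the real env file; this is illustrative, not part of the workflow):

```python
import os
import tempfile

# Simulate GitHub Actions' $GITHUB_ENV mechanism: each workflow step
# appends KEY=value lines to the env file, and later steps see them
# as environment variables.
github_env = tempfile.NamedTemporaryFile('w+', delete=False, suffix='.env')

# Equivalent of: echo "IMAGE_TAG=nightly" >> $GITHUB_ENV
github_env.write('IMAGE_TAG=nightly\n')
github_env.close()

# Equivalent of the runner sourcing the file before the next step
with open(github_env.name) as envfile:
    for line in envfile:
        key, _, value = line.strip().partition('=')
        os.environ[key] = value

print(os.environ['IMAGE_TAG'])  # → nightly
os.unlink(github_env.name)
```

Note that the `echo "KEY=value" >> $GITHUB_ENV` line must keep its closing quote before the redirection, otherwise the shell treats the redirection as part of the quoted string.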
.github/workflows/testing.yml CHANGED
@@ -13,9 +13,12 @@ jobs:
       matrix:
         # Use the same Python version used in the Dockerfile
         python-version: [3.9]
+
     env:
       OS: ubuntu-latest
       PYTHON: ${{ matrix.python-version }}
+      WIC_BATCH_SIZE: 16
+
     steps:
       # Checkout and env setup
       - name: Checkout code
README.rst CHANGED
@@ -12,32 +12,17 @@ Wild Me ScoutBot
 How to Install
 --------------
 
-You need to first install Anaconda on your machine. Below are the instructions on how to install Anaconda on an Apple macOS machine, but it is possible to install on a Windows and Linux machine as well. Consult the `official Anaconda page <https://www.anaconda.com>`_ to download and install on other systems.
-
 .. code-block:: console
 
-    # Install Homebrew
-    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
-
-    # Install Anaconda and expose conda to the terminal
-    brew install anaconda
-    export PATH="/opt/homebrew/anaconda3/bin:$PATH"
-    conda init zsh
-    conda update conda
+    (.venv) $ pip install scoutbot
 
-Once Anaconda is installed, you will need an environment and the following packages installed
+or, from source:
 
 .. code-block:: console
 
-    # Create Environment
-    conda create --name scoutbot
-    conda activate scoutbot
-
-    # Install Python dependencies
-    conda install pip
-
-    pip install -r requirements.txt
-    # conda install pytorch torchvision -c pytorch-nightly
+    git clone https://github.com/WildMeOrg/scoutbot
+    cd scoutbot
+    (.venv) $ pip install -e .
 
 How to Run
 ----------
@@ -57,7 +42,7 @@ or, you can run the image-based Gradio demo with:
 Docker
 ------
 
-The application can also be built into a Docker image and hosted on Docker Hub.
+The application can also be built into a Docker image and is hosted on Docker Hub as ``wildme/scoutbot:latest``.
 
 .. code-block:: console
 
@@ -73,7 +58,7 @@ The application can also be built into a Docker image and hosted on Docker Hub.
         --push \
         .
 
-To run:
+To run with Docker:
 
 .. code-block:: console
 
@@ -84,45 +69,57 @@ To run:
         --name scoutbot \
         wildme/scoutbot:latest
 
-Unit Tests
-----------
-
-You can run the automated tests in the `tests/` folder by running `pytest`. This will give an output of which tests have failed. You may also get a coverage percentage by running `coverage html` and loading the `coverage/html/index.html` file in your browser.
-    pytest
+Tests and Coverage
+------------------
+
+You can run the automated tests in the ``tests/`` folder by running:
+
+.. code-block:: console
+
+    (.venv) $ pip install -r requirements.optional.txt
+    (.venv) $ pytest
+
+You may also get a coverage percentage by running:
+
+.. code-block:: console
+
+    (.venv) $ coverage html
+
+and open the ``coverage/html/index.html`` file in your browser.
 
 Building Documentation
 ----------------------
 
-There is Sphinx documentation in the `docs/` folder, which can be built with the code below:
+There is Sphinx documentation in the ``docs/`` folder, which can be built by running:
 
 .. code-block:: console
 
-    cd docs/
-    sphinx-build -M html . build/
+    (.venv) $ cd docs/
+    (.venv) $ pip install -r requirements.optional.txt
+    (.venv) $ sphinx-build -M html . build/
 
 Logging
 -------
 
-The script uses Python's built-in logging functionality called `logging`. All print functions are replaced with `log.info` within this script, which sends the output to two places: 1) the terminal window, 2) the file `scoutbot.log`. Get into the habit of writing text logs and keeping date-specific versions for comparison and debugging.
+The script uses Python's built-in logging functionality called ``logging``. All print functions are replaced with :func:`log.info`, which sends the output to two places:
+
+1. the terminal window, and
+2. the file ``scoutbot.log``
 
 Code Formatting
 ---------------
 
 It's recommended that you use ``pre-commit`` to ensure linting procedures are run
-on any code you write. (See also `pre-commit.com <https://pre-commit.com/>`_)
+on any code you write. See `pre-commit.com <https://pre-commit.com/>`_ for more information.
 
 Reference `pre-commit's installation instructions <https://pre-commit.com/#install>`_ for software installation on your OS/platform. After you have the software installed, run ``pre-commit install`` on the command line. Now every time you commit to this project's code base the linter procedures will automatically run over the changed files. To run pre-commit on files preemptively from the command line use:
 
 .. code-block:: console
 
-    git add .
-    pre-commit run
-
-    # or
-
-    pre-commit run --all-files
+    (.venv) $ pip install -r requirements.optional.txt
+    (.venv) $ pre-commit run --all-files
 
-The code base has been formatted by Brunette, which is a fork and more configurable version of Black (https://black.readthedocs.io/en/stable/). Furthermore, try to conform to PEP8. You should set up your preferred editor to use flake8 as its Python linter, but pre-commit will ensure compliance before a git commit is completed. This will use the flake8 configuration within ``setup.cfg``, which ignores several errors and stylistic considerations. See the ``setup.cfg`` file for a full and accurate listing of stylistic codes to ignore.
+The code base has been formatted by `Brunette <https://pypi.org/project/brunette/>`_, which is a fork and more configurable version of `Black <https://black.readthedocs.io/en/stable/>`_. Furthermore, try to conform to ``PEP8``. You should set up your preferred editor to use ``flake8`` as its Python linter, but pre-commit will ensure compliance before a git commit is completed. This will use the ``flake8`` configuration within ``setup.cfg``, which ignores several errors and stylistic considerations. See the ``setup.cfg`` file for a full and accurate listing of stylistic codes to ignore.
 
 
 .. |Tests| image:: https://github.com/WildMeOrg/scoutbot/actions/workflows/testing.yml/badge.svg?branch=main
docs/scoutbot.rst CHANGED
@@ -5,6 +5,66 @@ ScoutBot API
    :maxdepth: 3
    :caption: Contents:
 
+ScoutBot is the machine learning interface for the Wild Me Scout project. This page specifies
+the Python API to interact with all of the algorithms and machine learning models that have been
+pretrained for inference in a production environment.
+
+Overview
+--------
+
+In general, the structure of this API is to expose four main processing components for the Scout project.
+These components are, in order: ``TILE``, ``WIC``, ``LOC``, and ``AGG``.
+
+1. ``TILE``: A module to convert images to tiles
+2. ``WIC``: A module to classify tiles as relevant for further processing (i.e., does it likely have an elephant?)
+3. ``LOC``: A module to detect elephants in tiles
+4. ``AGG``: A module to aggregate the tile-level detections back onto the original image
+
+The ``TILE`` and ``AGG`` steps are heuristic-based algorithms and do not need to use any
+machine learning (ML) models or GPU offload. In contrast, the ``WIC`` and ``LOC`` steps both require
+their own ML models and can be computed on CPU or GPU (if available).
+
+The non-ML components (``TILE`` and ``AGG``) both expose a :func:`compute` function, which is the single
+point of interaction for the developer:
+
+- :meth:`scoutbot.tile.compute`
+- :meth:`scoutbot.agg.compute`
+
+The ML components (``WIC`` and ``LOC``), in contrast, are a bit more complex and each expose three functions:
+
+- :func:`pre` (preprocessing)
+- :func:`predict` (inference)
+- :func:`post` (postprocessing)
+
+For the WIC, these functions are:
+
+- :meth:`scoutbot.wic.pre`
+- :meth:`scoutbot.wic.predict`
+- :meth:`scoutbot.wic.post`
+
+and for the LOC, these functions are:
+
+- :meth:`scoutbot.loc.pre`
+- :meth:`scoutbot.loc.predict`
+- :meth:`scoutbot.loc.post`
+
+CDN Model Download (ONNX)
+-------------------------
+
+All of the machine learning models are hosted on GitHub as LFS files. The two ML modules (``WIC`` and ``LOC``),
+however, need those files downloaded to the local machine prior to running inference. These models are
+also hosted on a separate CDN for convenient access and can be fetched by running the following functions:
+
+- :meth:`scoutbot.wic.fetch`
+- :meth:`scoutbot.loc.fetch`
+
+These functions will download the following files and store them in your operating system's default
+cache folder:
+
+- ``WIC``: ``https://wildbookiarepository.azureedge.net/models/scout.wic.5fbfff26.3.0.onnx`` (81MB)
+  SHA256 checksum: ``cbc7f381fa58504e03b6510245b6b2742d63049429337465d95663a6468df4c1``
+- ``LOC``: ``https://wildbookiarepository.azureedge.net/models/scout.loc.5fbfff26.0.onnx`` (209MB)
+  SHA256 checksum: ``85a9378311d42b5143f74570136f32f50bf97c548135921b178b46ba7612b216``
 
 Tiles (TILE)
 ------------
@@ -39,6 +99,14 @@ Aggregation (AGG)
    :undoc-members:
    :show-inheritance:
 
+Pipeline (PIPE)
+---------------
+
+.. automodule:: scoutbot.__init__
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 Utilities
 ---------
 
notes.txt ADDED
@@ -0,0 +1,31 @@
+detection_config = {
+    'algo': 'tile_aggregation',
+    'config_filepath': 'variant3-32',
+    'weight_filepath': 'densenet+lightnet;scout-5fbfff26-boost3,0.400,scout_5fbfff26_v0,0.4',
+    'nms_thresh': 0.8,
+    'sensitivity': 0.5077,
+}
+
+(
+    wic_model_tag,
+    wic_thresh,
+    weight_filepath,
+    nms_thresh,
+) = 'scout-5fbfff26-boost3,0.400,scout_5fbfff26_v0,0.4'.split(',')
+
+
+wic_confidence_list = ibs.scout_wic_test(
+    gid_list, classifier_algo='densenet', model_tag=wic_model_tag
+)
+config = {
+    'grid': False,
+    'algo': 'lightnet',
+    'config_filepath': weight_filepath,
+    'weight_filepath': weight_filepath,
+    'nms': True,
+    'nms_thresh': nms_thresh,
+    'sensitivity': 0.0,
+}
+prediction_list = depc.get_property(
+    'localizations', gid_list_, None, config=config
+)
scoutbot/__init__.py CHANGED
@@ -1,38 +1,47 @@
 # -*- coding: utf-8 -*-
 '''
-ScoutBot is the machine learning interface for the Wild Me Scout project.
-
-Notes:
-    detection_config = {
-        'algo': 'tile_aggregation',
-        'config_filepath': 'variant3-32',
-        'weight_filepath': 'densenet+lightnet;scout-5fbfff26-boost3,0.400,scout_5fbfff26_v0,0.4',
-        'nms_thresh': 0.8,
-        'sensitivity': 0.5077,
-    }
-
-    (
-        wic_model_tag,
-        wic_thresh,
-        weight_filepath,
-        nms_thresh,
-    ) = 'scout-5fbfff26-boost3,0.400,scout_5fbfff26_v0,0.4'
-
-
-    wic_confidence_list = ibs.scout_wic_test(
-        gid_list, classifier_algo='densenet', model_tag=wic_model_tag
-    )
-    config = {
-        'grid': False,
-        'algo': 'lightnet',
-        'config_filepath': weight_filepath,
-        'weight_filepath': weight_filepath,
-        'nms': True,
-        'nms_thresh': nms_thresh,
-        'sensitivity': 0.0,
-    }
-    prediction_list = depc.get_property(
-        'localizations', gid_list_, None, config=config
-    )
+The above components must be run in the correct order, but ScoutBot also offers a single pipeline.
+
+All of the ML models can be pre-downloaded and fetched in a single call to :func:`scoutbot.fetch` and
+the unified pipeline -- which uses the 4 components correctly -- can be run by the function
+:func:`scoutbot.pipeline`. Below is example code for how these components interact.
+
+Furthermore, there are two application demo files (``app.py`` and ``app2.py``) that show
+how the entire pipeline can be run on tiles or images, respectively.
+
+.. code-block:: python
+
+    # Get image filepath
+    filepath = '/path/to/image.ext'
+
+    # Run tiling
+    img_shape, tile_grids, tile_filepaths = tile.compute(filepath)
+
+    # Run WIC
+    wic_outputs = wic.post(wic.predict(wic.pre(tile_filepaths)))
+
+    # Threshold for WIC
+    flags = [wic_output.get('positive') >= wic_thresh for wic_output in wic_outputs]
+    loc_tile_grids = ut.compress(tile_grids, flags)
+    loc_tile_filepaths = ut.compress(tile_filepaths, flags)
+
+    # Run localizer
+    loc_data, loc_sizes = loc.pre(loc_tile_filepaths)
+    loc_preds = loc.predict(loc_data)
+    loc_outputs = loc.post(
+        loc_preds,
+        loc_sizes,
+        loc_thresh=loc_thresh,
+        nms_thresh=loc_nms_thresh,
+    )
+
+    # Run Aggregation and get final detections
+    detects = agg.compute(
+        img_shape,
+        loc_tile_grids,
+        loc_outputs,
+        agg_thresh=agg_thresh,
+        nms_thresh=agg_nms_thresh,
+    )
 '''
 from scoutbot import agg, loc, tile, wic
@@ -43,6 +52,22 @@ __version__ = VERSION
 
 
 def fetch(pull=False):
+    """
+    Fetch the WIC and Localizer ONNX model files from a CDN if they do not exist locally.
+
+    This function will throw an AssertionError if either download fails or the
+    files otherwise do not exist locally on disk.
+
+    Args:
+        pull (bool, optional): If :obj:`True`, use the downloaded versions stored in
+            the local system's cache. Defaults to :obj:`False`.
+
+    Returns:
+        None
+
+    Raises:
+        AssertionError: If any model cannot be fetched.
+    """
     wic.fetch(pull=pull)
     loc.fetch(pull=pull)
 
@@ -55,6 +80,29 @@ def pipeline(
     agg_thresh=agg.AGG_THRESH,
     agg_nms_thresh=agg.NMS_THRESH,
 ):
+    """
+    Run the ML pipeline on a given image filepath and return the detections.
+
+    The final output is a list of dictionaries, each representing a single detection.
+    Each dictionary has a structure with the following keys:
+
+    ::
+
+        {
+            'l': class_label (str)
+            'c': confidence (float)
+            'x': x_top_left (float)
+            'y': y_top_left (float)
+            'w': width (float)
+            'h': height (float)
+        }
+
+    Args:
+        filepath (str): image filepath (relative or absolute)
+
+    Returns:
+        list ( dict ): list of predictions
+    """
     import utool as ut
 
     # Run tiling
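Each detection documented in the `pipeline` docstring above is a flat dict with `l`/`c`/`x`/`y`/`w`/`h` keys. A small illustrative helper (not part of the scoutbot API) that filters detections by confidence and converts each top-left/size box to corner coordinates:

```python
def filter_and_box(detects, min_conf=0.5):
    # Keep detections at or above min_conf and convert each
    # (x, y, w, h) top-left/size box into (x1, y1, x2, y2) corners.
    boxes = []
    for det in detects:
        if det['c'] >= min_conf:
            x1, y1 = det['x'], det['y']
            boxes.append((det['l'], det['c'], x1, y1, x1 + det['w'], y1 + det['h']))
    return boxes

detects = [
    {'l': 'elephant', 'c': 0.9, 'x': 10.0, 'y': 20.0, 'w': 30.0, 'h': 40.0},
    {'l': 'elephant', 'c': 0.2, 'x': 0.0, 'y': 0.0, 'w': 5.0, 'h': 5.0},
]
print(filter_and_box(detects))  # keeps only the 0.9-confidence detection
```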
scoutbot/loc/__init__.py CHANGED
@@ -60,7 +60,7 @@ def fetch(pull=False):
 
     Args:
         pull (bool, optional): If :obj:`True`, use a downloaded version stored in
-            sthe local system's cache. Defaults to :obj:`False`.
+            the local system's cache. Defaults to :obj:`False`.
 
     Returns:
         str: local ONNX model file path.
@@ -94,8 +94,9 @@ def pre(inputs):
         inputs (list(str)): list of tile image filepaths (relative or absolute)
 
     Returns:
-        list ( list ( list ( list ( float ) ) ) ), list ( tuple ( int ) ): list of
-            transformed image data, and a list of each tile's original size
+        tuple ( list ( list ( list ( list ( float ) ) ) ), list ( tuple ( int ) ) ):
+            - list of transformed image data.
+            - list of each tile's original size.
     """
     transform = torchvision.transforms.ToTensor()
scoutbot/tile/__init__.py CHANGED
@@ -17,7 +17,26 @@ TILE_BORDERS = True
 
 def compute(img_filepath, grid1=True, grid2=True, ext=None, **kwargs):
     """
-    Compute the tiles for a given input image
+    Compute the tiles for a given input image and save them to disk.
+
+    If a given tile has already been rendered to disk, it will not be recomputed.
+
+    Args:
+        img_filepath (str): image filepath (relative or absolute) to compute tiles for.
+        grid1 (bool, optional): If :obj:`True`, create a dense grid of tiles on the image.
+            Defaults to :obj:`True`.
+        grid2 (bool, optional): If :obj:`True`, create a secondary dense grid of tiles
+            on the image with a 50% offset. Defaults to :obj:`True`.
+        ext (str, optional): The file extension of the resulting tile files. If this value is
+            not specified, it will use the same extension as ``img_filepath``. Defaults
+            to :obj:`None`.
+        **kwargs: keyword arguments passed to :meth:`scoutbot.tile.tile_grid`
+
+    Returns:
+        tuple ( tuple ( int ), list ( dict ), list ( str ) ):
+            - the original image's shape as ``(h, w, c)``.
+            - list of grid coordinates as the output of :meth:`scoutbot.tile.tile_grid`.
+            - list of tile filepaths as the output of :meth:`scoutbot.tile.tile_filepath`.
     """
     assert exists(img_filepath)
     img = cv2.imread(img_filepath)
@@ -25,9 +44,9 @@ def compute(img_filepath, grid1=True, grid2=True, ext=None, **kwargs):
 
     grids = []
     if grid1:
-        grids += tile_grid(shape)
+        grids += tile_grid(shape, **kwargs)
     if grid2:
-        grids += tile_grid(shape, offset=TILE_WIDTH // 2, borders=False)
+        grids += tile_grid(shape, offset=TILE_WIDTH // 2, borders=False, **kwargs)
 
     filepaths = [tile_filepath(img_filepath, grid, ext=ext) for grid in grids]
     for grid, filepath in zip(grids, filepaths):
@@ -48,7 +67,6 @@ def tile_write(img, grid, filepath):
 
     Returns:
         bool: returns :obj:`True` if the tile's filepath exists on disk.
-
     """
    if exists(filepath):
        return True
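The two-grid idea documented above (a dense grid plus a second grid shifted by half a tile so it overlaps the first grid's seams) can be sketched in pure Python. The tile size and the border handling here are assumptions for illustration, not scoutbot's actual `tile_grid` implementation:

```python
def grid_origins(img_w, img_h, tile_w=256, tile_h=256, offset=0):
    # Top-left corners of fully-contained tiles covering the image,
    # optionally shifted by `offset` pixels in both axes.  The second
    # grid would use offset = tile_w // 2 to straddle the first
    # grid's tile boundaries.
    origins = []
    y = offset
    while y + tile_h <= img_h:
        x = offset
        while x + tile_w <= img_w:
            origins.append((x, y))
            x += tile_w
        y += tile_h
    return origins

dense = grid_origins(512, 512)
shifted = grid_origins(512, 512, offset=256 // 2)
print(len(dense), len(shifted))  # → 4 1
```

The shifted grid is smaller because its tiles must still fit inside the image; the real implementation additionally handles border tiles, which this sketch omits.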
scoutbot/wic/dataloader.py CHANGED
@@ -1,11 +1,13 @@
 # -*- coding: utf-8 -*-
+import os
+
 import numpy as np
 import PIL
 import torch
 import torchvision
 import utool as ut
 
-BATCH_SIZE = 16
+BATCH_SIZE = int(os.getenv('WIC_BATCH_SIZE', 256))
 INPUT_SIZE = 224
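The new `BATCH_SIZE` line reads the `WIC_BATCH_SIZE` environment variable at import time and falls back to 256. A minimal standalone sketch of the same pattern:

```python
import os

def wic_batch_size(default=256):
    # Same pattern as the new line in scoutbot/wic/dataloader.py:
    # the env var, when set, overrides the default and is parsed as an int.
    return int(os.getenv('WIC_BATCH_SIZE', default))

os.environ.pop('WIC_BATCH_SIZE', None)
print(wic_batch_size())           # → 256 (default when the variable is unset)
os.environ['WIC_BATCH_SIZE'] = '16'
print(wic_batch_size())           # → 16 (override, as the CI workflow sets it)
```

Because the real module evaluates this once at import time, the variable must be exported before `scoutbot.wic` is imported for the override to take effect.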